Softstar: Heuristic-Guided Probabilistic Inference
Mathew Monfort
Computer Science Department
University of Illinois at Chicago
Chicago, IL 60607
mmonfo2@uic.edu
Brenden M. Lake
Center for Data Science
New York University
New York, NY 10003
brenden@nyu.edu
Patrick Lucey
Disney Research Pittsburgh
Pittsburgh, PA 15232
patrick.lucey@disneyresearch.com
Brian D. Ziebart
Computer Science Department
University of Illinois at Chicago
Chicago, IL 60607
bziebart@uic.edu
Joshua B. Tenenbaum
Brain and Cognitive Sciences Department
Massachusetts Institute of Technology
Cambridge, MA 02139
jbt@mit.edu
Abstract
Recent machine learning methods for sequential behavior prediction estimate the
motives of behavior rather than the behavior itself. This higher-level abstraction
improves generalization in different prediction settings, but computing predictions often becomes intractable in large decision spaces. We propose the Softstar algorithm, a softened heuristic-guided search technique for the maximum
entropy inverse optimal control model of sequential behavior. This approach supports probabilistic search with bounded approximation error at a significantly reduced computational cost compared to sampling-based methods. We present
the algorithm, analyze approximation guarantees, and compare performance with
simulation-based inference on two distinct complex decision tasks.
1 Introduction
Inverse optimal control (IOC) [13], also known as inverse reinforcement learning [18, 1] and inverse
planning [3], has become a powerful technique for learning to control or make decisions based on
expert demonstrations [1, 20]. IOC estimates the utilities of a decision process that rationalizes an
expert's demonstrated control sequences. Those estimated utilities can then be used in an (optimal)
controller to solve new decision problems, producing behavior that is similar to demonstrations.
Predictive extensions to IOC [17, 23, 2, 16, 19, 6] recognize the inconsistencies, and inherent suboptimality, of repeated behavior by incorporating uncertainty. They provide probabilistic forecasts
of future decisions in which stochasticity is due to this uncertainty rather than the stochasticity of
the decision process's dynamics. These models' distributions over plans and policies can typically
be defined as softened versions of optimal sequential decision criteria.
A key challenge for predictive IOC is that many decision sequences are embedded within large
decision processes. Symmetries in the decision process can be exploited to improve efficiency [21],
but decision processes are not guaranteed to be (close to) symmetric. Approximation approaches to
probabilistic structured prediction include approximate maxent IOC [12], heuristic-guided sampling
[15], and graph-based IOC [7]. However, few guarantees are provided by these approaches; they are
not complete and the set of variable assignments uncovered may not be representative of the model's
distribution.
Seeking to provide stronger guarantees and improve efficiency over previous methods, we present
Softstar, a heuristic-guided probabilistic search algorithm for inverse optimal control. Our approach
generalizes the A* search algorithm [8] to calculate distributions over decision sequences in predictive IOC settings, allowing for efficient bounded approximations of the near-optimal path distribution
through a decision space. This distribution can then be used to update a set of trainable parameters,
θ, that motivate the behavior of the decision process via a cost/reward function [18, 1, 3, 23].
We establish theoretical guarantees of this approach and demonstrate its effectiveness in two settings: learning stroke trajectories for Latin characters and modeling the ball-handling decision process of professional soccer.
2 Background

2.1 State-space graphs and heuristic-guided search
In this work, we restrict our consideration to deterministic planning tasks with discrete state spaces.
The space of plans and their costs can be succinctly represented using a state-space graph, G = (S, E, cost), with vertices s ∈ S representing states of the planning task and directed edges e_{ab} ∈ E representing available transitions between states s_a and s_b. The neighbor set of state s, N(s), is the set of states to which s has a directed edge, and a cost function, cost(s, s′), represents the relative desirability of transitioning between states s and s′.
The optimal plan from state s₁ to goal state s_T is a variable-length sequence of states (s₁, s₂, ..., s_T) forming a path through the graph minimizing a cumulative penalty. Letting h(s) represent the cost of the optimal path from state s to state s_T (i.e., the cost-to-go or value of s) and defining h(s_T) ≜ 0, the optimal path corresponds to a fixed-point solution of the next state selection criterion [5]:

    h(s) = min_{s′ ∈ N(s)} [h(s′) + cost(s, s′)],    s_{t+1} = argmin_{s′ ∈ N(s_t)} [h(s′) + cost(s_t, s′)].    (1)
The optimal path distance from the start state, d(s), can be similarly defined (with d(s₁) ≜ 0) as

    d(s) = min_{s′ : s ∈ N(s′)} [d(s′) + cost(s′, s)].    (2)
Dynamic programming algorithms, such as Dijkstra's [9], search the space of paths through the
state-space graph in order of increasing d(s) to find the optimal path. Doing so implicitly considers
all paths up to the length of the optimal path to the goal.
Additional knowledge can significantly reduce the portion of the state space that must be explored to obtain an optimal plan. For example, A* search [11] explores partial state sequences by expanding states that minimize an estimate, f(s) = d(s) + ĥ(s), combining the minimal cost to reach state s, d(s), with a heuristic estimate of the remaining cost-to-go, ĥ(s). A priority queue is used to keep track of expanded states and their respective estimates. A* search then expands the state at the top of the queue (lowest f(s)) and adds its neighboring states to the queue. When the heuristic estimate is admissible (i.e., ĥ(s) ≤ h(s) ∀s ∈ S), the algorithm terminates with a guaranteed optimal solution once the best "unexpanded" state's estimate, f(s), is worse than the best discovered path to the goal.
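As a concrete reference point for the softened generalization developed in Section 3, a minimal A* over an explicit graph might look like the following sketch; the dictionary-based graph encoding and function signatures are illustrative assumptions, not taken from the paper:

```python
import heapq

def a_star(neighbors, cost, heuristic, start, goal):
    """Minimal A* over an explicit state-space graph.

    neighbors: dict mapping each state to its successor states N(s)
    cost:      function (s, s2) -> transition cost
    heuristic: admissible, consistent estimate of the cost-to-go h(s)
    """
    d = {start: 0.0}                         # best known cost-to-reach d(s)
    queue = [(heuristic(start), start)]      # entries ordered by f = d + h
    while queue:
        f, s = heapq.heappop(queue)
        if s == goal:
            return d[s]                      # optimal when the heuristic is consistent
        if f > d[s] + heuristic(s):          # stale entry; a better d was found since
            continue
        for s2 in neighbors.get(s, ()):
            d2 = d[s] + cost(s, s2)
            if d2 < d.get(s2, float("inf")):
                d[s2] = d2
                heapq.heappush(queue, (d2 + heuristic(s2), s2))
    return float("inf")                      # goal unreachable
```

The softened search of Section 3 replaces the min updates here with softmin updates and must incorporate, rather than discard, sub-optimal paths.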
2.2 Predictive inverse optimal control
Maximum entropy IOC algorithms [23, 22] estimate a stochastic action policy that is most uncertain while still guaranteeing the same expected cost as demonstrated behavior on an unknown cost function [1]. For planning settings with deterministic dynamics, this yields a probability distribution over state sequences that are consistent with paths through the state-space graph, P_θ(s_{1:T}) ∝ e^{−cost_θ(s_{1:T})}, where cost_θ(s_{1:T}) = Σ_{t=1}^{T−1} θᵀ f(s_t, s_{t+1}) is a linearly weighted combination of state-transition features computed by the feature function, f(s_t, s_{t+1}), with learned parameter vector θ. Calculating the marginal state probabilities of this distribution is important for estimating model parameters. The forward-backward algorithm [4] can be employed, but for large state spaces it may not be practical.
3 Approach
Motivated by the efficiency of heuristic-guided search algorithms for optimal planning, we define
an analogous approximation task in the predictive inference setting and present an algorithm that
leverages heuristic functions to accomplish this task efficiently with bounded approximation error.
The problem being addressed is the inefficiency of existing inference methods for reward/cost-based
probabilistic models of behavior. We present a method using ideas from heuristic-guided search (i.e.,
A*) for estimating path distributions through large-scale deterministic graphs with bounded approximation guarantees. This is an improvement over previous methods, as it yields more accurate distribution estimates without the complexity and sub-optimality concerns of path sampling, and it is suitable for any problem that can be represented as such a graph.
Additionally, since the proposed method does not sample paths, but instead searches the space as in
A*, it does not need to retrace its steps along a previously searched trajectory to find a new path to
the goal. It will instead create a new branch from an already explored state. Sampling would require
retracing an entire sequence until this branching state is reached, so the approach improves efficiency as well as the quality of the distribution estimates.
3.1 Inference as softened planning
We begin our investigation by recasting the inference task from the perspective of softened planning
where the predictive IOC distribution over state sequences factors into a stochastic policy [23],
    π(s_{t+1}|s_t) = e^{h_soft(s_t) − h_soft(s_{t+1}) − θᵀ f(s_t, s_{t+1})},    (3)
according to a softened cost-to-go, h_soft(s), recurrence that is a relaxation of the Bellman equation:

    h_soft(s_t) = −log Σ_{s_{t:T} ∈ Ξ_{s_t,s_T}} e^{−cost_θ(s_{t:T})} = softmin_{s_{t+1} ∈ N(s_t)} [h_soft(s_{t+1}) + θᵀ f(s_t, s_{t+1})],    (4)

where Ξ_{s_t,s_T} is the set of all paths from s_t to s_T; the softmin, softmin_x φ(x) ≜ −log Σ_x e^{−φ(x)}, is a smoothed relaxation of the min function¹; and the goal state value is initially 0 and ∞ for all other states.
A similar softened minimum distance exists in the forward direction from the start state:

    d_soft(s_t) = −log Σ_{s_{1:t} ∈ Ξ_{s_1,s_t}} e^{−cost_θ(s_{1:t})} = softmin_{s_{t−1} ∈ N(s_t)} [d_soft(s_{t−1}) + θᵀ f(s_{t−1}, s_t)].
By combining forward and backward soft distances, important marginal expectations are obtained and used to predict state visitation probabilities and fit the maximum entropy IOC model's parameters [23]. Efficient search and learning require accurate estimates of d_soft and h_soft values, since the expected number of occurrences of the transition from s_a to s_b under the soft path distribution is:

    e^{−d_soft(s_a) − h_soft(s_b) − θᵀ f(s_a, s_b) + d_soft(s_T)}.    (5)
These cost-to-go and distance functions can be computed in closed form using a geometric series,

    B = A(I − A)⁻¹ = A + A² + A³ + A⁴ + ···,    (6)

where A_{i,j} = e^{−cost(s_i, s_j)} for any states s_i, s_j ∈ S. The (i, j)th entry of B is related to the softmin of all the paths from s_i to s_j. Specifically, the softened cost-to-go can be written as h_soft(s_i) = −log b_{s_i,s_T}. Unfortunately, the required matrix inversion operation is computationally expensive, preventing its use in typical inverse optimal control applications. In fact, power iteration methods used for sparse matrix inversion closely resemble the softened Bellman updates of Equation (4) that have instead been used for IOC [22].
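For a small graph, Equation (6) can be evaluated directly. A sketch in NumPy, assuming an explicit cost matrix with np.inf marking absent edges (the encoding is an assumption of the sketch):

```python
import numpy as np

def soft_distances_closed_form(cost_matrix, goal):
    """Soft cost-to-go via the geometric series B = A(I - A)^{-1}, Eq. (6).

    cost_matrix[i, j] is the cost of edge s_i -> s_j (np.inf if absent),
    so A[i, j] = exp(-cost(s_i, s_j)).  Valid only when the series
    converges, i.e., the spectral radius of A is below one (the
    convergent regime of Section 3.3).
    """
    A = np.exp(-np.asarray(cost_matrix, dtype=float))
    n = A.shape[0]
    B = A @ np.linalg.inv(np.eye(n) - A)   # sums A + A^2 + A^3 + ...
    with np.errstate(divide="ignore"):     # unreachable states give inf
        return -np.log(B[:, goal])         # h_soft(s_i) = -log b_{i, goal}
```

The Softstar algorithm presented below avoids this cubic-cost inversion entirely.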
¹Equivalently, min_x φ(x) + softmin_x [φ(x) − min_x φ(x)] is employed to avoid overflow/underflow in practice.
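In code, the footnote's stabilization amounts to factoring out the minimum before exponentiating; a minimal sketch:

```python
import numpy as np

def softmin(values):
    """Numerically stable softmin(x) = -log sum_x exp(-x) (footnote 1)."""
    v = np.asarray(values, dtype=float)
    m = v.min()
    if np.isinf(m):                 # all-inf input: no path mass, softmin is inf
        return m
    return m - np.log(np.exp(m - v).sum())
```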
3.2 Challenges and approximation desiderata
In contrast with optimal control and planning tasks, the softened distance functions, d_soft(s), and cost-to-go functions, h_soft(s), in predictive IOC are based on many paths rather than a single (best) one. Thus, unlike in A* search, each sub-optimal path cannot simply be ignored; its influence must instead be incorporated into the softened distance calculation (4). This key distinction poses a significantly different objective for heuristic-guided probabilistic search: find a subset of paths for which the softmin distances closely approximate the softmin of the entire path set. While we would hope that a small subset of paths exists that provides a close approximation, the cost function weights and the structure of the state-space graph ultimately determine whether this is the case. With this in mind, we aim to construct an algorithm that seeks a small approximation set and satisfies the following desiderata, and we analyze its guarantees:
1. Known bounds on approximation guarantees;
2. Convergence to any desired approximation guarantee;
3. Efficient discovery of small approximation sets of paths.
3.3 Regimes of convergence
In A* search, theoretical results are based on the assumption that all infinite-length paths have infinite cost (i.e., any cycle has a positive cost) [11]. This avoids a non-convergent regime caused by negative-cost cycles. Our predictive setting requires a stronger assumption; there are three regimes of convergence for the predictive IOC distribution, characterized by:
1. An infinite-length most likely plan;
2. A finite-length most likely plan with expected infinite-length plans; and
3. A finite expected plan length.
The first regime results from the same situation described for optimal planning: reachable cycles
of negative cost. The second regime arises when the growth in the number of paths outpaces the cost penalty incurred by longer paths (without negative cycles); it is also non-convergent. The final regime is convergent.
An additional assumption is needed in the predictive IOC setting to avoid the second regime of non-convergence. We assume that a fixed bound on the entropy of the distribution of paths, H(S_{1:T}) ≜ E[−log P(S_{1:T})] ≤ H_max, is known.
Theorem 1 Expected costs under the predictive IOC distribution are related to entropy and softmin path costs by E[cost_θ(S_{1:T})] = H(S_{1:T}) − d_soft(s_T).
Together, bounds on the entropy and softmin distance function constrain expected costs under the
predictive IOC distribution (Theorem 1).
3.4 Computing approximation error bounds
A* search with a non-monotonic heuristic function guarantees optimality when the priority queue's minimal element has an estimate d_soft(s) + ĥ_soft(s) exceeding the best start-to-goal path cost, d_soft(s_T). Though optimality is no longer guaranteed in the softmin search setting, approximations to the softmin distance are obtained by considering a subset of paths (Lemma 1).
Lemma 1 Let Ξ represent the entire set (potentially infinite in size) of paths from state s to s_T. We can partition the set Ξ into two sets Ξ_a and Ξ_b such that Ξ_a ∪ Ξ_b = Ξ and Ξ_a ∩ Ξ_b = ∅, and define d^Ξ_soft as the softmin over all paths in set Ξ. Then, given a lower-bound estimate for the distance, d̂_soft(s) ≤ d_soft(s), we have e^{−d^{Ξ_a}_soft(s)} ≤ e^{−d^{Ξ}_soft(s)} ≤ e^{−d^{Ξ_a}_soft(s)} + e^{−d̂^{Ξ_b}_soft(s)}.
We establish a bound on the error introduced by considering the set of paths through a set of states S̃ in the following theorem.

Theorem 2 Given an approximation state subset S̃ ⊆ S, with neighbors of the approximation set defined as N(S̃) ≜ [∪_{s∈S̃} N(s)] − S̃, the approximation loss of exact search for paths through this approximation set (i.e., paths with non-terminal vertices from S̃ and terminal vertices from S̃ ∪ N(S̃)) is bounded by the softmin of the set's neighbors' estimates:

    e^{−d_soft(s_T)} − e^{−d^{S̃}_soft(s_T)} ≤ e^{−softmin_{s∈N(S̃)} [d^{S̃}_soft(s) + ĥ_soft(s)]},

where d^{S̃}_soft(s) is the softmin over all paths with terminal state s and all previous states within S̃.
Thus, for a dynamic construction of the approximation set S̃, a bound on approximation error can be maintained by tracking the weights of all states in the neighborhood of that set.
In practice, even computing the exact softened distance function for paths through a small subset of states may be computationally impractical. Theorem 3 establishes the approximate search bounds when only a subset of the paths in the approximation set is employed to compute the soft distance.

Theorem 3 If a subset of paths Ξ′_{S̃} ⊆ Ξ_{S̃} (with Ξ″_{S̃} ≜ Ξ_{S̃} − Ξ′_{S̃} representing the set of paths that are prefixes for all of the remaining paths within S̃) through the approximation set S̃ is employed to compute the soft distance, the error of the resulting estimate is bounded by:

    e^{−d_soft(s_T)} − e^{−d^{Ξ′_{S̃}}_soft(s_T)} ≤ e^{−softmin( softmin_{s∈N(S̃)} [d^{Ξ′_{S̃}}_soft(s) + ĥ_soft(s)], softmin_{s∈S̃} [d^{Ξ″_{S̃}}_soft(s) + ĥ_soft(s)] )}.

3.5 Softstar: Greedy forward path exploration and backward cost-to-go estimation

Our algorithm greedily expands nodes by considering the state contributing the most to the approximation bound (Theorem 3). This is accomplished by extending A* search in the following algorithm.
Algorithm 1 Softstar: Greedy forward and approximate backward search with fixed ordering

Input: state-space graph G, initial state s₁, goal s_T, heuristic ĥ_soft, and approximation bound ε
Output: approximate soft distance to goal h^{S̃}_soft

Set h_soft(s) = d_soft(s) = f_soft(s) = ∞ ∀s ∈ S, h_soft(s_T) = 0, d_soft(s₁) = 0, and f_soft(s₁) = ĥ_soft(s₁)
Insert ⟨s₁, f_soft(s₁)⟩ into priority queue P and initialize empty stack O
while softmin_{s∈P}(f_soft(s)) ≤ d_soft(s_T) + ε do
    Set s ← min element popped from P
    Push s onto O
    for s′ ∈ N(s) do
        f_soft(s′) = softmin(f_soft(s′), d_soft(s) + cost(s, s′) + ĥ_soft(s′))
        d_soft(s′) = softmin(d_soft(s′), d_soft(s) + cost(s, s′))
        (Re-)Insert ⟨s′, f_soft(s′)⟩ into P
    end
end
while O not empty do
    Set s ← top element popped from O
    for s′ ∈ N(s) do
        h_soft(s) = softmin(h_soft(s), h_soft(s′) + cost(s, s′))
    end
end
return h_soft
For insertions to the priority queue, if s′ already exists in the queue, its estimate is updated to the softmin of its previous estimate and the new insertion estimate. Additionally, the softmin of all of the estimates of elements on the queue can be dynamically updated as elements are added and removed.
The queue contains some states that have never been explored and some that have. The former
correspond to the neighbors of the approximation state set and the latter correspond to the search
approximation error within the approximation state set (Theorem 3). The softmin over all elements
of the priority queue thus provides a bound on the approximation error of the returned distance measure. The exploration order, O, is a stack containing the order that each state is explored/expanded.
A loop through the reverse of the node exploration ordering (stack O) generated by the forward search computes complementary backward cost-to-go values, h_soft. The expected number of occurrences of state transitions can then be calculated for the approximate distribution (5). The bound on the difference between the expected path cost of this approximate distribution and the actual distribution over the entire state set is established in Theorem 4.
Theorem 4 The cost expectation inaccuracy introduced by employing state set S̃ is bounded by

    |E[cost_θ(S_{1:T})] − E_{S̃}[cost_θ(S_{1:T})]| ≤ e^{d^{S̃}_soft(s_T) − softmin_{s∈P} f_soft(s)} (E_P[cost_θ(S_{1:T})] − E_{S̃}[cost_θ(S_{1:T})]),

where: E_{S̃} is the expectation under the approximate state set produced by the algorithm; softmin_{s∈P} f_soft(s) is the softmin of f_soft over all the states remaining on the priority queue after the first while loop of Algorithm 1; and E_P is the expectation over all paths not considered in the second while loop (i.e., remaining on the queue). E_P is unknown, but can be bounded using Theorem 1.
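To make the control flow concrete, here is a compact Python sketch of Algorithm 1. The graph interface, the threshold value, and the use of the heap minimum in place of the dynamically maintained queue softmin are simplifying assumptions of the sketch, and it presumes the convergent regime of Section 3.3:

```python
import heapq, math

def softmin2(a, b):
    """Stable softmin of two values: -log(exp(-a) + exp(-b))."""
    m = min(a, b)
    if m == math.inf:
        return math.inf
    return m - math.log(math.exp(m - a) + math.exp(m - b))

def softstar(neighbors, cost, h_hat, start, goal, eps=3.0):
    """Sketch of Algorithm 1.  neighbors(s) yields successors, cost(s, s2)
    is the edge cost, h_hat(s) is the heuristic cost-to-go.  States must be
    hashable (and orderable, for heap tie-breaking)."""
    d = {start: 0.0}                      # running d_soft estimates
    f = {start: h_hat(start)}             # f_soft = d_soft + heuristic
    queue = [(f[start], start)]
    order = []                            # exploration stack O
    # The paper tracks the softmin over the whole queue; the heap minimum
    # is used here as a simpler stand-in for the stopping test.
    while queue and queue[0][0] <= d.get(goal, math.inf) + eps:
        fs, s = heapq.heappop(queue)
        if fs != f.get(s) or s == goal:   # stale entry, or goal (never expanded)
            continue
        order.append(s)
        for s2 in neighbors(s):
            df = d[s] + cost(s, s2)
            d[s2] = softmin2(d.get(s2, math.inf), df)
            f[s2] = softmin2(f.get(s2, math.inf), df + h_hat(s2))
            heapq.heappush(queue, (f[s2], s2))
    h = {goal: 0.0}                       # backward cost-to-go pass over O
    for s in reversed(order):
        acc = math.inf
        for s2 in neighbors(s):
            acc = softmin2(acc, h.get(s2, math.inf) + cost(s, s2))
        h[s] = acc
    return d, h
```

The returned d and h values are exactly what Equation (5) needs to assemble expected transition counts.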
3.6 Completeness guarantee
The notion of monotonicity extends to the probabilistic setting, guaranteeing that the expansion of a state provides no looser bounds than the unexpanded state (Definition 1).

Definition 1 A heuristic function ĥ_soft is monotonic if and only if ∀s ∈ S, ĥ_soft(s) ≤ softmin_{s′∈N(s)} [ĥ_soft(s′) + cost(s, s′)].
Assuming this, the completeness of the proposed algorithm can be established (Theorem 5).
Theorem 5 For monotonic heuristic functions and finite softmin distances, convergence to any level
of softmin approximation is guaranteed by Algorithm 1.
4 Experimental Validation
We demonstrate the effectiveness of our approach on datasets for Latin character construction using
sequences of pen strokes and the ball-handling decisions of professional soccer players. In both cases we learn the parameters of a state-action cost function that motivates the behavior in the demonstrated data, using the Softstar algorithm to estimate the state-action feature distributions needed to update the parameters of the cost function [23]. We refer to the appendix for more information.
We focus our experimental results on estimating state-action feature distributions through large state spaces for inverse optimal control, as there is considerable room for improvement over standard approaches, which typically use sampling-based methods that provide few (if any) approximation guarantees. Softstar directly estimates this distribution with bounded approximation error, allowing for more accurate estimates and more informed parameter updates.
4.1 Comparison approaches
We compare our approach to heuristic-guided maximum entropy sampling [15], approximate maximum entropy sampling [12], reversible jump Markov chain Monte Carlo (MCMC) [10], and a search that is not guided by heuristics (comparable to Dijkstra's algorithm for planning). For consistency, we use the softmin distance to generate the values of each state in MCMC. Results were collected on an Intel i7-3720QM CPU at 2.60GHz.
4.2 Character drawing
We apply our approach to the task of predicting the sequential pen strokes used to draw characters
from the Latin alphabet. The task is to learn the behavior of how a person draws a character given
some nodal skeleton. Despite the apparent simplicity, applying standard IOC methods is challenging due to the large planning graph corresponding to a fine-grained representation of the task. We
demonstrate the effectiveness of our method against other commonly employed techniques.
Demonstrated data: The data consists of a randomly separated training set of 400 drawn characters, each with a unique demonstrated trajectory, and a separate test set of 52 examples where
the handwritten characters are converted into skeletons of nodes within a unit character frame [14].
For example, the character in Figure 1 was drawn using two strokes, red and green respectively. The numbering indicates the start of each stroke.

[Figure 1: Character skeleton with two pen strokes.]

State and feature representation: The state consists of a two-node history (previous and current node) and a bitmap signifying which edges are covered/uncovered. The state space size is 2^{|E|}(|V| + 1)² with |E| edges and |V| nodes. The number of nodes is increased by one to account for the initial state. For example, a character with 16 nodes and 15 edges has a corresponding state space of about 9.47 million states.
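A quick check of that count, using the two-node history plus one coverage bit per edge described above:

```python
def num_states(num_nodes, num_edges):
    # two-node history over |V|+1 values (the +1 covers the empty initial
    # history) times one bit per edge for the coverage bitmap
    return (2 ** num_edges) * (num_nodes + 1) ** 2

print(num_states(16, 15))   # 9469952, i.e., about 9.47 million states
```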
The initial state has no nodal history and a bitmap with all uncovered edges. The goal state will have
a two node history as defined above, and a fully set bitmap representing all edges as covered. Any
transition between nodes is allowed, with transitions between neighbors defined as edge draws and
all others as pen lifts. The appendix provides additional details on the feature representation.
Heuristic: We consider a heuristic function that combines the (soft) minimum costs of covering each remaining uncovered edge in a character, assuming all moves that do not cross an uncovered edge have zero cost. Formally, it is expressed using the set of uncovered edges, E_u, and the set of all possible costs of traversing edge e_i, cost(e_i), as

    ĥ_soft(s) = Σ_{e_i ∈ E_u} softmin_{e_i} cost(e_i).

4.3 Professional Soccer
In addition, we apply our approach to the task of modeling the discrete spatial decision process of the
ball-handler for single possession open plays in professional soccer. As in the character drawing task,
we demonstrate the effectiveness of our approach against other commonly employed techniques.
Demonstrated data: Tracking information from 93 games consisting of player locations and time
steps of significant events/actions were pre-processed into sets of sequential actions in single possessions. Each possession may include multiple teammates handling the ball at different times, resulting in a team decision process over actions rather than individual player decisions.
Discretizing the soccer field into cells leads to a very large decision process when considering actions
to each cell at each step. We increase generalization by reformatting the field coordinates so that the
origin lies in the center of the team's goal and all playing fields are normalized to 105m by 68m and
discretized into 5x4m cells. Formatting the field coordinates based on the distances from the goal of
the team in possession doubles the amount of training data for similar coordinates. The positive and
negative half planes of the y axis capture which side of the goal the ball is located on.
We train a spatial decision model on 92 of the games and evaluate the learned ball trajectories on a
single test game. The data contains 20,337 training possession sequences and 230 test sequences.
State and feature representation: The state consists of a two-action history, where an action is designated as a type-cell tuple: the type is the action (pass, shot, clear, dribble, or cross) and the cell is the destination cell, with the most recent action containing the ball's current location. There are 1433 possible actions at each step in a trajectory, resulting in about 2.05 million possible states.
There are 28 Euclidean features for each action type and 29 that apply to all action types, resulting in 168 total features. We use the same features as the character drawing model and include a different set of features for each action type to learn unique action-based cost functions.
Heuristic: We use the softmin cost over all possible actions from the current state as a heuristic. It is admissible if the next state is assumed to always be the goal: ĥ_soft(s) = softmin_{s′∈N(s)} cost(s, s′).
4.4 Comparison of learning efficiency
We compare Softstar to other inference procedures for large-scale IOC and measure the average test set log-loss, equivalent to the difference between the cost of the demonstrated path, cost(s_{1:T}), and the softmin distance to the goal, d_soft(goal): −log P(path) = cost(s_{1:T}) − d_soft(goal).
[Figure 2: Training efficiency on the Character (left) and Soccer domains (right): average test log-loss after each training epoch for Approximate Max Ent, Heuristic Max Ent, and Softstar.]
Figure 2 shows the decrease of the test set log-loss after each training epoch. The proposed method
learns the models far more efficiently than both approximate max ent IOC [12] and heuristic-guided
sampling [15]. This is likely due to the more accurate estimation of the feature expectations that
results from searching the graph rather than sampling trajectories.
The improved efficiency of the proposed method is also evident if we analyze the respective time
taken to train each model. Softstar took ~5 hours to train 10 epochs for the character model and ~12
hours to train 25 epochs for the soccer model. To compare, heuristic sampling took ~9 hours for the
character model and ~17 hours for the soccer model, and approximate max ent took ~10 hours for
the character model and ~20 hours for the soccer model.
4.5 Analysis of inference efficiency
In addition to evaluating learning efficiency, we compare the average time efficiency for generating
lower bounds on the estimated softmin distance to the goal for each model in Figure 3.
[Figure 3: Inference efficiency evaluations for the Character (left) and Soccer domains (right): estimated softmin distance to the goal as a function of time (seconds) for MCMC, Approximate Max Ent, Heuristic Max Ent, and Softstar.]
The MCMC approach has trouble with local optima. While the unguided algorithm does not experience this problem, it instead explores a large number of improbable paths to the goal. The proposed method avoids low probability paths and converges much faster than the comparison methods.
MCMC fails to converge on both examples even after 1,200 seconds, matching past experience with
the character data where MCMC proved incapable of efficient inference.
5 Conclusions
In this work, we extended heuristic-guided search techniques for optimal planning to the predictive
inverse optimal control setting. Probabilistic search in these settings is significantly more computationally demanding than A* search, both in theory and in practice, primarily due to key differences between the min and softmin functions. Despite this, we found significant performance improvements over other IOC inference methods by employing heuristic-guided search ideas.
Acknowledgements
This material is based upon work supported by the National Science Foundation under Grant No.
#1227495, Purposeful Prediction: Co-robot Interaction via Understanding Intent and Goals.
References
[1] Peter Abbeel and Andrew Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the International Conference on Machine Learning, pages 1-8, 2004.
[2] Monica Babes, Vukosi Marivate, Kaushik Subramanian, and Michael L. Littman. Apprenticeship learning about multiple intentions. In International Conference on Machine Learning, 2011.
[3] Chris L. Baker, Joshua B. Tenenbaum, and Rebecca R. Saxe. Goal inference as inverse planning. In Conference of the Cognitive Science Society, 2007.
[4] Leonard E. Baum. An inequality and associated maximization technique in statistical estimation for probabilistic functions of Markov processes. Inequalities, 3:1-8, 1972.
[5] Richard Bellman. A Markovian decision process. Journal of Mathematics and Mechanics, 6:679-684, 1957.
[6] Abdeslam Boularias, Jens Kober, and Jan Peters. Relative entropy inverse reinforcement learning. In Proceedings of the International Conference on Artificial Intelligence and Statistics, pages 182-189, 2011.
[7] Arunkumar Byravan, Mathew Monfort, Brian Ziebart, Byron Boots, and Dieter Fox. Graph-based inverse optimal control for robot manipulation. In Proceedings of the International Joint Conference on Artificial Intelligence, 2015.
[8] Rina Dechter and Judea Pearl. Generalized best-first search strategies and the optimality of A*. Journal of the ACM, July 1985.
[9] Edsger W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1959.
[10] Peter J. Green. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82:711-732, 1995.
[11] Peter E. Hart, Nils J. Nilsson, and Bertram Raphael. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, 4:100-107, 1968.
[12] De-An Huang, Amir-massoud Farahmand, Kris M. Kitani, and J. Andrew Bagnell. Approximate maxent inverse optimal control and its application for mental simulation of human interactions. In AAAI, 2015.
[13] Rudolf E. Kalman. When is a linear control system optimal? Trans. ASME, J. Basic Engrg., 86:51-60, 1964.
[14] Brenden M. Lake, Ruslan Salakhutdinov, and Josh Tenenbaum. One-shot learning by inverting a compositional causal process. In NIPS, 2013.
[15] Mathew Monfort, Brenden M. Lake, Brian D. Ziebart, and Joshua B. Tenenbaum. Predictive inverse optimal control in large decision processes via heuristic-based search. In ICML Workshop on Robot Learning, 2013.
[16] Mathew Monfort, Anqi Liu, and Brian Ziebart. Intent prediction and trajectory forecasting via predictive inverse linear-quadratic regulation. In AAAI, 2015.
[17] Gergely Neu and Csaba Szepesvári. Apprenticeship learning using inverse reinforcement learning and gradient methods. In Proceedings of UAI, pages 295-302, 2007.
[18] Andrew Y. Ng and Stuart Russell. Algorithms for inverse reinforcement learning. In Proceedings of the International Conference on Machine Learning, 2000.
[19] Deepak Ramachandran and Eyal Amir. Bayesian inverse reinforcement learning. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 2586-2591, 2007.
[20] Nathan D. Ratliff, J. Andrew Bagnell, and Martin A. Zinkevich. Maximum margin planning. In Proceedings of the International Conference on Machine Learning, pages 729-736, 2006.
[21] Paul Vernaza and Drew Bagnell. Efficient high-dimensional maximum entropy modeling via symmetric partition functions. In Advances in Neural Information Processing Systems, pages 575-583, 2012.
[22] Brian D. Ziebart, J. Andrew Bagnell, and Anind K. Dey. Modeling interaction via the principle of maximum causal entropy. In International Conference on Machine Learning, 2010.
[23] Brian D. Ziebart, Andrew Maas, J. Andrew Bagnell, and Anind K. Dey. Maximum entropy inverse reinforcement learning. In Association for the Advancement of Artificial Intelligence, 2008.
Automatic Learning Rate Maximization by On-Line Estimation of the Hessian's Eigenvectors
Yann LeCun,¹ Patrice Y. Simard,¹ and Barak Pearlmutter²
¹AT&T Bell Laboratories, 101 Crawfords Corner Rd, Holmdel, NJ 07733
²CS&E Dept., Oregon Graduate Institute, 19600 NW von Neumann Dr, Beaverton, OR 97006
Abstract
We propose a very simple and well-principled way of computing
the optimal step size in gradient descent algorithms. The on-line
version is very efficient computationally, and is applicable to large
backpropagation networks trained on large data sets. The main
ingredient is a technique for estimating the principal eigenvalue(s)
and eigenvector(s) of the objective function's second derivative matrix (Hessian), which does not even require calculating the Hessian. Several other applications of this technique are proposed for
speeding up learning, or for eliminating useless parameters.
1 INTRODUCTION
Choosing the appropriate learning rate, or step size, in a gradient descent procedure
such as backpropagation, is simultaneously one of the most crucial and expert-intensive parts of neural-network learning. We propose a method for computing the
best step size which is both well-principled, simple, very cheap computationally,
and, most of all, applicable to on-line training with large networks and data sets.
Learning algorithms that use Gradient Descent minimize an objective function E
of the form
    E(W) = (1/P) Σ_p E^p(W),    E^p = E(W, X^p),    (1)
where W is the vector of parameters (weights), P is the number of training patterns,
and X^p is the p-th training example (including the desired output if necessary). Two
basic versions of gradient descent can be used to minimize E. In the first version,
called the batch version, the exact gradient of E with respect to W is calculated, and the weights are updated by iterating the procedure

    W ← W − η∇E(W),    (2)

where η is the learning rate or step size, and ∇E(W) is the gradient of E with respect to W. In the second version, called on-line, or Stochastic Gradient Descent, the weights are updated after each pattern presentation:

    W ← W − η∇E^p(W).    (3)
Before going any further, we should emphasize that our main interest is in training
large networks on large data sets. As many authors have shown, Stochastic Gradient
Descent (SGD) is much faster on large problems than the "batch" version. In fact,
on large problems, a carefully tuned SGD algorithm outperforms most accelerated
or second-order batch techniques, including Conjugate Gradient. Although there
have been attempts to "stochasticize" second-order algorithms (Becker and Le Cun,
1988) (Moller, 1992), most of the resulting procedures also rely on a global scaling
parameter similar to η. Therefore, there is considerable interest in finding ways of optimizing η.
2 COMPUTING THE OPTIMAL LEARNING RATE: THE RECIPE
In a somewhat unconventional way, we first give our simple "recipe" for computing the optimal learning rate η. In the subsequent sections, we sketch the theory behind the recipe.

Here is the proposed procedure for estimating the optimal learning rate in a backpropagation network trained with Stochastic Gradient Descent. Equivalent procedures for other adaptive machines are straightforward. In the following, the notation N(V) designates the normalized vector V/‖V‖. Let W be the N-dimensional weight vector.

1. Pick a normalized, N-dimensional vector Ψ at random. Pick two small positive constants α and γ, say α = 0.01 and γ = 0.01.
2. Pick a training example (input and desired output) X^p. Perform a regular forward prop and a backward prop. Store the resulting gradient vector G₁ = ∇E^p(W).
3. Add αN(Ψ) to the current weight vector W.
4. Perform a forward prop and a backward prop on the same pattern using the perturbed weight vector. Store the resulting gradient vector G₂ = ∇E^p(W + αN(Ψ)).
5. Update vector Ψ with the running average formula Ψ ← (1 − γ)Ψ + (γ/α)(G₂ − G₁).
6. Restore the weight vector to its original value W.
7. Loop to step 2 until ‖Ψ‖ stabilizes.
8. Set the learning rate η to ‖Ψ‖⁻¹, and go on to a regular training session.
[Figure 1: Gradient descent with optimal learning rate in (a) one dimension, and (b) two dimensions (contour plot).]
The constant α controls the size of the perturbation. A small α gives a better estimate, but is more likely to cause numerical errors. γ controls the tradeoff between the convergence speed of Ψ and the accuracy of the result. It is better to start with a relatively large γ (say 0.1) and progressively decrease it until the fluctuations on ‖Ψ‖ are less than, say, 10%. In our experience accurate estimates can be obtained with between one hundred and a few hundred pattern presentations: for a large problem, the cost is very small compared to a single learning epoch.
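A sketch of the recipe in NumPy terms; the grad(W, x) oracle, standing for one forward and one backward pass on example x, and the fixed example stream are assumptions of the sketch:

```python
import numpy as np

def estimate_learning_rate(grad, W, examples, alpha=0.01, gamma=0.01):
    """On-line estimate of eta = 1/lambda_max following the recipe above.

    grad(W, x) must return the gradient of the per-example loss E^p at
    weights W.  W itself is never modified, so step 6 (restoring the
    weights) is implicit.
    """
    rng = np.random.default_rng(0)
    psi = rng.standard_normal(W.shape)
    psi /= np.linalg.norm(psi)                            # step 1
    for x in examples:                                    # steps 2-7
        g1 = grad(W, x)                                   # step 2
        g2 = grad(W + alpha * psi / np.linalg.norm(psi), x)    # steps 3-4
        psi = (1 - gamma) * psi + (gamma / alpha) * (g2 - g1)  # step 5
    return 1.0 / np.linalg.norm(psi)                      # step 8
```

In practice γ would be annealed as in the experiments of Section 5 (0.1 down to 0.003) and the loop stopped once ‖psi‖ stabilizes.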
3 STEP SIZE, CURVATURE AND EIGENVALUES
The procedure described in the previous section makes ‖Ψ‖ converge to the largest positive eigenvalue of the second derivative matrix of the average objective function. In this section we informally explain why the best learning rate is the inverse of this eigenvalue. More detailed analyses of gradient descent procedures can be found in optimization, statistical estimation, or adaptive filtering textbooks (see for example (Widrow and Stearns, 1985)). For didactic purposes, consider an objective function of the form E(w) = (h/2)(w − z)² + C, where w is a scalar parameter (see Figure 1(a)). Assuming w is the current value of the parameter, what is the optimal η that takes us to the minimum in one step? It is easy to visualize that, as has been known since Newton, the optimal η is the inverse of the second derivative of E, i.e., 1/h. Any smaller or slightly larger value will yield slower convergence. A value more than twice the optimal will cause divergence.
In multiple dimensions, things are more complicated. If the objective function is quadratic, the surfaces of equal cost are ellipsoids (or ellipses in 2D, as shown in Figure 1(b)). Intuitively, if the learning rate is set for optimal convergence along the
direction of largest second derivative, then it will be small enough to ensure (slow)
convergence along all the other directions. This corresponds to setting the learning
rate to the inverse of the second derivative in the direction in which it is the largest.
The largest learning rate that ensures convergence is twice that value. The actual
optimal η is somewhere in between. Setting it to the inverse of the largest second derivative is both safe and close enough to the optimal. The second derivative information is contained in the Hessian matrix of E(W): the symmetric matrix H whose (i, j) component is ∂²E(W)/∂w_i ∂w_j. If the learning machine has N free parameters (weights), H is an N by N matrix. The Hessian can be decomposed (diagonalized) into a product of the form H = RΛRᵀ, where Λ is a diagonal matrix whose diagonal terms (the eigenvalues of H) are the second derivatives of E(W)
along the principal axes of the ellipsoids of equal cost, and R is a rotation matrix
which defines the directions of these principal axes. The direction of largest second
derivative is the principal eigenvector of H, and the largest second derivative is
the corresponding eigenvalue (the largest one). In short, it can be shown that the
optimal learning rate is the inverse of the largest eigenvalue of H:
    η_opt = 1/λ_max.    (4)

4 COMPUTING THE HESSIAN'S LARGEST EIGENVALUE WITHOUT COMPUTING THE HESSIAN
This section derives the recipe given in section 2. Large learning machines, such as
backpropagation networks can have several thousand free parameters. Computing,
or even storing, the full Hessian matrix is often prohibitively expensive. So at first
glance, finding its largest eigenvalue in a reasonable time seems rather hopeless.
We are about to propose a shortcut based on three simple ideas: 1- the Taylor
expansion, 2- the power method, 3- the running average. The method described
here is general, and can be applied to any differentiable objective function that can
be written as an average over "examples" (e.g. RBFs, or other statistical estimation
techniques).
Taylor expansion: Although it is often unrealistic to compute the Hessian H, there is a simple way to approximate the product of H with a vector of our choosing. Let Ψ be an N-dimensional vector, and α a small real constant; the Taylor expansion of the gradient of E(W) around W along the direction Ψ gives us

    HΨ = (∇E(W + αΨ) − ∇E(W))/α + O(α²).    (5)

Assuming E is locally quadratic (i.e., ignoring the O(α²) term), the product of H with any vector Ψ can be estimated by subtracting the gradient of E at W from the gradient at the point (W + αΨ). This is an O(N) process, compared to the O(N²) direct product. In the usual neural network context, this can be done with two forward propagations and two backward propagations. More accurate methods which do not use perturbations for computing HΨ exist, but they are more complicated to implement than this one (Pearlmutter, 1993).
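Equation (5) in code, checked against an explicit Hessian on a toy quadratic (the quadratic is only for illustration):

```python
import numpy as np

def hvp(grad, W, v, alpha=1e-4):
    """Finite-difference Hessian-vector product, Eq. (5): O(N) per call."""
    return (grad(W + alpha * v) - grad(W)) / alpha

# toy check: E(W) = 0.5 W^T H W has gradient H W, so hvp must return H v
H = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda W: H @ W
W = np.array([0.5, -1.0])
v = np.array([1.0, 0.0])
print(hvp(grad, W, v), H @ v)   # both approximately [3. 1.]
```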
The power method: Let λ_max be the largest eigenvalue¹ of H, and V_max the corresponding normalized eigenvector (or a vector in the eigenspace if λ_max is degenerate). If we pick a vector Ψ (say, at random) which is non-orthogonal to V_max, then iterating the procedure

    Ψ ← H N(Ψ)    (6)

will make N(Ψ) converge to V_max, and ‖Ψ‖ converge to |λ_max|. The procedure is slow if good accuracy is required, but a good estimate of the eigenvalue can be obtained with a very small number of iterations (typically about 10). The reason for introducing equation (5) is now clear: we can use it to compute the right hand side of (6), yielding

    Ψ ← (1/α)(∇E(W + αN(Ψ)) − ∇E(W)),    (7)

where Ψ is the current estimate of the principal eigenvector of H, and α is a small constant.

¹Largest in absolute value, not largest algebraically.
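The plain power iteration of Equation (6), run on an explicit matrix for illustration:

```python
import numpy as np

def power_method(H, iters=50, seed=0):
    psi = np.random.default_rng(seed).standard_normal(H.shape[0])
    for _ in range(iters):
        psi = H @ (psi / np.linalg.norm(psi))   # psi <- H N(psi)
    return np.linalg.norm(psi)                  # converges to |lambda_max|

H = np.array([[3.0, 1.0], [1.0, 2.0]])
print(power_method(H))   # about 3.618, the largest eigenvalue of H
```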
The "on-line" version: One iteration of the procedure (7) requires the computation of the gradient of E at two different points of the parameter space. This means that one iteration of (7) is roughly equivalent to two epochs of gradient descent learning (two passes through the entire training set). Since (7) needs to be iterated, say 10 times, the total cost of estimating λ_max would be approximately equivalent to 20 epochs.

This excessive cost can be drastically reduced with an "on-line" version of (7) which exploits the stationarity of the second-order information over large (and redundant) training sets. Essentially, the hidden "average over patterns" in ∇E can be replaced by a running average. The procedure becomes

    Ψ ← (1 − γ)Ψ + (γ/α)(∇E(W + αN(Ψ)) − ∇E(W)),    (8)

where γ is a small constant which controls the tradeoff between the convergence speed and the accuracy². The "recipe" given in section 2 is a direct implementation of (8). Empirically, this procedure yields sufficiently accurate values in a very short time. In fact, in all the cases we have tried, it converged with only a few dozen pattern presentations: a fraction of the time of an entire learning pass through the training set (see the results section). It looks like the essential features of the Hessian can be extracted from only a few examples of the training set. In other words, the largest eigenvalue of the Hessian seems to be mainly determined by the network architecture and initial weights, and by short-term, low-order statistics of the input data. It should be noted that the on-line procedure can only find positive eigenvalues.

²The procedure (8) is not an unbiased estimator of (7). Large values of γ are likely to produce slightly underestimated eigenvalues, but this inaccuracy has no practical consequences.
5 A FEW RESULTS
Experiments will be described for two different network architectures trained on segmented handwritten digits taken from the NIST database. Inputs to the networks
were 28x28 pixel images containing a centered and size-normalized image of the
character. Network 1 was a 4-hidden layer, locally-connected network with shared
weights similar to (Le Cun et al., 1990a) but with fewer feature maps. Each layer
is only connected to the layer above. The input is 32x32 (there is a border around the 28x28 image); layer 1 is 2x28x28, with 5x5 convolutional (shared) connections. Layer 2 is 2x14x14 with 2x2 subsampled, averaging connections. Layer 3 is 4x10x10, with 2x5x5 convolutional connections. Layer 4 is 4x5x5 with 2x2 averaging connections, and the output layer is 10x1x1 with 4x5x5 convolutional connections. The network has a total of 64,638 connections but only 1278 free parameters because of the weight sharing. Network 2 was a regular 784x30x10 fully-connected network (23,860 weights). The sigmoid function used for all units in both nets was 1.7159 tanh((2/3)x). Target outputs were set to +1 for the correct unit, and -1 for the others.
[Figure 2: Convergence of the on-line eigenvalue estimation (Network 1): eigenvalue estimate versus number of pattern presentations for γ = 0.1, 0.03, 0.01, and 0.003.]

To check the validity of our assumptions, we computed the full Hessian of Network 1 on 300 patterns (using finite differences on the gradient) and obtained the eigenvalues and eigenvectors using one of the EISPACK routines. We then computed
the principal eigenvector and eigenvalue using procedures (7) and (8). All three methods agreed within less than a percent on the eigenvalue. An example run of (8) on a 1000 pattern set is shown in Figure 2. A 10% accurate estimate of the
largest eigenvalue is obtained in less than 200 pattern presentations (one fifth of
the database). As can be seen, the value is fairly stable over small portions of the
set, which means that increasing the set size would not require more iterations of
the estimation procedure.
A second series of experiments was run to verify the accuracy of the learning rate
prediction. Network 1 was trained on 1000 patterns, and network 2 on 300 patterns,
both with SGD. Figure 3 shows the Mean Squared Error of the two networks after
1,2,3,4 and 5 passes through the training set as a function of the learning rate, for
one particular initial weight vector. The constant γ was set to 0.1 for the first 20
patterns, 0.03 for the next 60, 0.01 for the next 120, and 0.003 for the next 200 (400
total pattern presentations), but it was found that adequate values were obtained
after only 100 to 200 pattern presentations. The vertical bar represents the value
predicted by the method for that particular run. It is clear that the predicted
optimal value is very close to the correct optimal learning rate. Other experiments
with different training sets and initial weights gave similar results. Depending on
the initial weights, the largest eigenvalue for Network 1 varied between 80 and 250,
and for Network 2 between 250 and 400. Experiments tend to suggest that the
optimal learning rate varies only slightly during the early phase of training. The
learning rate may need to be decreased for long learning sessions, as SGD converts
from the "getting near the minimum" mode to the "wobbling around" mode.
There are many other methods for adjusting the learning rate. Unfortunately, most of them are based on some measurement of the oscillations of the gradient (Jacobs, 1987). Therefore, they are difficult to apply to stochastic gradient descent.
6 MORE ON EIGENVALUES AND EIGENVECTORS
We believe that computing the optimal learning rate is only one of many applications of our eigenvector estimation technique; the procedure can be adapted to serve many purposes.
[Figure 3: Mean squared error after 1, 2, 3, 4, and 5 epochs (from top to bottom) as a function of the ratio between the learning rate η and the learning rate predicted by the proposed method, ‖Ψ‖⁻¹. (a) Network 1 trained on 1000 patterns; (b) Network 2 trained on 300 patterns.]
An important variation of the learning rate estimation arises when, instead of update rule (3), we use a "scaled SGD" rule of the form W ← W − ηΦ∇E^p(W), where Φ is a diagonal matrix (each weight has its own learning rate ηφ_i). For example, each φ_i can be the inverse of the corresponding diagonal term of the average Hessian, which can be computed efficiently as suggested in (Le Cun, 1987; Becker and Le Cun, 1988). Then procedure (8) must be changed to

    Ψ ← (1 − γ)Ψ + γ Φ^{1/2} (1/α)(∇E(W + αΦ^{1/2} N(Ψ)) − ∇E(W)),    (9)

where the terms of Φ^{1/2} are the square roots of the corresponding terms in Φ. More generally, the above formula applies to any transformation of the parameter space whose Jacobian is Φ^{1/2}. The added cost is small since Φ^{1/2} is diagonal.
Another extension of the procedure can compute the first K principal eigenvectors and eigenvalues. The idea is to store K eigenvector estimates Ψₖ, k = 1…K, updated simultaneously with equation (8) (this costs a factor K over estimating only one). We must also ensure that the Ψₖ's remain orthogonal to each other. This can be performed by projecting each Ψₖ onto the space orthogonal to the space subtended by the Ψₗ, l < k. This is an NK process, which is relatively cheap if the network uses shared weights. A generalization of the acceleration method introduced in (Le Cun, Kanter and Solla, 1991) can be implemented with this technique. The idea is to use a "Newton-like" weight update formula of the type

W ← W − Σ_{k=1}^{K} ‖Ψₖ‖⁻¹ pₖ

where pₖ, k = 1…K−1 is the projection of ∇E(W) onto Ψₖ, and p_K is the projection of ∇E(W) on the space orthogonal to the Ψₖ (k = 1…K−1). In theory, this procedure can accelerate the training by a factor ‖Ψ₁‖/‖Ψ_K‖, which is between 3 and 10 for K = 5 in a typical backprop network. Results will be reported in a later publication.
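A compact sketch of the K-eigenvector variant with Gram-Schmidt orthogonalisation, and of the Newton-like update; the Hessian-vector oracle hvp and the loop structure are illustrative assumptions.

```python
import numpy as np

def update_k_eigenvectors(psis, hvp, gamma):
    # psis: list of K current estimates; hvp(v) approximates H v, e.g. via
    # the finite-difference estimate above. Gram-Schmidt projection keeps
    # the estimates mutually orthogonal, so psis[k] tracks the k-th
    # principal eigenvector and ||psis[k]|| its eigenvalue.
    for k in range(len(psis)):
        n_k = psis[k] / np.linalg.norm(psis[k])
        psis[k] = (1 - gamma) * psis[k] + gamma * hvp(n_k)
        for l in range(k):                      # project out earlier directions
            u = psis[l] / np.linalg.norm(psis[l])
            psis[k] -= np.dot(u, psis[k]) * u
    return psis

def newton_like_step(W, grad, psis):
    # Scale the gradient component along each estimated eigenvector by the
    # inverse eigenvalue estimate ||Psi_k||; the orthogonal remainder is
    # scaled by the last (smallest) estimate, as in the update in the text.
    g_rest = grad.copy()
    step = np.zeros_like(W)
    for psi in psis[:-1]:
        u = psi / np.linalg.norm(psi)
        p_k = np.dot(u, grad) * u
        step += p_k / np.linalg.norm(psi)
        g_rest -= p_k
    step += g_rest / np.linalg.norm(psis[-1])
    return W - step
```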
Interestingly, the method can be slightly modified to yield the smallest eigenvalues/eigenvectors. First, the largest eigenvalue λ_max must be computed (or bounded above). Then, by iterating

Ψ ← (1 − γ)Ψ + γ (λ_max N(Ψ) − (1/α)(∇E(W + αN(Ψ)) − ∇E(W)))    (10)

one can compute the eigenvector corresponding to the smallest (probably negative) eigenvalue of (H − λ_max I), which is the same as H's. This can be used to determine the direction(s) of displacement in parameter space that will cause the least increase of the objective function. There are obvious applications of this to weight elimination methods: a better version of OBD (Le Cun et al., 1990b) or a more efficient version of OBS (Hassibi and Stork, 1993).
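A minimal sketch of the shifted iteration (10), assuming a Hessian-vector oracle hvp (e.g. the finite-difference estimate above) and a precomputed bound lam_max; γ and the iteration count are illustrative.

```python
import numpy as np

def estimate_smallest_eigenvector(hvp, lam_max, dim, n_steps, gamma=0.01):
    # Iteration (10): the running average tracks (lam_max I - H) N(Psi),
    # whose dominant direction is the eigenvector of H with the smallest
    # eigenvalue; ||Psi|| estimates lam_max minus that eigenvalue.
    rng = np.random.default_rng(0)
    psi = rng.standard_normal(dim)
    for _ in range(n_steps):
        n_psi = psi / np.linalg.norm(psi)
        psi = (1 - gamma) * psi + gamma * (lam_max * n_psi - hvp(n_psi))
    return psi / np.linalg.norm(psi)
```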
We have proposed efficient methods for (a) computing the product of the Hessian by any vector, and (b) estimating the few eigenvectors of largest or smallest eigenvalues. The methods were successfully applied to the estimation of the optimal learning rate in Stochastic Gradient Descent learning. We feel that we have only scratched the surface of the many applications of the proposed techniques.
Acknowledgements
Yann LeCun and Patrice Simard would like to thank the members of the Adaptive Systems Research Department for their support and comments. Barak Pearlmutter was partially supported by grants NSF ECS-9114333 and ONR N00014-92-J-4062 to John Moody.
References
Becker, S. and Le Cun, Y. (1988). Improving the Convergence of Back-Propagation
Learning with Second-Order Methods. Technical Report CRG-TR-88-5, University of Toronto Connectionist Research Group.
Hassibi, B. and Stork, D. (1993). Optimal Brain Surgeon. In Giles, L., Hanson, S., and Cowan, J., editors, Advances in Neural Information Processing Systems, volume 5, (Denver, 1992). Morgan Kaufmann.
Jacobs, R. A. (1987). Increased Rates of Convergence Through Learning Rate Adaptation. Technical Report COINS-TR87-117, Department of Computer and Information Sciences, University of Massachusetts, Amherst, MA.
Le Cun, Y. (1987). Modèles connexionnistes de l'apprentissage (connectionist learning models). PhD thesis, Université P. et M. Curie (Paris 6).
Le Cun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., and Jackel, L. D. (1990a). Handwritten digit recognition with a backpropagation network. In Touretzky, D., editor, Advances in Neural Information Processing Systems 2 (NIPS*89), Denver, CO. Morgan Kaufmann.
Le Cun, Y., Denker, J. S., Solla, S., Howard, R. E., and Jackel, L. D. (1990b). Optimal Brain Damage. In Touretzky, D., editor, Advances in Neural Information Processing Systems 2 (NIPS*89), Denver, CO. Morgan Kaufmann.
Le Cun, Y., Kanter, I., and Solla, S. (1991). Eigenvalues of covariance matrices: application to neural-network learning. Physical Review Letters, 66(18):2396-2399.
Møller, M. (1992). Supervised learning on large redundant training sets. In Neural Networks for Signal Processing 2. IEEE Press.
Pearlmutter, B. (1993). PhD thesis, Carnegie Mellon University, Pittsburgh, PA.
Widrow, B. and Stearns, S. D. (1985). Adaptive Signal Processing. Prentice-Hall.
Gradient-free Hamiltonian Monte Carlo
with Efficient Kernel Exponential Families
Heiko Strathmann*, Dino Sejdinovic+, Samuel Livingstone°, Zoltan Szabo*, Arthur Gretton*
* Gatsby Unit, University College London
+ Department of Statistics, University of Oxford
° School of Mathematics, University of Bristol
Abstract
We propose Kernel Hamiltonian Monte Carlo (KMC), a gradient-free adaptive
MCMC algorithm based on Hamiltonian Monte Carlo (HMC). On target densities
where classical HMC is not an option due to intractable gradients, KMC adaptively learns the target's gradient structure by fitting an exponential family model
in a Reproducing Kernel Hilbert Space. Computational costs are reduced by two
novel efficient approximations to this gradient. While being asymptotically exact,
KMC mimics HMC in terms of sampling efficiency, and offers substantial mixing
improvements over state-of-the-art gradient free samplers. We support our claims
with experimental studies on both toy and real-world applications, including Approximate Bayesian Computation and exact-approximate MCMC.
1
Introduction
Estimating expectations using Markov Chain Monte Carlo (MCMC) is a fundamental approximate
inference technique in Bayesian statistics. MCMC itself can be computationally demanding, and
the expected estimation error depends directly on the correlation between successive points in the
Markov chain. Therefore, efficiency can be achieved by taking large steps with high probability.
Hamiltonian Monte Carlo [1] is an MCMC algorithm that improves efficiency by exploiting gradient information. It simulates particle movement along the contour lines of a dynamical system
constructed from the target density. Projections of these trajectories cover wide parts of the target's
support, and the probability of accepting a move along a trajectory is often close to one. Remarkably, this property is mostly invariant to growing dimensionality, and HMC here often is superior to
random walk methods, which need to decrease their step size at a much faster rate [1, Sec. 4.4].
Unfortunately, for a large class of problems, gradient information is not available. For example, in
Pseudo-Marginal MCMC (PM-MCMC) [2, 3], the posterior does not have an analytic expression,
but can only be estimated at any given point, e.g. in Bayesian Gaussian Process classification [4]. A
related setting is MCMC for Approximate Bayesian Computation (ABC-MCMC), where the posterior is approximated through repeated simulation from a likelihood model [5, 6]. In both cases,
HMC cannot be applied, leaving random walk methods as the only mature alternative. There have
been efforts to mimic HMC's behaviour using stochastic gradients from mini-batches in Big Data
[7], or stochastic finite differences in ABC [8]. Stochastic gradient based HMC methods, however,
often suffer from low acceptance rates or additional bias that is hard to quantify [9].
Random walk methods can be tuned by matching scaling of steps and target. For example, Adaptive
Metropolis-Hastings (AMH) [10, 11] is based on learning the global scaling of the target from the
history of the Markov chain. Yet, for densities with nonlinear support, this approach does not work
very well. Recently, [12] introduced a Kernel Adaptive Metropolis-Hastings (KAMH) algorithm
whose proposals are locally aligned to the target. By adaptively learning target covariance in a
Reproducing Kernel Hilbert Space (RKHS), KAMH achieves improved sampling efficiency.
In this paper, we extend the idea of using kernel methods to learn efficient proposal distributions [12].
Rather than locally smoothing the target density, however, we estimate its gradients globally. More
precisely, we fit an infinite dimensional exponential family model in an RKHS via score matching
[13, 14]. This is a non-parametric method of modelling the log unnormalised target density as an
RKHS function, and has been shown to approximate a rich class of density functions arbitrarily well.
More importantly, the method has been empirically observed to be relatively robust to increasing
dimensionality, in sharp contrast to classical kernel density estimation [15, Sec. 6.5]. Gaussian
Processes (GP) were also used in [16] as an emulator of the target density in order to speed up
HMC; however, this requires access to the target in closed form, to provide training points for the
GP.
We require our adaptive KMC algorithm to be computationally efficient, as it deals with high-dimensional MCMC chains of growing length. We develop two novel approximations to the infinite
dimensional exponential family model. The first approximation, score matching lite, is based on
computing the solution in terms of a lower dimensional, yet growing, subspace in the RKHS. KMC
with score matching lite (KMC lite) is geometrically ergodic on the same class of targets as standard random walks. The second approximation uses a finite dimensional feature space (KMC finite),
combined with random Fourier features [17]. KMC finite is an efficient online estimator that allows
to use all of the Markov chain history, at the cost of decreased efficiency in unexplored regions. A
choice between KMC lite and KMC finite ultimately depends on the ability to initialise the sampler
within high-density regions of the target; alternatively, the two approaches could be combined.
Experiments show that KMC inherits the efficiency of HMC, and therefore mixes significantly better
than state-of-the-art gradient-free adaptive samplers on a number of target densities, including on
synthetic examples, and when used in PM-MCMC and ABC-MCMC. All code can be found at
https://github.com/karlnapf/kernel_hmc
2
Background and Previous Work
Let the domain of interest 𝒳 be a compact¹ subset of ℝᵈ, and denote the unnormalised target density on 𝒳 by π. We are interested in constructing a Markov chain x₁ → x₂ → … such that lim_{t→∞} xₜ ∼ π. By running the Markov chain for a long time T, we can consistently approximate any expectation w.r.t. π. Markov chains are constructed using the Metropolis-Hastings algorithm, which at the current state xₜ draws a point from a proposal mechanism x* ∼ Q(·|xₜ), and sets xₜ₊₁ ← x* with probability min(1, [π(x*)Q(xₜ|x*)]/[π(xₜ)Q(x*|xₜ)]), and xₜ₊₁ ← xₜ otherwise. We assume that π is intractable,² i.e. that we can neither evaluate π(x) nor³ ∇log π(x) for any x, but can only estimate it unbiasedly via π̂(x). Replacing π(x) with π̂(x) results in PM-MCMC [2, 3], which asymptotically remains exact (exact-approximate inference).
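For concreteness, a minimal pseudo-marginal Metropolis-Hastings step with a symmetric proposal might look as follows; propose and pi_hat are placeholders, and re-using the stored estimate at the current state is what preserves exactness.

```python
import numpy as np

def pm_mh_step(x, pi_hat_x, propose, pi_hat, rng):
    # propose(x, rng) draws x* ~ Q(.|x) for a symmetric Q, so the Q-ratio cancels.
    # pi_hat(x) returns an unbiased, nonnegative estimate of the target density.
    # Crucially, the estimate pi_hat_x at the current state is re-used, not
    # re-drawn, which is what keeps the chain exactly pi-invariant.
    x_star = propose(x, rng)
    pi_hat_star = pi_hat(x_star)
    if rng.uniform() < min(1.0, pi_hat_star / pi_hat_x):
        return x_star, pi_hat_star
    return x, pi_hat_x
```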
(Kernel) Adaptive Metropolis-Hastings In the absence of ∇log π, the usual choice of Q is a random walk, i.e. Q(·|xₜ) = 𝒩(·|xₜ, Σₜ). A popular choice of the scaling is Σₜ ∝ I. When the scale of the target density is not uniform across dimensions, or if there are strong correlations, the AMH algorithm [10, 11] improves mixing by adaptively learning the global covariance structure of π from the history of the Markov chain. For cases where the local scaling does not match the global covariance of π, i.e. the support of the target is nonlinear, KAMH [12] improves mixing by learning the target covariance in a RKHS. KAMH proposals are Gaussian with a covariance that matches the local covariance of π around the current state xₜ, without requiring access to ∇log π.
Hamiltonian Monte Carlo Hamiltonian Monte Carlo (HMC) uses deterministic, measure-preserving maps to generate efficient Markov transitions [1, 18]. Starting from the negative log target, referred to as the potential energy U(q) = −log π(q), we introduce an auxiliary momentum variable p ∼ exp(−K(p)) with p ∈ 𝒳. The joint distribution of (p, q) is then proportional to exp(−H(p, q)), where H(p, q) := K(p) + U(q) is called the Hamiltonian. H(p, q) defines a Hamiltonian flow, parametrised by a trajectory length t ∈ ℝ, which is a map φₜᴴ : (p, q) ↦ (p*, q*) for which H(p*, q*) = H(p, q). This allows constructing π-invariant Markov chains: for a chain at state q = xₜ, repeatedly (i) re-sample p⁰ ∼ exp(−K(·)), and then (ii) apply the Hamiltonian flow for time t, giving (p*, q*) = φₜᴴ(p⁰, q). The flow can be generated by the Hamiltonian operator

(∂K/∂p)(∂/∂q) − (∂U/∂q)(∂/∂p).    (1)

In practice, (1) is usually unavailable and we need to resort to approximations. Here, we limit ourselves to the leap-frog integrator; see [1] for details. To correct for discretisation error, a Metropolis acceptance procedure can be applied: starting from (p⁰, q), the end-point of the approximate trajectory is accepted with probability min[1, exp(−H(p*, q*) + H(p⁰, q))]. HMC is often able to propose distant, uncorrelated moves with a high acceptance probability.
(Footnote 1: The compactness restriction is imposed to satisfy the assumptions in [13].)
(Footnote 2: π is analytically intractable, as opposed to computationally expensive in the Big Data context.)
(Footnote 3: Throughout the paper ∇ denotes the gradient operator w.r.t. x.)
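For reference, a minimal leap-frog integrator and HMC acceptance check, assuming a Gaussian kinetic energy K(p) = ‖p‖²/2; grad_U, the step size eps, and the number of steps L are placeholders.

```python
import numpy as np

def leapfrog(q, p, grad_U, eps, L):
    # Standard leap-frog: half step in p, L-1 full steps, final half step.
    p = p - 0.5 * eps * grad_U(q)
    for _ in range(L - 1):
        q = q + eps * p
        p = p - eps * grad_U(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)
    return q, p

def hmc_step(q, U, grad_U, eps, L, rng):
    p0 = rng.standard_normal(q.shape)            # (i) resample momentum
    q_star, p_star = leapfrog(q, p0, grad_U, eps, L)   # (ii) simulate the flow
    h0 = U(q) + 0.5 * p0 @ p0                    # H(p0, q)
    h_star = U(q_star) + 0.5 * p_star @ p_star   # H(p*, q*)
    # Accept with probability min(1, exp(-H* + H0)).
    return q_star if rng.uniform() < np.exp(min(0.0, h0 - h_star)) else q
```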
Intractable densities In many cases the gradient of log π(q) = −U(q) cannot be written in closed form, leaving random-walk based methods as the state-of-the-art [11, 12]. We aim to overcome random-walk behaviour, so as to obtain significantly more efficient sampling [1].
3
Kernel Induced Hamiltonian Dynamics
KMC replaces the potential energy in (1) by a kernel induced surrogate computed from the history of the Markov chain. This surrogate does not require gradients of the log-target density. The surrogate induces a kernel Hamiltonian flow, which can be numerically simulated using standard leap-frog integration. As with the discretisation error in HMC, any deviation of the kernel induced flow from the true flow is corrected via a Metropolis acceptance procedure. This also contains the estimation noise from π̂ and re-uses previous values of π̂, c.f. [3, Table 1]. Consequently, the stationary distribution of the chain remains correct, given that we take care when adapting the surrogate.
Infinite Dimensional Exponential Families in a RKHS We construct a kernel induced potential energy surrogate whose gradients approximate the gradients of the true potential energy U in (1), without accessing π or ∇π directly, but only using the history of the Markov chain. To that end, we model the (unnormalised) target density π(x) with an infinite dimensional exponential family model [13] of the form

const × π(x) ≈ exp(⟨f, k(x, ·)⟩_ℋ − A(f)),    (2)

which in particular implies ∇f ≈ −∇U = ∇log π. Here ℋ is a RKHS of real valued functions on 𝒳. The RKHS has a uniquely associated symmetric, positive definite kernel k : 𝒳 × 𝒳 → ℝ, which satisfies f(x) = ⟨f, k(x, ·)⟩_ℋ for any f ∈ ℋ [19]. The canonical feature map k(·, x) ∈ ℋ here takes the role of the sufficient statistics while f ∈ ℋ are the natural parameters, and A(f) := log ∫_𝒳 exp(⟨f, k(x, ·)⟩_ℋ) dx is the cumulant generating function. Eq. (2) defines a broad class of densities: when universal kernels are used, the family is dense in the space of continuous densities on compact domains, with respect to e.g. Total Variation and KL [13, Section 3]. It is possible to consistently fit an unnormalised version of (2) by directly minimising the expected gradient mismatch between the model (2) and the true target density π (observed through the Markov chain history). This is achieved by generalising the score matching approach [14] to infinite dimensional parameter spaces. The technique avoids the problem of dealing with the intractable A(f), and reduces the problem to solving a linear system. More importantly, the approach is observed to be relatively robust to increasing dimensions. We return to estimation in Section 4, where we develop two efficient approximations. For now, assume access to an f̂ ∈ ℋ such that ∇f̂(x) ≈ ∇log π(x).
Kernel Induced Hamiltonian Flow We define a kernel induced Hamiltonian operator by replacing U in the potential energy part (∂U/∂q)(∂/∂p) in (1) by our kernel surrogate Uₖ = −f̂. It is clear that, depending on Uₖ, the resulting kernel induced Hamiltonian flow differs from the original one. That said, any bias on the resulting Markov chain, in addition to discretisation error from the leap-frog integrator, is naturally corrected for in the Pseudo-Marginal Metropolis step. We accept an end-point φₜ^{Hₖ}(p⁰, q) of a trajectory starting at (p⁰, q) along the kernel induced flow with probability

min[1, exp(−H(φₜ^{Hₖ}(p⁰, q)) + H(p⁰, q))],    (3)

where H(φₜ^{Hₖ}(p⁰, q)) corresponds to the true Hamiltonian at φₜ^{Hₖ}(p⁰, q). Here, in the Pseudo-Marginal context, we replace both terms in the ratio in (3) by unbiased estimates, i.e., we replace
[Figure 1 plots: acceptance probability (y-axis, 0.70 to 1.00) versus number of leap-frog steps (x-axis, 0 to 500), for HMC (left) and KMC (right).]
Figure 1: Hamiltonian trajectories on a 2-dimensional standard Gaussian. End points of such trajectories (red stars to blue stars) form the proposal of HMC-like algorithms. Left: Plain Hamiltonian
trajectories oscillate on a stable orbit, and acceptance probability is close to one. Right: Kernel
induced trajectories and acceptance probabilities on an estimated energy function.
π(q) within H with an unbiased estimator π̂(q). Note that this also involves 'recycling' the estimates of H from previous iterations to ensure asymptotic correctness, c.f. [3, Table 1]. Any deviations of the kernel induced flow from the true flow result in a decreased acceptance probability (3). We therefore need to control the approximation quality of the kernel induced potential energy to maintain high acceptance probability in practice. See Figure 1 for an illustrative example.
4
Two Efficient Estimators for Exponential Families in RKHS
We now address estimating the infinite dimensional exponential family model (2) from data. The
original estimator in [13] has a large computational cost. This is problematic in the adaptive MCMC
context, where the model has to be updated on a regular basis. We propose two efficient approximations, each with its strengths and weaknesses. Both are based on score matching.
4.1 Score Matching
Following [14], we model an unnormalised log probability density log π(x) with a parametric model

log π̃_Z(x; f) := log π̃(x; f) − log Z(f),    (4)

where f is a collection of parameters of yet unspecified dimension (c.f. natural parameters of (2)), and Z(f) is an unknown normalising constant. We aim to find f̂ from a set of n samples⁴ 𝒟 := {xᵢ}ᵢ₌₁ⁿ ∼ π such that π(x) ≈ π̃(x; f̂) · const. From [14, Eq. 2], the criterion being optimised is
the expected squared distance between gradients of the log density, so-called score functions,

J(f) = (1/2) ∫_𝒳 π(x) ‖∇log π̃(x; f) − ∇log π(x)‖₂² dx,

where we note that the normalising constants vanish from taking the gradient ∇. As shown in [14, Theorem 1], it is possible to compute an empirical version without accessing π(x) or ∇log π(x) other than through observed samples,

Ĵ(f) = (1/n) Σ_{x∈𝒟} Σ_{ℓ=1}^{d} [ ∂²log π̃(x; f)/∂x_ℓ² + (1/2)(∂log π̃(x; f)/∂x_ℓ)² ].    (5)
Our approximations of the original model (2) are based on minimising (5) using approximate scores.
4.2
Infinite Dimensional Exponential Families Lite
The original estimator of f in (2) takes a dual form in a RKHS sub-space spanned by nd + 1 kernel
derivatives [13, Thm. 4]. The update of the proposal at iteration t of MCMC requires inversion of a (td + 1) × (td + 1) matrix. This is clearly prohibitive if we are to run even a moderate number of iterations of a Markov chain. Following [12], we take a simple approach to avoid prohibitive computational costs in t: we form a proposal using a random sub-sample of fixed size n from the Markov chain history, z := {zᵢ}ᵢ₌₁ⁿ ⊆ {xᵢ}ᵢ₌₁ᵗ. In order to avoid excessive computation when d is large, we replace the full dual solution with a solution in terms of span({k(zᵢ, ·)}ᵢ₌₁ⁿ), which covers the support of the true density by construction, and grows with increasing n. That is, we assume that the model (4) takes the 'light' form
(Footnote 4: We assume a fixed sample set here but will use both the full chain history {xᵢ}ᵢ₌₁ᵗ or a sub-sample later.)
f(x) = Σ_{i=1}^{n} αᵢ k(zᵢ, x),    (6)
where α ∈ ℝⁿ are real valued parameters that are obtained by minimising the empirical score matching objective (5). This representation is of a form similar to [20, Section 4.1], the main differences being that the basis functions are chosen randomly, the basis set grows with n, and we will require an additional regularising term. The estimator is summarised in the following proposition, which is proved in Appendix A.

Proposition 1. Given a set of samples z = {zᵢ}ᵢ₌₁ⁿ and assuming f(x) = Σ_{i=1}^{n} αᵢ k(zᵢ, x) for the Gaussian kernel of the form k(x, y) = exp(−σ⁻¹‖x − y‖₂²), and λ > 0, the unique minimiser of the λ‖f‖²_ℋ-regularised empirical score matching objective (5) is given by

α̂_λ = −(σ/2)(C + λI)⁻¹ b,    (7)
where b ∈ ℝⁿ and C ∈ ℝⁿˣⁿ are given by

b = (2/σ) Σ_{ℓ=1}^{d} (K s_ℓ + D_{s_ℓ} K𝟙 − 2 D_{x_ℓ} K x_ℓ) − K𝟙  and  C = Σ_{ℓ=1}^{d} [D_{x_ℓ} K − K D_{x_ℓ}][K D_{x_ℓ} − D_{x_ℓ} K],

with entry-wise products s_ℓ := x_ℓ ⊙ x_ℓ and D_x := diag(x).
The estimator costs O(n³ + dn²) computation (for computing C, b, and for inverting C) and O(n²) storage, for a fixed random chain history sub-sample size n. This can be further reduced via low-rank approximations to the kernel matrix and conjugate gradient methods, which are derived in Appendix A. Gradients of the model are given as ∇f(x) = Σ_{i=1}^{n} αᵢ ∇k(x, zᵢ), i.e. they simply require evaluating gradients of the kernel function. Evaluation and storage of ∇f(·) both cost O(dn).
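To make Proposition 1 concrete, a NumPy sketch of the lite fit and its gradient is given below; since the displayed formulas for b and C were reconstructed from a garbled source, the exact constant placement here is illustrative rather than authoritative.

```python
import numpy as np

def fit_kmc_lite(Z, sigma, lam):
    # Z: (n, d) sub-sample of the chain history; returns alpha for
    # f(x) = sum_i alpha_i k(z_i, x) with k(x, y) = exp(-||x - y||^2 / sigma).
    n, d = Z.shape
    sq = np.sum(Z**2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * Z @ Z.T) / sigma)
    ones = np.ones(n)
    b = np.zeros(n)
    C = np.zeros((n, n))
    for l in range(d):
        x_l = Z[:, l]
        s_l = x_l * x_l
        b += K @ s_l + s_l * (K @ ones) - 2 * x_l * (K @ x_l)
        M = x_l[:, None] * K - K * x_l[None, :]   # D_{x_l} K - K D_{x_l}
        C += M @ (-M)            # [D K - K D][K D - D K]; M is antisymmetric
    b = (2.0 / sigma) * b - K @ ones
    alpha = -(sigma / 2.0) * np.linalg.solve(C + lam * np.eye(n), b)
    return alpha

def grad_f_lite(x, Z, alpha, sigma):
    # grad f(x) = sum_i alpha_i grad_x k(x, z_i),
    # with grad_x k(x, z) = -(2/sigma) (x - z) k(x, z).
    k = np.exp(-np.sum((x - Z)**2, axis=1) / sigma)
    return -(2.0 / sigma) * ((alpha * k)[:, None] * (x - Z)).sum(axis=0)
```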
4.3 Exponential Families in Finite Feature Spaces
Instead of fitting an infinite-dimensional model on a subset of the available data, the second estimator is based on fitting a finite dimensional approximation using all available data {xᵢ}ᵢ₌₁ᵗ, in primal form. As we will see, updating the estimator when a new data point arrives can be done online. Define an m-dimensional approximate feature space ℋₘ = ℝᵐ, and denote by φ_x ∈ ℋₘ the embedding of a point x ∈ 𝒳 = ℝᵈ into ℋₘ = ℝᵐ. Assume that the embedding approximates the kernel function as a finite rank expansion k(x, y) ≈ φ_xᵀφ_y. The log unnormalised density of the infinite model (2) can be approximated by assuming the model in (4) takes the form

f(x) = ⟨θ, φ_x⟩_{ℋₘ} = θᵀφ_x    (8)

To fit θ ∈ ℝᵐ, we again minimise the score matching objective (5), as proved in Appendix B.
Proposition 2. Given a set of samples x = {xᵢ}ᵢ₌₁ᵗ and assuming f(x) = θᵀφ_x for a finite dimensional feature embedding x ↦ φ_x ∈ ℝᵐ, and λ > 0, the unique minimiser of the λ‖θ‖₂²-regularised empirical score matching objective (5) is given by

θ̂_λ := (C + λI)⁻¹ b,    (9)

where

b := −(1/t) Σ_{i=1}^{t} Σ_{ℓ=1}^{d} φ̈ℓ_{xᵢ} ∈ ℝᵐ,   C := (1/t) Σ_{i=1}^{t} Σ_{ℓ=1}^{d} φ̇ℓ_{xᵢ} (φ̇ℓ_{xᵢ})ᵀ ∈ ℝᵐˣᵐ,

with φ̇ℓ_x := ∂φ_x/∂x_ℓ and φ̈ℓ_x := ∂²φ_x/∂x_ℓ².
An example feature embedding based on random Fourier features [17, 21] and a standard Gaussian kernel is φ_x = √(2/m) [cos(ω₁ᵀx + u₁), …, cos(ωₘᵀx + uₘ)]ᵀ, with ωᵢ ∼ 𝒩(0, I) and uᵢ ∼ Uniform[0, 2π].
The estimator has a one-off cost of O(tdm² + m³) computation and O(m²) storage. Given that we have computed a solution based on the Markov chain history {xᵢ}ᵢ₌₁ᵗ, however, it is straightforward to update C, b, and the solution θ̂_λ online, after a new point xₜ₊₁ arrives. This is achieved by storing running averages and performing low-rank updates of matrix inversions, and costs O(dm²) computation and O(m²) storage, independent of t. Further details are given in Appendix B. Gradients of the model are ∇f(x) = [∇φ_x]ᵀθ̂, i.e., they require the evaluation of the gradient of the feature space embedding, costing O(md) computation and O(m) storage.
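A sketch of the finite estimator with random Fourier features for a Gaussian kernel, in batch form for clarity (the online rank-d updates are omitted); the derivative formulas follow from φ_x = √(2/m) cos(ωᵀx + u), and all names are illustrative.

```python
import numpy as np

def make_rff(d, m, rng):
    omega = rng.standard_normal((m, d))   # spectral draws for a unit Gaussian kernel
    u = rng.uniform(0.0, 2.0 * np.pi, m)
    return omega, u

def fit_kmc_finite(X, omega, u, lam):
    # X: (t, d) chain history. With phi_x = sqrt(2/m) cos(omega x + u):
    #   d phi / dx_l     = -sqrt(2/m) sin(omega x + u) * omega[:, l]
    #   d^2 phi / dx_l^2 = -sqrt(2/m) cos(omega x + u) * omega[:, l]^2
    t, d = X.shape
    m = omega.shape[0]
    scale = np.sqrt(2.0 / m)
    A = X @ omega.T + u                    # (t, m) arguments omega^T x + u
    cos_A, sin_A = np.cos(A), np.sin(A)
    b = np.zeros(m)
    C = np.zeros((m, m))
    for l in range(d):
        w_l = omega[:, l]
        b += scale * (cos_A * w_l**2).sum(axis=0)   # minus the second derivatives
        dphi = -scale * sin_A * w_l                 # (t, m) first derivatives
        C += dphi.T @ dphi
    return np.linalg.solve(C / t + lam * np.eye(m), b / t)

def grad_f_finite(x, theta, omega, u):
    m = omega.shape[0]
    s = -np.sqrt(2.0 / m) * np.sin(omega @ x + u)   # (m,)
    return (s * theta) @ omega                      # [grad phi_x]^T theta
```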
Algorithm 1 Kernel Hamiltonian Monte Carlo (pseudo-code)
Input: Target (possibly noisy estimator) π̂, adaptation schedule aₜ, HMC parameters, size of basis m or sub-sample size n.
At iteration t + 1, with current state xₜ and history {xᵢ}ᵢ₌₁ᵗ, perform (1-4) with probability aₜ:
KMC lite:
  1. Update sub-sample z ⊆ {xᵢ}ᵢ₌₁ᵗ
  2. Re-compute C, b from Prop. 1
  3. Solve α̂_λ = −(σ/2)(C + λI)⁻¹ b
  4. ∇f̂(x) ← Σ_{i=1}^{n} αᵢ ∇k(x, zᵢ)
KMC finite:
  1. Update C, b from Prop. 2
  2. Perform rank-d update to C⁻¹
  3. Update θ̂_λ = (C + λI)⁻¹ b
  4. ∇f̂(x) ← [∇φ_x]ᵀ θ̂
5. Propose (p⁰, x*) with kernel induced Hamiltonian flow, using ∇ₓU ≈ −∇ₓf̂
6. Perform Metropolis step using π̂: accept xₜ₊₁ ← x* w.p. (3) and reject xₜ₊₁ ← xₜ otherwise
If π̂ is noisy and x* was accepted, store the above π̂(x*) for evaluating (3) in the next iteration
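Putting steps 5 and 6 together, one full KMC transition might look like the following sketch; it reuses the leapfrog helper from the HMC sketch above, grad_f is the fitted surrogate gradient, and log_pi_hat is a log-density estimate (a simplification: the pseudo-marginal argument formally requires unbiasedness of π̂ itself, not of its logarithm).

```python
import numpy as np

def kmc_step(x, log_pi_hat_x, grad_f, log_pi_hat, eps, L, rng):
    # Surrogate potential gradient: grad U_k(x) = -grad f(x).
    grad_U = lambda q: -grad_f(q)
    p0 = rng.standard_normal(x.shape)
    x_star, p_star = leapfrog(x, p0, grad_U, eps, L)
    log_pi_star = log_pi_hat(x_star)
    # Acceptance (3) uses the (estimated) true Hamiltonian at both ends, so
    # surrogate error only lowers acceptance, it does not bias the chain.
    log_ratio = (log_pi_star - 0.5 * p_star @ p_star) \
              - (log_pi_hat_x - 0.5 * p0 @ p0)
    if np.log(rng.uniform()) < min(0.0, log_ratio):
        return x_star, log_pi_star      # store the new estimate for re-use
    return x, log_pi_hat_x
```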
5
Kernel Hamiltonian Monte Carlo
Constructing a kernel induced Hamiltonian flow as in Section 3 from the gradients of the infinite dimensional exponential family model (2), and approximate estimators (6), (8), we arrive at a gradient-free, adaptive MCMC algorithm: Kernel Hamiltonian Monte Carlo (Algorithm 1).
Computational Efficiency, Geometric Ergodicity, and Burn-in KMC finite using (8) allows for
online updates using the full Markov chain history, and therefore is a more elegant solution than
KMC lite, which has greater computational cost and requires sub-sampling the chain history. Due
to the parametric nature of KMC finite, however, the tails of the estimator are not guaranteed to
decay. For example, the random Fourier feature embedding described below Proposition 2 contains
periodic cosine functions, and therefore oscillates in the tails of (8), resulting in a reduced acceptance
probability. As we will demonstrate in the experiments, this problem does not appear when KMC
finite is initialised in high-density regions, nor after burn-in. In situations where information about
the target density support is unknown, and during burn-in, we suggest to use the lite estimator (7),
whose gradients decay outside of the training data. As a result, KMC lite is guaranteed to fall back
to a Random Walk Metropolis in unexplored regions, inheriting its convergence properties, and
smoothly transitions to HMC-like proposals as the MCMC chain grows. A proof of the proposition
below can be found in Appendix C.
Proposition 3. Assume d = 1, π(x) has log-concave tails, the regularity conditions of [22, Thm 2.2] (implying π-irreducibility and smallness of compact sets), that MCMC adaptation stops after a fixed time, and a fixed number L of ε-leapfrog steps. If lim sup_{‖x‖₂→∞} ‖∇f̂(x)‖₂ = 0, and ∃M : ∀x : ‖∇f̂(x)‖₂ ≤ M, then KMC lite is geometrically ergodic from π-almost any starting point.
Vanishing adaptation MCMC algorithms that use the history of the Markov chain for constructing proposals might not be asymptotically correct. We follow [12, Sec. 4.2] and the idea of 'vanishing adaptation' [11], to avoid such biases. Let {aₜ}ₜ₌₀^∞ be a schedule of decaying probabilities such that lim_{t→∞} aₜ = 0 and Σ_{t=0}^{∞} aₜ = ∞. We update the density gradient estimate according to this schedule in Algorithm 1. Intuitively, adaptation becomes less likely as the MCMC chain progresses, but never fully stops, while sharing asymptotic convergence with adaptation that stops at a fixed point [23, Theorem 1]. Note that Proposition 3 is a stronger statement about the convergence rate.
Free Parameters KMC has two free parameters: the Gaussian kernel bandwidth σ, and the regularisation parameter λ. As KMC's performance depends on the quality of the approximate infinite dimensional exponential family model in (6) or (8), a principled approach is to use the score matching objective function in (5) to choose σ, λ pairs via cross-validation (using e.g. 'hot-started' black-box optimisation). Earlier adaptive kernel-based MCMC methods [12] did not address parameter choice.
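A sketch of how the empirical objective (5) can score candidate (σ, λ) pairs on held-out points, given callables for the fitted model's first and second derivatives (all names illustrative):

```python
import numpy as np

def score_matching_objective(X_val, grad_f, second_derivs_f):
    # Empirical J-hat from (5): average over held-out points of the sum over
    # dimensions of d^2 f / dx_l^2 + 0.5 (d f / dx_l)^2; lower is better.
    total = 0.0
    for x in X_val:
        total += np.sum(second_derivs_f(x) + 0.5 * grad_f(x) ** 2)
    return total / len(X_val)
```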
6
Experiments
We start by quantifying performance of KMC finite on synthetic targets. We emphasise that these
results can be reproduced with the lite version.
[Figure 2 plots: left, acceptance probability as a heat map over n = m (x-axis, 10⁰ to 10⁴) and d (y-axis, 10⁰ to 10²); middle, a slice at n = 5000 versus d; right, a slice at d = 8 versus n = m; legend: HMC, KMC median, KMC 25%-75%, KMC 5%-95%.]
Figure 2: Hypothetical acceptance probability of KMC finite on a challenging target in growing dimensions. Left: As a function of n = m (x-axis) and d (y-axis). Middle/right: Slices through the left plot with error bars, for fixed n = m as a function of d (middle), and for fixed d as a function of n = m (right).
[Figure 3 plots: acceptance rate, ‖Ê[X]‖, and minimum ESS (left to right) versus n (0 to 2000), for HMC, KMC, RW, and KAMH.]
Figure 3: Results for the 8-dimensional synthetic Banana. As the amount of observed data increases, KMC performance approaches HMC, outperforming KAMH and RW. 80% error bars over 30 runs.
KMC Finite: Stability of Trajectories in High Dimensions In order to quantify efficiency in growing dimensions, we study hypothetical acceptance rates along trajectories on the kernel induced Hamiltonian flow (no MCMC yet) on a challenging Gaussian target: We sample the diagonal entries of the covariance matrix from a Gamma(1,1) distribution and rotate with a uniformly sampled random orthogonal matrix. The resulting target is challenging to estimate due to its 'non-singular smoothness', i.e., substantially differing length-scales across its principal components. As a single Gaussian kernel is not able to efficiently represent such scaling families, we use a rational quadratic kernel for the gradient estimation, whose random features are straightforward to compute. Figure 2 shows the average acceptance over 100 independent trials as a function of the number of (ground truth) samples and basis functions, which are set to be equal n = m, and of dimension d. In low to moderate dimensions, gradients of the finite estimator lead to acceptance rates comparable to plain HMC. On targets with more 'regular' smoothness, the estimator performs well in up to d ≈ 100, with less variance. See Appendix D.1 for details.
KMC Finite: HMC-like Mixing on a Synthetic Example We next show that KMC's performance approaches that of HMC as it sees more data. We compare KMC, HMC, an isotropic random walk (RW), and KAMH on the 8-dimensional nonlinear banana-shaped target; see Appendix D.2. We here only quantify mixing after a sufficient burn-in (burn-in speed is included in the next example). We quantify performance on estimating the target's mean, which is exactly 0. We tuned the scaling of KAMH and RW to achieve 23% acceptance. We set HMC parameters to achieve 80% acceptance and then used the same parameters for KMC. We ran all samplers for 2000+200 iterations from a random start point, discarded the burn-in and computed acceptance rates, the norm of the empirical mean ‖Ê[x]‖, and the minimum effective sample size (ESS) across dimensions. For KAMH and KMC, we repeated the experiment for an increasing number of burn-in samples and basis functions m = n. Figure 3 shows the results as a function of m = n. KMC clearly outperforms RW and KAMH, and eventually achieves performance close to HMC as n = m grows.
KMC Lite: Pseudo-Marginal MCMC for GP Classification on Real World Data We next
apply KMC to sample from the marginal posterior over hyper-parameters of a Gaussian Process
Classification (GPC) model on the UCI Glass dataset [24]. Classical HMC cannot be used for this
problem, due to the intractability of the marginal data likelihood. Our experimental protocol mostly
follows [12, Section 5.1], see Appendix D.3, but uses only 6000 MCMC iterations without discarding a burn-in, i.e., we study how fast KMC initially explores the target. We compare convergence in
terms of all mixed moments of order up to 3 to a set of benchmark samples (MMD [25], lower is better). KMC randomly uses between 1 and 10 leapfrog steps of a size chosen uniformly in [0.01, 0.1],
[Figure 4 plots: left, MMD from ground truth (log scale, 10² to 10⁷) versus iterations (0 to 5000) for KMC, KAMH, and RW; middle, autocorrelation versus lag (0 to 100) for KMC, RW, and HABC; right, the marginal posterior p(θ₁) over θ₁ (roughly −10 to 50).]
Figure 4: Left: Results for 9-dimensional marginal posterior over length scales of a GPC model applied to the UCI Glass dataset. The plot shows convergence (no burn-in discarded) of all mixed moments up to order 3 (lower MMD is better). Middle/right: ABC-MCMC auto-correlation and marginal θ₁ posterior for a 10-dimensional skew normal likelihood. While KMC mixes as well as HABC, it does not suffer from any bias (it overlaps with RW, while HABC is significantly different) and requires fewer simulations per proposal.
a standard Gaussian momentum, and a kernel tuned by cross-validation, see Appendix D.3. We did not extensively tune the HMC parameters of KMC as the described settings were sufficient. Both KMC and KAMH used 1000 samples from the chain history. Figure 4 (left) shows that KMC's burn-in contains a short 'exploration phase' where produced estimates are bad, due to it falling back to a random walk in unexplored regions, c.f. Proposition 3. From around 500 iterations, however, KMC clearly outperforms both RW and the earlier state-of-the-art KAMH. These results are backed by the minimum ESS (not plotted), which is around 415 for KMC and around 35 and 25 for KAMH and RW, respectively. Note that all samplers effectively stop improving from 3000 iterations, indicating a burn-in bias. All samplers took 1h time, with most time spent estimating the marginal likelihood.
KMC Lite: Reduced Simulations and no Additional Bias in ABC We now apply KMC in the context of Approximate Bayesian Computation (ABC), which is often employed when the data likelihood is intractable but can be obtained by simulation, see e.g. [6]. ABC-MCMC [5] targets an approximate posterior by constructing an unbiased Monte Carlo estimator of the approximate likelihood. As each such evaluation requires expensive simulations from the likelihood, the goal of all ABC methods is to reduce the number of such simulations. Accordingly, Hamiltonian ABC was recently proposed [8], combining the synthetic likelihood approach [26] with gradients based on stochastic finite differences. We remark that this requires simulating from the likelihood in every leapfrog step, and that the additional bias from the Gaussian likelihood approximation can be problematic. In contrast, KMC does not require simulations to construct a proposal, but rather 'invests' simulations into an accept/reject step (3) that ensures convergence to the original ABC target. Figure 4 (right) compares performance of RW, HABC (sticky random numbers and SPAS, [8, Sec. 4.3, 4.4]), and KMC on a 10-dimensional skew-normal distribution p(y|θ) = 2𝒩(θ, I)Φ(⟨α, y⟩) with α = θ = 𝟙₁₀. KMC mixes as well as HABC, but HABC suffers from a severe bias. KMC also reduces the number of simulations per proposal by a factor of 2L = 100. See Appendix D.4 for details.
7
Discussion
We have introduced KMC, a kernel-based gradient-free adaptive MCMC algorithm that mimics HMC's behaviour by estimating target gradients in an RKHS. In experiments, KMC outperforms random walk based sampling methods in up to d = 50 dimensions, including the recent kernel-based KAMH [12]. KMC is particularly useful when gradients of the target density are unavailable,
as in PM-MCMC or ABC-MCMC, where classical HMC cannot be used. We have proposed two
efficient empirical estimators for the target gradients, each with different strengths and weaknesses,
and have given experimental evidence for the robustness of both.
Future work includes establishing theoretical consistency and uniform convergence rates for the
empirical estimators, for example via using recent analysis of random Fourier Features with tight
bounds [21], and a thorough experimental study in the ABC-MCMC context where we see a lot of
potential for KMC. It might also be possible to use KMC as a precomputing strategy to speed up
classical HMC as in [27]. For code, see https://github.com/karlnapf/kernel_hmc
References
[1] R.M. Neal. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2, 2011.
[2] M.A. Beaumont. Estimation of population growth or decline in genetically monitored populations. Genetics, 164(3):1139-1160, 2003.
[3] C. Andrieu and G.O. Roberts. The pseudo-marginal approach for efficient Monte Carlo computations. The Annals of Statistics, 37(2):697-725, April 2009.
[4] M. Filippone and M. Girolami. Pseudo-marginal Bayesian inference for Gaussian Processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014.
[5] P. Marjoram, J. Molitor, V. Plagnol, and S. Tavaré. Markov chain Monte Carlo without likelihoods. Proceedings of the National Academy of Sciences, 100(26):15324-15328, 2003.
[6] S.A. Sisson and Y. Fan. Likelihood-free Markov chain Monte Carlo. Handbook of Markov chain Monte Carlo, 2010.
[7] T. Chen, E. Fox, and C. Guestrin. Stochastic Gradient Hamiltonian Monte Carlo. In ICML, pages 1683-1691, 2014.
[8] E. Meeds, R. Leenders, and M. Welling. Hamiltonian ABC. In UAI, 2015.
[9] M. Betancourt. The Fundamental Incompatibility of Hamiltonian Monte Carlo and Data Subsampling. arXiv preprint arXiv:1502.01510, 2015.
[10] H. Haario, E. Saksman, and J. Tamminen. Adaptive proposal distribution for random walk Metropolis algorithm. Computational Statistics, 14(3):375-395, 1999.
[11] C. Andrieu and J. Thoms. A tutorial on adaptive MCMC. Statistics and Computing, 18(4):343-373, December 2008.
[12] D. Sejdinovic, H. Strathmann, M. Lomeli, C. Andrieu, and A. Gretton. Kernel Adaptive Metropolis-Hastings. In ICML, 2014.
[13] B. Sriperumbudur, K. Fukumizu, R. Kumar, A. Gretton, and A. Hyvärinen. Density Estimation in Infinite Dimensional Exponential Families. arXiv preprint arXiv:1312.3516, 2014.
[14] A. Hyvärinen. Estimation of non-normalized statistical models by score matching. JMLR, 6:695-709, 2005.
[15] L. Wasserman. All of Nonparametric Statistics. Springer, 2006.
[16] C.E. Rasmussen. Gaussian Processes to Speed up Hybrid Monte Carlo for Expensive Bayesian Integrals. Bayesian Statistics 7, pages 651-659, 2003.
[17] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, pages 1177-1184, 2007.
[18] M. Betancourt, S. Byrne, and M. Girolami. Optimizing The Integrator Step Size for Hamiltonian Monte Carlo. arXiv preprint arXiv:1503.01916, 2015.
[19] A. Berlinet and C. Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer, 2004.
[20] A. Hyvärinen. Some extensions of score matching. Computational Statistics & Data Analysis, 51:2499-2512, 2007.
[21] B.K. Sriperumbudur and Z. Szabó. Optimal rates for random Fourier features. In NIPS, 2015.
[22] G.O. Roberts and R.L. Tweedie. Geometric convergence and central limit theorems for multidimensional Hastings and Metropolis algorithms. Biometrika, 83(1):95-110, 1996.
[23] G.O. Roberts and J.S. Rosenthal. Coupling and ergodicity of adaptive Markov chain Monte Carlo algorithms. Journal of Applied Probability, 44(2):458-475, 2007.
[24] K. Bache and M. Lichman. UCI Machine Learning Repository, 2013.
[25] A. Gretton, K. Borgwardt, B. Schölkopf, A.J. Smola, and M. Rasch. A kernel two-sample test. JMLR, 13:723-773, 2012.
[26] S.N. Wood. Statistical inference for noisy nonlinear ecological dynamic systems. Nature, 466(7310):1102-1104, 2010.
[27] C. Zhang, B. Shahbaba, and H. Zhao. Hamiltonian Monte Carlo Acceleration Using Neural Network Surrogate functions. arXiv preprint arXiv:1506.05555, 2015.
[28] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[29] Q. Le, T. Sarlós, and A. Smola. Fastfood: approximating kernel expansions in loglinear time. In ICML, 2013.
[30] K.L. Mengersen and R.L. Tweedie. Rates of convergence of the Hastings and Metropolis algorithms. The Annals of Statistics, 24(1):101-121, 1996.
A Complete Recipe for Stochastic Gradient MCMC
Yi-An Ma, Tianqi Chen, and Emily B. Fox
University of Washington {yianma@u,tqchen@cs,ebfox@stat}.washington.edu
Abstract
Many recent Markov chain Monte Carlo (MCMC) samplers leverage continuous
dynamics to define a transition kernel that efficiently explores a target distribution.
In tandem, a focus has been on devising scalable variants that subsample the data
and use stochastic gradients in place of full-data gradients in the dynamic simulations. However, such stochastic gradient MCMC samplers have lagged behind
their full-data counterparts in terms of the complexity of dynamics considered
since proving convergence in the presence of the stochastic gradient noise is nontrivial. Even with simple dynamics, significant physical intuition is often required
to modify the dynamical system to account for the stochastic gradient noise. In this
paper, we provide a general recipe for constructing MCMC samplers, including stochastic gradient versions, based on continuous Markov processes specified via two matrices. We constructively prove that the framework is complete. That is,
any continuous Markov process that provides samples from the target distribution
can be written in our framework. We show how previous continuous-dynamic
samplers can be trivially 'reinvented' in our framework, avoiding the complicated
sampler-specific proofs. We likewise use our recipe to straightforwardly propose
a new state-adaptive sampler: stochastic gradient Riemann Hamiltonian Monte
Carlo (SGRHMC). Our experiments on simulated data and a streaming Wikipedia analysis demonstrate that the proposed SGRHMC sampler inherits the benefits
of Riemann HMC, with the scalability of stochastic gradient methods.
1
Introduction
Markov chain Monte Carlo (MCMC) has become a de facto tool for Bayesian posterior inference.
However, these methods notoriously mix slowly in complex, high-dimensional models and scale
poorly to large datasets. The past decades have seen a rise in MCMC methods that provide more efficient exploration of the posterior, such as Hamiltonian Monte Carlo (HMC) [8, 12] and its Riemann
manifold variant [10]. This class of samplers is based on defining a potential energy function in
terms of the target posterior distribution and then devising various continuous dynamics to explore
the energy landscape, enabling proposals of distant states. The gain in efficiency of exploration often
comes at the cost of a significant computational burden in large datasets.
Recently, stochastic gradient variants of such continuous-dynamic samplers have proven quite useful
in scaling the methods to large datasets [17, 1, 6, 2, 7]. At each iteration, these samplers use data
subsamples, or minibatches, rather than the full dataset. Stochastic gradient Langevin dynamics
(SGLD) [17] innovated in this area by connecting stochastic optimization with a first-order Langevin
dynamic MCMC technique, showing that adding the "right amount" of noise to stochastic gradient
ascent iterates leads to samples from the target posterior as the step size is annealed. Stochastic
gradient Hamiltonian Monte Carlo (SGHMC) [6] builds on this idea, but importantly incorporates
the efficient exploration provided by the HMC momentum term. A key insight in that paper was that
the naïve stochastic gradient variant of HMC actually leads to an incorrect stationary distribution
(also see [4]); instead a modification to the dynamics underlying HMC is needed to account for
the stochastic gradient noise. Variants of both SGLD and SGHMC with further modifications to
improve efficiency have also recently been proposed [1, 13, 7].
In the plethora of past MCMC methods that explicitly leverage continuous dynamics, including
HMC, Riemann manifold HMC, and the stochastic gradient methods, the focus has been on showing that the intricate dynamics leave the target posterior distribution invariant. Innovating in this
arena requires constructing novel dynamics and simultaneously ensuring that the target distribution
is the stationary distribution. This can be quite challenging, and often requires significant physical
and geometrical intuition [6, 13, 7]. A natural question, then, is whether there exists a general recipe
for devising such continuous-dynamic MCMC methods that naturally lead to invariance of the target
distribution. In this paper, we answer this question in the affirmative. Furthermore, and quite importantly, our proposed recipe is complete. That is, any continuous Markov process (with no jumps)
with the desired invariant distribution can be cast within our framework, including HMC, Riemann
manifold HMC, SGLD, SGHMC, their recent variants, and any future developments in this area.
That is, our method provides a unifying framework of past algorithms, as well as a practical tool for
devising new samplers and testing the correctness of proposed samplers.
The recipe involves defining a (stochastic) system parameterized by two matrices: a positive
semidefinite diffusion matrix, D(z), and a skew-symmetric curl matrix, Q(z), where z = (θ, r)
with θ our model parameters of interest and r a set of auxiliary variables. The dynamics are then
written explicitly in terms of the target stationary distribution and these two matrices. By varying
the choices of D(z) and Q(z), we explore the space of MCMC methods that maintain the correct
invariant distribution. We constructively prove the completeness of this framework by converting a
general continuous Markov process into the proposed dynamic structure.
For any given D(z), Q(z), and target distribution, we provide practical algorithms for implementing either full-data or minibatch-based variants of the sampler. In Sec. 3.1, we cast many previous
continuous-dynamic samplers in our framework, finding their D(z) and Q(z). We then show how
these existing D(z) and Q(z) building blocks can be used to devise new samplers; we leave the
question of exploring the space of D(z) and Q(z) well-suited to the structure of the target distribution as an interesting direction for future research. In Sec. 3.2 we demonstrate our ability to construct
new and relevant samplers by proposing stochastic gradient Riemann Hamiltonian Monte Carlo, the
existence of which was previously only speculated. We demonstrate the utility of this sampler on
synthetic data and in a streaming Wikipedia analysis using latent Dirichlet allocation [5].
2 A Complete Stochastic Gradient MCMC Framework
We start with the standard MCMC goal of drawing samples from a target distribution, which we take
to be the posterior p(θ|S) of model parameters θ ∈ R^d given an observed dataset S. Throughout,
we assume i.i.d. data x ∼ p(x|θ). We write p(θ|S) ∝ exp(−U(θ)), with potential function
U(θ) = −Σ_{x∈S} log p(x|θ) − log p(θ). Algorithms like HMC [12, 10] further augment the space
of interest with auxiliary variables r and sample from p(z|S) ∝ exp(−H(z)), with Hamiltonian

H(z) = H(θ, r) = U(θ) + g(θ, r),  such that  ∫ exp(−g(θ, r)) dr = constant.   (1)

Marginalizing the auxiliary variables gives us the desired distribution on θ. In this paper, we generically consider z as the samples we seek to draw; z could represent θ itself, or an augmented state
As in HMC, the idea is to translate the task of sampling from the posterior distribution to simulating
from a continuous dynamical system which is used to define a Markov transition kernel. That is,
over any interval h, the differential equation defines a mapping from the state at time t to the state
at time t + h. One can then discuss the evolution of the distribution p(z, t) under the dynamics, as
characterized by the Fokker-Planck equation for stochastic dynamics [14] or the Liouville equation
for deterministic dynamics [20]. This evolution can be used to analyze the invariant distribution of
the dynamics, ps (z). When considering deterministic dynamics, as in HMC, a jump process must
be added to ensure ergodicity. If the resulting stationary distribution is equal to the target posterior,
then simulating from the process can be equated with drawing samples from the posterior.
If the stationary distribution is not the target distribution, a Metropolis-Hastings (MH) correction
can often be applied. Unfortunately, such correction steps require a costly computation on the entire
dataset. Even if one can compute the MH correction, if the dynamics do not nearly lead to the
correct stationary distribution, then the rejection rate can be high even for short simulation periods
h. Furthermore, for many stochastic gradient MCMC samplers, computing the probability of the
reverse path is infeasible, obviating the use of MH. As such, a focus in the literature is on defining
dynamics with the right target distribution, especially in large-data scenarios where MH corrections
are computationally burdensome or infeasible.
2.1 Devising SDEs with a Specified Target Stationary Distribution
Generically, all continuous Markov processes that one might consider for sampling can be written
as a stochastic differential equation (SDE) of the form:

dz = f(z) dt + √(2D(z)) dW(t),   (2)

where f(z) denotes the deterministic drift and often relates to the gradient of H(z), W(t) is a
d-dimensional Wiener process, and D(z) is a positive semidefinite diffusion matrix. Clearly, however,
not all choices of f(z) and D(z) yield the stationary distribution p_s(z) ∝ exp(−H(z)).
When D(z) = 0, as in HMC, the dynamics of Eq. (2) become deterministic. Our exposition focuses
on SDEs, but our analysis applies to deterministic dynamics as well. In this case, our framework,
using the Liouville equation in place of Fokker-Planck, ensures that the deterministic dynamics
leave the target distribution invariant. For ergodicity, a jump process must be added, which is not
considered in our recipe, but tends to be straightforward (e.g., momentum resampling in HMC).
To devise a recipe for constructing SDEs with the correct stationary distribution, we propose writing
f(z) directly in terms of the target distribution:

f(z) = −[D(z) + Q(z)] ∇H(z) + Γ(z),   Γ_i(z) = Σ_{j=1}^{d} ∂/∂z_j (D_ij(z) + Q_ij(z)).   (3)
Here, Q(z) is a skew-symmetric curl matrix representing the deterministic traversing effects seen
in HMC procedures. In contrast, the diffusion matrix D(z) determines the strength of the Wiener-process-driven diffusion. Matrices D(z) and Q(z) can be adjusted to attain faster convergence to
the posterior distribution. A more detailed discussion on the interpretation of D(z) and Q(z) and
the influence of specific choices of these matrices is provided in the Supplement.
Importantly, as we show in Theorem 1, sampling the stochastic dynamics of Eq. (2) (according
to the Itô integral) with f(z) as in Eq. (3) leads to the desired posterior distribution as the stationary
distribution: p_s(z) ∝ exp(−H(z)). That is, for any choice of positive semidefinite D(z) and skew-symmetric Q(z) parameterizing f(z), we know that simulating from Eq. (2) will provide samples
from p(θ | S) (discarding any sampled auxiliary variables r) assuming the process is ergodic.
Theorem 1. p_s(z) ∝ exp(−H(z)) is a stationary distribution of the dynamics of Eq. (2) if f(z) is
restricted to the form of Eq. (3), with D(z) positive semidefinite and Q(z) skew-symmetric. If D(z)
is positive definite, or if ergodicity can be shown, then the stationary distribution is unique.
Proof. The equivalence of p_s(z) and the target p(z | S) ∝ exp(−H(z)) can be shown using the
Fokker-Planck description of the probability density evolution under the dynamics of Eq. (2):

∂_t p(z, t) = −Σ_i ∂/∂z_i [f_i(z) p(z, t)] + Σ_{i,j} ∂²/(∂z_i ∂z_j) [D_ij(z) p(z, t)].   (4)

Eq. (4) can be further transformed into a more compact form [19, 16]:

∂_t p(z, t) = ∇^T · ( [D(z) + Q(z)] [p(z, t) ∇H(z) + ∇p(z, t)] ).   (5)

We can verify that p(z | S) is invariant under Eq. (5) by calculating e^{−H(z)} ∇H(z) + ∇e^{−H(z)} = 0.
If the process is ergodic, this invariant distribution is unique. The equivalence of the compact form
was originally proved in [16]; we include a detailed proof in the Supplement for completeness.
[Figure 1 omitted: nested sets labeled "All Continuous Markov Processes", "Processes with p_s(z) = p(z|S)", and "f(z) defined by D(z), Q(z)".]
Figure 1: The red space represents the set of all continuous Markov processes. A point in the black
space represents a continuous Markov process defined by Eqs. (2)-(3) based on a specific choice of
D(z), Q(z). By Theorem 1, each such point has stationary distribution p_s(z) = p(z | S). The blue
space represents all continuous Markov processes with p_s(z) = p(z | S). Theorem 2 states that these
blue and black spaces are equivalent (there is no gap, and any point in the blue space has a
corresponding D(z), Q(z) in our framework).
2.2 Completeness of the Framework
An important question is what portion of samplers defined by continuous Markov processes with
the target invariant distribution can we define by iterating over all possible D(z) and Q(z)? In
Theorem 2, we show that for any continuous Markov process with the desired stationary distribution
p_s(z), there exists an SDE as in Eq. (2) with f(z) defined as in Eq. (3). We know from the Chapman-Kolmogorov equation [9] that any continuous Markov process with stationary distribution p_s(z) can
be written as in Eq. (2), which gives us the diffusion matrix D(z). Theorem 2 then constructively
defines the curl matrix Q(z). This result implies that our recipe is complete. That is, we cover all
possible continuous Markov process samplers in our framework. See Fig. 1.
Theorem 2. For the SDE of Eq. (2), suppose its stationary probability density function p_s(z)
uniquely exists, and that f_i(z) p_s(z) − Σ_{j=1}^{d} ∂/∂z_j [D_ij(z) p_s(z)] is integrable with respect to the
Lebesgue measure. Then there exists a skew-symmetric Q(z) such that Eq. (3) holds.
The integrability condition is usually satisfied when the probability density function uniquely exists.
A constructive proof for the existence of Q(z) is provided in the Supplement.
2.3 A Practical Algorithm
In practice, simulation relies on an ε-discretization of the SDE, leading to a full-data update rule

z_{t+1} ← z_t − ε_t [ (D(z_t) + Q(z_t)) ∇H(z_t) + Γ(z_t) ] + N(0, 2ε_t D(z_t)).   (6)
Calculating the gradient of H(z) involves evaluating the gradient of U (?). For a stochastic gradient
method, the assumption is that U (?) is too computationally intensive to compute as it relies on a sum
over all data points (see Sec. 2). Instead, such stochastic gradient algorithms examine independently
sampled data subsets S̃ ⊂ S and the corresponding potential for these data:

Ũ(θ) = −(|S|/|S̃|) Σ_{x∈S̃} log p(x|θ) − log p(θ);   S̃ ⊂ S.   (7)
The specific form of Eq. (7) implies that Ũ(θ) is an unbiased estimator of U(θ). As such, a gradient
computed based on Ũ(θ), called a stochastic gradient [15], is a noisy, but unbiased estimator of the
full-data gradient. The key question in many of the existing stochastic gradient MCMC algorithms
is whether the noise injected by the stochastic gradient adversely affects the stationary distribution
of the modified dynamics (using ∇Ũ(θ) in place of ∇U(θ)). One way to analyze the impact of the
stochastic gradient is to make use of the central limit theorem and assume

∇Ũ(θ) = ∇U(θ) + N(0, V(θ)),   (8)
resulting in a noisy Hamiltonian gradient ∇H̃(z) = ∇H(z) + [N(0, V(θ)), 0]^T. Simply plugging
∇H̃(z) in place of ∇H(z) in Eq. (6) results in dynamics with an additional noise term
(D(z_t) + Q(z_t)) [N(0, V(θ)), 0]^T. To counteract this, assume we have an estimate B̂_t of the variance of this
additional noise satisfying 2D(z_t) − ε_t B̂_t ⪰ 0 (i.e., positive semidefinite). With small ε, this is
always true since the stochastic gradient noise scales down faster than the added noise. Then, we
can attempt to account for the stochastic gradient noise by simulating

z_{t+1} ← z_t − ε_t [ (D(z_t) + Q(z_t)) ∇H̃(z_t) + Γ(z_t) ] + N(0, ε_t (2D(z_t) − ε_t B̂_t)).   (9)

This provides our stochastic gradient, or minibatch, variant of the sampler. In Eq. (9), the noise
introduced by the stochastic gradient is multiplied by ε_t (and the compensation by ε_t²), implying that
the discrepancy between these dynamics and those of Eq. (6) approaches zero as ε_t goes to zero. As
such, in this infinitesimal step size limit, since Eq. (6) yields the correct invariant distribution, so
does Eq. (9). This avoids the need for a costly or potentially intractable MH correction. However,
having to decrease ε_t to zero comes at the cost of increasingly small updates. We can also use a finite,
small step size in practice, resulting in a biased (but faster) sampler. A similar bias-speed tradeoff
was used in [11, 3] to construct MH samplers, in addition to being used in SGLD and SGHMC.
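To make the recipe concrete, the following is a minimal NumPy sketch of the generic minibatch update of Eq. (9). It is our illustration rather than code from the paper: D, Q, grad_H_stoch, Gamma, and B_hat are placeholder callables that a user must supply for a particular sampler, and the example at the bottom recovers SGLD (D = I, Q = 0, Gamma = 0) on a standard normal target.

import numpy as np

def recipe_step(z, eps, D, Q, grad_H_stoch, Gamma, B_hat, rng):
    # One step of the generic stochastic gradient MCMC update, Eq. (9):
    #   z' = z - eps * [(D(z) + Q(z)) grad_H_stoch(z) + Gamma(z)]
    #          + N(0, eps * (2 D(z) - eps * B_hat(z)))
    Dz, Qz = D(z), Q(z)
    drift = (Dz + Qz) @ grad_H_stoch(z) + Gamma(z)
    cov = eps * (2.0 * Dz - eps * B_hat(z))   # assumed positive semidefinite
    noise = rng.multivariate_normal(np.zeros(z.shape[0]), cov)
    return z - eps * drift + noise

# Example: recover SGLD on U(theta) = theta^T theta / 2 (target N(0, I)).
rng = np.random.default_rng(0)
d = 2
z = np.zeros(d)
samples = []
for _ in range(5000):
    z = recipe_step(z, 1e-2,
                    D=lambda z: np.eye(d),
                    Q=lambda z: np.zeros((d, d)),
                    grad_H_stoch=lambda z: z,        # exact gradient here
                    Gamma=lambda z: np.zeros(d),
                    B_hat=lambda z: np.zeros((d, d)),
                    rng=rng)
    samples.append(z.copy())
print(np.cov(np.array(samples).T))  # should approach the identity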
3 Applying the Theory to Construct Samplers
3.1 Casting Previous MCMC Algorithms within the Proposed Framework
We explicitly state how some recently developed MCMC methods fall within the proposed framework based on specific choices of D(z), Q(z) and H(z) in Eq. (2) and (3). For the stochastic
gradient methods, we show how our framework can be used to "reinvent" the samplers by guiding
their construction and avoiding potential mistakes or inefficiencies caused by naïve implementations.
Hamiltonian Monte Carlo (HMC) The key ingredient in HMC [8, 12] is Hamiltonian dynamics,
which simulate the physical motion of an object with position θ, momentum r, and mass M on a
frictionless surface as follows (typically, a leapfrog simulation is used instead):

θ_{t+1} ← θ_t + ε_t M^{−1} r_t;   r_{t+1} ← r_t − ε_t ∇U(θ_t).   (10)

Eq. (10) is a special case of the proposed framework with z = (θ, r), H(θ, r) = U(θ) + ½ r^T M^{−1} r,
Q(θ, r) = [[0, −I], [I, 0]], and D(θ, r) = 0.
Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) As discussed in [6], simply replacing
∇U(θ) by the stochastic gradient ∇Ũ(θ) in Eq. (10) results in the following updates:

Naive:  θ_{t+1} ← θ_t + ε_t M^{−1} r_t;   r_{t+1} ← r_t − ε_t ∇Ũ(θ_t) ≈ r_t − ε_t ∇U(θ_t) + N(0, ε_t² V(θ_t)),   (11)

where the ≈ arises from the approximation of Eq. (8). Careful study shows that Eq. (11) cannot be
rewritten into our proposed framework, which hints that such a naïve stochastic gradient version of
HMC is not correct. Interestingly, the authors of [6] proved that this naïve version indeed does not
have the correct stationary distribution. In our framework, we see that the noise term N(0, 2ε_t D(z))
is paired with a D(z)∇H(z) term, hinting that such a term should be added to Eq. (11). Here,
D(θ, r) = [[0, 0], [0, V(θ)]], which means we need to add D(z)∇H(z) = V(θ) ∇_r H(θ, r) =
V(θ) M^{−1} r. Interestingly, this is the correction strategy proposed in [6], but through a physical
interpretation of the dynamics. In particular, the term V(θ)M^{−1}r (or, generically, CM^{−1}r where
C ⪰ V(θ)) has an interpretation as friction and leads to second order Langevin dynamics:
θ_{t+1} ← θ_t + ε_t M^{−1} r_t;   r_{t+1} ← r_t − ε_t ∇Ũ(θ_t) − ε_t C M^{−1} r_t + N(0, ε_t (2C − ε_t B̂_t)).   (12)
Here, B̂_t is an estimate of V(θ_t). This method now fits into our framework with H(θ, r) and Q(θ, r)
as in HMC, but with D(θ, r) = [[0, 0], [0, C]]. This example shows how our theory can be used to
identify invalid samplers and provide guidance on how to effortlessly correct the mistakes; this is
crucial when physical intuition is not available. Once the proposed sampler is cast in our framework
with a specific D(z) and Q(z), there is no need for sampler-specific proofs, such as those of [6].
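A minimal sketch of the corrected SGHMC update of Eq. (12), assuming identity mass M = I and fixed matrices C and B̂ supplied by the user; this is an illustration we add here, not the reference implementation of [6]:

import numpy as np

def sghmc_step(theta, r, eps, grad_U_stoch, C, B_hat, rng):
    # Eq. (12) with M = I:
    #   theta' = theta + eps * r
    #   r'     = r - eps * grad_U_stoch(theta) - eps * C r
    #            + N(0, eps * (2C - eps * B_hat))
    theta_new = theta + eps * r
    cov = eps * (2.0 * C - eps * B_hat)
    r_new = (r - eps * grad_U_stoch(theta)
               - eps * C @ r
               + rng.multivariate_normal(np.zeros(r.size), cov))
    return theta_new, r_new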
Stochastic Gradient Langevin Dynamics (SGLD) SGLD [17] proposes to use the following first
order (no momentum) Langevin dynamics to generate samples:

θ_{t+1} ← θ_t − ε_t D ∇Ũ(θ_t) + N(0, 2ε_t D).   (13)

This algorithm corresponds to taking z = θ with H(θ) = U(θ), D(θ) = D, Q(θ) = 0, and B̂_t = 0.
As motivated by Eq. (9) of our framework, the variance of the stochastic gradient can be subtracted
from the sampler injected noise to make the finite stepsize simulation more accurate. This variant of
SGLD leads to the stochastic gradient Fisher scoring algorithm [1].
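For comparison, the SGLD update of Eq. (13) in the same style (our sketch; D is a fixed preconditioning matrix and grad_U_stoch a user-supplied stochastic gradient):

import numpy as np

def sgld_step(theta, eps, D, grad_U_stoch, rng):
    # Eq. (13): theta' = theta - eps * D grad_U_stoch(theta) + N(0, 2 eps D)
    noise = rng.multivariate_normal(np.zeros(theta.size), 2.0 * eps * D)
    return theta - eps * D @ grad_U_stoch(theta) + noise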
Stochastic Gradient Riemannian Langevin Dynamics (SGRLD) SGLD can be generalized to
use an adaptive diffusion matrix D(θ). Specifically, it is interesting to take D(θ) = G^{−1}(θ), where
G(θ) is the Fisher information metric. The sampler dynamics are given by

θ_{t+1} ← θ_t − ε_t [ G(θ_t)^{−1} ∇Ũ(θ_t) + Γ(θ_t) ] + N(0, 2ε_t G(θ_t)^{−1}).   (14)

Taking D(θ) = G(θ)^{−1}, Q(θ) = 0, and B̂_t = 0, this SGRLD [13] method falls into our framework
with correction term Γ_i(θ) = Σ_j ∂D_ij(θ)/∂θ_j. It is interesting to note that in earlier literature [10],
Γ_i(θ) was taken to be 2 |G(θ)|^{−1/2} Σ_j ∂/∂θ_j [ G_ij^{−1}(θ) |G(θ)|^{1/2} ]. More recently, it was found that
this correction term corresponds to the distribution function with respect to a non-Lebesgue measure [18]; for the Lebesgue measure, the revised Γ_i(θ) was as determined by our framework [18].
Again, we have an example of our theory providing guidance in devising correct samplers.
Stochastic Gradient Nosé-Hoover Thermostat (SGNHT) Finally, the SGNHT [7] method incorporates ideas from thermodynamics to further increase adaptivity by augmenting the SGHMC
system with an additional scalar auxiliary variable, ξ. The algorithm uses the following dynamics:

θ_{t+1} ← θ_t + ε_t r_t
r_{t+1} ← r_t − ε_t ∇Ũ(θ_t) − ε_t ξ_t r_t + N(0, ε_t (2A − ε_t B̂_t))   (15)
ξ_{t+1} ← ξ_t + ε_t ( (1/d) r_t^T r_t − 1 ).

We can take z = (θ, r, ξ), H(θ, r, ξ) = U(θ) + ½ r^T r + (1/2d)(ξ − A)²,

D(θ, r, ξ) = [[0, 0, 0], [0, A·I, 0], [0, 0, 0]],   Q(θ, r, ξ) = [[0, −I, 0], [I, 0, r/d], [0, −r^T/d, 0]]

to place these dynamics within our framework.
Summary In our framework, SGLD and SGRLD take Q(z) = 0 and instead stress the design of
the diffusion matrix D(z), with SGLD using a constant D(z) and SGRLD an adaptive, θ-dependent
diffusion matrix to better account for the geometry of the space being explored. On the other hand,
HMC takes D(z) = 0 and focuses on the curl matrix Q(z). SGHMC combines SGLD with HMC
through non-zero D(z) and Q(z) matrices. SGNHT then extends SGHMC by taking Q(z) to be
state dependent. The relationships between these methods are depicted in the Supplement, which
likewise contains a discussion of the tradeoffs between these two matrices. In short, D(z) can
guide escaping from local modes while Q(z) can enable rapid traversing of low-probability regions,
especially when state adaptation is incorporated. We readily see that most of the product space
D(z) × Q(z), defining the space of all possible samplers, has yet to be filled.
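To make this summary concrete, the following sketch (ours, not the paper's) assembles the D(z) and Q(z) blocks of Sec. 3.1 for a state z = (θ, r) with d-dimensional θ; the matrices C, G_inv, and G_inv_sqrt are assumed to be supplied by the user.

import numpy as np

def blocks(A, B, C, Dm):
    # Assemble the 2x2 block matrix [[A, B], [C, Dm]] for z = (theta, r).
    return np.block([[A, B], [C, Dm]])

def make_D_Q(name, d, C=None, G_inv=None, G_inv_sqrt=None):
    Z, I = np.zeros((d, d)), np.eye(d)
    if name == "HMC":     # D = 0, Q = [[0, -I], [I, 0]]
        return blocks(Z, Z, Z, Z), blocks(Z, -I, I, Z)
    if name == "SGHMC":   # D = [[0, 0], [0, C]], same Q as HMC
        return blocks(Z, Z, Z, C), blocks(Z, -I, I, Z)
    if name == "SGLD":    # z = theta only: D = I, Q = 0 (d-dim matrices)
        return np.eye(d), np.zeros((d, d))
    if name == "SGRHMC":  # D = [[0, 0], [0, G^-1]], Q built from G^-1/2
        return blocks(Z, Z, Z, G_inv), blocks(Z, -G_inv_sqrt, G_inv_sqrt, Z)
    raise ValueError(name)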
3.2 Stochastic Gradient Riemann Hamiltonian Monte Carlo
In Sec. 3.1, we have shown how our framework unifies existing samplers. In this section, we now use
our framework to guide the development of a new sampler. While SGHMC [6] inherits the momentum term of HMC, making it easier to traverse the space of parameters, the underlying geometry of
the target distribution is still not utilized. Such information can usually be represented by the Fisher
information metric [10], denoted as G(θ), which can be used to precondition the dynamics. For our
proposed system, we consider H(θ, r) = U(θ) + ½ r^T r, as in HMC/SGHMC methods, and modify
the D(θ, r) and Q(θ, r) of SGHMC to account for the geometry as follows:
D(θ, r) = [[0, 0], [0, G(θ)^{−1}]];   Q(θ, r) = [[0, −G(θ)^{−1/2}], [G(θ)^{−1/2}, 0]].
We refer to this algorithm as stochastic gradient Riemann Hamiltonian Monte Carlo (SGRHMC).
Our theory holds for any positive definite G(θ), yielding a generalized SGRHMC (gSGRHMC)
algorithm, which can be helpful when the Fisher information metric is hard to compute.
A naïve implementation of a state-dependent SGHMC algorithm might simply (i) precondition the
HMC update, (ii) replace ∇U(θ) by ∇Ũ(θ), and (iii) add a state-dependent friction term on the
order of the diffusion matrix to counterbalance the noise as in SGHMC, resulting in:

Naive:  θ_{t+1} ← θ_t + ε_t G(θ_t)^{−1/2} r_t
        r_{t+1} ← r_t − ε_t G(θ_t)^{−1/2} ∇_θ Ũ(θ_t) − ε_t G(θ_t)^{−1} r_t + N(0, ε_t (2G(θ_t)^{−1} − ε_t B̂_t)).   (16)
Algorithm 1: Generalized Stochastic Gradient Riemann Hamiltonian Monte Carlo
initialize (θ_0, r_0)
for t = 0, 1, 2, ... do
  optionally, periodically resample momentum r as r^(t) ∼ N(0, I)
  θ_{t+1} ← θ_t + ε_t G(θ_t)^{−1/2} r_t,   Σ_t ← ε_t (2G(θ_t)^{−1} − ε_t B̂_t)
  r_{t+1} ← r_t − ε_t G(θ_t)^{−1/2} ∇_θ Ũ(θ_t) + ε_t ∇_θ(G(θ_t)^{−1/2}) − ε_t G(θ_t)^{−1} r_t + N(0, Σ_t)
end
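A NumPy sketch of one iteration of Algorithm 1, added here for illustration; the callables G_inv, G_inv_sqrt, div_G_inv_sqrt (the term ∇_θ(G^{−1/2})), and grad_U_stoch are assumptions the user must supply, and B̂ is taken to be zero:

import numpy as np

def gsgrhmc_step(theta, r, eps, grad_U_stoch,
                 G_inv, G_inv_sqrt, div_G_inv_sqrt, rng):
    # One iteration of (generalized) SGRHMC with B_hat = 0:
    #   theta' = theta + eps * G^{-1/2} r
    #   r'     = r - eps * G^{-1/2} grad_U_stoch + eps * div(G^{-1/2})
    #              - eps * G^{-1} r + N(0, 2 eps G^{-1})
    Gis = G_inv_sqrt(theta)
    theta_new = theta + eps * Gis @ r
    cov = 2.0 * eps * G_inv(theta)
    r_new = (r - eps * Gis @ grad_U_stoch(theta)
               + eps * div_G_inv_sqrt(theta)
               - eps * G_inv(theta) @ r
               + rng.multivariate_normal(np.zeros(r.size), cov))
    return theta_new, r_new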
[Figure 2 omitted: left, bar plot of K-L divergence for SGLD, SGHMC, Naive gSGRHMC, and gSGRHMC on scenarios 1 and 2; right, K-L divergence versus log3(Steps/100)+1, with contour plots of the target and the sampler paths.]
Figure 2: Left: For two simulated 1D distributions defined by U(θ) = θ²/2 (one peak) and U(θ) = θ⁴ − 2θ²
(two peaks), we compare the K-L divergence of methods: SGLD, SGHMC, the naïve SGRHMC of Eq. (16), and
the gSGRHMC of Eq. (17) relative to the true distribution in each scenario (left and right bars labeled by 1 and
2). Right: For a correlated 2D distribution with U(θ_1, θ_2) = θ_1⁴/10 + (4(θ_2 + 1.2) − θ_1²)²/2, we see that
our gSGRHMC most rapidly explores the space relative to SGHMC and SGLD. Contour plots of the distribution
along with paths of the first 10 sampled points are shown for each method.
However, as we show in Sec. 4.1, samples from these dynamics do not converge to the desired
distribution. Indeed, this system cannot be written within our framework. Instead, we can simply
follow our framework and, as indicated by Eq. (9), consider the following update rule:

θ_{t+1} ← θ_t + ε_t G(θ_t)^{−1/2} r_t
r_{t+1} ← r_t − ε_t G(θ_t)^{−1/2} ∇_θ Ũ(θ_t) + ε_t ∇_θ·G(θ_t)^{−1/2} − ε_t G(θ_t)^{−1} r_t + N(0, ε_t (2G(θ_t)^{−1} − ε_t B̂_t)),   (17)

which includes a correction term ∇_θ·G(θ)^{−1/2}, with i-th component Σ_j ∂/∂θ_j [G(θ)^{−1/2}]_ij. The
practical implementation of gSGRHMC is outlined in Algorithm 1.
4 Experiments
In Sec. 4.1, we show that gSGRHMC can excel at rapidly exploring distributions with complex
landscapes. We then apply SGRHMC to sampling in a latent Dirichlet allocation (LDA) model on
a large Wikipedia dataset in Sec. 4.2. The Supplement contains details on the specific samplers
considered and the parameter settings used in these experiments.
4.1 Synthetic Experiments
In this section we aim to empirically (i) validate the correctness of our recipe and (ii) assess the
effectiveness of gSGRHMC. In Fig. 2(left), we consider two univariate distributions (shown in the
Supplement) and compare SGLD, SGHMC, the naïve state-adaptive SGHMC of Eq. (16), and our
proposed gSGRHMC of Eq. (17). See the Supplement for the form of G(θ). As expected, the naïve
implementation does not converge to the target distribution. In contrast, the gSGRHMC algorithm
obtained via our recipe indeed has the correct invariant distribution and efficiently explores the distributions. In the second experiment, we sample a bivariate distribution with strong correlation. The
results are shown in Fig. 2(right). The comparison between SGLD, SGHMC, and our gSGRHMC
method shows that both a state-dependent preconditioner and Hamiltonian dynamics help to make
the sampler more efficient than either element on its own.
Figure 3 (upper left): expanded mean parameterization of the LDA model.

                 Original LDA          Expanded Mean
Parameter π      π_kw = θ_kw           π_kw = θ_kw / Σ_w θ_kw
Prior p(θ)       p(θ_k) = Dir(α)       p(θ_kw) = Γ(α, 1)

Figure 3 (lower left): average runtime per 100 documents.

Method     Average Runtime per 100 Docs
SGLD       0.778s
SGHMC      0.815s
SGRLD      0.730s
SGRHMC     0.806s

[Figure 3 (right) omitted: perplexity (roughly 1000-3500) versus number of documents processed (0-10000) for SGLD, SGHMC, SGRLD, and SGRHMC.]
Figure 3: Upper Left: Expanded mean parameterization of the LDA model. Lower Left: Average runtime per
100 Wikipedia entries for all methods. Right: Perplexity versus number of Wikipedia entries processed.
4.2 Online Latent Dirichlet Allocation
We also applied SGRHMC (with G(θ) = diag(θ)^{−1}, the Fisher information metric) to an online
latent Dirichlet allocation (LDA) [5] analysis of topics present in Wikipedia entries. In LDA, each
topic is associated with a distribution over words, with π_kw the probability of word w under topic k.
Each document is comprised of a mixture of topics, with η_k^(d) the probability of topic k in document
d. Documents are generated by first selecting a topic z_j^(d) ∼ η^(d) for the jth word and then drawing
the specific word from the topic as x_j^(d) ∼ π_{z_j^(d)}. Typically, η^(d) and π_k are given Dirichlet priors.
The goal of our analysis here is inference of the corpus-wide topic distributions π_k. Since the
Wikipedia dataset is large and continually growing with new articles, it is not practical to carry out
this task over the whole dataset. Instead, we scrape the corpus from Wikipedia in a streaming manner and sample parameters based on minibatches of data. Following the approach in [13], we first
analytically marginalize the document distributions η^(d) and, to resolve the boundary issue posed by
the Dirichlet posterior of π_k defined on the probability simplex, use an expanded mean parameterization shown in Figure 3 (upper left). Under this parameterization, we then compute ∇ log p(θ|x)
and, in our implementation, use boundary reflection to ensure the positivity of parameters θ_kw. The
necessary expectation over word-specific topic indicators z_j^(d) is approximated using Gibbs sampling
separately on each document, as in [13]. The Supplement contains further details.
For all the methods, we report results of three random runs. When sampling distributions with
mass concentrated over small regions, as in this application, it is important to incorporate geometric
information via a Riemannian sampler [13]. The results in Fig. 3(right) indeed demonstrate the importance of Riemannian variants of the stochastic gradient samplers. However, there also appears to
be some benefits gained from the incorporation of the HMC term for both the Riemmannian and nonReimannian samplers. The average runtime for the different methods are similar (see Fig. 3(lower
left)) since the main computational bottleneck is the gradient evaluation. Overall, this application
serves as an important example of where our newly proposed sampler can have impact.
5 Conclusion
We presented a general recipe for devising MCMC samplers based on continuous Markov processes. Our framework constructs an SDE specified by two matrices, a positive semidefinite D(z) and a
skew-symmetric Q(z). We prove that for any D(z) and Q(z), we can devise a continuous Markov
process with a specified stationary distribution. We also prove that for any continuous Markov process with the target stationary distribution, there exists a D(z) and Q(z) that cast the process in our
framework. Our recipe is particularly useful in the more challenging case of devising stochastic gradient MCMC samplers. We demonstrate the utility of our recipe in "reinventing" previous stochastic
gradient MCMC samplers, and in proposing our SGRHMC method. The efficiency and scalability
of the SGRHMC method was shown on simulated data and a streaming Wikipedia analysis.
Acknowledgments
This work was supported in part by ONR Grant N00014-10-1-0746, NSF CAREER Award IIS-1350133, and
the TerraSwarm Research Center sponsored by MARCO and DARPA. We also thank Mr. Lei Wu for helping
with the proof of Theorem 2 and Professors Ping Ao and Hong Qian for many discussions.
References
[1] S. Ahn, A. Korattikara, and M. Welling. Bayesian posterior sampling via stochastic gradient Fisher scoring. In Proceedings of the 29th International Conference on Machine Learning (ICML'12), 2012.
[2] S. Ahn, B. Shahbaba, and M. Welling. Distributed stochastic gradient MCMC. In Proceedings of the 31st International Conference on Machine Learning (ICML'14), 2014.
[3] R. Bardenet, A. Doucet, and C. Holmes. Towards scaling up Markov chain Monte Carlo: An adaptive subsampling approach. In Proceedings of the 31st International Conference on Machine Learning (ICML'14), 2014.
[4] M. Betancourt. The fundamental incompatibility of scalable Hamiltonian Monte Carlo and naive data subsampling. In Proceedings of the 32nd International Conference on Machine Learning (ICML'15), 2015.
[5] D.M. Blei, A.Y. Ng, and M.I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, March 2003.
[6] T. Chen, E.B. Fox, and C. Guestrin. Stochastic gradient Hamiltonian Monte Carlo. In Proceedings of the 31st International Conference on Machine Learning (ICML'14), 2014.
[7] N. Ding, Y. Fang, R. Babbush, C. Chen, R.D. Skeel, and H. Neven. Bayesian sampling using stochastic gradient thermostats. In Advances in Neural Information Processing Systems 27 (NIPS'14), 2014.
[8] S. Duane, A.D. Kennedy, B.J. Pendleton, and D. Roweth. Hybrid Monte Carlo. Physics Letters B, 195(2):216-222, 1987.
[9] W. Feller. Introduction to Probability Theory and its Applications. John Wiley & Sons, 1950.
[10] M. Girolami and B. Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society Series B, 73(2):123-214, 2011.
[11] A. Korattikara, Y. Chen, and M. Welling. Austerity in MCMC land: Cutting the Metropolis-Hastings budget. In Proceedings of the 31st International Conference on Machine Learning (ICML'14), 2014.
[12] R.M. Neal. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 54:113-162, 2010.
[13] S. Patterson and Y.W. Teh. Stochastic gradient Riemannian Langevin dynamics on the probability simplex. In Advances in Neural Information Processing Systems 26 (NIPS'13), 2013.
[14] H. Risken and T. Frank. The Fokker-Planck Equation: Methods of Solutions and Applications. Springer, 1996.
[15] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400-407, 1951.
[16] J. Shi, T. Chen, R. Yuan, B. Yuan, and P. Ao. Relation of a new interpretation of stochastic differential equations to Itô process. Journal of Statistical Physics, 148(3):579-590, 2012.
[17] M. Welling and Y.W. Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML'11), pages 681-688, June 2011.
[18] T. Xifara, C. Sherlock, S. Livingstone, S. Byrne, and M. Girolami. Langevin diffusions and the Metropolis-adjusted Langevin algorithm. Statistics & Probability Letters, 91:14-19, 2014.
[19] L. Yin and P. Ao. Existence and construction of dynamical potential in nonequilibrium processes without detailed balance. Journal of Physics A: Mathematical and General, 39(27):8593, 2006.
[20] R. Zwanzig. Nonequilibrium Statistical Mechanics. Oxford University Press, 2001.
Barrier Frank-Wolfe for Marginal Inference
Rahul G. Krishnan
Courant Institute
New York University
Simon Lacoste-Julien
INRIA - Sierra Project-Team
École Normale Supérieure, Paris
David Sontag
Courant Institute
New York University
Abstract
We introduce a globally-convergent algorithm for optimizing the tree-reweighted
(TRW) variational objective over the marginal polytope. The algorithm is based
on the conditional gradient method (Frank-Wolfe) and moves pseudomarginals
within the marginal polytope through repeated maximum a posteriori (MAP) calls.
This modular structure enables us to leverage black-box MAP solvers (both exact
and approximate) for variational inference, and obtains more accurate results than
tree-reweighted algorithms that optimize over the local consistency relaxation.
Theoretically, we bound the sub-optimality for the proposed algorithm despite
the TRW objective having unbounded gradients at the boundary of the marginal
polytope. Empirically, we demonstrate the increased quality of results found by
tightening the relaxation over the marginal polytope as well as the spanning tree
polytope on synthetic and real-world instances.
1 Introduction
Markov random fields (MRFs) are used in many areas of computer science such as vision and
speech. Inference in these undirected graphical models is generally intractable. Our work focuses on
performing approximate marginal inference by optimizing the Tree Re-Weighted (TRW) objective
(Wainwright et al., 2005). The TRW objective is concave, is exact for tree-structured MRFs, and
provides an upper bound on the log-partition function.
Fast combinatorial solvers for the TRW objective exist, including Tree-Reweighted Belief Propagation (TRBP) (Wainwright et al., 2005), convergent message-passing based on geometric programming (Globerson and Jaakkola, 2007), and dual decomposition (Jancsary and Matz, 2011). These
methods optimize over the set of pairwise consistency constraints, also called the local polytope.
Sontag and Jaakkola (2007) showed that significantly better results could be obtained by optimizing
over tighter relaxations of the marginal polytope. However, deriving a message-passing algorithm
for the TRW objective over tighter relaxations of the marginal polytope is challenging. Instead,
Sontag and Jaakkola (2007) use the conditional gradient method (also called Frank-Wolfe) and off-the-shelf linear programming solvers to optimize TRW over the cycle consistency relaxation. Rather
than optimizing over the cycle relaxation, Belanger et al. (2013) optimize the TRW objective over
the exact marginal polytope. Then, using Frank-Wolfe, the linear minimization performed in the
inner loop can be shown to correspond to MAP inference.
The Frank-Wolfe optimization algorithm has seen increasing use in machine learning, thanks in
part to its efficient handling of complex constraint sets appearing with structured data (Jaggi, 2013;
Lacoste-Julien and Jaggi, 2015). However, applying Frank-Wolfe to variational inference presents
challenges that were never resolved in previous work. First, the linear minimization performed
in the inner loop is computationally expensive, either requiring repeatedly solving a large linear
program, as in Sontag and Jaakkola (2007), or performing MAP inference, as in Belanger et al.
(2013). Second, the TRW objective involves entropy terms whose gradients go to infinity near the
boundary of the feasible set, therefore existing convergence guarantees for Frank-Wolfe do not apply.
Third, variational inference using TRW involves both an outer and inner loop of Frank-Wolfe, where
the outer loop optimizes the edge appearance probabilities in the TRW entropy bound to tighten it.
Neither Sontag and Jaakkola (2007) nor Belanger et al. (2013) explore the effect of optimizing over
the edge appearance probabilities.
Although MAP inference is in general NP hard (Shimony, 1994), it is often possible to find exact solutions to large real-world instances within reasonable running times (Sontag et al., 2008; Allouche
et al., 2010; Kappes et al., 2013). Moreover, as we show in our experiments, even approximate
MAP solvers can be successfully used within our variational inference algorithm. As MAP solvers
improve in their runtime and performance, their iterative use could become feasible and as a byproduct enable more efficient and accurate marginal inference. Our work provides a fast deterministic
alternative to recently proposed Perturb-and-MAP algorithms (Papandreou and Yuille, 2011; Hazan
and Jaakkola, 2012; Ermon et al., 2013).
Contributions. This paper makes several theoretical and practical innovations. We propose a modification to the Frank-Wolfe algorithm that optimizes over adaptively chosen contractions of the
domain and prove its rate of convergence for functions whose gradients can be unbounded at the
boundary. Our algorithm does not require a different oracle than standard Frank-Wolfe and could be
useful for other convex optimization problems where the gradient is ill-behaved at the boundary.
We instantiate the algorithm for approximate marginal inference over the marginal polytope with
the TRW objective. With an exact MAP oracle, we obtain the first provably convergent algorithm
for the optimization of the TRW objective over the marginal polytope, which had remained an open
problem to the best of our knowledge. Traditional proof techniques of convergence for first order
methods fail as the gradient of the TRW objective is not Lipschitz continuous.
We develop several heuristics to make the algorithm practical: a fully-corrective variant of Frank-Wolfe that reuses previously found integer assignments, thereby reducing the need for new (approximate) MAP calls, the use of local search between MAP calls, and significant re-use of computations
between subsequent steps of optimizing over the spanning tree polytope. We perform an extensive
experimental evaluation on both synthetic and real-world inference tasks.
2 Background
Markov Random Fields: MRFs are undirected probabilistic graphical models where the probability
distribution factorizes over cliques in the graph. We consider marginal inference on pairwise MRFs
with N random variables X_1, X_2, ..., X_N where each variable takes discrete states x_i ∈ VAL_i. Let
G = (V, E) be the Markov network with an undirected edge {i, j} ∈ E for every two variables
X_i and X_j that are connected together. Let N(i) refer to the set of neighbors of variable X_i. We
organize the edge log-potentials θ_ij(x_i, x_j) for all possible values of x_i ∈ VAL_i, x_j ∈ VAL_j in
the vector θ_ij, and similarly for the node log-potential vector θ_i. We regroup these in the overall
vector θ⃗. We introduce a similar grouping for the marginal vector μ⃗: for example, μ_i(x_i) gives the
coordinate of the marginal vector corresponding to the assignment x_i to variable X_i.
Tree Re-weighted Objective (Wainwright et al., 2005): Let Z(θ⃗) be the partition function for the
MRF and M be the set of all valid marginal vectors (the marginal polytope). The maximization of
the TRW objective gives the following upper bound on the log partition function:

log Z(θ⃗) ≤ min_{ρ∈T} max_{μ⃗∈M} ⟨θ⃗, μ⃗⟩ + H(μ⃗; ρ),   (1)

where the quantity being maximized is denoted TRW(μ⃗; θ⃗, ρ), and the TRW entropy is:

H(μ⃗; ρ) := Σ_{i∈V} (1 − Σ_{j∈N(i)} ρ_ij) H(μ_i) + Σ_{(ij)∈E} ρ_ij H(μ_ij),   H(μ_i) := −Σ_{x_i} μ_i(x_i) log μ_i(x_i).   (2)

T is the spanning tree polytope, the convex hull of edge indicator vectors of all possible spanning
trees of the graph. Elements of ρ ∈ T specify the probability of an edge being present under a
specific distribution over spanning trees. M is difficult to optimize over, and most TRW algorithms
optimize over a relaxation called the local consistency polytope L ⊇ M:

L := { μ⃗ ≥ 0 : Σ_{x_i} μ_i(x_i) = 1 ∀i ∈ V;  Σ_{x_i} μ_ij(x_i, x_j) = μ_j(x_j) and Σ_{x_j} μ_ij(x_i, x_j) = μ_i(x_i) ∀{i, j} ∈ E }.

The TRW objective TRW(μ⃗; θ⃗, ρ) is a globally concave function of μ⃗ over L, assuming that ρ is
obtained from a valid distribution over spanning trees of the graph (i.e. ρ ∈ T).
Frank-Wolfe (FW) Algorithm: In recent years, the Frank-Wolfe (aka conditional gradient) algorithm has gained popularity in machine learning (Jaggi, 2013) for the optimization of convex
functions over compact domains (denoted D). The algorithm is used to solve min_{x∈D} f(x) by
iteratively finding a good descent vertex by solving the linear subproblem:

s^(k) = argmin_{s∈D} ⟨∇f(x^(k)), s⟩   (FW oracle),   (3)

and then taking a convex step towards this vertex: x^(k+1) = (1 − γ)x^(k) + γ s^(k) for a suitably
chosen step-size γ ∈ [0, 1]. The algorithm remains within the feasible set (is projection free), is
invariant to affine transformations of the domain, and can be implemented in a memory efficient
manner. Moreover, the FW gap g(x^(k)) := ⟨−∇f(x^(k)), s^(k) − x^(k)⟩ provides an upper bound on
the suboptimality of the iterate x^(k). The primal convergence of the Frank-Wolfe algorithm is given
by Thm. 1 in Jaggi (2013), restated here for convenience: for k ≥ 1, the iterates x^(k) satisfy:

f(x^(k)) − f(x*) ≤ 2C_f / (k + 2),   (4)

where C_f is called the "curvature constant". Under the assumption that ∇f is L-Lipschitz continuous¹ on D, we can bound it as C_f ≤ L diam_{‖·‖}(D)².
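A minimal sketch of the generic Frank-Wolfe loop just described (our illustration; linear_oracle solves the subproblem of Eq. (3) and is problem-specific):

import numpy as np

def frank_wolfe(x0, grad_f, linear_oracle, num_iters=100, tol=1e-6):
    # Generic FW: s = argmin_{s in D} <grad f(x), s>, then a convex step.
    x = x0
    for k in range(num_iters):
        g = grad_f(x)
        s = linear_oracle(g)            # FW oracle, Eq. (3)
        gap = float(g @ (x - s))        # duality gap, bounds f(x) - f(x*)
        if gap <= tol:
            break
        gamma = 2.0 / (k + 2.0)         # standard step-size schedule
        x = (1.0 - gamma) * x + gamma * s
    return x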
Marginal Inference with Frank-Wolfe: To optimize max_{μ⃗∈M} TRW(μ⃗; θ⃗, ρ) with Frank-Wolfe,
the linear subproblem (3) becomes argmax_{μ⃗∈M} ⟨θ̃, μ⃗⟩, where the perturbed potentials θ̃ correspond
to the gradient of TRW(μ⃗; θ⃗, ρ) with respect to μ⃗. Elements of θ̃ are of the form θ_c(x_c) + K_c(1 +
log μ_c(x_c)), evaluated at the pseudomarginals' current location in M, where K_c is the coefficient
of the entropy for the node/edge term in (2). The FW linear subproblem here is thus equivalent
to performing MAP inference in a graphical model with potentials θ̃ (Belanger et al., 2013), as
the vertices of the marginal polytope are in 1-1 correspondence with valid joint assignments to the
random variables of the MRF, and the solution of a linear program is always achieved at a vertex
of the polytope. The TRW objective does not have a Lipschitz continuous gradient over M, and so
standard convergence proofs for Frank-Wolfe do not hold.
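As a concrete illustration (ours, with assumed data structures: dictionaries mapping nodes and edges to NumPy potential tables), the perturbed potentials θ̃ handed to the MAP solver can be assembled as follows; K_node[i] and K_edge[ij] denote the signed entropy coefficients from Eq. (2):

import numpy as np

def perturbed_potentials(theta_n, theta_e, mu_n, mu_e, K_node, K_edge):
    # tilde(theta)_c(x_c) = theta_c(x_c) + K_c * (1 + log mu_c(x_c)),
    # evaluated at the current pseudomarginals (assumed strictly positive).
    tn = {i: th + K_node[i] * (1.0 + np.log(mu_n[i]))
          for i, th in theta_n.items()}
    te = {ij: th + K_edge[ij] * (1.0 + np.log(mu_e[ij]))
          for ij, th in theta_e.items()}
    return tn, te  # pass these to a black-box MAP solver as new potentials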
3 Optimizing over Contractions of the Marginal Polytope
Motivation: We wish to (1) use the fewest possible MAP calls, and (2) avoid regions near the
boundary where the unbounded curvature of the function slows down convergence. A viable option
to address (1) is through the use of correction steps, where after a Frank-Wolfe step, one optimizes over the polytope defined by previously visited vertices of M (called the fully-corrective
Frank-Wolfe (FCFW) algorithm and proven to be linearly convergence for strongly convex objectives (Lacoste-Julien and Jaggi, 2015)). This does not require additional MAP calls. However, we
found (see Sec. 5) that when optimizing the TRW objective over M, performing correction steps can
surprisingly hurt performance. This leaves us in a dilemma: correction steps enable decreasing the
objective without additional MAP calls, but they can also slow global progress since iterates after
correction sometimes lie close to the boundary of the polytope (where the FW directions become
less informative). In a manner akin to barrier methods and to Garber and Hazan (2013)?s local linear
oracle, our proposed solution maintains the iterates within a contraction of the polytope. This gives
us most of the mileage obtained from performing the correction steps without suffering the consequences of venturing too close to the boundary of the polytope. We prove a global convergence rate
for the iterates with respect to the true solution over the full polytope.
We describe convergent algorithms to optimize TRW(μ⃗; θ⃗, ρ) for μ⃗ ∈ M. The approach we adopt
to deal with the issue of unbounded gradients at the boundary is to perform Frank-Wolfe within
a contraction of the marginal polytope given by M_δ for δ ∈ [0, 1], with either a fixed δ or an
adaptive δ.

Definition 3.1 (Contraction polytope). M_δ := (1 − δ)M + δ u0, where u0 ∈ M is the vector
representing the uniform distribution.

Marginal vectors that lie within M_δ are bounded away from zero as all the components of u0 are
strictly positive. Denoting V^(δ) as the set of vertices of M_δ, V as the set of vertices of M, and
f(μ⃗) := −TRW(μ⃗; θ⃗, ρ), the key insight that enables our novel approach is that:

argmin_{v^(δ) ∈ V^(δ)} ⟨∇f, v^(δ)⟩   (Linear Minimization over M_δ)
  ≡ argmin_{v ∈ V} ⟨∇f, (1 − δ)v + δu0⟩   (Definition of v^(δ))
  ≡ (1 − δ) argmin_{v ∈ V} ⟨∇f, v⟩ + δu0.   (Run MAP solver and shift vertex)
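In code, the identity above means the linear minimization over M_δ costs exactly one black-box MAP call (a sketch we add for illustration; all inputs are NumPy arrays):

def fw_vertex_contracted(grad, map_solver, u0, delta):
    # argmin over the contracted polytope M_delta, via one MAP call:
    #   (1 - delta) * argmin_{v in V} <grad, v> + delta * u0
    v = map_solver(grad)   # vertex of M: the indicator of a MAP assignment
    return (1.0 - delta) * v + delta * u0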
¹ I.e. ‖∇f(x) − ∇f(x′)‖_* ≤ L‖x − x′‖ for x, x′ ∈ D. Notice that the dual norm ‖·‖_* is needed here.
Algorithm 1: Updates to δ after a MAP call (Adaptive-δ variant)
1: At iteration k. Assuming x^(k), u0, δ^(k−1), f are defined and s^(k) has been computed
2: Compute g(x^(k)) = ⟨−∇f(x^(k)), s^(k) − x^(k)⟩ (Compute FW gap)
3: Compute g_u(x^(k)) = ⟨−∇f(x^(k)), u0 − x^(k)⟩ (Compute "uniform gap")
4: if g_u(x^(k)) < 0 then
5:   Let δ̂ = g(x^(k)) / (−4 g_u(x^(k))) (Compute new proposal for δ)
6:   if δ̂ < δ^(k−1) then
7:     δ^(k) = min(δ̂, δ^(k−1)/2) (Shrink by at least a factor of two if proposal is smaller)
8:   end if
9: end if (and set δ^(k) = δ^(k−1) if it was not updated)
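The δ-update of Algorithm 1 is small enough to state directly in code (our sketch; grad is ∇f(x^(k)), s the FW vertex, u0 the uniform point, and all inputs are NumPy arrays):

def update_delta(grad, x, s, u0, delta_prev):
    # Alg. 1: shrink delta when contracting toward u0 hurts the objective.
    gap = float(-grad @ (s - x))     # FW gap g(x)
    gap_u = float(-grad @ (u0 - x))  # "uniform gap" g_u(x)
    if gap_u < 0:
        delta_hat = gap / (-4.0 * gap_u)
        if delta_hat < delta_prev:
            return min(delta_hat, delta_prev / 2.0)
    return delta_prev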
Therefore, to solve the FW subproblem (3) over M_δ, we can run as usual a MAP solver and simply
shift the resulting vertex of M towards u0 to obtain a vertex of M_δ. Our solution to optimize over
restrictions of the polytope is more broadly applicable to the optimization problem defined below,
with f satisfying Prop. 3.3 (satisfied by the TRW objective) in order to get convergence rates.

Problem 3.2. Solve min_{x∈D} f(x) where D is a compact convex set and f is convex and continuously differentiable on the relative interior of D.

Property 3.3 (Controlled growth of Lipschitz constant over D_δ). We define D_δ := (1 − δ)D + δu0
for a fixed u0 in the relative interior of D. We suppose that there exists a fixed p ≥ 0 and L such
that for any δ > 0, ∇f(x) has a bounded Lipschitz constant L_δ ≤ Lδ^(−p) ∀x ∈ D_δ.
Fixed δ: The first algorithm fixes a value for δ a priori and performs the optimization over D_δ. The
following theorem bounds the sub-optimality of the iterates with respect to the optimum over D.

Theorem 3.4 (Suboptimality bound for fixed-δ algorithm). Let f satisfy the properties in Prob. 3.2
and Prop. 3.3, and suppose further that f is finite on the boundary of D. Then the use of Frank-Wolfe
for min_{x∈D_δ} f(x) realizes a sub-optimality over D bounded as:

f(x^(k)) − f(x*) ≤ 2C_δ / (k + 2) + ω(δ diam(D)),

where x* is the optimal solution in D, C_δ ≤ L_δ diam_{‖·‖}(D_δ)², and ω is the modulus of continuity
function of the (uniformly) continuous f (in particular, ω(δ) → 0 as δ → 0).

The full proof is given in App. C. The first term of the bound comes from the standard Frank-Wolfe
convergence analysis of the sub-optimality of x^(k) relative to x*_(δ), the optimum over D_δ, as in (4)
and using Prop. 3.3. The second term arises by bounding f(x*_(δ)) − f(x*) ≤ f(x̃) − f(x*) with a
cleverly chosen x̃ ∈ D_δ (as x*_(δ) is optimal in D_δ). We pick x̃ := (1 − δ)x* + δu0 and note that
‖x̃ − x*‖ ≤ δ diam(D). As f is continuous on a compact set, it is uniformly continuous and we
thus have f(x̃) − f(x*) ≤ ω(δ diam(D)) with ω its modulus of continuity function.
Adaptive δ: The second variant to solve min_{x∈D} f(x) iteratively performs FW steps over D_δ, but
also decreases δ adaptively. The update schedule for δ is given in Alg. 1 and is motivated by the
convergence proof. The idea is to ensure that the FW gap over D_δ is always at least half the FW
gap over D, relating the progress over D_δ with the one over D. It turns out that FW-gap-D_δ =
(1 − δ)·FW-gap-D + δ·g_u(x^(k)), where the "uniform gap" g_u(x^(k)) quantifies the decrease of the
function when contracting towards u0. When g_u(x^(k)) is negative and large compared to the FW
gap, we need to shrink δ (see step 5 in Alg. 1) to ensure that the δ-modified direction is a sufficient
descent direction. We can show that the algorithm converges to the global solution as follows:

Theorem 3.5 (Global convergence for adaptive-δ variant over D). For a function f satisfying the
properties in Prob. 3.2 and Prop. 3.3, the sub-optimality of the iterates obtained by running the FW
updates over D_δ with δ updated according to Alg. 1 is bounded as:

f(x^(k)) − f(x*) ≤ O(k^(−1/(p+1))).

A full proof with a precise rate and constants is given in App. D. The sub-optimality h_k := f(x^(k)) −
f(x*) traverses three stages with an overall rate as above. The updates to δ^(k) as in Alg. 1 enable us
4
Algorithm 2: Approximate marginal inference over $\mathbb{M}$ (solving (1)). Here $f$ is the negative TRW objective.
1: Function TRW-Barrier-FW($\rho^{(0)}$, $\epsilon$, $\delta^{(\mathrm{init})}$, $u_0$):
2: Inputs: Edge-appearance probabilities $\rho^{(0)}$; initial contraction of the polytope $\delta^{(\mathrm{init})} \leq \frac{1}{4}$; inner-loop stopping criterion $\epsilon$; fixed reference point $u_0$ in the interior of $\mathbb{M}$. Let $\delta^{(-1)} = \delta^{(\mathrm{init})}$.
3: Let $V := \{u_0\}$ (visited vertices), $x^{(0)} = u_0$ (initialize the algorithm at the uniform distribution)
4: for $i = 0 \ldots$ MAX_RHO_ITS do {FW outer loop to optimize $\rho$ over $\mathbb{T}$}
5:   for $k = 0 \ldots$ MAXITS do {FCFW inner loop to optimize $x$ over $\mathbb{M}$}
6:     Let $\tilde\theta = \nabla f(x^{(k)}; \vec\theta, \rho^{(i)})$ (compute gradient)
7:     Let $s^{(k)} \in \arg\min_{v \in \mathbb{M}} \langle \tilde\theta, v \rangle$ (run MAP solver to compute FW vertex)
8:     Compute $g(x^{(k)}) = \langle -\tilde\theta, s^{(k)} - x^{(k)} \rangle$ (inner-loop FW duality gap)
9:     if $g(x^{(k)}) \leq \epsilon$ then
10:      break FCFW inner loop ($x^{(k)}$ is $\epsilon$-optimal)
11:    end if
12:    $\delta^{(k)} = \delta^{(k-1)}$ (for adaptive-$\delta$: run Alg. 1 to modify $\delta$)
13:    Let $s^{(k)}_{(\delta)} = (1 - \delta^{(k)}) s^{(k)} + \delta^{(k)} u_0$ and $d^{(k)}_{(\delta)} = s^{(k)}_{(\delta)} - x^{(k)}$ ($\delta$-contracted quantities)
14:    $x^{(k+1)} = \arg\min \{ f(x^{(k)} + \gamma \, d^{(k)}_{(\delta)}) : \gamma \in [0, 1] \}$ (FW step with line search)
15:    Update correction polytope: $V := V \cup \{s^{(k)}\}$
16:    $x^{(k+1)} :=$ CORRECTION($x^{(k+1)}$, $V$, $\delta^{(k)}$, $\rho^{(i)}$) (optional: correction step)
17:    $x^{(k+1)}, V_{\mathrm{search}} :=$ LOCALSEARCH($x^{(k+1)}$, $s^{(k)}$, $\delta^{(k)}$, $\rho^{(i)}$) (optional: fast MAP solver)
18:    Update correction polytope (with vertices from LOCALSEARCH): $V := V \cup V_{\mathrm{search}}$
19:  end for
20:  $\vec\rho_v \leftarrow$ minSpanTree(edgesMI($x^{(k)}$)) (FW vertex of the spanning tree polytope)
21:  $\rho^{(i+1)} \leftarrow \rho^{(i)} + \gamma_i (\vec\rho_v - \rho^{(i)})$ with a fixed step-size schedule $\gamma_i \propto \frac{1}{i+2}$ (FW update for $\rho$, kept in relint($\mathbb{T}$))
22:  $x^{(0)} \leftarrow x^{(k)}$, $\delta^{(-1)} \leftarrow \delta^{(k-1)}$ (re-initialize for the FCFW inner loop)
23:  If $i <$ MAX_RHO_ITS then $x^{(0)} =$ CORRECTION($x^{(0)}$, $V$, $\delta^{(-1)}$, $\rho^{(i+1)}$)
24: end for
25: return $x^{(0)}$ and $\rho^{(i)}$
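For orientation, a compact Python sketch of the FCFW inner loop (lines 5-19 of Alg. 2): `map_oracle` stands in for the combinatorial MAP solver of line 7, the exact line search of line 14 is replaced by a coarse grid, and the CORRECTION/LOCALSEARCH steps are omitted, so this is a simplified reading rather than the full algorithm:

```python
import numpy as np

def fcfw_inner_loop(f, grad, map_oracle, x0, u0, delta, eps, max_its=50):
    """Frank-Wolfe over the contracted marginal polytope M_delta.
    map_oracle(theta) returns a vertex of M minimizing <theta, v>
    (one MAP call per iteration)."""
    x, V = x0.copy(), [u0]
    for k in range(max_its):
        theta = grad(x)
        s = map_oracle(theta)                    # FW vertex via MAP (line 7)
        gap = float(-theta @ (s - x))            # duality gap (line 8)
        if gap <= eps:
            break                                # x is eps-optimal (line 10)
        s_delta = (1 - delta) * s + delta * u0   # contracted vertex (line 13)
        d = s_delta - x
        gammas = np.linspace(0.0, 1.0, 101)      # crude line search (line 14)
        x = min((x + gam * d for gam in gammas), key=f)
        V.append(s)                              # correction polytope (line 15)
    return x, V
```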
Application to the TRW objective: $\min_{\vec\rho \in \mathbb{T}} \min_{\vec\mu \in \mathbb{M}} -\text{TRW}(\vec\mu; \vec\theta, \vec\rho)$ is akin to $\min_{x \in \mathcal{D}} f(x)$, and the (strong) convexity of $-\text{TRW}(\vec\mu; \vec\theta, \vec\rho)$ has been previously shown (Wainwright et al., 2005; London et al., 2015). The gradient of the TRW objective is Lipschitz continuous over $\mathbb{M}_\delta$ since all marginals are strictly positive. Its growth for Prop. 3.3 can be bounded with $p = 1$, as we show in App. E.1. This gives a rate of convergence of $O(k^{-1/2})$ for the adaptive-$\delta$ variant, which interestingly is a typical rate for non-smooth convex optimization. The hidden constant is of the order $O(\|\vec\theta\| \cdot |V|)$. The modulus of continuity $\omega$ for the TRW objective is close to linear (it is almost a Lipschitz function), and its constant is instead of the order $O(\|\vec\theta\| + |V|)$.
4 Algorithm
Alg. 2 gives the pseudocode of our proposed algorithm for marginal inference with $\text{TRW}(\vec\mu; \vec\theta, \vec\rho)$. minSpanTree finds the minimum spanning tree of a weighted graph, and edgesMI($\vec\mu$) computes the mutual information of the edges of $G$ from the pseudomarginals in $\vec\mu$,² in order to perform FW updates over $\rho$ as in Alg. 2 of Wainwright et al. (2005). It is worthwhile to note that our approach uses three levels of Frank-Wolfe: (1) for the (tightening) optimization of $\rho$ over $\mathbb{T}$; (2) to perform approximate marginal inference, i.e., for the optimization of $\vec\mu$ over $\mathbb{M}$; and (3) to perform the correction steps (lines 16 and 23).

² The component $ij$ has value $H(\mu_i) + H(\mu_j) - H(\mu_{ij})$.
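For concreteness, a small sketch of the two subroutines used in line 20 of Alg. 2, with the edge term computed exactly as in the footnote; the graph representation and the tiny weight offset (SciPy drops explicit zero weights) are implementation choices of ours:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def edges_mi(mu_pairwise):
    """edgesMI: per-edge mutual information H(mu_i) + H(mu_j) - H(mu_ij),
    where mu_pairwise[e] is the 2x2 pairwise pseudomarginal of edge e."""
    def H(p):
        p = np.clip(np.asarray(p, dtype=float).ravel(), 1e-12, 1.0)
        return -np.sum(p * np.log(p))
    return np.array([H(m.sum(axis=1)) + H(m.sum(axis=0)) - H(m)
                     for m in mu_pairwise])

def min_span_tree_vertex(n_nodes, edges, weights):
    """minSpanTree: 0/1 edge indicator of a minimum spanning tree,
    i.e. a vertex of the spanning tree polytope."""
    W = np.zeros((n_nodes, n_nodes))
    for (i, j), w in zip(edges, weights):
        W[i, j] = w + 1e-9            # offset: SciPy treats 0 as "no edge"
    T = minimum_spanning_tree(W).toarray()
    return np.array([1.0 if (T[i, j] or T[j, i]) else 0.0
                     for (i, j) in edges])
```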
We detail a few heuristics that aid practicality.

Fast Local Search: Fast methods for MAP inference such as Iterated Conditional Modes (Besag, 1986) offer a cheap alternative to a more expensive combinatorial MAP solver. We warm-start the ICM solver with the last found vertex $s^{(k)}$ of the marginal polytope. The subroutine LOCALSEARCH (Alg. 6 in the Appendix) performs a fixed number of FW updates to the pseudomarginals using ICM as the (approximate) MAP solver.
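A minimal ICM sweep for a binary pairwise MRF in the $\pm 1$ parameterization used by the synthetic experiments below; each coordinate update greedily maximizes $\sum_i \theta_i x_i + \sum_{ij} \theta_{ij} x_i x_j$, which is what makes ICM a cheap, warm-startable stand-in for an exact MAP solver:

```python
def icm(theta_i, edges, theta_ij, x0, sweeps=10):
    """Iterated Conditional Modes (Besag, 1986) with states in {-1, +1}."""
    x = list(x0)
    nbrs = [[] for _ in theta_i]
    for e, (i, j) in enumerate(edges):
        nbrs[i].append((j, theta_ij[e]))
        nbrs[j].append((i, theta_ij[e]))
    for _ in range(sweeps):
        changed = False
        for i in range(len(x)):
            # Sign of the local field decides the best state for node i.
            field = theta_i[i] + sum(w * x[j] for j, w in nbrs[i])
            new = 1 if field >= 0 else -1
            changed |= (new != x[i])
            x[i] = new
        if not changed:
            break                      # a local optimum has been reached
    return x
```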
Re-optimizing over the Vertices of $\mathbb{M}$ (FCFW algorithm): As the iterations of FW progress, we keep track of the vertices of the marginal polytope found by Alg. 2 in the set $V$. We make use of these vertices in the CORRECTION subroutine (Alg. 5 in the Appendix), which re-optimizes the objective function over (a contraction of) the convex hull of the elements of $V$ (called the correction polytope). $x^{(0)}$ in Alg. 2 is initialized to the uniform distribution, which is guaranteed to be in $\mathbb{M}$ (and $\mathbb{M}_\delta$). After updating $\rho$, we set $x^{(0)}$ to the approximate minimizer in the correction polytope. The intuition is that changing $\rho$ by a small amount may not substantially modify the optimal $x^*$ (for the new $\rho$), and that the new optimum might lie in the convex hull of the vertices found thus far. If so, CORRECTION will find it without resorting to any additional MAP calls. This encourages the MAP solver to search for new, unique vertices instead of rediscovering old ones.
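The CORRECTION idea admits a very small sketch: since the correction polytope is the convex hull of the short list $V$, re-optimization is just Frank-Wolfe whose linear oracle scans that list, so no MAP calls are needed. Using plain FW with the fixed $2/(k+2)$ step schedule here is our simplification of Alg. 5:

```python
def correction(grad, V, delta, u0, x0, iters=50):
    """Re-optimize over a delta-contraction of conv(V)."""
    Vd = [(1 - delta) * v + delta * u0 for v in V]   # contracted vertices
    x = x0.copy()
    for k in range(iters):
        g = grad(x)
        s = min(Vd, key=lambda v: float(g @ v))      # oracle = scan of V
        x = x + (2.0 / (k + 2)) * (s - x)
    return x
```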
Approximate MAP Solvers: We can swap out the exact MAP solver for an approximate MAP solver. The primal objective plus the (approximate) duality gap may no longer be an upper bound on the log-partition function (black-box MAP solvers could be considered to optimize over an inner bound to the marginal polytope). Furthermore, the gap over $\mathcal{D}$ may be negative if the approximate MAP solver fails to find a direction of descent. Since adaptive-$\delta$ requires that the gap be positive in Alg. 1, we take the maximum of the last gap obtained over the correction polytope (which is always non-negative) and the computed gap over $\mathcal{D}$ as a heuristic.
Theoretically, one could obtain convergence rates similar to Thms. 3.4 and 3.5 using an approximate MAP solver that has a multiplicative guarantee on the gap (line 8 of Alg. 2), as was done previously for FW-like algorithms (see, e.g., Thm. C.1 in Lacoste-Julien et al. (2013)). With an $\epsilon$-additive error guarantee on the MAP solution, one can prove similar rates up to a suboptimality error of $\epsilon$. Even if the approximate MAP solver does not provide an approximation guarantee, if it returns an upper bound on the value of the MAP assignment (as do branch-and-cut solvers for integer linear programs, or Sontag et al. (2008)), one can use this to obtain an upper bound on $\log Z$ (see App. J).
5 Experimental Results
Setup: The $\ell_1$ error in marginals is computed as $\Delta_\mu := \frac{1}{N}\sum_{i=1}^{N} |\mu_i(1) - \hat\mu_i(1)|$. When using exact MAP inference, the error in $\log Z$ (denoted $\Delta_{\log Z}$) is computed by adding the duality gap to the primal (since this guarantees an upper bound). For approximate MAP inference, we plot the primal objective. We use a non-uniform initialization of $\rho$ computed with the Matrix Tree Theorem (Sontag and Jaakkola, 2007; Koo et al., 2007). We perform 10 updates to $\rho$, optimize $\vec\mu$ to a duality gap of 0.5 on $\mathbb{M}$, and always perform correction steps. We use LOCALSEARCH only for the real-world instances. We use the implementations of TRBP and the Junction Tree Algorithm (to compute exact marginals) in libDAI (Mooij, 2010). Unless specified, we compute marginals by optimizing the TRW objective using the adaptive-$\delta$ variant of the algorithm (denoted in the figures as $\mathbb{M}_\delta$).

MAP Solvers: For approximate MAP, we run three solvers in parallel: QPBO (Kolmogorov and Rother, 2007; Boykov and Kolmogorov, 2004), TRW-S (Kolmogorov, 2006), and ICM (Besag, 1986) using OpenGM (Andres et al., 2012), and use the result that realizes the highest energy. For exact inference, we use Gurobi Optimization (2015) or toulbar2 (Allouche et al., 2010).
Test Cases: All of our test cases are binary pairwise MRFs. (1) Synthetic 10-node cliques: Same setup as Sontag and Jaakkola (2007, Fig. 2), with 9 sets of 100 instances each, with coupling strength drawn from $U[-\theta, \theta]$ for $\theta \in \{0.5, 1, 2, \ldots, 8\}$. (2) Synthetic Grids: 15 trials with $5 \times 5$ grids. We sample $\theta_i \sim U[-1, 1]$ and $\theta_{ij} \sim U[-4, 4]$ for nodes and edges. The potentials were $(\theta_i, -\theta_i)$ for nodes and $(\theta_{ij}, -\theta_{ij}; -\theta_{ij}, \theta_{ij})$ for edges. (3) Restricted Boltzmann Machines (RBMs): From the Probabilistic Inference Challenge 2011.³ (4) Horses: Large ($N \approx 12000$) MRFs representing images from the Weizmann Horse Data (Borenstein and Ullman, 2002), with potentials learned by Domke (2013). (5) Chinese Characters: An image-completion task from the KAIST Hanja2 database, compiled in OpenGM by Andres et al. (2012). The potentials were learned using Decision Tree Fields (Nowozin et al., 2011). The MRF is not a grid due to skip edges that tie nodes at various offsets. The potentials are a combination of submodular and supermodular terms and therefore a harder task for inference algorithms.

³ http://www.cs.huji.ac.il/project/PASCAL/index.php
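The synthetic grid instances are straightforward to reproduce; a sketch following the sampling ranges quoted above (edge indexing and the seed are arbitrary choices of ours):

```python
import numpy as np

def sample_grid_instance(n=5, seed=0):
    """One 5x5 grid trial: theta_i ~ U[-1, 1], theta_ij ~ U[-4, 4] on a
    4-connected grid, in the +-1 parameterization described above."""
    rng = np.random.default_rng(seed)
    theta_i = rng.uniform(-1.0, 1.0, size=n * n)
    edges = [(r * n + c, r * n + c + 1)
             for r in range(n) for c in range(n - 1)]        # horizontal
    edges += [(r * n + c, (r + 1) * n + c)
              for r in range(n - 1) for c in range(n)]       # vertical
    theta_ij = rng.uniform(-4.0, 4.0, size=len(edges))
    return theta_i, edges, theta_ij
```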
[Figure 1: six panels of synthetic results. (a) $\Delta_{\log Z}$ on $5 \times 5$ grids, $\mathbb{M}$ vs. $\mathbb{M}_\delta$; (b) $\Delta_{\log Z}$ on 10-node cliques, $\mathbb{M}$ vs. $\mathbb{M}_\delta$; (c) $\Delta_\mu$ on $5 \times 5$ grids, approximate vs. exact MAP; (d) $\Delta_{\log Z}$ on a 40-node RBM, approximate vs. exact MAP; (e) $\Delta_\mu$ on 10-node cliques, optimization over $\mathbb{T}$; (f) $\Delta_{\log Z}$ on 10-node cliques, optimization over $\mathbb{T}$. Panels (a)-(d) plot error against the number of MAP calls (curves: $\mathbb{M}$, $\mathbb{M}$ without correction, $\mathbb{M}_{0.0001}$, $\mathbb{M}_\delta$, $\mathbb{L}_\delta$, exact vs. approximate MAP $\mathbb{M}_\delta$); panels (e)-(f) plot error against the coupling strength $\theta$ (curves: perturbMAP, $\mathbb{L}_\delta$, $\mathbb{L}_\delta$ ($\rho_{opt}$), $\mathbb{M}_\delta$ ($\rho_{opt}$), $\mathbb{M}_\delta$).]

Figure 1: Synthetic Experiments: In Fig. 1(c) & 1(d), we unravel MAP calls across updates to $\rho$. Fig. 1(d) corresponds to a single RBM (not an aggregate over trials) where for "Approx MAP" we plot the absolute error between the primal objective and $\log Z$ (not guaranteed to be an upper bound).
On the Optimization of $\mathbb{M}$ versus $\mathbb{M}_\delta$
We compare the performance of Alg. 2 when optimizing over $\mathbb{M}$ (with and without correction), over $\mathbb{M}_\delta$ with fixed $\delta = 0.0001$ (denoted $\mathbb{M}_{0.0001}$), and over $\mathbb{M}_\delta$ using the adaptive-$\delta$ variant. These plots are averaged across all trials for the first iteration of optimizing over $\mathbb{T}$. We show error as a function of the number of MAP calls, since this is the bottleneck for large MRFs. Figs. 1(a) and 1(b) depict the results of this optimization aggregated across trials. We find that all variants settle on the same average error. The adaptive-$\delta$ variant converges fastest on average, followed by the fixed-$\delta$ variant. Despite relatively quick convergence for $\mathbb{M}$ with no correction on the grids, we found that correction was crucial to reducing the number of MAP calls in subsequent steps of inference after updates to $\rho$. As highlighted earlier, correction steps on $\mathbb{M}$ (in blue) worsen convergence, an effect brought about by iterates wandering too close to the boundary of $\mathbb{M}$.
On the Applicability of Approximate MAP Solvers
Synthetic Grids: Fig. 1(c) depicts the accuracy of approximate versus exact MAP solvers aggregated across trials for $5 \times 5$ grids. The results using approximate MAP inference are competitive with those of exact inference, even as the optimization is tightened over $\mathbb{T}$. This is an encouraging and non-intuitive result, since it indicates that one can achieve high-quality marginals through the use of relatively cheap approximate MAP oracles.
RBMs: As in Salakhutdinov (2008), we observe for RBMs that the bound provided by $\text{TRW}(\vec\mu; \vec\theta, \vec\rho)$ over $\mathbb{L}_\delta$ is loose and does not get better when optimizing over $\mathbb{T}$. As Fig. 1(d) depicts for a single RBM, optimizing over $\mathbb{M}_\delta$ realizes significant gains in the upper bound on $\log Z$, which improve with updates to $\rho$. The gains are preserved with the use of approximate MAP solvers. Note that there are also fast approximate MAP solvers designed specifically for RBMs (Wang et al., 2014).
Horses: See Fig. 2 (right). The models are close to submodular, and the local relaxation is a good approximation to the marginal polytope. Our marginals are visually similar to those obtained by TRBP, and our algorithm is able to scale to large instances by using approximate MAP solvers.
[Figure 2: panels show Ground Truth, MAP, TRBP marginals, and our marginals (COND-0.01 after FW(1) and FW(10), with and without optimized $\rho$) for the Chinese Characters (left) and Horses (right) test cases.]

Figure 2: Results on real-world test cases. FW($i$) corresponds to the final marginals at the $i$th iteration of optimizing $\rho$. The highlighted area on the Chinese Characters depicts the region of uncertainty.
On the Importance of Optimizing over $\mathbb{T}$
Synthetic Cliques: In Figs. 1(e) and 1(f), we study the effect of tightening over $\mathbb{T}$ against the coupling strength $\theta$. We consider the $\Delta_\mu$ and $\Delta_{\log Z}$ obtained for the final marginals before updating $\rho$ (step 19) and compare to the values obtained after optimizing over $\mathbb{T}$ (marked with $\rho_{opt}$). The optimization over $\mathbb{T}$ has little effect on TRW optimized over $\mathbb{L}_\delta$. For optimization over $\mathbb{M}_\delta$, updating $\rho$ realizes better marginals and a better bound on $\log Z$ (over and above those obtained in Sontag and Jaakkola (2007)).
Chinese Characters: Fig. 2 (left) displays marginals across iterations of optimizing over $\mathbb{T}$. The submodular and supermodular potentials lead to frustrated models for which $\mathbb{L}_\delta$ is very loose, which results in TRBP obtaining poor results.⁴ Our method produces reasonable marginals even before the first update to $\rho$, and these improve with tightening over $\mathbb{T}$.
Related Work for Marginal Inference with MAP Calls
Hazan and Jaakkola (2012) estimate $\log Z$ by averaging MAP estimates obtained on randomly perturbed inflated graphs. Our implementation of the method performed well in approximating $\log Z$, but the marginals (estimated by fixing the value of each random variable and estimating $\log Z$ for the resulting graph) were less accurate than those of our method (Figs. 1(e), 1(f)).
6 Discussion
We introduce the first provably convergent algorithm for the TRW objective over the marginal polytope, under the assumption of exact MAP oracles. We quantify the gains obtained both from marginal inference over $\mathbb{M}$ and from tightening over the spanning tree polytope. We give heuristics that improve the scalability of Frank-Wolfe when used for marginal inference. The runtime cost of iterative MAP calls (a reasonable rule of thumb is to assume an approximate MAP call takes roughly the same time as a run of TRBP) is worthwhile, particularly in cases such as the Chinese Characters where $\mathbb{L}$ is loose. Specifically, our algorithm is appropriate for domains where marginal inference is hard but there exist efficient MAP solvers capable of handling non-submodular potentials. Code is available at https://github.com/clinicalml/fw-inference.

Our work creates a flexible, modular framework for optimizing a broad class of variational objectives, not simply TRW, with guarantees of convergence. We hope that this will encourage more research on building better entropy approximations. The framework we adopt is more generally applicable to optimizing functions whose gradients tend to infinity at the boundary of the domain. Our method for dealing with gradients that diverge at the boundary bears resemblance to barrier functions used in interior-point methods, insofar as they bound the solution away from the constraints. Iteratively decreasing $\delta$ in our framework can be compared to decreasing the strength of the barrier, enabling the iterates to get closer to the facets of the polytope, although it is worthwhile to note that we have an adaptive method of doing so.
Acknowledgements
RK and DS gratefully acknowledge the support of the DARPA Probabilistic Programming for Advancing Machine Learning (PPAML) Program under AFRL prime contract no. FA8750-14-C-0005.

⁴ We run TRBP for 1000 iterations using damping 0.9; the algorithm converges with a max-norm difference between consecutive iterates of 0.002. Tightening over $\mathbb{T}$ did not significantly change the results of TRBP.
References
D. Allouche, S. de Givry, and T. Schiex. Toulbar2, an open source exact cost function network solver, 2010.
B. Andres, B. T., and J. H. Kappes. OpenGM: A C++ library for discrete graphical models, June 2012.
D. Belanger, D. Sheldon, and A. McCallum. Marginal inference in MRFs using Frank-Wolfe. NIPS Workshop
on Greedy Optimization, Frank-Wolfe and Friends, 2013.
J. Besag. On the statistical analysis of dirty pictures. J R Stat Soc Series B, 1986.
E. Borenstein and S. Ullman. Class-specific, top-down segmentation. In ECCV, 2002.
Y. Boykov and V. Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. TPAMI, 2004.
J. Domke. Learning graphical model parameters with approximate marginal inference. TPAMI, 2013.
S. Ermon, C. P. Gomes, A. Sabharwal, and B. Selman. Taming the curse of dimensionality: Discrete integration
by hashing and optimization. In ICML, 2013.
D. Garber and E. Hazan. A linearly convergent conditional gradient algorithm with applications to online and
stochastic optimization. arXiv preprint arXiv:1301.4666, 2013.
A. Globerson and T. Jaakkola. Convergent propagation algorithms via oriented trees. In UAI, 2007.
I. Gurobi Optimization. Gurobi optimizer reference manual, 2015.
T. Hazan and T. Jaakkola. On the partition function and random maximum a-posteriori perturbations. In ICML,
2012.
M. Jaggi. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In ICML, 2013.
J. Jancsary and G. Matz. Convergent decomposition solvers for tree-reweighted free energies. In AISTATS,
2011.
J. Kappes et al. A comparative study of modern inference techniques for discrete energy minimization problems. In CVPR, 2013.
V. Kolmogorov. Convergent tree-reweighted message passing for energy minimization. TPAMI, 2006.
V. Kolmogorov and C. Rother. Minimizing nonsubmodular functions with graph cuts - a review. TPAMI, 2007.
T. Koo, A. Globerson, X. Carreras, and M. Collins. Structured prediction models via the matrix-tree theorem.
In EMNLP-CoNLL, 2007.
S. Lacoste-Julien and M. Jaggi. On the global linear convergence of Frank-Wolfe optimization variants. In
NIPS, 2015.
S. Lacoste-Julien, M. Jaggi, M. Schmidt, and P. Pletscher. Block-coordinate Frank-Wolfe optimization for
structural SVMs. In ICML, 2013.
B. London, B. Huang, and L. Getoor. The benefits of learning with strongly convex approximate inference. In
ICML, 2015.
J. M. Mooij. libDAI: A free and open source C++ library for discrete approximate inference in graphical
models. JMLR, 2010.
S. Nowozin, C. Rother, S. Bagon, T. Sharp, B. Yao, and P. Kohli. Decision tree fields. In ICCV, 2011.
G. Papandreou and A. Yuille. Perturb-and-map random fields: Using discrete optimization to learn and sample
from energy models. In ICCV, 2011.
R. Salakhutdinov. Learning and evaluating Boltzmann machines. Technical report, 2008.
S. Shimony. Finding MAPs for belief networks is NP-hard. Artificial Intelligence, 1994.
D. Sontag and T. Jaakkola. New outer bounds on the marginal polytope. In NIPS, 2007.
D. Sontag, T. Meltzer, A. Globerson, Y. Weiss, and T. Jaakkola. Tightening LP relaxations for MAP using
message-passing. In UAI, 2008.
M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. A new class of upper bounds on the log partition function.
IEEE Transactions on Information Theory, 2005.
S. Wang, R. Frostig, P. Liang, and C. Manning. Relaxations for inference in restricted Boltzmann machines. In
ICLR Workshop, 2014.
5,405 | 5,893 | Practical and Optimal LSH for Angular Distance
Alexandr Andoni?
Columbia University
Piotr Indyk
MIT
Ilya Razenshteyn
MIT
Thijs Laarhoven
TU Eindhoven
Ludwig Schmidt
MIT
Abstract
We show the existence of a Locality-Sensitive Hashing (LSH) family for the angular distance that yields an approximate Near Neighbor Search algorithm with the
asymptotically optimal running time exponent. Unlike earlier algorithms with this
property (e.g., Spherical LSH [1, 2]), our algorithm is also practical, improving
upon the well-studied hyperplane LSH [3] in practice. We also introduce a multiprobe version of this algorithm and conduct an experimental evaluation on real
and synthetic data sets.
We complement the above positive results with a fine-grained lower bound for the
quality of any LSH family for angular distance. Our lower bound implies that the
above LSH family exhibits a trade-off between evaluation time and quality that is
close to optimal for a natural class of LSH functions.
1 Introduction
Nearest neighbor search is a key algorithmic problem with applications in several fields including
computer vision, information retrieval, and machine learning [4]. Given a set of $n$ points $P \subset \mathbb{R}^d$, the goal is to build a data structure that answers nearest neighbor queries efficiently: for a given query point $q \in \mathbb{R}^d$, find the point $p \in P$ that is closest to $q$ under an appropriately chosen distance metric. The main algorithmic design goals are usually a fast query time, a small memory footprint, and, in the approximate setting, a good quality of the returned solution.
There is a wide range of algorithms for nearest neighbor search based on techniques such as space
partitioning with indexing, as well as dimension reduction or sketching [5]. A popular method for
point sets in high-dimensional spaces is Locality-Sensitive Hashing (LSH) [6, 3], an approach that
offers a provably sub-linear query time and sub-quadratic space complexity, and has been shown
to achieve good empirical performance in a variety of applications [4]. The method relies on the
notion of locality-sensitive hash functions. Intuitively, a hash function is locality-sensitive if its
probability of collision is higher for "nearby" points than for points that are "far apart". More formally, two points are nearby if their distance is at most $r_1$, and they are far apart if their distance is at least $r_2 = c \cdot r_1$, where $c > 1$ quantifies the gap between "near" and "far". The quality of a hash function is characterized by two key parameters: $p_1$ is the collision probability for nearby points, and $p_2$ is the collision probability for points that are far apart. The gap between $p_1$ and $p_2$ determines how "sensitive" the hash function is to changes in distance; this property is captured by the parameter $\rho = \frac{\log(1/p_1)}{\log(1/p_2)}$, which can usually be expressed as a function of the distance gap $c$. The problem of designing good locality-sensitive hash functions and LSH-based efficient nearest neighbor search algorithms has attracted significant attention over the last few years.
* The authors are listed in alphabetical order.
In this paper, we focus on LSH for the Euclidean distance on the unit sphere, which is an important
special case for several reasons. First, the spherical case is relevant in practice: Euclidean distance
on a sphere corresponds to the angular distance or cosine similarity, which are commonly used in
applications such as comparing image feature vectors [7], speaker representations [8], and tf-idf data
sets [9]. Moreover, on the theoretical side, the paper [2] shows a reduction from Nearest Neighbor
Search in the entire Euclidean space to the spherical case. These connections lead to a natural
question: what are good LSH families for this special case?
On the theoretical side, the recent work of [1, 2] gives the best known provable guarantees for LSH-based nearest neighbor search w.r.t. the Euclidean distance on the unit sphere. Specifically, their algorithm has a query time of $O(n^\rho)$ and space complexity of $O(n^{1+\rho})$ for $\rho = \frac{1}{2c^2 - 1}$.¹ E.g., for the approximation factor $c = 2$, the algorithm achieves a query time of $n^{1/7 + o(1)}$. At the heart of the algorithm is an LSH scheme called Spherical LSH, which works for unit vectors. Its key property is that it can distinguish between distances $r_1 = \sqrt{2}/c$ and $r_2 = \sqrt{2}$ with probabilities yielding $\rho = \frac{1}{2c^2 - 1}$ (the formula for the full range of distances is more complex and given in Section 3).
Unfortunately, the scheme as described in the paper is not applicable in practice, as it is based on rather complex hash functions that are very time-consuming to evaluate. E.g., simply evaluating a single hash function from [2] can take more time than a linear scan over $10^6$ points. Since an LSH data structure contains many individual hash functions, using their scheme would be slower than a simple linear scan over all points in $P$ unless the number of points $n$ is extremely large.
On the practical side, the hyperplane LSH introduced in the influential work of Charikar [3] has worse theoretical guarantees, but works well in practice. Since the hyperplane LSH can be implemented very efficiently, it is the standard hash function in practical LSH-based nearest neighbor algorithms,² and the resulting implementations have been shown to improve over a linear scan on real data by multiple orders of magnitude [14, 9].
The aforementioned discrepancy between the theory and practice of LSH raises an important question: is there a locality-sensitive hash function with optimal guarantees that also improves over the
hyperplane LSH in practice?
In this paper we show that there is a family of locality-sensitive hash functions that achieves both
objectives. Specifically, the hash functions match the theoretical guarantee of Spherical LSH from [2]
and, when combined with additional techniques, give better experimental results than the hyperplane
LSH. More specifically, our contributions are:
Theoretical guarantees for the cross-polytope LSH. We show that a hash function based on randomly rotated cross-polytopes (i.e., unit balls of the $\ell_1$-norm) achieves the same parameter $\rho$ as the Spherical LSH scheme in [2], assuming data points are unit vectors. While the cross-polytope LSH family has been proposed before [15, 16], we give the first theoretical analysis of its performance.
Fine-grained lower bound for cosine similarity LSH. To highlight the difficulty of obtaining optimal and practical LSH schemes, we prove the first non-asymptotic lower bound on the trade-off between the collision probabilities $p_1$ and $p_2$. So far, the optimal LSH upper bound $\rho = \frac{1}{2c^2 - 1}$ (from [1, 2] and the cross-polytope scheme here) attains this bound only in the limit, as $p_1, p_2 \to 0$. Very small $p_1$ and $p_2$ are undesirable since the hash evaluation time is often proportional to $1/p_2$. Our lower bound proves this is unavoidable: if we require $p_2$ to be large, $\rho$ has to be suboptimal.
This result has two important implications for designing practical hash functions. First, it shows that
the trade-offs achieved by the cross-polytope LSH and the scheme of [1, 2] are essentially optimal.
Second, the lower bound guides the design of future LSH functions: if one is to significantly improve
upon the cross-polytope LSH, one has to design a hash function that is computed more efficiently
than by explicitly enumerating its range (see Section 4 for a more detailed discussion).
Multiprobe scheme for the cross-polytope LSH. The space complexity of an LSH data structure
is sub-quadratic, but even this is often too large (i.e., strongly super-linear in the number of points),
and several methods have been proposed to address this issue. Empirically, the most efficient scheme is multiprobe LSH [14], which leads to a significantly reduced memory footprint for the hyperplane LSH. In order to make the cross-polytope LSH competitive in practice with the multiprobe hyperplane LSH, we propose a novel multiprobe scheme for the cross-polytope LSH.

¹ This running time is known to be essentially optimal for a large class of algorithms [10, 11].
² Note that if the data points are binary, more efficient LSH schemes exist [12, 13]. However, in this paper we consider algorithms for general (non-binary) vectors.
We complement these contributions with an experimental evaluation on both real and synthetic data (SIFT vectors, tf-idf data, and a random point set). In order to make the cross-polytope LSH practical, we combine it with fast pseudo-random rotations [17] via the Fast Hadamard Transform, and with feature hashing [18] to exploit the sparsity of data. Our results show that for data sets with around $10^5$ to $10^8$ points, our multiprobe variant of the cross-polytope LSH is up to $10\times$ faster than an efficient implementation of the hyperplane LSH, and up to $700\times$ faster than a linear scan. To the best of our knowledge, our combination of techniques provides the first "exponent-optimal" algorithm that empirically improves over the hyperplane LSH in terms of query time for an exact nearest neighbor search.
1.1 Related work

The cross-polytope LSH functions were originally proposed in [15]. However, the analysis in that paper was mostly experimental. Specifically, the probabilities $p_1$ and $p_2$ of the proposed LSH functions were estimated empirically using the Monte Carlo method. Similar hash functions were later proposed in [16]. The latter paper also uses the DFT to speed up the matrix-vector multiplication. Both of the aforementioned papers consider only the single-probe algorithm.

There are several works that show lower bounds on the quality of LSH hash functions [19, 10, 20, 11]. However, those papers provide only a lower bound on the $\rho$ parameter for asymptotic values of $p_1$ and $p_2$, as opposed to an actual trade-off between these two quantities. In this paper we provide such a trade-off, with implications as outlined in the introduction.
2 Preliminaries

We use $\|\cdot\|$ to denote the Euclidean (a.k.a. $\ell_2$) norm on $\mathbb{R}^d$. We also use $S^{d-1}$ to denote the unit sphere in $\mathbb{R}^d$ centered at the origin. The Gaussian distribution with mean zero and variance one is denoted by $N(0, 1)$. Let $\mu$ be the normalized Haar measure on $S^{d-1}$ (that is, $\mu(S^{d-1}) = 1$); it corresponds to the uniform distribution over $S^{d-1}$. We also let $u \sim S^{d-1}$ be a point sampled from $S^{d-1}$ uniformly at random. For $\eta \in \mathbb{R}$ we denote
$$ \Phi^c(\eta) = \Pr_{X \sim N(0,1)}[X \geq \eta] = \frac{1}{\sqrt{2\pi}} \int_\eta^\infty e^{-t^2/2} \, dt. $$

We will be interested in Near Neighbor Search on the sphere $S^{d-1}$ with respect to the Euclidean distance. Note that the angular distance can be expressed via the Euclidean distance between normalized vectors, so our results apply to the angular distance as well.

Definition 1. Given an $n$-point dataset $P \subset S^{d-1}$ on the sphere, the goal of the $(c, r)$-Approximate Near Neighbor problem (ANN) is to build a data structure that, given a query $q \in S^{d-1}$ with the promise that there exists a data point $p \in P$ with $\|p - q\| \leq r$, reports a data point $p' \in P$ within distance $cr$ from $q$.

Definition 2. We say that a hash family $\mathcal{H}$ on the sphere $S^{d-1}$ is $(r_1, r_2, p_1, p_2)$-sensitive if, for every $x, y \in S^{d-1}$, one has $\Pr_{h \sim \mathcal{H}}[h(x) = h(y)] \geq p_1$ if $\|x - y\| \leq r_1$, and $\Pr_{h \sim \mathcal{H}}[h(x) = h(y)] \leq p_2$ if $\|x - y\| \geq r_2$.

It is known [6] that an efficient $(r, cr, p_1, p_2)$-sensitive hash family implies a data structure for $(c, r)$-ANN with space $O(n^{1+\rho}/p_1 + dn)$ and query time $O(d \cdot n^\rho / p_1)$, where $\rho = \frac{\log(1/p_1)}{\log(1/p_2)}$.
3 Cross-polytope LSH
In this section, we describe the cross-polytope LSH, analyze it, and show how to make it practical. First, we recall the definition of the cross-polytope LSH [15]: Consider the following hash family $\mathcal{H}$ for points on a unit sphere $S^{d-1} \subset \mathbb{R}^d$. Let $A \in \mathbb{R}^{d \times d}$ be a random matrix with i.i.d. Gaussian entries (a "random rotation"). To hash a point $x \in S^{d-1}$, we compute $y = Ax / \|Ax\| \in S^{d-1}$ and then find the point closest to $y$ from $\{\pm e_i\}_{1 \leq i \leq d}$, where $e_i$ is the $i$-th standard basis vector of $\mathbb{R}^d$. We use the closest neighbor as the hash of $x$.
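In code, one evaluation of this hash is a rotation followed by an argmax over coordinate magnitudes. The sketch below uses an explicit Gaussian matrix (the $O(d^2)$ baseline that Section 3.1 replaces with fast rotations); the integer encoding of the $2d$ possible hash values is our choice:

```python
import numpy as np

def cross_polytope_hash(x, A):
    """Hash x to the signed basis vector closest to Ax / ||Ax||,
    encoded as 2 * coordinate + sign bit in {0, ..., 2d - 1}."""
    y = A @ x
    y = y / np.linalg.norm(y)       # normalization does not change the argmax
    i = int(np.argmax(np.abs(y)))
    return 2 * i + (1 if y[i] < 0 else 0)

# Usage: sample A = np.random.randn(d, d) once per hash function.
```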
The following theorem bounds the collision probability of two points under the family $\mathcal{H}$.

Theorem 1. Suppose that $p, q \in S^{d-1}$ are such that $\|p - q\| = \tau$, where $0 < \tau < 2$. Then
$$ \ln \frac{1}{\Pr_{h \sim \mathcal{H}}[h(p) = h(q)]} = \frac{\tau^2}{4 - \tau^2} \cdot \ln d + O_\tau(\ln \ln d). $$
Before we show how to prove this theorem, we briefly describe its implications. Theorem 1 shows that the cross-polytope LSH achieves essentially the same bounds on the collision probabilities as the (theoretically) optimal LSH for the sphere from [2] (see the section "Spherical LSH" there). In particular, substituting the bounds from Theorem 1 into the standard reduction from Near Neighbor Search to LSH [6], we obtain the following data structure with sub-quadratic space and sublinear query time for Near Neighbor Search on the sphere.

Corollary 1. The $(c, r)$-ANN on the unit sphere $S^{d-1}$ can be solved in space $O(n^{1+\rho} + dn)$ and query time $O(d \cdot n^\rho)$, where $\rho = \frac{1}{c^2} \cdot \frac{4 - c^2 r^2}{4 - r^2} + o(1)$.
We now outline the proof of Theorem 1; the full proof appears in App. B.

Due to the spherical symmetry of Gaussians, we can assume that $p = e_1$ and $q = \alpha e_1 + \beta e_2$, where $\alpha, \beta$ are such that $\alpha^2 + \beta^2 = 1$ and $(\alpha - 1)^2 + \beta^2 = \tau^2$. Then, we expand the collision probability:
$$ \Pr_{h \sim \mathcal{H}}[h(p) = h(q)] = 2d \cdot \Pr_{h \sim \mathcal{H}}[h(p) = h(q) = e_1] = 2d \cdot \Pr_{u, v \sim N(0,1)^d}\big[\forall i\colon |u_i| \leq u_1 \text{ and } |\alpha u_i + \beta v_i| \leq \alpha u_1 + \beta v_1\big] $$
$$ = 2d \cdot \mathbb{E}_{X_1, Y_1}\Big[\Pr_{X_2, Y_2}\big[|X_2| \leq X_1 \text{ and } |\alpha X_2 + \beta Y_2| \leq \alpha X_1 + \beta Y_1\big]^{d-1}\Big], \quad (1) $$
where $X_1, Y_1, X_2, Y_2 \sim N(0, 1)$. Indeed, the first step is due to the spherical symmetry of the hash family; the second step follows from the above discussion about replacing a random orthogonal matrix with a Gaussian one, together with the assumption that $p = e_1$ and $q = \alpha e_1 + \beta e_2$; the last step is due to the independence of the entries of $u$ and $v$.

Thus, proving Theorem 1 reduces to estimating the right-hand side of (1). Note that the probability $\Pr[|X_2| \leq X_1 \text{ and } |\alpha X_2 + \beta Y_2| \leq \alpha X_1 + \beta Y_1]$ is equal to the Gaussian area of the planar set $S_{X_1, Y_1}$ shown in Figure 1a. The latter is heuristically equal to $1 - e^{-\Delta^2/2}$, where $\Delta$ is the distance from the origin to the complement of $S_{X_1, Y_1}$, which is easy to compute (see App. A for the precise statement of this argument). Using this estimate, we compute (1) by taking the outer expectation.
3.1 Making the cross-polytope LSH practical

As described above, the cross-polytope LSH is not quite practical. The main bottleneck is sampling, storing, and applying a random rotation. In particular, multiplying a random Gaussian matrix with a vector takes time proportional to $d^2$, which is infeasible for large $d$.

Pseudo-random rotations. To rectify this issue, we instead use pseudo-random rotations. Instead of multiplying an input vector $x$ by a random Gaussian matrix, we apply the following linear transformation: $x \mapsto H D_3 H D_2 H D_1 x$, where $H$ is the Hadamard transform and $D_i$ for $i \in \{1, 2, 3\}$ is a random diagonal $\pm 1$ matrix. Clearly, this is an orthogonal transformation, which one can store in space $O(d)$ and evaluate in time $O(d \log d)$ using the Fast Hadamard Transform. This is similar to pseudo-random rotations used in the context of LSH [21], dimensionality reduction [17], or compressed sensing [22]. While we are currently not aware how to prove rigorously that such pseudo-random rotations perform as well as fully random ones, empirical evaluations show that three applications of $H D_i$ behave exactly like a true random rotation (as $d$ tends to infinity). We note that two applications of $H D_i$ are not sufficient.
[Figure 1: (a) The set appearing in the analysis of the cross-polytope LSH: $S_{X_1, Y_1} = \{(x, y) : |x| \leq X_1 \text{ and } |\alpha x + \beta y| \leq \alpha X_1 + \beta Y_1\}$, a planar region bounded by the lines $x = \pm X_1$ and $\alpha x + \beta y = \pm(\alpha X_1 + \beta Y_1)$. (b) Trade-off between the sensitivity $\rho$ and the number of parts $T$ for distances $\sqrt{2}/2$ and $\sqrt{2}$ (approximation $c = 2$), comparing the cross-polytope LSH against our lower bound; both bounds tend to $1/7$ (see the discussion in Section 4).]
Feature hashing. While we can apply a pseudo-random rotation in time $O(d \log d)$, even this can be too slow. E.g., consider an input vector $x$ that is sparse: the number $s$ of non-zero entries of $x$ is much smaller than $d$. In this case, we can evaluate the hyperplane LSH from [3] in time $O(s)$, while computing the cross-polytope LSH (even with pseudo-random rotations) still takes time $O(d \log d)$. To speed up the cross-polytope LSH for sparse vectors, we apply feature hashing [18]: before performing a pseudo-random rotation, we reduce the dimension from $d$ to $d' \ll d$ by applying a linear map $x \mapsto Sx$, where $S$ is a random sparse $d' \times d$ matrix whose columns have one non-zero $\pm 1$ entry sampled uniformly. This way, the evaluation time becomes $O(s + d' \log d')$.³
"Partial" cross-polytope LSH. In the above discussion, we defined the cross-polytope LSH as a hash family that returns the closest neighbor among $\{\pm e_i\}_{1 \leq i \leq d}$ as a hash (after a (pseudo-)random rotation). In principle, we do not have to consider all $d$ basis vectors when computing the closest neighbor. By restricting the hash to $d' \leq d$ basis vectors instead, Theorem 1 still holds for the new hash family (with $d$ replaced by $d'$), since the analysis is essentially dimension-free. This slight generalization of the cross-polytope LSH turns out to be useful for experiments (see Section 6). Note that the case $d' = 1$ corresponds to the hyperplane LSH.
4 Lower bound
Let $\mathcal{H}$ be a hash family on $S^{d-1}$. For $0 < r_1 < r_2 < 2$ we would like to understand the trade-off between $p_1$ and $p_2$, where $p_1$ is the smallest probability of collision under $\mathcal{H}$ for points at distance at most $r_1$, and $p_2$ is the largest probability of collision for points at distance at least $r_2$. We focus on the case $r_2 \approx \sqrt{2}$, because setting $r_2$ to $\sqrt{2} - o(1)$ (as $d$ tends to infinity) allows us to replace $p_2$ with the following quantity that is somewhat easier to handle:
$$ \hat{p}_2 = \Pr_{h \sim \mathcal{H},\; u, v \sim S^{d-1}}[h(u) = h(v)]. $$
This quantity is at most $p_2 + o(1)$, since the distance between two random points on a unit sphere $S^{d-1}$ is tightly concentrated around $\sqrt{2}$. So, for a hash family $\mathcal{H}$ on the unit sphere $S^{d-1}$, we would like to understand the upper bound on $p_1$ in terms of $\hat{p}_2$ and $0 < r_1 < \sqrt{2}$.
For $0 \leq \tau \leq \sqrt{2}$ and $\eta \in \mathbb{R}$, we define
$$ \Lambda(\tau, \eta) = \Pr_{X, Y \sim N(0,1)}\Big[X \geq \eta \text{ and } \Big(1 - \frac{\tau^2}{2}\Big)X + \sqrt{\tau^2 - \frac{\tau^4}{4}}\,Y \geq \eta\Big] \Big/ \Pr_{X \sim N(0,1)}[X \geq \eta]. $$

³ Note that one can apply Lemma 2 from the arXiv version of [18] to claim that, after such a dimension reduction, the distance between any two points remains sufficiently concentrated for the bounds from Theorem 1 to still hold (with $d$ replaced by $d'$).
We are now ready to formulate the main result of this section.

Theorem 2. Let $\mathcal{H}$ be a hash family on $S^{d-1}$ such that every function in $\mathcal{H}$ partitions the sphere into at most $T$ parts of measure at most $1/2$. Then we have $p_1 \leq \Lambda(r_1, \eta) + o(1)$, where $\eta \in \mathbb{R}$ is such that $\Phi^c(\eta) = \hat{p}_2$, and $o(1)$ is a quantity that depends on $T$ and $r_1$ and tends to $0$ as $d$ tends to infinity.

The idea of the proof is first to reason about one part of the partition using the isoperimetric inequality from [23], and then to apply a certain averaging argument by proving concavity of a function related to $\Lambda$ via a delicate analytic argument. For the full proof, see App. C.
We note that the above requirement of all parts induced by H having measure at most 1/2 is only a
technicality. We conjecture that Theorem 2 holds without this restriction. In any case, as we will see
below, in the interesting range of parameters this restriction is essentially irrelevant.
One can observe that if every hash function in $\mathcal{H}$ partitions the sphere into at most $T$ parts, then $\hat{p}_2 \geq \frac{1}{T}$ (indeed, $\hat{p}_2$ is precisely the average sum of squares of the measures of the parts). This observation, combined with Theorem 2, leads to the following interesting consequence: we can numerically estimate $\Lambda$ in order to lower bound $\rho = \frac{\log(1/p_1)}{\log(1/p_2)}$ for any hash family $\mathcal{H}$ in which every function induces at most $T$ parts of measure at most $1/2$. See Figure 1b, where we plot this lower bound for $r_1 = \sqrt{2}/2$,⁴ together with an upper bound given by the cross-polytope LSH⁵ (for which we use numerical estimates of (1)). We can draw several conclusions from this plot. First, the cross-polytope LSH gives an almost optimal trade-off between $\rho$ and $T$. Given that the evaluation time for the cross-polytope LSH is $O(T \log T)$ (if one uses pseudo-random rotations), we conclude that in order to improve substantially upon the cross-polytope LSH in practice, one should design an LSH family with $\rho$ close to optimal and evaluation time sublinear in $T$. We note that none of the known LSH families for the sphere has been shown to have this property. This direction looks especially interesting since the convergence of $\rho$ to the optimal value (as $T$ tends to infinity) is extremely slow (for instance, according to Figure 1b, for $r_1 = \sqrt{2}/2$ and $r_2 \approx \sqrt{2}$ we need more than $10^5$ parts to achieve $\rho \leq 0.2$, whereas the optimal $\rho$ is $1/7 \approx 0.143$).
5 Multiprobe LSH for the cross-polytope LSH
We now describe our multiprobe scheme for the cross-polytope LSH, which is a method for reducing the number of independent hash tables in an LSH data structure. Given a query point $q$, a "standard" LSH data structure considers only a single cell in each of the $L$ hash tables (the cell is given by the hash value $h_i(q)$ for $i \in [L]$). In multiprobe LSH, we consider candidates from multiple cells in each table [14]. The rationale is the following: points $p$ that are close to $q$ but fail to collide with $q$ under hash function $h_i$ are still likely to hash to a value that is close to $h_i(q)$. By probing multiple hash locations close to $h_i(q)$ in the same table, multiprobe LSH achieves a given probability of success with a smaller number of hash tables than "standard" LSH. Multiprobe LSH has been shown to perform well in practice [14, 24].
The main ingredient in multiprobe LSH is a probing scheme for generating and ranking possible modifications of the hash value $h_i(q)$. The probing scheme should be computationally efficient and ensure that more likely hash locations are probed first. For a single cross-polytope hash, the order of alternative hash values is straightforward: let $x$ be the (pseudo-)randomly rotated version of the query point $q$. Recall that the "main" hash value is $h_i(q) = \arg\max_{j \in [d]} |x_j|$.⁶ It is then easy to see that the second highest probability of collision is achieved for the hash value corresponding to the coordinate with the second largest absolute value, etc. Therefore, we consider the indices $i \in [d]$ sorted by their absolute value as our probing sequence or "ranking" for a single cross-polytope.
The remaining question is how to combine multiple cross-polytope rankings when we have more than one hash function. As in the analysis of the cross-polytope LSH (see Section 3), we consider two points $q = e_1$ and $p = \alpha e_1 + \beta e_2$ at distance $R$.

⁴ The situation is qualitatively similar for other values of $r_1$.
⁵ More specifically, for the "partial" version from Section 3.1, since $T$ should be constant while $d$ grows.
⁶ In order to simplify notation, we consider a slightly modified version of the cross-polytope LSH that maps both the standard basis vector $+e_j$ and its opposite $-e_j$ to the same hash value. It is easy to extend the multiprobe scheme defined here to the "full" cross-polytope LSH from Section 3.
Let $A^{(i)}$ be the i.i.d. Gaussian matrix of hash function $h_i$, and let $x^{(i)} = A^{(i)} e_1$ be the randomly rotated version of the point $q$. Given $x^{(i)}$, we are interested in the probability of $p$ hashing to a certain combination of the individual cross-polytope rankings. More formally, let $r^{(i)}_{v_i}$ be the index of the $v_i$-th largest element of $|x^{(i)}|$, where $v \in [d]^k$ specifies the alternative probing location. Then we would like to compute
$$ \Pr_{A^{(1)}, \ldots, A^{(k)}}\big[h_i(p) = r^{(i)}_{v_i} \text{ for all } i \in [k] \;\big|\; A^{(i)} q = x^{(i)}\big] = \prod_{i=1}^{k} \Pr_{A^{(i)}}\Big[\arg\max_{j \in [d]} \big(\alpha \cdot A^{(i)} e_1 + \beta \cdot A^{(i)} e_2\big)_j = r^{(i)}_{v_i} \;\Big|\; A^{(i)} e_1 = x^{(i)}\Big]. $$
If we knew this probability for all $v \in [d]^k$, we could sort the probing locations by their probability. We now show how to approximate this probability efficiently for a single value of $i$ (and hence drop the superscripts to simplify notation). WLOG, we permute the rows of $A$ so that $r_v = v$ and get
$$ \Pr_{A}\Big[\arg\max_{j \in [d]} (\alpha x + \beta \cdot A e_2)_j = v \;\Big|\; A e_1 = x\Big] = \Pr_{y \sim N(0, I_d)}\Big[\arg\max_{j \in [d]} \Big(x + \tfrac{\beta}{\alpha} y\Big)_j = v\Big]. $$
The RHS is the Gaussian measure of the set $S = \{y \in \mathbb{R}^d \mid \arg\max_{j \in [d]} (x + \frac{\beta}{\alpha} y)_j = v\}$. Similar to the analysis of the cross-polytope LSH, we approximate the measure of $S$ by its distance to the origin. Then the probability of probing location $v$ is proportional to $\exp(-\|y_{x,v}\|^2)$, where $y_{x,v}$ is the shortest vector $y$ such that $\arg\max_j |x + y|_j = v$. Note that the factor $\beta/\alpha$ becomes a proportionality constant, and hence the probing scheme does not require knowing the distance $R$. For computational performance and simplicity, we make a further approximation and use $y_{x,v} = (\max_i |x_i| - |x_v|) \cdot e_v$, i.e., we only consider modifying a single coordinate to reach the set $S$.
Once we have estimated the probabilities for each $v_i \in [d]$, we incrementally construct the probing sequence using a binary heap, similar to the approach in [14]. For a probing sequence of length $m$, the resulting algorithm has running time $O(L \cdot d \log d + m \log m)$. In our experiments, we found that the $O(L \cdot d \log d)$ time taken to sort the probing candidates $v_i$ dominated the running time of the hash function evaluation. In order to circumvent this issue, we use an incremental sorting approach that only sorts the relevant parts of each cross-polytope, giving a running time of $O(L \cdot d + m \log m)$.
Experiments
We now show that the cross-polytope LSH, combined with our multiprobe extension, leads to an
algorithm that is also efficient in practice and improves over the hyperplane LSH on several data sets.
The focus of our experiments is the query time for an exact nearest neighbor search. Since hyperplane
LSH has been compared to other nearest-neighbor algorithms before [8], we limit our attention to
the relative speed-up compared with hyperplane hashing.
We evaluate the two hashing schemes on three types of data sets. We use a synthetic data set of
randomly generated points because this allows us to vary a single problem parameter while keeping
the remaining parameters constant. We also investigate the performance of our algorithm on real
data: two tf-idf data sets [25] and a set of SIFT feature vectors [7]. We have chosen these data sets in
order to illustrate when the cross-polytope LSH gives large improvements over the hyperplane LSH,
and when the improvements are more modest. See Appendix D for a more detailed description of
the data sets and our experimental setup (implementation details, CPU, etc.).
In all experiments, we set the algorithm parameters so that the empirical probability of successfully
finding the exact nearest neighbor is at least 0.9. Moreover, we set the number of LSH tables L
so that the amount of additional memory occupied by the LSH data structure is comparable to the
amount of memory necessary for storing the data set. We believe that this is the most interesting
regime because significant memory overheads are often impossible for large data sets. In order to
determine the parameters that are not fixed by the above constraints, we perform a grid search over
the remaining parameter space and report the best combination of parameters. For the cross-polytope
hash, we consider "partial" cross-polytopes in the last of the $k$ hash functions in order to get a smooth trade-off between the various parameters (see Section 3.1).
Multiprobe experiments. In order to demonstrate that the multiprobe scheme is critical for making the cross-polytope LSH competitive with hyperplane hashing, we compare the performance of a "standard" cross-polytope LSH data structure with our multiprobe variant on an instance of the random data set ($n = 2^{20}$, $d = 128$). As can be seen in Table 2 (Appendix D), the multiprobe variant is about $13\times$ faster in our memory-constrained setting ($L = 10$). Note that in all of the following experiments, the speed-up of the multiprobe cross-polytope LSH compared to the multiprobe hyperplane LSH is less than $11\times$. Hence, without our multiprobe addition, the cross-polytope LSH would be slower than the hyperplane LSH, for which a multiprobe scheme is already known [14].
| Data set | Method | Query time (ms) | Speed-up vs HP | Best k | Number of candidates | Hashing time (ms) | Distances time (ms) |
|---|---|---|---|---|---|---|---|
| NYT | HP | 120 | - | 19 | 57,200 | 16 | 96 |
| NYT | CP | 35 | 3.4× | 2 (64) | 17,900 | 3.0 | 30 |
| pubmed | HP | 857 | - | 20 | 1,481,000 | 36 | 762 |
| pubmed | CP | 213 | 4.0× | 2 (512) | 304,000 | 18 | 168 |
| SIFT | HP | 3.7 | - | 30 | 18,600 | 0.2 | 3.0 |
| SIFT | CP | 3.1 | 1.2× | 6 (1) | 13,400 | 0.6 | 2.2 |

Table 1: Average running times for a single nearest neighbor query with the hyperplane (HP) and cross-polytope (CP) algorithms on three real data sets. The cross-polytope LSH is faster than the hyperplane LSH on all data sets, with significant speed-ups for the two tf-idf data sets NYT and pubmed. For the cross-polytope LSH, the entries for $k$ include both the number of individual hash functions per table and (in parentheses) the dimension of the last of the $k$ cross-polytopes.
Experiments on random data. Next, we show that the better time complexity of the cross-polytope LSH already applies for moderate values of $n$. In particular, we compare the cross-polytope LSH, combined with fast rotations (Section 3.1) and our multiprobe scheme, to a multiprobe hyperplane LSH on random data. We keep the dimension $d = 128$ and the distance to the nearest neighbor $R = \sqrt{2}/2$ fixed, and vary the size of the data set from $2^{20}$ to $2^{28}$. The number of hash tables $L$ is set to 10. For $2^{20}$ points, the cross-polytope LSH is already $3.5\times$ faster than the hyperplane LSH, and for $n = 2^{28}$ the speed-up is $10.3\times$ (see Table 3 in Appendix D). Compared to a linear scan, the speed-up achieved by the cross-polytope LSH ranges from $76\times$ for $n = 2^{20}$ to about $700\times$ for $n = 2^{28}$.
Experiments on real data. On the SIFT data set (n = 10^6 and d = 128), the cross-polytope
LSH achieves a modest speed-up of 1.2× compared to the hyperplane LSH (see Table 1). On the
other hand, the speed-up is 3–4× on the two tf-idf data sets, which is a significant improvement
considering the relatively small size of the NYT data set (n ≈ 300,000). One important difference
between the data sets is that the typical distance to the nearest neighbor is smaller in the SIFT data
set, which can make the nearest neighbor problem easier (see Appendix D). Since the tf-idf data sets
are very high-dimensional but sparse (d ≈ 100,000), we use the feature hashing approach described
in Section 3.1 in order to reduce the hashing time of the cross-polytope LSH (the standard hyperplane
LSH already runs in time proportional to the sparsity of a vector). We use 1024 and 2048 as the feature
hashing dimensions for NYT and pubmed, respectively.
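For reference, a minimal sketch of feature hashing for a sparse vector: each nonzero coordinate is bucketed by an index hash and multiplied by a ±1 sign hash [18]. The built-in hash calls below are stand-ins for proper pairwise-independent hash functions.

import numpy as np

def feature_hash(indices, values, m=1024):
    # Project a sparse vector (nonzero indices and values) into m
    # dimensions: bucket by an index hash, multiply by a +/-1 sign hash.
    out = np.zeros(m)
    for i, v in zip(indices, values):
        bucket = hash(('bucket', i)) % m
        sign = 1 if hash(('sign', i)) % 2 == 0 else -1
        out[bucket] += sign * v
    return out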
Acknowledgments
We thank Michael Kapralov for many valuable discussions during various stages of this work. We
also thank Stefanie Jegelka and Rasmus Pagh for helpful conversations. This work was supported
in part by the NSF and the Simons Foundation. Work done in part while the first author was at the
Simons Institute for the Theory of Computing.
References
[1] Alexandr Andoni, Piotr Indyk, Huy L. Nguyen, and Ilya Razenshteyn. Beyond locality-sensitive hashing. In SODA, 2014. Full version at http://arxiv.org/abs/1306.1547.
[2] Alexandr Andoni and Ilya Razenshteyn. Optimal data-dependent hashing for approximate near neighbors. In STOC, 2015. Full version at http://arxiv.org/abs/1501.01062.
[3] Moses S. Charikar. Similarity estimation techniques from rounding algorithms. In STOC, 2002.
[4] Gregory Shakhnarovich, Trevor Darrell, and Piotr Indyk. Nearest-Neighbor Methods in Learning and Vision: Theory and Practice. MIT Press, Cambridge, MA, 2005.
[5] Hanan Samet. Foundations of Multidimensional and Metric Data Structures. Morgan Kaufmann, 2006.
[6] Sariel Har-Peled, Piotr Indyk, and Rajeev Motwani. Approximate nearest neighbor: Towards removing the curse of dimensionality. Theory of Computing, 8(14):321–350, 2012.
[7] Hervé Jégou, Matthijs Douze, and Cordelia Schmid. Product quantization for nearest neighbor search. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(1):117–128, 2011.
[8] Ludwig Schmidt, Matthew Sharifi, and Ignacio Lopez Moreno. Large-scale speaker identification. In ICASSP, 2014.
[9] Narayanan Sundaram, Aizana Turmukhametova, Nadathur Satish, Todd Mostak, Piotr Indyk, Samuel Madden, and Pradeep Dubey. Streaming similarity search over one billion tweets using parallel locality-sensitive hashing. In VLDB, 2013.
[10] Moshe Dubiner. Bucketing coding and information theory for the statistical high-dimensional nearest-neighbor problem. IEEE Transactions on Information Theory, 56(8):4166–4179, 2010.
[11] Alexandr Andoni and Ilya Razenshteyn. Tight lower bounds for data-dependent locality-sensitive hashing, 2015. Available at http://arxiv.org/abs/1507.04299.
[12] Anshumali Shrivastava and Ping Li. Fast near neighbor search in high-dimensional binary data. In Machine Learning and Knowledge Discovery in Databases, pages 474–489. Springer, 2012.
[13] Anshumali Shrivastava and Ping Li. Densifying one permutation hashing via rotation for fast near neighbor search. In ICML, 2014.
[14] Qin Lv, William Josephson, Zhe Wang, Moses Charikar, and Kai Li. Multi-probe LSH: efficient indexing for high-dimensional similarity search. In VLDB, 2007.
[15] Kengo Terasawa and Yuzuru Tanaka. Spherical LSH for approximate nearest neighbor search on unit hypersphere. In Algorithms and Data Structures, pages 27–38. Springer, 2007.
[16] Kave Eshghi and Shyamsundar Rajaram. Locality sensitive hash functions based on concomitant rank order statistics. In KDD, 2008.
[17] Nir Ailon and Bernard Chazelle. The fast Johnson–Lindenstrauss transform and approximate nearest neighbors. SIAM Journal on Computing, 39(1):302–322, 2009.
[18] Kilian Q. Weinberger, Anirban Dasgupta, John Langford, Alexander J. Smola, and Josh Attenberg. Feature hashing for large scale multitask learning. In ICML, 2009.
[19] Rajeev Motwani, Assaf Naor, and Rina Panigrahy. Lower bounds on locality sensitive hashing. SIAM Journal on Discrete Mathematics, 21(4):930–935, 2007.
[20] Ryan O'Donnell, Yi Wu, and Yuan Zhou. Optimal lower bounds for locality-sensitive hashing (except when q is tiny). ACM Transactions on Computation Theory, 6(1):5, 2014.
[21] Anirban Dasgupta, Ravi Kumar, and Tamás Sarlós. Fast locality-sensitive hashing. In KDD, 2011.
[22] Nir Ailon and Holger Rauhut. Fast and RIP-optimal transforms. Discrete & Computational Geometry, 52(4):780–798, 2014.
[23] Uriel Feige and Gideon Schechtman. On the optimality of the random hyperplane rounding technique for MAX CUT. Random Structures and Algorithms, 20(3):403–440, 2002.
[24] Malcolm Slaney, Yury Lifshits, and Junfeng He. Optimal parameters for locality-sensitive hashing. Proceedings of the IEEE, 100(9):2604–2623, 2012.
[25] Moshe Lichman. UCI machine learning repository, 2013.
[26] Persi Diaconis and David Freedman. A dozen de Finetti-style results in search of a theory. Annales de l'Institut Henri Poincaré (B) Probabilités et Statistiques, 23(S2):397–423, 1987.
5,406 | 5,894 | Principal Differences Analysis: Interpretable
Characterization of Differences between Distributions
Jonas Mueller
CSAIL, MIT
jonasmueller@csail.mit.edu
Tommi Jaakkola
CSAIL, MIT
tommi@csail.mit.edu
Abstract
We introduce principal differences analysis (PDA) for analyzing differences between high-dimensional distributions. The method operates by finding the projection that maximizes the Wasserstein divergence between the resulting univariate populations. Relying on the Cramer-Wold device, it requires no assumptions
about the form of the underlying distributions, nor the nature of their inter-class
differences. A sparse variant of the method is introduced to identify features responsible for the differences. We provide algorithms for both the original minimax
formulation as well as its semidefinite relaxation. In addition to deriving some
convergence results, we illustrate how the approach may be applied to identify differences between cell populations in the somatosensory cortex and hippocampus
as manifested by single cell RNA-seq. Our broader framework extends beyond
the specific choice of Wasserstein divergence.
1
Introduction
Understanding differences between populations is a common task across disciplines, from biomedical data analysis to demographic or textual analysis. For example, in biomedical analysis, a set of
variables (features) such as genes may be profiled under different conditions (e.g. cell types, disease
variants), resulting in two or more populations to compare. The hope of this analysis is to answer
whether or not the populations differ and, if so, which variables or relationships contribute most to
this difference. In many cases of interest, the comparison may be challenging primarily for three
reasons: 1) the number of variables profiled may be large, 2) populations are represented by finite,
unpaired, high-dimensional sets of samples, and 3) information may be lacking about the nature of
possible differences (exploratory analysis).
We will focus on the comparison of two high dimensional populations. Therefore, given two unpaired i.i.d. sets of samples X̂⁽ⁿ⁾ = {x⁽¹⁾, …, x⁽ⁿ⁾} ∼ P_X and Ŷ⁽ᵐ⁾ = {y⁽¹⁾, …, y⁽ᵐ⁾} ∼ P_Y, the
goal is to answer the following two questions about the underlying multivariate random variables
X, Y ∈ R^d: (Q1) Is P_X = P_Y? (Q2) If not, what is the minimal subset of features S ⊆ {1, …, d}
such that the marginal distributions differ, P_{X_S} ≠ P_{Y_S}, while P_{X_{S^C}} = P_{Y_{S^C}} for the complement? A
finer version of (Q2) may additionally be posed which asks how much each feature contributes to
the overall difference between the two probability distributions (with respect to the given scale on
which the variables are measured).
Many two-sample analyses have focused on characterizing limited differences such as mean shifts
[1, 2]. More general differences beyond the mean of each feature remain of interest, however, including variance/covariance of demographic statistics such as income. It is also undesirable to restrict
the analysis to specific parametric differences, especially in exploratory analysis where the nature
of the underlying distributions may be unknown. In the univariate case, a number of nonparametric
tests of equality of distributions are available with accompanying concentration results [3]. Popular examples of such divergences (also referred to as probability metrics) include: f-divergences
(Kullback-Leibler, Hellinger, total-variation, etc.), the Kolmogorov distance, or the Wasserstein
metric [4]. Unfortunately, this simplicity vanishes as the dimensionality d grows, and complex
test-statistics have been designed to address some of the difficulties that appear in high-dimensional
settings [5, 6, 7, 8].
In this work, we propose the principal differences analysis (PDA) framework which circumvents the
curse of dimensionality through explicit reduction back to the univariate case. Given a pre-specified
statistical divergence D which measures the difference between univariate probability distributions,
PDA seeks to find a projection β which maximizes D(βᵀX, βᵀY) subject to the constraints ‖β‖₂ ≤ 1, β₁ ≥ 0 (to avoid underspecification). This reduction is justified by the Cramér–Wold device,
which ensures that P_X ≠ P_Y if and only if there exists a direction along which the univariate linearly
projected distributions differ [9, 10, 11]. Assuming D is a positive definite divergence (meaning it is
nonzero between any two distinct univariate distributions), the projection vector produced by PDA
can thus capture arbitrary types of differences between high-dimensional P_X and P_Y. Furthermore,
the approach can be straightforwardly modified to address (Q2) by introducing a sparsity penalty on β
and examining the features with nonzero weight in the resulting optimal projection. The resulting
comparison pertains to marginal distributions up to the sparsity level. We refer to this approach as
sparse differences analysis or SPARDA.
2
Related Work
The problem of characterizing differences between populations, including feature selection, has received a great deal of study [2, 12, 13, 5, 1]. We limit our discussion to projection-based methods
which, as a family of methods, are closest to our approach. For multivariate two-class data, the most
widely adopted methods include (sparse) linear discriminant analysis (LDA) [2] and the logistic
lasso [12]. While interpretable, these methods seek specific differences (e.g., covariance-rescaled
average differences) or operate under stringent assumptions (e.g., log-linear model). In contrast,
SPARDA (with a positive-definite divergence) aims to find features that characterize a priori unspecified differences between general multivariate distributions.
Perhaps most similar to our general approach is the Direction-Projection-Permutation (DiProPerm) procedure of Wei et al. [5], in which the data is first projected along the normal to the separating hyperplane (found using linear SVM, distance weighted discrimination, or the centroid method) followed
by a univariate two-sample test on the projected data. The projections could also be chosen at
random [1]. In contrast to our approach, the choice of the projection in such methods is not optimized for the test statistics. We note that by restricting the divergence measure in our technique,
methods such as the (sparse) linear support vector machine [13] could be viewed as special cases.
The divergence in this case would measure the margin between projected univariate distributions.
While suitable for finding well-separated projected populations, it may fail to uncover more general
differences between possibly multi-modal projected populations.
3
General Framework for Principal Differences Analysis
For a given divergence measure D between two univariate random variables, we find the projection β̂ that solves

    max_{β ∈ B, ‖β‖₀ ≤ k}  D(βᵀX̂⁽ⁿ⁾, βᵀŶ⁽ᵐ⁾)    (1)

where B := {β ∈ R^d : ‖β‖₂ ≤ 1, β₁ ≥ 0} is the feasible set, ‖β‖₀ ≤ k is the sparsity constraint,
and βᵀX̂⁽ⁿ⁾ denotes the observed random variable that follows the empirical distribution of n samples of βᵀX. Instead of imposing a hard cardinality constraint ‖β‖₀ ≤ k, we may instead penalize
by adding a penalty term¹ λ‖β‖₀ or its natural relaxation, the ℓ₁ shrinkage used in Lasso [12],
sparse LDA [2], and sparse PCA [14, 15]. Sparsity in our setting explicitly restricts the comparison
to the marginal distributions over features with non-zero coefficients. We can evaluate the null hypothesis P_X = P_Y (or its sparse variant over marginals) using permutation testing (cf. [5, 16]) with
statistic D(β̂ᵀX̂⁽ⁿ⁾, β̂ᵀŶ⁽ᵐ⁾).

¹ In practice, the shrinkage parameter λ (or the explicit cardinality constraint k) may be chosen via cross-validation
by maximizing the divergence between held-out samples.
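A minimal sketch of such a permutation test, assuming a statistic function that returns the maximized projected divergence (for PDA/SPARDA the projection must be re-optimized on every permuted split, which dominates the cost):

import numpy as np

def permutation_pvalue(X, Y, statistic, n_perm=1000, seed=0):
    # Pool the samples, repeatedly relabel them at random, and compare
    # the resulting statistics with the one observed on the true split.
    rng = np.random.default_rng(seed)
    pooled = np.vstack([X, Y])
    n = X.shape[0]
    observed = statistic(X, Y)
    hits = sum(
        statistic(pooled[p[:n]], pooled[p[n:]]) >= observed
        for p in (rng.permutation(len(pooled)) for _ in range(n_perm))
    )
    return (hits + 1) / (n_perm + 1)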
The divergence D plays a key role in our analysis. If D is defined in terms of density functions as in
f-divergence, one can use univariate kernel density estimation to approximate projected pdfs with
additional tuning of the bandwidth hyperparameter. For a suitably chosen kernel (e.g. Gaussian), the
unregularized PDA objective (without shrinkage) is a smooth function of β, and thus amenable to the
projected gradient method (or its accelerated variants [17, 18]). In contrast, when D is defined over
the cdfs along the projected direction (e.g. the Kolmogorov or Wasserstein distance that we focus
on in this paper), the objective is nondifferentiable due to the discrete jumps in the empirical cdf.
We specifically address the combinatorial problem implied by the Wasserstein distance. Moreover,
since the divergence assesses general differences between distributions, Equation (1) is typically
a non-concave optimization. To this end, we develop a semi-definite relaxation for use with the
Wasserstein distance.
4
PDA using the Wasserstein Distance
In the remainder of the paper, we focus on the squared L2 Wasserstein distance (a.k.a. Kantorovich,
Mallows, Dudley, or earth-mover distance), defined as
    D(X, Y) = min_{P_{XY}} E_{P_{XY}} ‖X − Y‖²   s.t. (X, Y) ∼ P_{XY}, X ∼ P_X, Y ∼ P_Y    (2)

where the minimization is over all joint distributions P_{XY} over (X, Y) with given marginals P_X and P_Y.
Intuitively interpreted as the amount of work required to transform one distribution into the other,
D provides a natural dissimilarity measure between populations that integrates both the fraction of
individuals which are different and the magnitude of these differences. While component analysis
based on the Wasserstein distance has been limited to [19], this divergence has been successfully
used in many other applications [20]. In the univariate case, (2) may be analytically expressed as
the L2 distance between quantile functions. We can thus efficiently compute empirical projected
Wasserstein distances by sorting X and Y samples along the projection direction to obtain quantile
estimates.
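A minimal sketch of this computation, assuming equal sample sizes n = m so that sorting alone aligns the empirical quantiles (unequal sizes would require interpolating the quantile functions):

import numpy as np

def projected_w2_squared(X, Y, beta):
    # Squared L2 Wasserstein distance between the projected samples:
    # sort both projections and average the squared quantile gaps.
    px = np.sort(X @ beta)
    py = np.sort(Y @ beta)
    return float(np.mean((px - py) ** 2))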
Using the Wasserstein distance, the empirical objective in Equation (1) between unpaired sampled
populations {x⁽¹⁾, …, x⁽ⁿ⁾} and {y⁽¹⁾, …, y⁽ᵐ⁾} can be shown to be

    max_{β ∈ B, ‖β‖₀ ≤ k}  min_{M ∈ M}  Σ_{i=1}^n Σ_{j=1}^m (βᵀx⁽ⁱ⁾ − βᵀy⁽ʲ⁾)² M_{ij}  =  max_{β ∈ B, ‖β‖₀ ≤ k}  min_{M ∈ M}  βᵀ W_M β    (3)

where M is the set of all n × m nonnegative matching matrices with fixed row sums = 1/n and
column sums = 1/m (see [20] for details), W_M := Σ_{i,j} [Z_{ij} ⊗ Z_{ij}] M_{ij}, and Z_{ij} := x⁽ⁱ⁾ − y⁽ʲ⁾.
If we omitted (fixed) the inner minimization over the matching matrices and set λ = 0, the solution
of (3) would simply be the largest eigenvector of W_M. Similarly, for the sparse variant without
minimizing over M, the problem would be solvable as sparse PCA [14, 15, 21]. The actual max-min problem in (3) is more complex and non-concave with respect to β. We propose a two-step
procedure similar to the 'tighten after relax' framework used to attain minimax-optimal rates in sparse
PCA [21]. First, we solve a convex relaxation of the problem and subsequently run a steepest
ascent method (initialized at the global optimum of the relaxation) to greedily improve the current
solution with respect to the original nonconvex problem whenever the relaxation is not tight.
Finally, we emphasize that PDA (and SPARDA) not only computationally resembles (sparse) PCA,
but the latter is actually a special case of the former in the Gaussian, paired-sample-differences
setting. This connection is made explicit by considering the two-class problem with paired samples
(x⁽ⁱ⁾, y⁽ⁱ⁾) where X, Y follow two multivariate Gaussian distributions. Here, the largest principal
component of the (uncentered) differences x⁽ⁱ⁾ − y⁽ⁱ⁾ is in fact equivalent to the direction which
maximizes the projected Wasserstein difference between the distribution of X − Y and a delta
distribution at 0.
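In code, this special case reduces to a single eigendecomposition; a minimal sketch assuming the rows of X and Y are paired:

import numpy as np

def paired_difference_direction(X, Y):
    # Top eigenvector of the uncentered second-moment matrix of the
    # paired differences x_i - y_i: the PCA special case of PDA.
    D = X - Y
    S = D.T @ D / D.shape[0]
    eigvals, eigvecs = np.linalg.eigh(S)
    return eigvecs[:, -1]  # eigh returns eigenvalues in ascending order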
4.1
Semidefinite Relaxation
The SPARDA problem may be expressed in terms of d × d symmetric matrices B as

    max_B min_{M ∈ M} tr(W_M B)   subject to   tr(B) = 1, B ⪰ 0, ‖B‖₀ ≤ k², rank(B) = 1    (4)
where the correspondence between (3) and (4) comes from writing B = β ⊗ β (note that any solution
of (3) will have unit norm). When k = d, i.e., we impose no sparsity constraint as in PDA, we can
relax by simply dropping the rank constraint. The objective is then an infimum of linear functions
of B, and the resulting semidefinite problem is concave over a convex set and may be written as:

    max_{B ∈ Br} min_{M ∈ M} tr(W_M B)    (5)

where Br is the convex set of positive semidefinite d × d matrices with trace = 1. If B* ∈ R^{d×d}
denotes the global optimum of this relaxation and rank(B*) = 1, then the best projection for PDA
is simply the dominant eigenvector of B* and the relaxation is tight. Otherwise, we can truncate B*
as in [14], treating the dominant eigenvector as an approximate solution to the original problem (3).
To obtain a relaxation for the sparse version where k < d (SPARDA), we follow [14] closely.
Because B = β ⊗ β implies ‖B‖₀ ≤ k², we obtain an equivalent cardinality-constrained problem by
incorporating this nonconvex constraint into (4). Since tr(B) = 1 and ‖B‖_F = ‖β‖₂² = 1, a convex
relaxation of the squared ℓ₀ constraint is given by ‖B‖₁ ≤ k. By selecting λ as the optimal Lagrange
multiplier for this ℓ₁ constraint, we can obtain an equivalent penalized reformulation parameterized
by λ rather than k [14]. The sparse semidefinite relaxation is thus the following concave problem

    max_{B ∈ Br} min_{M ∈ M} { tr(W_M B) − λ‖B‖₁ }    (6)
While the relaxation bears strong resemblance to the DSPCA relaxation for sparse PCA, the inner minimization over matchings prevents direct application of general semidefinite programming solvers.
Let M(B) denote the matching that minimizes tr(W_M B) for a given B. Standard projected subgradient ascent could be applied to solve (6), where at the t-th iterate the (matrix-valued) subgradient
is W_{M(B⁽ᵗ⁾)}. However, this approach requires solving optimal transport problems with large n × m
matrices at each iteration. Instead, we turn to a dual form of (6), assuming n ≤ m (cf. [22, 23])

    max_{B ∈ Br, u ∈ Rⁿ, v ∈ Rᵐ}  (1/n) Σ_{i=1}^n u_i + (1/m) Σ_{j=1}^m v_j + (1/m) Σ_{i=1}^n Σ_{j=1}^m min{0, tr([Z_{ij} ⊗ Z_{ij}] B) − u_i − v_j} − λ‖B‖₁    (7)
(7) is simply a maximization over B ∈ Br, u ∈ Rⁿ, and v ∈ Rᵐ which no longer requires matching
matrices nor their cumbersome row/column constraints. While dual variables u and v can be solved
in closed form for each fixed B (via sorting), we describe a simple sub-gradient approach that works
better in practice.
RELAX Algorithm: Solves the dualized semidefinite relaxation of SPARDA (7). Returns the
largest eigenvector of the solution to (6) as the desired projection direction for SPARDA.
Input: d-dimensional data x⁽¹⁾, …, x⁽ⁿ⁾ and y⁽¹⁾, …, y⁽ᵐ⁾ (with n ≤ m)
Parameters: λ ≥ 0 controls the amount of regularization, η ≥ 0 is the step-size used for B
updates, γ ≥ 0 is the step-size used for updates of dual variables u and v, T is the maximum number
of iterations without improvement in cost after which the algorithm terminates.
1: Initialize β⁽⁰⁾ = (1/√d, …, 1/√d), B⁽⁰⁾ = β⁽⁰⁾ ⊗ β⁽⁰⁾ ∈ Br, u⁽⁰⁾ = 0_{n×1}, v⁽⁰⁾ = 0_{m×1}
2: While the number of iterations since the last improvement in the objective function is less than T:
3:   ∂u ← [1/n, …, 1/n] ∈ Rⁿ, ∂v ← [1/m, …, 1/m] ∈ Rᵐ, ∂B ← 0_{d×d}
4:   For (i, j) ∈ {1, …, n} × {1, …, m}:
5:     Z_{ij} ← x⁽ⁱ⁾ − y⁽ʲ⁾
6:     If tr([Z_{ij} ⊗ Z_{ij}] B⁽ᵗ⁾) − u_i⁽ᵗ⁾ − v_j⁽ᵗ⁾ ≤ 0:
7:       ∂u_i ← ∂u_i − 1/m,  ∂v_j ← ∂v_j − 1/m,  ∂B ← ∂B + Z_{ij} ⊗ Z_{ij} / m
8:   End For
9:   u⁽ᵗ⁺¹⁾ ← u⁽ᵗ⁾ + γ · ∂u  and  v⁽ᵗ⁺¹⁾ ← v⁽ᵗ⁾ + γ · ∂v
10:  B⁽ᵗ⁺¹⁾ ← Projection(B⁽ᵗ⁾ + (η/‖∂B‖_F) · ∂B;  λ,  η/‖∂B‖_F)
Output: β_relax ∈ R^d, defined as the largest eigenvector (based on the corresponding eigenvalue's magnitude) of the matrix B⁽ᵗ*⁾ which attained the best objective value over all iterations.
Projection Algorithm: Projects a matrix onto the positive semidefinite cone of unit-trace matrices Br
(the feasible set in our relaxation). Step 4 applies the soft-thresholding proximal operator for sparsity.
Input: B ∈ R^{d×d}
Parameters: λ ≥ 0 controls the amount of regularization, η̃ = η/‖∂B‖_F ≥ 0 is the actual step-size
used in the B-update.
1: QΛQᵀ ← eigendecomposition of B
2: w* ← argmin{ ‖w − diag(Λ)‖₂² : w ∈ [0, 1]^d, ‖w‖₁ = 1 }   (quadratic program)
3: B̃ ← Q · diag{w₁*, …, w_d*} · Qᵀ
4: If λ > 0: For (r, s) ∈ {1, …, d}²:  B̃_{r,s} ← sign(B̃_{r,s}) · max{0, |B̃_{r,s}| − λη̃}
Output: B̃ ∈ Br
The RELAX algorithm (boxed) is a projected subgradient method with supergradients computed in
Steps 3–8. For scaling to large samples, one may alternatively employ incremental supergradient directions [24] where Step 4 would be replaced by drawing random (i, j) pairs. After each subgradient
step, projection back into the feasible set Br is done via a quadratic program involving the current
solution's eigenvalues. In SPARDA, sparsity is encouraged via the soft-thresholding proximal map
corresponding to the ℓ₁ penalty. The overall form of our iterations matches subgradient-proximal
updates (4.14)–(4.15) in [24]. By the convergence analysis in §4.2 of [24], the RELAX algorithm (as
well as its incremental variant) is guaranteed to approach the optimal solution of the dual, which also
solves (6), provided we employ sufficiently large T and small step-sizes. In practice, fast and accurate convergence is attained by: (a) renormalizing the B-subgradient (Step 10) to ensure balanced
updates of the unit-norm constrained B, and (b) using diminishing learning rates which are initially set
larger for the unconstrained dual variables (or even taking multiple subgradient steps in the dual
variables per each update of B).
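A minimal sketch of the projection used in Step 10, solving the eigenvalue quadratic program of the Projection Algorithm with the standard simplex-projection formula and then applying the entrywise soft-thresholding of Step 4:

import numpy as np

def project_to_Br(B, lam=0.0, step=1.0):
    # Project a symmetric matrix onto {B >= 0, tr(B) = 1} by projecting
    # its eigenvalues onto the probability simplex, then soft-threshold
    # the entries (the proximal step for the l1 penalty).
    evals, Q = np.linalg.eigh(B)
    u = np.sort(evals)[::-1]                      # descending eigenvalues
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(u) + 1) > 0)[0][-1]
    w = np.maximum(evals - css[rho] / (rho + 1), 0.0)
    B_proj = (Q * w) @ Q.T
    if lam > 0:
        B_proj = np.sign(B_proj) * np.maximum(np.abs(B_proj) - lam * step, 0.0)
    return B_proj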
4.2
Tightening after relaxation
It is unreasonable to expect that our semidefinite relaxation is always tight. Therefore, we can
sometimes further refine the projection β_relax obtained by the RELAX algorithm by using it as
a starting point in the original non-convex optimization. We introduce a sparsity-constrained
tightening procedure which applies projected gradient ascent to the original nonconvex objective
J(β) := min_{M ∈ M} βᵀW_M β, where β is now forced to lie in B ∩ S_k and S_k := {β ∈ R^d : ‖β‖₀ ≤ k}.
The sparsity level k is fixed based on the relaxed solution (k = ‖β_relax‖₀). After initializing
β⁽⁰⁾ = β_relax ∈ R^d, the tightening procedure iterates steps in the gradient direction of J followed
by straightforward projections into the unit half-ball B and the set S_k (accomplished by greedily
truncating all entries of β to zero besides the largest k in magnitude).
Let M(β) again denote the matching matrix chosen in response to β. J fails to be differentiable at
the β̃ where M(β̃) is not unique. This occurs, e.g., if two samples have identical projections under
β̃. While this situation becomes increasingly likely as n, m → ∞, J interestingly becomes smoother
overall (assuming the distributions admit density functions). For all other β: M(β′) = M(β) where
β′ lies in a small neighborhood around β, and J admits a well-defined gradient 2W_{M(β)}β. In practice, we find that the tightening always approaches a local optimum of J with a diminishing step-size. We note that, for a given projection, we can efficiently calculate gradients without recourse to the
matrices M(β) or W_{M(β)} by sorting β⁽ᵗ⁾ᵀx⁽¹⁾, …, β⁽ᵗ⁾ᵀx⁽ⁿ⁾ and β⁽ᵗ⁾ᵀy⁽¹⁾, …, β⁽ᵗ⁾ᵀy⁽ᵐ⁾. The
gradient is directly derivable from expression (3), where the nonzero M_{ij} are determined by appropriately matching empirical quantiles (represented by sorted indices) since the univariate Wasserstein
distance is simply the L2 distance between quantile functions [20]. Additional computation can be
saved by employing insertion sort, which runs in nearly linear time for almost-sorted points (in iteration t, the points have already been sorted along the β⁽ᵗ⁻¹⁾ direction, and their sorting in direction β⁽ᵗ⁾
is likely similar under a small step-size). Thus the tightening procedure is much more efficient than
the RELAX algorithm (respective runtimes are O(dn log n) vs. O(d³n²) per iteration).
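A minimal sketch of one such gradient evaluation, assuming n = m so that quantile matching reduces to pairing the i-th smallest projections (general n, m would require the fractional matching of [20]):

import numpy as np

def tightening_gradient(X, Y, beta):
    # Gradient of J(beta) = min_M beta^T W_M beta where the optimal
    # matching is unique: pair the sorted projections, then
    # grad = (2/n) * sum_i z_i z_i^T beta with z_i the matched differences.
    Z = X[np.argsort(X @ beta)] - Y[np.argsort(Y @ beta)]
    return 2.0 * Z.T @ (Z @ beta) / X.shape[0]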
We require the combined steps for good performance. The projection found by the tightening algorithm heavily depends on the starting point β⁽⁰⁾, finding only the closest local optimum (as in
Figure 1a). It is thus important that β⁽⁰⁾ is already a good solution, as can be produced by our
RELAX algorithm. Additionally, we note that as first-order methods, both the RELAX and tightening algorithms are amenable to a number of (sub)gradient-acceleration schemes (e.g. momentum
techniques, adaptive learning rates, or FISTA and other variants of Nesterov's method [18, 17, 25]).
Properties of semidefinite relaxation
We conclude the algorithmic discussion by highlighting basic conditions under which our PDA
relaxation is tight. Assuming n, m ? 8, each of (i)-(iii) implies that the B ? which maximizes (5)
is nearly rank one, or equivalently B ? ? r b r (see Supplementary Information ?S4 for intuition).
Thus, the tightening procedure initialized at r will produce a global maximum of the PDA objective.
(i) There exists direction in which the projected Wasserstein distance between X and Y is
nearly as large as the overall Wasserstein distance in Rd . This occurs for example if
||ErXs ? ErY s||2 is large while both ||CovpXq||F and ||CovpY q||F are small (the distributions need not be Gaussian).
(ii) X ? N p?X , ?X q and Y ? N p?Y , ?Y q with ?X ? ?Y and ?X ? ?Y .
(iii) X ? N p?X , ?X q and Y ? N p?Y , ?Y q with ?X ? ?Y where the underlying covariance
structure is such that arg maxBPBr ||pB 1{2 ?X B 1{2 q1{2 ? pB 1{2 ?Y B 1{2 q1{2 ||2F is nearly
rank 1. For example, if the primary difference between covariances is a shift in the marginal
variance of some features, i.e. ?Y ? V ? ?X where V is a diagonal matrix.
5
Theoretical Results
In this section, we characterize statistical properties of an empirical divergence-maximizing projecp pnq , T Yp pnq q, although we note that the algorithms may not succeed
tion p :? arg max Dp T X
PB
in finding such a global maximum for severely nonconvex problems. Throughout, D denotes the
squared L2 Wasserstein distance between univariate distributions, C represents universal constants
that change from line to line. All proofs are relegated to the Supplementary Information ?S3. We
make the following simplifying assumptions: (A1) n ? m (A2) X, Y admit continuous density
functions (A3) X, Y are compactly supported with nonzero density in the Euclidean ball of radius
R. Our theory can be generalized beyond (A1)-(A3) to obtain similar (but complex) statements
through careful treatment of the distributions? tails and zero-density regions where cdfs are flat.
Theorem 1. Suppose there exists a direction β* ∈ B such that D(β*ᵀX, β*ᵀY) ≥ ε. Then:

    D(β̂ᵀX̂⁽ⁿ⁾, β̂ᵀŶ⁽ⁿ⁾) ≥ ε − δ

with probability greater than 1 − 4 exp(−nδ²/(16R⁴)).
Theorem 1 gives basic concentration results for the projections used in empirical applications of our
method. To relate distributional differences between X, Y in the ambient d-dimensional space with
their estimated divergence along the univariate linear representation chosen by PDA, we turn to
Theorems 2 and 3. Finally, Theorem 4 provides sparsistency guarantees for SPARDA in the case
where X, Y exhibit large differences over a certain feature subset (of known cardinality).
Theorem 2. If X and Y are identically distributed in R^d, then: D(β̂ᵀX̂⁽ⁿ⁾, β̂ᵀŶ⁽ⁿ⁾) ≤ δ
with probability greater than

    1 − C₁ (1 + R²/δ)^d exp(−(C₂/R⁴) n δ²)
To measure the difference between the untransformed random variables X, Y ∈ R^d, we define the
following metric between distributions on R^d which is parameterized by a ≥ 0 (cf. [11]):

    T_a(X, Y) := |Pr(|X₁| ≤ a, …, |X_d| ≤ a) − Pr(|Y₁| ≤ a, …, |Y_d| ≤ a)|    (8)

In addition to (A1)-(A3), we assume the following for the next two theorems: (A4) Y has sub-Gaussian
tails, meaning the cdf F_Y satisfies 1 − F_Y(y) ≤ C_y exp(−y²/2); (A5) E[X] = E[Y] = 0 (note that mean differences can trivially be captured by linear projections, so these are not the
differences of interest in the following theorems); (A6) Var(X_ℓ) = 1 for ℓ = 1, …, d.
Theorem 3. Suppose there exists a ≥ 0 s.t. T_a(X, Y) ≥ h_δ(g(δ)), where h_δ(g(δ)) := min{Δ₁, Δ₂} with

    Δ₁ := (a + √d)(g(δ) + √d)·δ + exp(−a²/2) + exp(−1/(2δ²))/δ²    (9)
    Δ₂ := (g(δ) + exp(−a²/2))·√d    (10)

ρ := ‖Cov(X)‖₁, g(δ) := 4√(ρ(1 + δ)), and τ := sup_{β ∈ B} sup_y |f_{βᵀY}(y)|,
with f_{βᵀY}(y) defined as the density of the projection of Y in the β direction. Then:

    D(β̂ᵀX̂⁽ⁿ⁾, β̂ᵀŶ⁽ⁿ⁾) ≥ C·δ²/(τ + C)    (11)

with probability greater than 1 − C₁ exp(−(C₂/R⁴) n δ²).
Theorem 4. Define C as in (11). Suppose there exists a feature subset S ⊂ {1, …, d} s.t. |S| = k,
T(X_S, Y_S) ≥ h_δ(g(√(d + 1)/C)), and the remaining marginal distributions X_{S^C}, Y_{S^C} are identical.
Then:

    β̂⁽ᵏ⁾ := argmax_{β ∈ B} { D(βᵀX̂⁽ⁿ⁾, βᵀŶ⁽ⁿ⁾) : ‖β‖₀ ≤ k }

satisfies β̂ᵢ⁽ᵏ⁾ ≠ 0 and β̂ⱼ⁽ᵏ⁾ = 0 for all i ∈ S, j ∈ S^C, with probability greater than

    1 − C₁ (1 + R²/δ)^(d−k) exp(−(C₂/R⁴) n δ²)

6
Experiments
Figure 1a illustrates the cost function of PDA pertaining to two 3-dimensional distributions (see
details in Supplementary Information §S1). In this example, the point of convergence β̂ of the tightening method after random initialization (in green) is significantly inferior to the solution produced
by the RELAX algorithm (in red). It is therefore important to use RELAX before tightening, as we
advise.
The synthetic MADELON dataset used in the NIPS 2003 feature selection challenge consists of
points (n = m = 1000, d = 500) which have 5 features scattered on the vertices of a five-dimensional hypercube (so that interactions between features must be considered in order to distinguish the two classes), 15 features that are noisy linear combinations of the original five, and 480
useless features [26]. While the focus of the challenge was on extracting features useful to classifiers, we direct our attention toward more interpretable models. Figure 1b demonstrates how well
SPARDA (red), the top sparse principal component (black) [27], sparse LDA (green) [2], and the
logistic lasso (blue) [12] are able to identify the 20 relevant features over different settings of their
respective regularization parameters (which determine the cardinality of the vector returned by each
method). The red asterisk indicates the SPARDA result with λ automatically selected via our cross-validation procedure (without information of the underlying features' importance), and the black
asterisk indicates the best reported result in the challenge [26].
[Figure 1 appears here. Panel (b) plots the number of relevant features identified against the cardinality of the returned vector on MADELON; panel (c) plots the two-sample-test p-value against the data dimension d.]
Figure 1: (a) example where PDA is nonconvex, (b) SPARDA vs. other feature selection methods,
(c) power of various tests for multi-dimensional problems with 3-dimensional differences.
The restrictive assumptions in logistic regression and linear discriminant analysis are not satisfied in
this complex dataset resulting in poor performance. Despite being class-agnostic, PCA was successfully utilized by numerous challenge participants [26], and we find that the sparse PCA performs
on par with logistic regression and LDA. Although the lasso fairly efficiently picks out 5 relevant
features, it struggles to identify the rest due to severe multi-collinearity. Similarly, the challenge-winning Bayesian SVM with Automatic Relevance Determination [26] only selects 8 of the 20
relevant features. In many applications, the goal is to thoroughly characterize the set of differences
rather than select a subset of features that maintains predictive accuracy. SPARDA is better suited
for this alternative objective. Many settings of λ return 14 of the relevant features with zero false
positives. If λ is chosen automatically through cross-validation, the projection returned by SPARDA
contains 46 nonzero elements, of which 17 correspond to relevant features.
Figure 1c depicts (average) p-values produced by SPARDA (red), PDA (purple), the overall Wasserstein distance in R^d (black), Maximum Mean Discrepancy [8] (green), and DiProPerm [5] (blue)
in two-sample synthetically controlled problems where P_X ≠ P_Y and the underlying differences
have varying degrees of sparsity. Here, d indicates the overall number of features included, of which
only the first 3 are relevant (see Supplementary Information §S1 for details). As we evaluate the
significance of each method's statistic via permutation testing, all the tests are guaranteed to exactly
control Type I error [16], and we thus only compare their respective power in detecting that P_X ≠ P_Y.
The figure demonstrates clear superiority of SPARDA, which leverages the underlying sparsity to maintain high power even with increasing overall dimensionality. Even when all the
features differ (when d = 3), SPARDA matches the power of methods that consider the full space
despite only selecting a single direction (which cannot be based on mean-differences as there are
none in this controlled data). This experiment also demonstrates that the unregularized PDA retains
greater power than DiProPerm, a similar projection-based method [5].
greater power than DiProPerm, a similar projection-based method [5].
Recent technological advances allow complete transcriptome profiling in thousands of individual
cells with the goal of fine molecular characterization of cell populations (beyond the crude averagetissue-level expression measure that is currently standard) [28]. We apply SPARDA to expression
measurements of 10,305 genes profiled in 1,691 single cells from the somatosensory cortex and
1,314 hippocampus cells sampled from the brains of juvenile mice [29]. The resulting p identifies
many previously characterized subtype-specific genes and is in many respects more informative than
the results of standard differential expression methods (see Supplementary Information ?S2 for details). Finally, we also apply SPARDA to normalized data with mean-zero & unit-variance marginals
in order to explicitly restrict our search to genes whose relationship with other genes? expression is
different between hippocampus and cortex cells. This analysis reveals many genes known to be
heavily involved in signaling, regulating important processes, and other forms of functional interaction between genes (see Supplementary Information ?S2.1 for details). These types of important
changes cannot be detected by standard differential expression analyses which consider each gene
in isolation or require gene-sets to be explicitly identified as features [28].
7
Conclusion
This paper introduces the overall principal differences methodology and demonstrates the numerous
practical benefits of this approach. While we focused on algorithms for PDA & SPARDA tailored
to the Wasserstein distance, different divergences may be better suited for certain applications.
Further theoretical investigation of the SPARDA framework is of interest, particularly in the high-dimensional d ≥ O(n) setting. Here, rich theory has been derived for compressed sensing and
sparse PCA by leveraging ideas such as restricted isometry or spiked covariance [15]. A natural
question is then which analogous properties of P_X, P_Y theoretically guarantee the strong empirical
performance of SPARDA observed in our high-dimensional applications. Finally, we also envision
extensions of the methods presented here which employ multiple projections in succession, or adapt
the approach to non-pairwise comparison of multiple populations.
Acknowledgements
This research was supported by NIH Grant T32HG004947.
References
[1] Lopes M, Jacob L, Wainwright M (2011) A More Powerful Two-Sample Test in High Dimensions using Random Projection. NIPS: 1206–1214.
[2] Clemmensen L, Hastie T, Witten D, Ersbøll B (2011) Sparse Discriminant Analysis. Technometrics 53: 406–413.
[3] van der Vaart AW, Wellner JA (1996) Weak Convergence and Empirical Processes. Springer.
[4] Gibbs AL, Su FE (2002) On Choosing and Bounding Probability Metrics. International Statistical Review 70: 419–435.
[5] Wei S, Lee C, Wichers L, Marron JS (2015) Direction-Projection-Permutation for High Dimensional Hypothesis Tests. Journal of Computational and Graphical Statistics.
[6] Rosenbaum PR (2005) An exact distribution-free test comparing two multivariate distributions based on adjacency. Journal of the Royal Statistical Society Series B 67: 515–530.
[7] Szekely G, Rizzo M (2004) Testing for equal distributions in high dimension. InterStat 5.
[8] Gretton A, Borgwardt KM, Rasch MJ, Schölkopf B, Smola A (2012) A Kernel Two-Sample Test. The Journal of Machine Learning Research 13: 723–773.
[9] Cramér H, Wold H (1936) Some Theorems on Distribution Functions. Journal of the London Mathematical Society 11: 290–294.
[10] Cuesta-Albertos JA, Fraiman R, Ransford T (2007) A sharp form of the Cramér–Wold theorem. Journal of Theoretical Probability 20: 201–209.
[11] Jirak M (2011) On the maximum of covariance estimators. Journal of Multivariate Analysis 102: 1032–1046.
[12] Tibshirani R (1996) Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society Series B: 267–288.
[13] Bradley PS, Mangasarian OL (1998) Feature Selection via Concave Minimization and Support Vector Machines. ICML: 82–90.
[14] d'Aspremont A, El Ghaoui L, Jordan MI, Lanckriet GR (2007) A direct formulation for sparse PCA using semidefinite programming. SIAM Review: 434–448.
[15] Amini AA, Wainwright MJ (2009) High-dimensional analysis of semidefinite relaxations for sparse principal components. The Annals of Statistics 37: 2877–2921.
[16] Good P (1994) Permutation Tests: A Practical Guide to Resampling Methods for Testing Hypotheses. Springer-Verlag.
[17] Duchi J, Hazan E, Singer Y (2011) Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research 12: 2121–2159.
[18] Wright SJ (2010) Optimization Algorithms in Machine Learning. NIPS Tutorial.
[19] Sandler R, Lindenbaum M (2011) Nonnegative Matrix Factorization with Earth Mover's Distance Metric for Image Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 33: 1590–1602.
[20] Levina E, Bickel P (2001) The Earth Mover's distance is the Mallows distance: some insights from statistics. ICCV 2: 251–256.
[21] Wang Z, Lu H, Liu H (2014) Tighten after Relax: Minimax-Optimal Sparse PCA in Polynomial Time. NIPS 27: 3383–3391.
[22] Bertsekas DP (1998) Network Optimization: Continuous and Discrete Models. Athena Scientific.
[23] Bertsekas DP, Eckstein J (1988) Dual coordinate step methods for linear network flow problems. Mathematical Programming 42: 203–243.
[24] Bertsekas DP (2011) Incremental gradient, subgradient, and proximal methods for convex optimization: A survey. In: Optimization for Machine Learning, MIT Press. pp. 85–119.
[25] Beck A, Teboulle M (2009) A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM Journal on Imaging Sciences 2: 183–202.
[26] Guyon I, Gunn S, Nikravesh M, Zadeh LA (2006) Feature Extraction: Foundations and Applications. Secaucus, NJ, USA: Springer-Verlag.
[27] Zou H, Hastie T, Tibshirani R (2005) Sparse Principal Component Analysis. Journal of Computational and Graphical Statistics 67: 301–320.
[28] Geiler-Samerotte KA, Bauer CR, Li S, Ziv N, Gresham D, et al. (2013) The details in the distributions: why and how to study phenotypic variability. Current Opinion in Biotechnology 24: 752–759.
[29] Zeisel A, Muñoz-Manchado AB, Codeluppi S, Lönnerberg P, La Manno G, et al. (2015) Cell types in the mouse cortex and hippocampus revealed by single-cell RNA-seq. Science 347: 1138–1142.
5,407 | 5,895 | Kullback-Leibler Proximal Variational Inference
Mohammad Emtiyaz Khan*
École Polytechnique Fédérale de Lausanne
Lausanne, Switzerland
emtiyaz@gmail.com
Pierre Baqué*
École Polytechnique Fédérale de Lausanne
Lausanne, Switzerland
pierre.baque@epfl.ch
Pascal Fua
École Polytechnique Fédérale de Lausanne
Lausanne, Switzerland
pascal.fua@epfl.ch
François Fleuret
Idiap Research Institute
Martigny, Switzerland
francois.fleuret@idiap.ch
Abstract
We propose a new variational inference method based on a proximal framework
that uses the Kullback-Leibler (KL) divergence as the proximal term. We make
two contributions towards exploiting the geometry and structure of the variational
bound. First, we propose a KL proximal-point algorithm and show its equivalence
to variational inference with natural gradients (e.g., stochastic variational inference). Second, we use the proximal framework to derive efficient variational algorithms for non-conjugate models. We propose a splitting procedure to separate
non-conjugate terms from conjugate ones. We linearize the non-conjugate terms
to obtain subproblems that admit a closed-form solution. Overall, our approach
converts inference in a non-conjugate model to subproblems that involve inference
in well-known conjugate models. We show that our method is applicable to a wide
variety of models and can result in computationally efficient algorithms. Applications to real-world datasets show performance comparable to existing methods.
1 Introduction
Variational methods are a popular alternative to Markov chain Monte Carlo (MCMC) methods for
Bayesian inference. They have been used extensively for their speed and ease of use. In particular,
methods based on the evidence lower bound optimization (ELBO) are quite popular because they
convert a difficult integration problem to an optimization problem. This reformulation enables the
application of optimization techniques for large-scale Bayesian inference.
Recently, an approach called stochastic variational inference (SVI) has gained popularity for inference in conditionally-conjugate exponential family models [1]. SVI exploits the geometry of the
posterior distribution by using natural gradients and uses a stochastic method to improve scalability.
The resulting updates are simple and easy to implement.
Several generalizations of SVI have been proposed for general latent-variable models where the
lower bound might be intractable [2, 3, 4]. These generalizations, although important, do not take
the geometry of the posterior distribution into account.
In addition, none of these approaches exploit the structure of the lower bound. In practice, not all
factors of the joint distribution introduce difficulty in the optimization. It is therefore desirable to
treat 'difficult' terms differently from 'easy' terms.
* A note on contributions: P. Baqué proposed the use of the KL proximal term and showed that the resulting proximal steps have closed-form solutions. The rest of the work was carried out by M. E. Khan.
In this context, we propose a splitting method for variational inference; this method exploits both
the structure and the geometry of the lower bound. Our approach is based on the proximal-gradient
framework. We make two important contributions. First, we propose a proximal-point algorithm
that uses the Kullback-Leibler (KL) divergence as the proximal term. We show that the addition of
this term incorporates the geometry of the posterior distribution. We establish the equivalence of our
approach to variational methods that use natural gradients (e.g., [1, 5, 6]).
Second, following the proximal-gradient framework, we propose a splitting approach for variational
inference. In this approach, we linearize difficult terms such that the resulting optimization problem
is easy to solve. We apply this approach to variational inference on non-conjugate models. We
show that linearizing non-conjugate terms leads to subproblems that have closed-form solutions.
Our approach therefore converts inference in a non-conjugate model to subproblems that involve
inference in well-known conjugate models, and for which efficient implementation exists.
2 Latent Variable Models and Evidence Lower-Bound Optimization
Consider a general latent-variable model with data vector y of length N and latent vector z of length D, following a joint distribution p(y, z) (we drop the parameters of the distribution from the notation). ELBO approximates the posterior p(z|y) by a distribution q(z|λ) that maximizes a lower bound on the marginal likelihood. Here, λ is the vector of parameters of the distribution q. As shown in (1), the lower bound is obtained by first multiplying and then dividing by q(z|λ), and then applying Jensen's inequality using the concavity of log. The approximate posterior q(z|λ) is obtained by maximizing the lower bound with respect to λ.
$$\log p(y) = \log \int q(z|\lambda)\,\frac{p(y,z)}{q(z|\lambda)}\,dz \;\ge\; \max_{\lambda}\; \mathbb{E}_{q(z|\lambda)}\left[\log \frac{p(y,z)}{q(z|\lambda)}\right] =: \mathcal{L}(\lambda). \qquad (1)$$
Unfortunately, the lower bound may not always be easy to optimize, e.g., some terms in the lower
bound might be intractable or might admit a form that is not easy to optimize. In addition, the
optimization can be slow when N and D are large.
3 The KL Proximal-Point Algorithm for Conjugate Models
In this section, we introduce a proximal-point method based on the Kullback-Leibler (KL) proximal function and establish its relation to existing approaches based on natural gradients [1, 5, 6].
In particular, for conditionally-conjugate exponential-family models, we show that each iteration of
our proximal-point approach is equivalent to a step along the natural gradient.
The Kullback-Leibler (KL) divergence between two distributions q(z|λ) and q(z|λ′) is defined as D_KL[q(z|λ) ‖ q(z|λ′)] := E_{q(z|λ)}[log q(z|λ) − log q(z|λ′)]. Using the KL divergence as the proximal term, we introduce a proximal-point algorithm that generates a sequence λ_k by solving the following subproblems:
$$\text{KL Proximal-Point:}\quad \lambda_{k+1} = \arg\max_{\lambda}\; \mathcal{L}(\lambda) - \frac{1}{\beta_k} D_{KL}\left[q(z|\lambda)\,\|\,q(z|\lambda_k)\right], \qquad (2)$$
given an initial value λ_0 and a bounded sequence of step-sizes β_k > 0.
One benefit of using the KL term is that it takes the geometry of the posterior distribution into
account. This fact has led to its extensive use in both the optimization and statistics literature,
e.g., for speeding up the expectation-maximization algorithm [7, 8], for convex optimization [9], for
message-passing in graphical models [10], and for approximate Bayesian inference [11, 12, 13].
Relationship to the methods that use natural gradients: An alternative approach to incorporate
the geometry of the posterior distribution is to use natural gradients [6, 5, 1]. We now establish its
relationship to our approach. The natural gradient can be interpreted as finding a descent direction
that ensures a fixed amount of change in the distribution. For variational inference, this is equivalent
to the following [1, 14]:
$$\arg\max_{\Delta\lambda}\; \mathcal{L}(\lambda_k + \Delta\lambda), \quad \text{s.t.}\quad D^{sym}_{KL}\left[q(z|\lambda_k + \Delta\lambda)\,\|\,q(z|\lambda_k)\right] \le \epsilon, \qquad (3)$$
where D^{sym}_{KL} is the symmetric KL divergence. It appears that the proximal-point subproblem (2) is
related to a Lagrangian of the above optimization. In fact, as we show below, the two problems are
equivalent for conditionally conjugate exponential-family models.
We consider the set-up described in [15], which is a bit more general than that of [1]. Consider a Bayesian network with nodes z_i and a joint distribution ∏_i p(z_i | pa_i), where pa_i are the parents of z_i. We assume that each factor is an exponential-family distribution defined as follows:
$$p(z_i | pa_i) := h_i(z_i) \exp\left[ \eta_i(pa_i)^T T_i(z_i) - A_i(\eta_i) \right], \qquad (4)$$
where η_i is the natural parameter, T_i(z_i) is the sufficient statistic, A_i(η_i) is the partition function, and h_i(z_i) is the base measure. We seek the factorized approximation shown in (5), where each z_i belongs to the same exponential-family distribution as the corresponding factor of the joint distribution. The parameters of this distribution are denoted λ_i to differentiate them from the joint-distribution parameters η_i. Also note that the subscript refers to the factor i, not to the iteration.
$$q(z|\lambda) = \prod_i q_i(z_i|\lambda_i), \quad \text{where}\quad q_i(z_i) := h_i(z_i) \exp\left[ \lambda_i^T T_i(z_i) - A_i(\lambda_i) \right]. \qquad (5)$$
For this model, we show the following equivalence between a gradient-descent method based on natural gradients and our proximal-point approach. The proof is given in the supplementary material.
Theorem 1. For the model shown in (4) and the posterior approximation shown in (5), the sequence λ_k generated by the proximal-point algorithm of (2) is equal to the one obtained using gradient descent along the natural gradient with step lengths β_k/(1 + β_k).
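As a toy illustration of this equivalence (a sketch with a hypothetical model and values, not taken from the paper): for a fully conjugate Gaussian model, the proximal-point step (2) reduces to a convex combination in natural-parameter space with weight r = β_k/(1 + β_k), i.e., exactly a natural-gradient step of that length.

```python
import numpy as np

# Toy sketch of Theorem 1 for a conjugate Gaussian model (hypothetical data
# and step-size). The proximal-point update is a convex combination of the
# current natural parameters and the coordinate-wise optimum, with weight
# r = beta / (1 + beta) -- a natural-gradient step of that length.

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=50)     # data with known unit variance
prior_prec, prior_mean = 1.0, 0.0               # prior N(0, 1)

# Natural parameters (precision, precision * mean) of the exact posterior.
target = np.array([prior_prec + y.size, prior_prec * prior_mean + y.sum()])

lam = np.array([prior_prec, prior_prec * prior_mean])   # initialize q = prior
beta = 1.0                                              # step-size beta_k
r = beta / (1.0 + beta)
for _ in range(25):
    lam = (1.0 - r) * lam + r * target   # proximal-point / natural-gradient step

print("posterior mean:", lam[1] / lam[0], "exact:", target[1] / target[0])
```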
Proof of convergence: Convergence of the proximal-point algorithm shown in (2) is proved in [8]. We give a summary of the results here. We assume β_k = 1; however, the proof holds for any bounded sequence of β_k. Let the space of all λ be denoted by S. Define the set S_0 := {λ ∈ S : L(λ) ≥ L(λ_0)}. Then, ‖λ_{k+1} − λ_k‖ → 0 under the following conditions:
(A) The maximum of L exists, and the gradient of L is continuous and defined in S_0.
(B) The KL divergence and its gradient are continuous and defined in S_0 × S_0.
(C) D_KL[q(z|λ) ‖ q(z|λ′)] = 0 only when λ′ = λ.
In our case, conditions (A) and (B) are either assumed or satisfied, and condition (C) can be ensured by choosing an appropriate parameterization of q.
4 The KL Proximal-Gradient Algorithm for Non-conjugate Models
The proximal-point algorithm of (2) might be difficult to optimize for non-conjugate models, e.g., due to the non-conjugate factors. In this section, we present an algorithm based on the proximal-gradient framework where we first split the objective function into 'difficult' and 'easy' terms and then, to simplify the optimization, linearize the difficult term. See [16] for a good review of proximal methods for machine learning.
We split the ratio p(y, z)/q(z|λ) ≡ c · p̃_d(z|λ) p̃_e(z|λ), where p̃_d contains all factors that make the optimization difficult and p̃_e contains the rest (c is a constant). This results in the following split:
$$\mathcal{L}(\lambda) = \mathbb{E}_{q(z|\lambda)}\left[\log \frac{p(y,z)}{q(z|\lambda)}\right] := \underbrace{\mathbb{E}_{q(z|\lambda)}[\log \tilde{p}_d(z|\lambda)]}_{f(\lambda)} + \underbrace{\mathbb{E}_{q(z|\lambda)}[\log \tilde{p}_e(z|\lambda)]}_{h(\lambda)} + \log c. \qquad (6)$$
Note that p̃_d and p̃_e can be un-normalized factors of the distribution. In the worst case, we can set p̃_e(z|λ) ∝ 1 and take the rest as p̃_d(z|λ). We give an example of the split in the next section.
The main idea is to linearize the difficult term f such that the resulting problem admits a simple form. Specifically, we use a proximal-gradient algorithm that solves the following sequence of subproblems to maximize L, as shown below. Here, ∇f(λ_k) is the gradient of f at λ_k.
$$\text{KL Proximal-Gradient:}\quad \lambda_{k+1} = \arg\max_{\lambda}\; \lambda^T \nabla f(\lambda_k) + h(\lambda) - \frac{1}{\beta_k} D_{KL}\left[q(z|\lambda)\,\|\,q(z|\lambda_k)\right]. \qquad (7)$$
Note that our linear approximation is equivalent to the one used in gradient descent. Also, the approximation is tight at λ_k. Therefore, it does not introduce any error into the optimization; rather, it only acts as a surrogate to take the next step. Existing variational methods have used approximations such as ours, e.g., see [17, 18, 19]. Most of these methods first approximate the log p̃_d(z|λ) term by a linear or quadratic approximation and then compute the expectation. As a result, the approximation is not tight and can result in bad performance [20]. In contrast, our approximation is applied directly to E[log p̃_d(z|λ)] and is therefore tight at λ_k.
The convergence of our approach is covered by the results shown in [21], who prove convergence of an algorithm more general than ours. Below, we summarize the results. As before, we assume that the maximum exists and that L is continuous. We make three additional assumptions. First, the gradient of f is L-Lipschitz continuous in S, i.e., ‖∇f(λ) − ∇f(λ′)‖ ≤ L‖λ − λ′‖ for all λ, λ′ ∈ S. Second, the function h is concave. Third, there exists an α > 0 such that
$$(\lambda_{k+1} - \lambda_k)^T \nabla_1 D_{KL}\left[q(z|\lambda_{k+1})\,\|\,q(z|\lambda_k)\right] \ge \alpha \|\lambda_{k+1} - \lambda_k\|^2, \qquad (8)$$
where ∇_1 denotes the gradient with respect to the first argument. Under these conditions, ‖λ_{k+1} − λ_k‖ → 0 when 0 < β_k < α/L. The choice of the constant α is also discussed in [21]. Note that even though h is required to be concave, f can be non-convex. The lower bound usually contains concave terms, e.g., the entropy term. In the worst case, when there are no concave terms, we can simply choose h ≡ 0.
5 Examples of KL Proximal-Gradient Variational Inference
In this section, we show a few examples where the subproblem (7) has a closed-form solution.
Generalized linear model: We consider the generalized linear model shown in (9). Here, y is the output vector (of length N) whose n-th entry is equal to y_n, whereas X is an N × D feature matrix that contains the feature vectors x_n^T as rows. The weight vector z is a Gaussian with mean μ and covariance Σ. To obtain the probability of y_n, the linear predictor x_n^T z is passed through the likelihood p(y_n | ·):
$$p(y, z) := \prod_{n=1}^{N} p(y_n | x_n^T z)\, \mathcal{N}(z|\mu, \Sigma). \qquad (9)$$
We restrict the posterior distribution to be a Gaussian q(z|λ) = N(z|m, V) with mean m and covariance V; therefore, λ := {m, V}. For this posterior family, the non-Gaussian terms p(y_n | x_n^T z) are difficult to handle, while the Gaussian term N(z|μ, Σ) is easy because it is conjugate to q. Therefore, we set p̃_e(z|λ) ∝ N(z|μ, Σ)/N(z|m, V) and let the rest of the terms go into p̃_d.
By substituting in (6) and using the definition of the KL divergence, we get the lower bound shown below in (10). The first term is the function f that will be linearized, and the second term is the function h:
$$\mathcal{L}(m, V) := \underbrace{\sum_{n=1}^{N} \mathbb{E}_{q(z|\lambda)}\left[\log p(y_n | x_n^T z)\right]}_{f(m,V)} + \underbrace{\mathbb{E}_{q(z|\lambda)}\left[\log \frac{\mathcal{N}(z|\mu,\Sigma)}{\mathcal{N}(z|m,V)}\right]}_{h(m,V)}. \qquad (10)$$
For the linearization, we compute the gradient of f using the chain rule. Denote f_n(m̃_n, ṽ_n) := E_{q(z|λ)}[log p(y_n | x_n^T z)], where m̃_n := x_n^T m and ṽ_n := x_n^T V x_n. Gradients of f with respect to m and V can then be expressed in terms of gradients of f_n with respect to m̃_n and ṽ_n:
$$\nabla_m f(m, V) = \sum_{n=1}^{N} x_n \nabla_{\tilde{m}_n} f_n(\tilde{m}_n, \tilde{v}_n), \qquad \nabla_V f(m, V) = \sum_{n=1}^{N} x_n x_n^T \nabla_{\tilde{v}_n} f_n(\tilde{m}_n, \tilde{v}_n). \qquad (11)$$
For notational simplicity, we denote the gradients of f_n at m̃_nk := x_n^T m_k and ṽ_nk := x_n^T V_k x_n by
$$\alpha_{nk} := -\nabla_{\tilde{m}_{nk}} f_n(\tilde{m}_{nk}, \tilde{v}_{nk}), \qquad \gamma_{nk} := -2\nabla_{\tilde{v}_{nk}} f_n(\tilde{m}_{nk}, \tilde{v}_{nk}). \qquad (12)$$
Using (11) and (12), we get the following linear approximation of f:
$$f(m, V) \approx \lambda^T \nabla f(\lambda_k) := m^T \left[\nabla_m f(m_k, V_k)\right] + \mathrm{Tr}\left[ V \{\nabla_V f(m_k, V_k)\} \right] \qquad (13)$$
$$= -\sum_{n=1}^{N} \left[ \alpha_{nk} (x_n^T m) + \tfrac{1}{2} \gamma_{nk} (x_n^T V x_n) \right]. \qquad (14)$$
Substituting the above in (7), we get the following subproblem in the k-th iteration:
$$(m_{k+1}, V_{k+1}) = \arg\max_{m,\, V \succeq 0}\; -\sum_{n=1}^{N}\left[\alpha_{nk}(x_n^T m) + \tfrac{1}{2}\gamma_{nk}(x_n^T V x_n)\right] + \mathbb{E}_{q(z|\lambda)}\left[\log \frac{\mathcal{N}(z|\mu,\Sigma)}{\mathcal{N}(z|m,V)}\right] - \frac{1}{\beta_k} D_{KL}\left[\mathcal{N}(z|m,V)\,\|\,\mathcal{N}(z|m_k,V_k)\right]. \qquad (15)$$
Taking the gradient with respect to m and V and setting it to zero, we get the following closed-form solutions (details are given in the supplementary material):
$$V_{k+1}^{-1} = r_k V_k^{-1} + (1 - r_k)\left[ \Sigma^{-1} + X^T \mathrm{diag}(\gamma_k) X \right], \qquad (16)$$
$$m_{k+1} = \left[ (1 - r_k)\Sigma^{-1} + r_k V_k^{-1} \right]^{-1} \left[ (1 - r_k)(\Sigma^{-1}\mu - X^T \alpha_k) + r_k V_k^{-1} m_k \right], \qquad (17)$$
where r_k := 1/(1 + β_k), and α_k and γ_k are the vectors of α_nk and γ_nk, respectively.
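The following numpy sketch illustrates updates (16)-(17) on Bayesian logistic regression. It is a minimal illustration under stated assumptions, not the paper's implementation: the expectations defining α_nk and γ_nk in (12) are approximated by Monte Carlo with shared samples, whereas the paper uses the nearly exact piecewise bounds of [23]; the data, prior, and step-size are made up for the example.

```python
import numpy as np

# Sketch of the proximal-gradient updates (16)-(17) for Bayesian logistic
# regression with y_n in {0,1}. The Gaussian expectations in (12) are
# approximated by Monte Carlo; all data and settings are illustrative.

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(0)
N, D = 200, 5
X = rng.normal(size=(N, D))
y = (rng.random(N) < sigmoid(X @ rng.normal(size=D))).astype(float)

mu, Sigma_inv = np.zeros(D), np.eye(D)   # prior N(mu, Sigma) with Sigma = I
m, V = mu.copy(), np.eye(D)              # initialize q to the prior
r = 1.0 / (1.0 + 0.25)                   # r_k = 1/(1 + beta_k), beta_k = 0.25
eps = rng.normal(size=1000)              # shared Monte Carlo samples

for k in range(200):
    V_inv = np.linalg.inv(V)
    mt = X @ m                                      # m~_n = x_n^T m
    vt = np.einsum('nd,de,ne->n', X, V, X)          # v~_n = x_n^T V x_n
    s = sigmoid(mt[:, None] + np.sqrt(vt)[:, None] * eps[None, :])
    alpha = s.mean(axis=1) - y                      # -E[d log p / d eta]
    gamma = (s * (1.0 - s)).mean(axis=1)            # -E[d^2 log p / d eta^2]
    V_inv_new = r * V_inv + (1 - r) * (Sigma_inv + (X.T * gamma) @ X)   # (16)
    m = np.linalg.solve((1 - r) * Sigma_inv + r * V_inv,                # (17)
                        (1 - r) * (Sigma_inv @ mu - X.T @ alpha) + r * (V_inv @ m))
    V = np.linalg.inv(V_inv_new)
```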
Computationally efficient updates: Even though the updates are available in closed form, they are not efficient when the dimensionality D is large. In such a case, an explicit computation of V is costly because the resulting D × D matrix is extremely large. We now derive efficient updates that avoid an explicit computation of V.
Our derivation involves two key steps. The first step is to show that V_{k+1} can be parameterized by γ_k. Specifically, if we initialize V_0 = Σ, then we can show that
$$V_{k+1} = \left[ \Sigma^{-1} + X^T \mathrm{diag}(\tilde{\gamma}_{k+1}) X \right]^{-1}, \quad \text{where}\quad \tilde{\gamma}_{k+1} = r_k \tilde{\gamma}_k + (1 - r_k)\gamma_k, \qquad (18)$$
with γ̃_0 = γ_0. A detailed derivation is given in the supplementary material.
The second key step is to express the updates in terms of m̃_n and ṽ_n. For this purpose, we define some new quantities. Let m̃ be the vector whose n-th entry is m̃_n, and similarly let ṽ be the vector of the ṽ_n for all n. Denote the corresponding vectors in the k-th iteration by m̃_k and ṽ_k, respectively. Finally, define μ̃ = Xμ and Σ̃ = XΣX^T.
Now, by using the fact that m̃ = Xm and ṽ = diag(XVX^T) and applying the Woodbury matrix identity, we can express the updates in terms of m̃ and ṽ, as shown below (a detailed derivation is given in the supplementary material):
$$\tilde{m}_{k+1} = \tilde{m}_k + (1 - r_k)\left(I - \tilde{\Sigma} B_k^{-1}\right)\left(\tilde{\mu} - \tilde{m}_k - \tilde{\Sigma}\alpha_k\right), \quad \text{where } B_k := \tilde{\Sigma} + \left[\mathrm{diag}(r_k \tilde{\gamma}_k)\right]^{-1},$$
$$\tilde{v}_{k+1} = \mathrm{diag}(\tilde{\Sigma}) - \mathrm{diag}\left(\tilde{\Sigma} A_k^{-1} \tilde{\Sigma}\right), \quad \text{where } A_k := \tilde{\Sigma} + \left[\mathrm{diag}(\tilde{\gamma}_k)\right]^{-1}. \qquad (19)$$
Note that these updates depend on μ̃, Σ̃, α_k, and γ_k (whose sizes only depend on N and are independent of D). Most importantly, these updates avoid an explicit computation of V and only require storing m̃_k and ṽ_k, both of which scale linearly with N.
Also note that the matrices A_k and B_k differ only slightly, and we can reduce computation by using A_k in place of B_k. In our experiments, this does not create any convergence issues.
To assess convergence, we can use the optimality condition. By taking the norm of the derivative of L at m_{k+1} and V_{k+1} and simplifying, we get the following criterion:
$$\|\tilde{\mu} - \tilde{m}_{k+1} - \tilde{\Sigma}\alpha_{k+1}\|^2 + \mathrm{Tr}\left[\tilde{\Sigma}\,\mathrm{diag}(\tilde{\gamma}_k - \gamma_{k+1} - 1)\,\tilde{\Sigma}\right] \le \epsilon,$$
for some ε > 0 (the derivation is in the supplementary material).
Linear-Basis Function Model and Gaussian Process: The algorithm presented above can be extended to linear-basis function models by using the weight-space view presented in [22]. Consider a non-linear basis function φ(x) that maps a D-dimensional feature vector into an N-dimensional feature space. The generalized linear model of (9) is extended to a linear basis function model by replacing x_n^T z with the latent function g(x) := φ(x)^T z. The Gaussian prior on z then translates to a kernel function κ(x, x′) := φ(x)^T Σ φ(x′) and a mean function μ̃(x) := φ(x)^T μ in the latent function space. Given input vectors x_n, we define the kernel matrix Σ̃ whose (i, j)-th entry is equal to κ(x_i, x_j) and the mean vector μ̃ whose i-th entry is μ̃(x_i).
Assuming a Gaussian posterior distribution over the latent function g(x), we can compute its mean m̃(x) and variance ṽ(x) using the proximal-gradient algorithm.
Algorithm 1 Proximal-gradient algorithm for linear basis function models and Gaussian process
Given: training data (y, X), test data x∗, kernel mean μ̃, covariance Σ̃, step-size sequence r_k, and threshold ε.
Initialize: m̃_0 ← μ̃, ṽ_0 ← diag(Σ̃), and γ̃_0 ← ε₁ 1.
repeat
  For all n in parallel: compute α_nk and γ_nk from ∇_{m̃_nk} f_n(m̃_nk, ṽ_nk) and ∇_{ṽ_nk} f_n(m̃_nk, ṽ_nk) using (12).
  Update m̃_k and ṽ_k using (19).
  γ̃_{k+1} ← r_k γ̃_k + (1 − r_k) γ_k.
until ‖μ̃ − m̃_k − Σ̃α_k‖ + Tr[Σ̃ diag(γ̃_k − γ_{k+1} − 1) Σ̃] ≤ ε
Predict test inputs x∗ using (20).
We define m̃ to be the vector of m̃(x_n) for all n, and similarly ṽ to be the vector of all ṽ(x_n). Following the same derivation as in the previous section, we can show that the updates of (19) give us the posterior mean m̃ and variance ṽ. These updates are the kernelized version of (16) and (17).
For prediction, we only need the converged values of α_k and γ̃_k, denoted by α_∗ and γ̃_∗, respectively. Given a new input x_∗, define κ_∗∗ := κ(x_∗, x_∗) and κ_∗ to be the vector whose n-th entry is equal to κ(x_n, x_∗). The predictive mean and variance can be computed as shown below:
$$\tilde{v}(x_*) = \kappa_{**} - \kappa_*^T \left[ \tilde{\Sigma} + \left(\mathrm{diag}(\tilde{\gamma}_*)\right)^{-1} \right]^{-1} \kappa_*, \qquad \tilde{m}(x_*) = \tilde{\mu}_* - \kappa_*^T \alpha_*. \qquad (20)$$
A pseudo-code is given in Algorithm 1. Here, we initialize γ̃ to a small constant ε₁; otherwise, solving the first equation might be ill-conditioned.
These updates also work for Gaussian process (GP) models with a kernel κ(x, x′) and mean function μ̃(x), and for many other latent Gaussian models such as matrix factorization models.
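For concreteness, a hypothetical helper implementing the predictive equations (20) is sketched below; the names `kern` and `mean_fn` and the call signature are assumptions made for illustration, not part of the paper or of any library.

```python
import numpy as np

# Hypothetical helper illustrating the predictive equations (20). `X` holds
# the training inputs, `alpha_star` and `gamma_star` the converged alpha_*
# and gamma~_* from Algorithm 1, `kern` the kernel kappa(., .), and
# `mean_fn` the mean function mu~(.). All names here are illustrative.

def predict(x_star, X, alpha_star, gamma_star, kern, mean_fn):
    k_star = np.array([kern(x_n, x_star) for x_n in X])   # kappa_*
    k_ss = kern(x_star, x_star)                           # kappa_**
    K = np.array([[kern(a, b) for b in X] for a in X])    # Sigma~
    A = K + np.diag(1.0 / gamma_star)                     # Sigma~ + diag(gamma~_*)^-1
    var = k_ss - k_star @ np.linalg.solve(A, k_star)      # v~(x_*)
    mean = mean_fn(x_star) - k_star @ alpha_star          # m~(x_*)
    return mean, var
```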
6 Experiments and Results
We now present some results on real data. Our goal is to show that our approach gives results comparable to existing methods and is easy to implement. We also show that, in some cases, our method is significantly faster than the alternatives due to the kernel trick.
We show results on three models: Bayesian logistic regression, GP classification with a logistic likelihood, and GP regression with a Laplace likelihood. For these likelihoods, expectations can be computed (almost) exactly, for which we used the methods described in [23, 24]. We use a fixed step-size of β_k = 0.25 and 1 for the logistic and Laplace likelihoods, respectively.
We consider three datasets for each model. A summary is given in Table 1. These datasets can be
found at the data repository¹ of LIBSVM and UCI.
Bayesian Logistic Regression: Results for Bayesian logistic regression are shown in Table 2. We consider two datasets: for 'a1a', N > D, and for 'Colon', N < D. We compare our 'Proximal' method to three other existing methods: the 'MAP' method, which finds the mode of the penalized log-likelihood; the 'Mean-Field' method, where the distribution is factorized across dimensions; and the 'Cholesky' method of [25]. We implemented these methods using the 'minFunc' software by Mark Schmidt². We used L-BFGS for optimization. All algorithms are stopped when the optimality condition is below 10⁻⁴. We set the Gaussian prior to Σ = δI and μ = 0. To set the hyperparameter δ, we use cross-validation for MAP and the maximum marginal-likelihood estimate for the rest of the methods. As we compare running times as well, we use a common range of hyperparameter values for all methods. These values are shown in Table 1.
For Bayesian methods, we report the negative of the marginal-likelihood approximation ('Neg-Log-Lik'), i.e., the negative of the value of the lower bound at the maximum. We also report the log-loss, computed as −Σ_n log p̂_n / N, where p̂_n are the predictive probabilities of the test data and N is the total number of test pairs. A lower value is better, and a value of 1 is equivalent to random coin-flipping. In addition, we report the total time taken for hyperparameter selection.
¹ https://archive.ics.uci.edu/ml/datasets.html and http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
² Available at https://www.cs.ubc.ca/~schmidtm/Software/minFunc.html
Model     Dataset     N       D     %Train  #Splits  Hyperparameter range
LogReg    a1a         32,561  123   5%      1        δ = logspace(-3,1,30)
          Colon       62      2000  50%     10       δ = logspace(0,6,30)
GP class  Ionosphere  351     34    50%     10       for all GP datasets:
          Sonar       208     60    50%     10         log(l) = linspace(-1,6,15)
          USPS-3vs5   1,540   256   50%     5          log(σ) = linspace(-1,6,15)
GP reg    Housing     506     13    50%     10       additionally for GP regression:
          Triazines   186     60    50%     10         log(b) = linspace(-5,1,2)
          Space ga    3,106   6     50%     1

Table 1: A list of models and datasets. %Train is the % of training data. The last column shows the hyperparameter values ('linspace' and 'logspace' refer to Matlab commands).
Dataset  Method      Neg-Log-Lik   Log Loss     Time
a1a      MAP         --            0.499        27s
a1a      Mean-Field  792.8         0.505        21s
a1a      Cholesky    590.1         0.488        12m
a1a      Proximal    590.1         0.488        7m
Colon    MAP         --            0.78 (0.01)  7s (0.00)
Colon    Mean-Field  18.35 (0.11)  0.78 (0.01)  15m (0.04)
Colon    Proximal    15.82 (0.13)  0.70 (0.01)  18m (0.14)

Table 2: A summary of the results obtained on Bayesian logistic regression. In all columns, a lower value implies better performance.
For MAP, this is the total cross-validation time, whereas for Bayesian methods it is the time taken to compute 'Neg-Log-Lik' for all hyperparameter values over the whole range.
We summarize these results in Table 2. For all columns, a lower value is better. We see that for 'a1a', the fully Bayesian methods perform slightly better than MAP. More importantly, the Proximal method is faster than the Cholesky method while obtaining the same error and marginal-likelihood estimate. For the Proximal method, we use the updates (17) and (16) because D ≪ N, but even in this scenario, the Cholesky method is slow due to an expensive line search over a large number of parameters.
For the 'Colon' dataset, we use the update (19) for the Proximal method. We do not compare to the Cholesky method because it is too slow for large datasets. In Table 2, we see that our implementation is as fast as the Mean-Field method but performs significantly better.
Overall, with the Proximal method, we achieve the same results as the Cholesky method but take less time. In some cases, we can also match the running time of the Mean-Field method. Note that the Mean-Field method does not give bad predictions, and its minimum log-loss values are comparable to our approach. However, as the Neg-Log-Lik values for the Mean-Field method are inaccurate, it ends up choosing a bad hyperparameter value. This is expected, as the Mean-Field method makes an extreme approximation. Therefore, cross-validation is more appropriate for the Mean-Field method.
Gaussian process classification and regression: We compare the Proximal method to expectation propagation (EP) and the Laplace approximation. We use the GPML toolbox for this comparison. We used a squared-exponential kernel for the Gaussian process with two scale parameters σ and l (as defined in the GPML toolbox). We do a grid search over these hyperparameters; the grid values are given in Table 1. We report the log-loss and running time for each method.
The left plot in Figure 1 shows the log-loss for GP classification on the 'USPS 3vs5' dataset, where the Proximal method behaves very similarly to EP. These results are summarized in Table 3. We see that our method performs similarly to EP, sometimes a bit better. The running times of EP and the Proximal method are also comparable. The advantage of our approach is that it is easier to implement than EP and is also numerically robust. The predictive probabilities obtained with EP and the Proximal method for the 'USPS 3vs5' dataset are shown in the right plot of Figure 1. The horizontal axis shows the test examples in ascending order; the examples are sorted according to their predictive probabilities obtained with EP. The probabilities themselves are shown on the y-axis. A higher value implies a better performance; therefore, the Proximal method gives
[Figure 1 appears here. Left: heat maps of the log-loss (top row) and running time in seconds (bottom row) over the grid of log(σ) and log(s) for the Laplace, EP, and Proximal methods on the 'USPS 3vs5' dataset. Right: 'EP vs Proximal' predictive probabilities over the test examples.]
Figure 1: In the left figure, the top row shows the log-loss and the bottom row shows the running time in seconds for the 'USPS 3vs5' dataset. In each plot, the minimum value of the log-loss is shown with a black circle. The right figure shows the predictive probabilities obtained with EP and the Proximal method. The horizontal axis shows the test examples in ascending order; the examples are sorted according to their predictive probabilities obtained with EP. The probabilities themselves are shown on the y-axis. A higher value implies a better performance; therefore, the Proximal method gives estimates better than EP.
            Log Loss                                  Time (s is sec, m is min, h is hr)
Data        Laplace      EP           Proximal       Laplace     EP          Proximal
Ionosphere  .285 (.002)  .234 (.002)  .230 (.002)    10s (.3)    3.8m (.10)  3.6m (.10)
Sonar       .410 (.002)  .341 (.003)  .317 (.004)    4s (.01)    45s (.01)   63s (.13)
USPS-3vs5   .101 (.002)  .065 (.002)  .055 (.003)    1m (.06)    1h (.06)    1h (.02)
Housing     1.03 (.004)  .300 (.006)  .310 (.009)    .36m (.00)  25m (.65)   61m (1.8)
Triazines   1.35 (.006)  1.36 (.006)  1.35 (.006)    10s (.10)   8m (.04)    14m (.30)
Space ga    1.01 (--)    .767 (--)    .742 (--)      2m (--)     5h (--)     11h (--)

Table 3: Results for GP classification using a logistic likelihood and GP regression using a Laplace likelihood. For all rows, a lower value is better.
estimates better than EP. The improvement in performance is due to the numerical error in the
likelihood implementation. For the Proximal method, we use the method of [23], which is quite
accurate. Designing such accurate likelihood approximations for EP is challenging.
7 Discussion and Future Work
In this paper, we have proposed a proximal framework that uses the KL proximal term to take
the geometry of the posterior distribution into account. We established the equivalence between our
proximal-point algorithm and natural-gradient methods. We proposed a proximal-gradient algorithm
that exploits the structure of the bound to simplify the optimization. An important future direction
is to apply stochastic approximations to approximate gradients. This extension is discussed in [21].
It is also important to design a line-search method to set the step sizes. In addition, our proximal
framework can also be used for distributed optimization in variational inference [26, 11].
Acknowledgments
Mohammad Emtiyaz Khan would like to thank Masashi Sugiyama and Akiko Takeda from University of Tokyo, Matthias Grossglauser and Vincent Etter from EPFL, and Hannes Nickisch from
Philips Research (Hamburg) for useful discussions and feedback. Pierre Baqué was supported in part by the Swiss National Science Foundation, under the grant CRSII2-147693 'Tracking in the Wild'.
References
[1] Matthew D. Hoffman, David M. Blei, Chong Wang, and John Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303-1347, 2013.
[2] Tim Salimans, David A. Knowles, et al. Fixed-form variational posterior approximation through stochastic linear regression. Bayesian Analysis, 8(4):837-882, 2013.
[3] Rajesh Ranganath, Sean Gerrish, and David M. Blei. Black box variational inference. arXiv preprint arXiv:1401.0118, 2013.
[4] Michalis Titsias and Miguel Lázaro-Gredilla. Doubly stochastic variational Bayes for non-conjugate inference. In International Conference on Machine Learning, 2014.
[5] Masa-Aki Sato. Online model selection based on the variational Bayes. Neural Computation, 13(7):1649-1681, 2001.
[6] A. Honkela, T. Raiko, M. Kuusela, M. Tornio, and J. Karhunen. Approximate Riemannian conjugate gradient learning for fixed-form variational Bayes. The Journal of Machine Learning Research, 11:3235-3268, 2011.
[7] Stéphane Chrétien and Alfred O. Hero III. Kullback proximal algorithms for maximum-likelihood estimation. Information Theory, IEEE Transactions on, 46(5):1800-1810, 2000.
[8] Paul Tseng. An analysis of the EM algorithm and entropy-like proximal point methods. Mathematics of Operations Research, 29(1):27-44, 2004.
[9] M. Teboulle. Convergence of proximal-like algorithms. SIAM Journal on Optimization, 7(4):1069-1083, 1997.
[10] Pradeep Ravikumar, Alekh Agarwal, and Martin J. Wainwright. Message-passing for graph-structured linear programs: Proximal projections, convergence and rounding schemes. In International Conference on Machine Learning, 2008.
[11] Behnam Babagholami-Mohamadabadi, Sejong Yoon, and Vladimir Pavlovic. D-MFVI: Distributed mean field variational inference using Bregman ADMM. arXiv preprint arXiv:1507.00824, 2015.
[12] Bo Dai, Niao He, Hanjun Dai, and Le Song. Scalable Bayesian inference via particle mirror descent. Computing Research Repository, abs/1506.03101, 2015.
[13] Lucas Theis and Matthew D. Hoffman. A trust-region method for stochastic variational inference with applications to streaming data. International Conference on Machine Learning, 2015.
[14] Razvan Pascanu and Yoshua Bengio. Revisiting natural gradient for deep networks. arXiv preprint arXiv:1301.3584, 2013.
[15] Ulrich Paquet. On the convergence of stochastic variational inference in Bayesian networks. NIPS Workshop on Variational Inference, 2014.
[16] Nicholas G. Polson, James G. Scott, and Brandon T. Willard. Proximal algorithms in statistics and machine learning. arXiv preprint arXiv:1502.03175, 2015.
[17] Harri Lappalainen and Antti Honkela. Bayesian non-linear independent component analysis by multi-layer perceptrons. In Advances in Independent Component Analysis, pages 93-121. Springer, 2000.
[18] Chong Wang and David M. Blei. Variational inference in nonconjugate models. J. Mach. Learn. Res., 14(1):1005-1031, April 2013.
[19] M. Seeger and H. Nickisch. Large scale Bayesian inference and experimental design for sparse linear models. SIAM Journal of Imaging Sciences, 4(1):166-199, 2011.
[20] Antti Honkela and Harri Valpola. Unsupervised variational Bayesian learning of nonlinear models. In Advances in Neural Information Processing Systems, pages 593-600, 2004.
[21] Mohammad Emtiyaz Khan, Reza Babanezhad, Wu Lin, Mark Schmidt, and Masashi Sugiyama. Convergence of proximal-gradient stochastic variational inference under non-decreasing step-size sequence. arXiv preprint arXiv:1511.00146, 2015.
[22] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[23] B. Marlin, M. Khan, and K. Murphy. Piecewise bounds for estimating Bernoulli-logistic latent Gaussian models. In International Conference on Machine Learning, 2011.
[24] Mohammad Emtiyaz Khan. Decoupled variational inference. In Advances in Neural Information Processing Systems, 2014.
[25] E. Challis and D. Barber. Concave Gaussian variational approximations for inference in large-scale Bayesian linear models. In International Conference on Artificial Intelligence and Statistics, 2011.
[26] Huahua Wang and Arindam Banerjee. Bregman alternating direction method of multipliers. In Advances in Neural Information Processing Systems, 2014.
Learning Large-Scale Poisson DAG Models based on OverDispersion Scoring
Gunwoong Park
Department of Statistics
University of Wisconsin-Madison
Madison, WI 53706
parkg@stat.wisc.edu
Garvesh Raskutti
Department of Statistics
Department of Computer Science
Wisconsin Institute for Discovery, Optimization Group
University of Wisconsin-Madison
Madison, WI 53706
raskutti@cs.wisc.edu
Abstract
In this paper, we address the question of identifiability and learning algorithms
for large-scale Poisson Directed Acyclic Graphical (DAG) models. We define
general Poisson DAG models as models where each node is a Poisson random
variable with rate parameter depending on the values of the parents in the underlying DAG. First, we prove that Poisson DAG models are identifiable from observational data, and present a polynomial-time algorithm that learns the Poisson
DAG model under suitable regularity conditions. The main idea behind our algorithm is based on overdispersion, in that variables that are conditionally Poisson
are overdispersed relative to variables that are marginally Poisson. Our algorithm exploits overdispersion along with methods for learning sparse Poisson undirected
graphical models for faster computation. We provide both theoretical guarantees
and simulation results for both small and large-scale DAGs.
1 Introduction
Modeling large-scale multivariate count data is an important challenge that arises in numerous applications such as neuroscience, systems biology, and many others. One approach that has received
significant attention is the graphical modeling framework since graphical models include a broad
class of dependence models for different data types. Broadly speaking, there are two sets of graphical models: (1) undirected graphical models or Markov random fields and (2) directed acyclic
graphical (DAG) models or Bayesian networks.
Between undirected graphical models and DAGs, undirected graphical models have generally received more attention in the large-scale data setting since both learning and inference algorithms
scale to larger datasets. In particular, for multivariate count data Yang et al. [1] introduce undirected
Poisson graphical models. Yang et al. [1] define undirected Poisson graphical models so that each
node is a Poisson random variable with rate parameter depending only on its neighboring nodes in
the graph. As pointed out in Yang et al. [1] one of the major challenges with Poisson undirected
graphical models is ensuring global normalizability.
Directed acyclic graphs (DAGs) or Bayesian networks are a different class of generative models that
model directional or causal relationships (see e.g. [2, 3] for details). Such directional relationships
naturally arise in most applications but are difficult to model based on observational data. One of
the benefits of DAG models is that they have a straightforward factorization into conditional distributions [4], and hence no issues of normalizability arise as they do for undirected graphical models
as mentioned earlier. However a number of challenges arise that make learning DAG models often impossible for large datasets even when variables have a natural causal or directional structure.
These issues are: (1) identifiability since inferring causal directions from data is often not possible;
(2) computational complexity since it is often computationally infeasible to search over the space of
DAGs [5]; (3) sample size guarantees since fundamental identifiability assumptions such as faithfulness often require extremely large sample sizes to be satisfied even when the number of nodes
p is small (see e.g. [6]).
In this paper, we define Poisson DAG models and address these 3 issues. In Section 3 we prove that
Poisson DAG models are identifiable and in Section 4 we introduce a polynomial-time DAG learning
algorithm for Poisson DAGs which we call OverDispersion Scoring (ODS). The main idea behind
proving identifiability is based on the overdispersion of variables that are conditionally Poisson but
not marginally Poisson. Using overdispersion, we prove that it is possible to learn the causal ordering
of Poisson DAGs using a polynomial-time algorithm and once the ordering is known, the problem of
learning DAGs reduces to a simple set of neighborhood regression problems. While overdispersion
with conditionally Poisson random variables is a well-known phenomenon that is exploited in many applications (see e.g. [7, 8]), overdispersion has never before been exploited for DAG model learning,
to our knowledge.
Statistical guarantees for learning the causal ordering are provided in Section 4.2 and we provide
numerical experiments on both small DAGs and large-scale DAGs with node-size up to 5000 nodes.
Our theoretical guarantees prove that even in the setting where the number of nodes p is larger than
the sample size n, it is possible to learn the causal ordering under the assumption that the so-called moralized graph of the DAG has small degree. Our numerical experiments support our theoretical results and show that our ODS algorithm performs well compared to other state-of-the-art DAG learning methods. Our numerical experiments confirm that our ODS algorithm is one
of the few DAG-learning algorithms that performs well in terms of statistical and computational
complexity in the high-dimensional p > n setting.
2 Poisson DAG Models
In this section, we define general Poisson DAG models. A DAG G = (V, E) consists of a set of vertices V and a set of directed edges E with no directed cycle. We usually set V = {1, 2, ..., p} and associate a random vector (X_1, X_2, ..., X_p) with probability distribution P over the vertices in G. A directed edge from vertex j to k is denoted by (j, k) or j → k. The set Pa(k) of parents of a vertex k consists of all nodes j such that (j, k) ∈ E. One of the convenient properties of DAG models is that the joint distribution f(X_1, X_2, ..., X_p) factorizes in terms of the conditional distributions as follows [4]:
$$f(X_1, X_2, ..., X_p) = \prod_{j=1}^{p} f_j(X_j \mid X_{Pa(j)}),$$
where f_j(X_j | X_{Pa(j)}) refers to the conditional distribution of node X_j given its parents. The basic property of Poisson DAG models is that each conditional distribution f_j(x_j | x_{Pa(j)}) is a Poisson distribution. More precisely, for Poisson DAG models,
$$X_j \mid X_{\{1,2,...,p\}\setminus\{j\}} \sim \mathrm{Poisson}\left(g_j(X_{Pa(j)})\right), \qquad (1)$$
where g_j(·) is an arbitrary function of X_{Pa(j)}. To take a concrete example, g_j(·) can represent the link function for the univariate Poisson generalized linear model (GLM), g_j(X_{Pa(j)}) = exp(θ_j + Σ_{k∈Pa(j)} θ_jk X_k), where (θ_jk)_{k∈Pa(j)} represent the linear weights.
Using the factorization (1), the overall joint distribution is:
$$f(X_1, X_2, ..., X_p) = \exp\Big( \sum_{j \in V} \theta_j X_j + \sum_{(k,j) \in E} \theta_{jk} X_k X_j - \sum_{j \in V} \log X_j! - \sum_{j \in V} e^{\theta_j + \sum_{k \in Pa(j)} \theta_{jk} X_k} \Big). \qquad (2)$$
To contrast this formulation with the Poisson undirected graphical model in Yang et al. [1], the joint distribution for undirected graphical models has the form
$$f(X_1, X_2, ..., X_p) = \exp\Big( \sum_{j \in V} \theta_j X_j + \sum_{(k,j) \in E} \theta_{jk} X_k X_j - \sum_{j \in V} \log X_j! - A(\theta) \Big), \qquad (3)$$
where A(θ) is the log-partition function, i.e., the log of the normalization constant. While the two forms (2) and (3) look quite similar, the key difference is the normalization constant A(θ) in (3), as opposed to the term Σ_{j∈V} e^{θ_j + Σ_{k∈Pa(j)} θ_jk X_k} in (2), which depends on X. To ensure the undirected graphical model representation in (3) is a valid distribution, A(θ) must be finite, which guarantees the distribution is normalizable, and Yang et al. [1] prove that A(θ) is finite if and only if all θ values are less than or equal to 0.
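For intuition, the following sketch draws samples from a small GLM Poisson DAG of the form (2) by ancestral sampling; the graph, intercepts, and weights below are hypothetical, and nodes are assumed to be indexed in topological order so that each node can be sampled after its parents.

```python
import numpy as np

# Ancestral sampling from a GLM Poisson DAG as in (2). The graph and the
# parameters theta below are hypothetical; nodes are indexed in topological
# order so each node is sampled after its parents.

rng = np.random.default_rng(0)
n, p = 10_000, 4
parents = {0: [], 1: [0], 2: [0], 3: [1, 2]}   # edges 0->1, 0->2, 1->3, 2->3
theta0 = [0.5, 0.3, 0.3, 0.2]                  # intercepts theta_j
theta = {(0, 1): -0.2, (0, 2): -0.1, (1, 3): -0.3, (2, 3): -0.1}  # theta_jk <= 0

X = np.zeros((n, p), dtype=int)
for j in range(p):
    log_rate = np.full(n, theta0[j])
    for k in parents[j]:
        log_rate += theta[(k, j)] * X[:, k]
    X[:, j] = rng.poisson(np.exp(log_rate))    # X_j | parents ~ Poisson(g_j)
```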
3 Identifiability
In this section, we prove that Poisson DAG models are identifiable under a very mild condition. In general, DAG models can only be defined up to their Markov equivalence class (see e.g. [3]). However, in some cases it is possible to identify the DAG by exploiting specific properties of the distribution. For example, Peters and Bühlmann prove that Gaussian DAGs based on structural equation models with known or equal variances are identifiable [9], Shimizu et al. [10] prove identifiability for linear non-Gaussian structural equation models, and Peters et al. [11] prove identifiability of non-parametric structural equation models with additive independent noise. Here we show that Poisson DAG models are also identifiable, using the idea of overdispersion.
To provide intuition, we begin by showing the identifiability of a two-node Poisson DAG model. The basic idea is that the relationship between nodes X_1 and X_2 makes the child variable overdispersed. To be precise, consider all three models: M1: X_1 ∼ Poisson(λ_1), X_2 ∼ Poisson(λ_2), where X_1 and X_2 are independent; M2: X_1 ∼ Poisson(λ_1) and X_2 | X_1 ∼ Poisson(g_2(X_1)); and M3: X_2 ∼ Poisson(λ_2) and X_1 | X_2 ∼ Poisson(g_1(X_2)). Our goal is to determine whether the underlying DAG model is M1, M2, or M3.
Figure 1: Directed graphs of M1 (no edge between X_1 and X_2), M2 (X_1 → X_2), and M3 (X_2 → X_1).
Now we exploit the fact that for a Poisson random variable X, Var(X) = E(X), while for a distribution which is conditionally Poisson, the variance is overdispersed relative to the mean. Hence for M1, Var(X_1) = E(X_1) and Var(X_2) = E(X_2). For M2, Var(X_1) = E(X_1), while
$$\mathrm{Var}(X_2) = E[\mathrm{Var}(X_2|X_1)] + \mathrm{Var}[E(X_2|X_1)] = E[g_2(X_1)] + \mathrm{Var}[g_2(X_1)] > E[g_2(X_1)] = E(X_2),$$
as long as Var(g_2(X_1)) > 0. Similarly, under M3, Var(X_2) = E(X_2) and Var(X_1) > E(X_1) as long as Var(g_1(X_2)) > 0. Hence we can identify models M1, M2, and M3 by testing whether the variance is greater than the expectation or equal to it. With finite sample size n, the quantities E(·) and Var(·) can be estimated from data; we consider the finite-sample setting in Sections 4 and 4.2. A small simulation sketch of this test is given below. Now we extend this idea to provide an identifiability condition for general Poisson DAG models.
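The promised simulation sketch follows; the rates and the link g_2 are hypothetical choices used only to illustrate the variance-versus-mean test.

```python
import numpy as np

# Simulation sketch of the two-node test with hypothetical rates: under M2,
# X1 is marginally Poisson (variance equals mean) while X2 is conditionally
# Poisson and hence overdispersed (variance exceeds mean).

rng = np.random.default_rng(1)
n = 100_000
x1 = rng.poisson(2.0, size=n)                    # X1 ~ Poisson(lambda_1)
x2 = rng.poisson(np.exp(0.5 + 0.2 * x1))         # X2 | X1 ~ Poisson(g2(X1))

print("X1: var - mean =", x1.var() - x1.mean())  # close to 0
print("X2: var - mean =", x2.var() - x2.mean())  # clearly positive
```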
The key idea in extending identifiability from the bivariate to the multivariate scenario involves conditioning on the parents of each node and then testing overdispersion. The general p-variate result is as follows:
Theorem 3.1. Assume that for any j ∈ V, K ⊆ Pa(j), and S ⊆ {1, 2, ..., p} \ K,
$$\mathrm{Var}\left(g_j(X_{Pa(j)}) \mid X_S\right) > 0.$$
Then the Poisson DAG model is identifiable.
We defer the proof to the supplementary material. Once again, the main idea of the proof is overdispersion. To explain the required assumption, note that for any j ∈ V and any conditioning set S, Var(X_j | X_S) − E(X_j | X_S) = Var(g_j(X_{Pa(j)}) | X_S). Note that if S = Pa(j) or S = {1, ..., j − 1}, then Var(g_j(X_{Pa(j)}) | X_S) = 0; otherwise Var(g_j(X_{Pa(j)}) | X_S) > 0 by our assumption.
Figure 2: The moralized graph G_m (with edges 1-2, 1-3, 2-3) for a DAG G with directed edges 1 → 3 and 2 → 3.
4 Algorithm
Our algorithm, which we call OverDispersion Scoring (ODS), consists of three main steps: 1) estimating a candidate parent set [1, 12, 13] using existing algorithms for learning undirected graphs; 2) estimating a causal ordering using overdispersion scoring; and 3) estimating directed edges using standard regression algorithms such as the Lasso. Step 3) is a standard problem for which we use off-the-shelf algorithms. Step 1) allows us to reduce both computational and sample complexity by exploiting sparsity of the moralized or undirected graphical model representation of the DAG, which we introduce shortly. Step 2) exploits overdispersion to learn a causal ordering.
An important concept we need to introduce for Step 1) of our algorithm is the moral graph, or undirected graphical model representation, of the DAG (see e.g. [14]). The moralized graph G_m for a DAG G = (V, E) is an undirected graph G_m = (V, E_u), where E_u includes the edge set E without directions plus edges between any nodes that are parents of a common child. Fig. 2 illustrates the moralized graph for a simple 3-node example where E = {(1, 3), (2, 3)} for DAG G. Note that 1 and 2 are parents of the common child 3. Hence E_u = {(1, 2), (1, 3), (2, 3)}, where the additional edge (1, 2) arises from the fact that nodes 1 and 2 are both parents of node 3. Further, let N(j) := {k ∈ {1, 2, ..., p} | (j, k) or (k, j) ∈ E_u} denote the neighborhood set of a node j in the moralized graph G_m. Let {X^{(i)}}_{i=1}^n denote n samples drawn from the Poisson DAG model G. Let π : {1, 2, ..., p} → {1, 2, ..., p} be a bijective function corresponding to a permutation, i.e., a causal ordering. We will also use the convenient hat notation to denote an estimate based on the data. For ease of notation, for any j ∈ {1, 2, ..., p} and S ⊆ {1, 2, ..., p}, let μ_{j|S} and μ_{j|S}(x_S) represent E(X_j | X_S) and E(X_j | X_S = x_S), respectively. Furthermore, let σ²_{j|S} and σ²_{j|S}(x_S) denote Var(X_j | X_S) and Var(X_j | X_S = x_S), respectively. We also define n(x_S) = Σ_{i=1}^n 1(X_S^{(i)} = x_S) and n_S = Σ_{x_S} n(x_S) 1(n(x_S) ≥ c_0 n) for an arbitrary c_0 ∈ (0, 1).
The computation of the score ŝ_jk in Step 2) of our ODS Algorithm 1 involves the following equation:
$$\hat{s}_{jk} = \sum_{x \in \mathcal{X}(\hat{C}_{jk})} \frac{n(x)}{n_{\hat{C}_{jk}}} \left( \hat{\sigma}^2_{j|\hat{C}_{jk}}(x) - \hat{\mu}_{j|\hat{C}_{jk}}(x) \right), \qquad (4)$$
where Ĉ_jk refers to an estimated candidate set of parents specified in Step 2) of our ODS Algorithm 1, and X(Ĉ_jk) = {x ∈ {X^{(1)}_{Ĉ_jk}, X^{(2)}_{Ĉ_jk}, ..., X^{(n)}_{Ĉ_jk}} | n(x) ≥ c_0 n}, so that we ensure we have enough samples for each element we select. In addition, c_0 is a tuning parameter of our algorithm that we specify in our main Theorem 4.2 and in our numerical experiments. A sketch of this computation is given below.
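A minimal Python sketch of the score computation (4) follows; the function name and the default c_0 are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

# A minimal sketch of the overdispersion score (4). `X` is the n x p count
# matrix, `j` the node being scored, `C` the (possibly empty) list of
# candidate parents, and `c0` the truncation parameter from the algorithm.

def overdispersion_score(X, j, C, c0=0.01):
    n = X.shape[0]
    if len(C) == 0:                                # marginal score s_j
        return X[:, j].var() - X[:, j].mean()
    _, inv, counts = np.unique(X[:, C], axis=0,
                               return_inverse=True, return_counts=True)
    keep = np.flatnonzero(counts >= c0 * n)        # elements with n(x) >= c0*n
    nC = counts[keep].sum()
    score = 0.0
    for idx in keep:
        xj = X[inv == idx, j]                      # samples with X_C = x
        score += (counts[idx] / nC) * (xj.var() - xj.mean())
    return score
```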
We can use a number of standard algorithms for Step 1) of our ODS algorithm, since it boils down to finding a candidate set of parents. The main purpose of Step 1) is to reduce both the computational complexity and the sample complexity by exploiting sparsity in the moralized graph. In Step 1), a candidate set of parents is generated for each node, which in principle could be the entire set of nodes. However, since Step 2) requires computing a conditional mean and variance, both the sample and computational complexity depend significantly on the number of variables we condition on, as illustrated in Sections 4.1 and 4.2. Hence, by making the set of candidate parents for each node as small as possible, we gain significant computational and statistical improvements by exploiting the graph structure. A similar step is taken in the MMHC [15] and SC algorithms [16]. The way we choose a candidate set of parents is by learning the moralized graph G_m and then using the neighborhood set N(j) for each j. Hence Step 1) reduces to a standard undirected graphical model learning algorithm. A number of choices are available for Step 1), including the neighborhood regression approach of Yang et al. [1] as well as standard DAG learning algorithms that find a candidate parent set, such as HITON [13] and MMPC [15].
Algorithm 1: OverDispersion Scoring (ODS)
input : n samples X^(1), ..., X^(n) ∈ ({0} ∪ N)^p from the given Poisson DAG model
output: a causal ordering π̂ ∈ N^p and a graph structure Ê ∈ {0, 1}^{p×p}
Step 1: Estimate the undirected edges Ê_u corresponding to the moralized graph, with neighborhood sets N̂(j).
Step 2: Estimate the causal ordering using overdispersion scores:
for i ∈ {1, 2, ..., p} do
  ŝ_i = σ̂_i² − μ̂_i
end
The first element of the causal ordering is π̂_1 = arg min_j ŝ_j.
for j = 2, 3, ..., p − 1 do
  for k ∈ N̂(π̂_{j−1}) ∩ ({1, 2, ..., p} \ {π̂_1, ..., π̂_{j−1}}) do
    The candidate parent set is Ĉ_jk = N̂(k) ∩ {π̂_1, π̂_2, ..., π̂_{j−1}}.
    Calculate ŝ_jk using (4).
  end
  The j-th element of the causal ordering is π̂_j = arg min_k ŝ_jk.
  Step 3: Estimate the directed edges toward π̂_j, denoted by D̂_j.
end
The p-th element of the causal ordering is π̂_p = {1, 2, ..., p} \ {π̂_1, π̂_2, ..., π̂_{p−1}}.
The directed edges toward π̂_p are D̂_p = N̂(π̂_p).
Return the estimated causal ordering π̂ = (π̂_1, π̂_2, ..., π̂_p).
Return the estimated edge structure Ê = {D̂_2, D̂_3, ..., D̂_p}.
Step 2) learns the causal ordering by assigning an overdispersion score to each node. The basic idea
is to determine which nodes are overdispersed based on the sample conditional mean and conditional
variance. The causal ordering is determined one node at a time by selecting the node with the
smallest overdispersion score, which is representative of a node that is least likely to be conditionally
Poisson and most likely to be marginally Poisson. Finding the causal ordering is usually the most
challenging step of DAG learning, since once the causal ordering is learnt, all that remains is to
find the edge set for the DAG. Step 3), the final step, finds the directed edge set of the DAG G by
finding the parent set of each node. Using Steps 1) and 2), finding the parent set of node j boils
down to selecting which variables are parents out of the candidate parents of node j generated in
Step 1), intersected with all elements before node j in the causal ordering from Step 2). Hence we have
p regression variable selection problems, which can be performed using GLMLasso [17] as well as
standard DAG learning algorithms.
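Putting the pieces together, the following sketch mirrors the Step 2 loop of Algorithm 1, reusing marginal_scores and conditional_score from the sketch above and taking the Step 1 neighborhoods as given. The fallback to all unplaced nodes when the neighborhood is exhausted is our own addition for disconnected moralized graphs, not part of the paper's algorithm.

    import numpy as np

    def ods_ordering(X, nbrs, c0=0.005):
        """Greedily extend the causal ordering by the candidate with the
        smallest conditional overdispersion score (Step 2 of Algorithm 1)."""
        p = X.shape[1]
        order = [int(np.argmin(marginal_scores(X)))]  # pi_1 = argmin_j s_j
        while len(order) < p:
            placed = set(order)
            cands = nbrs[order[-1]] - placed           # N(pi_{j-1}) minus placed nodes
            if not cands:                              # fallback (our choice)
                cands = set(range(p)) - placed
            # candidate parent set C_jk = N(k) intersected with placed nodes
            order.append(min(cands,
                             key=lambda k: conditional_score(X, k, nbrs[k] & placed, c0)))
        return order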
4.1 Computational Complexity
Steps 1) and 3) use existing algorithms with known computational complexity. Clearly, the computational complexity of Steps 1) and 3) depends on the choice of algorithm. For example, if we use
the neighborhood selection GLMLasso algorithm [17], as is used in Yang et al. [1], the worst-case
complexity is $O(\min(n, p)np)$ for a single Lasso run, but since there are p nodes, the total worst-case
complexity is $O(\min(n, p)np^2)$. Similarly, if we use GLMLasso for Step 3), the computational complexity is also $O(\min(n, p)np^2)$. As we show in numerical experiments, DAG-based algorithms for
Step 1) tend to run more slowly than neighborhood regression based on GLMLasso.
Step 2), where we estimate the causal ordering, has $(p - 1)$ iterations, and each iteration computes a
number of overdispersion scores $\hat{s}_j$ and $\hat{s}_{jk}$ that is bounded by $O(|K|)$, where $K$ is
the set of candidates for each element of the causal ordering, $\widehat{N}(\hat{\pi}_{j-1}) \cap (\{1, 2, \ldots, p\} \setminus \{\hat{\pi}_1, \ldots, \hat{\pi}_{j-1}\})$,
which is in turn bounded by the maximum degree $d$ of the moralized graph. Hence the total number
of overdispersion scores that need to be computed is $O(pd)$. Since the time for calculating each
overdispersion score, which is the difference between a conditional variance and a conditional expectation, is proportional to n, the time complexity is $O(npd)$. In the worst case, where the degree of the moralized
graph is p, the computational complexity of Step 2) is $O(np^2)$. As we discussed earlier, there is a
significant computational saving by exploiting a sparse moralized graph, which is why we perform
Step 1) of the algorithm. Hence Steps 1) and 3) are the main computational bottlenecks of our ODS
algorithm; the addition of Step 2), which estimates the causal ordering, does not significantly add
to the computational bottleneck. Consequently our ODS algorithm, which is designed for learning
DAGs, is almost as computationally efficient as standard methods for learning undirected graphical
models.
4.2 Statistical Guarantees
In this section, we show consistency of our ODS algorithm in recovering a valid causal ordering
under suitable regularity conditions. We begin by stating the assumptions we impose on
the functions $g_j(\cdot)$.
Assumption 4.1.
(A1) For all $j \in V$, $K \subseteq \mathrm{Pa}(j)$, and all $S \subseteq \{1, 2, \ldots, p\} \setminus K$, there exists an $m > 0$ such that $\mathrm{Var}(g_j(X_{\mathrm{Pa}(j)}) \mid X_S) > m$.
(A2) For all $j \in V$, there exists an $M < \infty$ such that $\mathbb{E}[\exp(g_j(X_{\mathrm{Pa}(j)}))] < M$.
(A1) is a stronger version of the identifiability assumption in 3.1, $\mathrm{Var}(g_j(X_{\mathrm{Pa}(j)}) \mid X_S) > 0$:
since we are in the finite-sample setting, we need the conditional variance to be lower bounded by a
constant bounded away from 0. (A2) is a condition on the tail behavior of $g_j(X_{\mathrm{Pa}(j)})$ for controlling
the tails of the score $\hat{s}_{jk}$ in Step 2 of our ODS algorithm. To take a concrete example for which (A1)
and (A2) are satisfied, it is straightforward to show that the GLM DAG model (2) with non-positive
values of $\{\theta_{jk}\}$ satisfies both (A1) and (A2). The non-positivity constraint on the $\theta$'s is sufficient
but not necessary, and ensures that the parameters do not grow too large.
Now we present the main result under Assumptions (A1) and (A2). For general DAGs, the true
causal ordering $\pi^*$ is not unique. Therefore let $\mathcal{E}(\pi^*)$ denote all the causal orderings that are consistent with the true DAG $G^*$. Further recall that $d$ denotes the maximum degree of the moralized
graph $G_m^*$.
Theorem 4.2 (Recovery of a causal ordering). Consider a Poisson DAG model as specified in (1),
with a set of true causal orderings $\mathcal{E}(\pi^*)$, where the rate functions $g_j(\cdot)$ satisfy Assumption 4.1. If
the sample size threshold parameter $c_0 \asymp n^{-1/(5+d)}$, then there exist positive constants $C_1, C_2, C_3$
such that
$$P(\hat{\pi} \notin \mathcal{E}(\pi^*)) \leq C_1 \exp\left(-C_2 n^{1/(5+d)} + C_3 \log \max\{n, p\}\right).$$
We defer the proof to the supplementary material. The main idea behind the proof uses the overdispersion property exploited in Theorem 3.1, in combination with concentration bounds that exploit
Assumption (A2). Note once again that the maximum degree $d$ of the undirected graph plays an important role in the sample complexity, which is why Step 1) is so important: the size
of the conditioning set depends on the degree $d$ of the moralized graph. Hence $d$ plays an important
role in both the sample complexity and the computational complexity.
Theorem 4.2 can be used in combination with sample complexity guarantees for Steps 1) and 3)
of our ODS algorithm to prove that our output DAG $\widehat{G}$ is the true DAG $G^*$ with high probability.
Sample complexity guarantees for Steps 1) and 3) depend on the choice of algorithm, but for neighborhood regression based on the GLMLasso, provided $n = \Omega(d \log p)$, Steps 1) and 3) should be
consistent.
For Theorem 4.2, if the triple $(n, d, p)$ satisfies $n = \Omega((\log p)^{5+d})$, then our ODS algorithm recovers
the true DAG. Hence, if the moralized graph is sparse, ODS recovers the true DAG in the high-dimensional $p > n$ setting. DAG learning algorithms that apply to the high-dimensional setting
are not common, since they typically rely on faithfulness or similar assumptions or other restrictive
conditions that are not satisfied in the $p > n$ setting. Note that if the DAG is not sparse and $d = \Theta(p)$,
our sample complexity is extremely large when $p$ is large. This makes intuitive sense, since if the
number of candidate parents is large, we would need to condition on a large set of variables, which
is very sample-intensive. Our sample complexity is certainly not optimal, owing to the choice of tuning
parameter $c_0 \asymp n^{-1/(5+d)}$. Determining the optimal sample complexity remains an open question.
Figure 3: Accuracy rates of successful recovery of a causal ordering via our ODS algorithm using
different base algorithms. The panels plot accuracy (%) against sample size (n = 2500 to 10000) for
(a) p = 10, d ≤ 3; (b) p = 50, d ≤ 3; (c) p = 100, d ≤ 3; (d) p = 5000, d ≤ 3.
The larger sample complexity of our ODS algorithm relative to undirected graphical model learning
is mainly due to the fact that DAG learning is an intrinsically harder problem than undirected graph
learning when the causal ordering is unknown. Furthermore, note that Theorem 4.2 does not require
any additional identifiability assumptions such as faithfulness, which severely increases the sample
complexity for large-scale DAGs [6].
5 Numerical Experiments
In this section, we support our theoretical results with numerical experiments and show that our ODS
algorithm performs favorably compared to state-of-the-art DAG learning methods. The simulation
study was conducted using 50 realizations of a p-node random Poisson DAG that was generated as
follows. The $g_j(\cdot)$ functions for the general Poisson DAG model (1) were chosen using the standard
GLM link function (i.e., $g_j(X_{\mathrm{Pa}(j)}) = \exp(\theta_j + \sum_{k \in \mathrm{Pa}(j)} \theta_{jk} X_k)$), resulting in the GLM DAG
model (2). We experimented with other choices of $g_j(\cdot)$ but only present results for the GLM
DAG model (2). Note that our ODS algorithm works well as long as Assumption 4.1 is satisfied,
regardless of the choice of $g_j(\cdot)$. In all results presented, the $(\theta_{jk})$ parameters were chosen uniformly
at random in the range $\theta_{jk} \in [-1, -0.7]$, although any values far from zero satisfying
Assumption 4.1 work well. In fact, smaller values of $\theta_{jk}$ are more favorable to our ODS algorithm
than to state-of-the-art DAG learning methods because of the weak dependency between nodes. DAGs are
generated randomly with a fixed unique causal ordering $\{1, 2, \ldots, p\}$, with edges randomly generated
while respecting the desired maximum degree constraints for the DAG. In our experiments, we always
set the thresholding constant $c_0 = 0.005$, although any value below 0.01 seems to work well.
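For reference, the following sketch reproduces the data-generating mechanism just described: a GLM Poisson DAG sampled in causal order with $\theta_{jk}$ drawn uniformly from $[-1, -0.7]$. The zero intercepts $\theta_j = 0$ are our own simplification.

    import numpy as np

    def sample_glm_poisson_dag(n, p, parents, rng=None):
        """Sample n observations from the GLM DAG model (2):
        X_j | X_Pa(j) ~ Poisson(exp(theta_j + sum_k theta_jk X_k)),
        with nodes visited in the causal order 1..p.
        parents[j] should reference only earlier nodes in the ordering."""
        rng = np.random.default_rng() if rng is None else rng
        theta = {(j, k): rng.uniform(-1.0, -0.7)
                 for j in range(p) for k in parents[j]}
        X = np.zeros((n, p), dtype=int)
        for j in range(p):                  # causal order
            log_rate = np.zeros(n)          # intercept theta_j = 0 for simplicity
            for k in parents[j]:
                log_rate += theta[(j, k)] * X[:, k]
            X[:, j] = rng.poisson(np.exp(log_rate))
        return X, theta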
In Fig. 3, we plot the proportion of simulations in which our ODS algorithm recovers the correct
causal ordering, in order to validate Theorem 4.2. All graphs in Fig. 3 have exactly 2 parents for
each node, and we plot how the accuracy in recovering the true $\pi^*$ varies as a function of n for
$n \in \{500, 1000, 2500, 5000, 10000\}$ and for different node sizes (a) p = 10, (b) p = 50, (c)
p = 100, and (d) p = 5000. As we can see, even when p = 5000, our ODS algorithm recovers the
true causal ordering about 40% of the time when n is approximately 5000, and for smaller DAGs
the accuracy is 100%. In each sub-figure, 3 different algorithms are used for Step 1): GLMLasso [17],
where we choose the regularization parameter to be 0.1; MMPC [15] with significance level 0.005; and HITON [13], again with significance level 0.005;
plus an oracle where the edges of the true moralized graph are used. As Fig. 3 shows, the GLMLasso
seems to be the best performing algorithm in terms of recovery, so we use the GLMLasso for Steps 1)
and 3) for the remaining figures. GLMLasso was also the only algorithm that scaled to the p = 5000
setting. However, it should be pointed out that GLMLasso is not necessarily consistent, and it
depends strongly on the choice of $g_j(\cdot)$. Recall that the degree d refers to the maximum degree of
the moralized DAG.
Fig. 4 provides a comparison of how our ODS algorithm performs in terms of Hamming distance
compared to the state-of-the-art PC [3], MMHC [15], GES [18], and SC [16] algorithms. For the PC,
MMHC, and SC algorithms we use a significance level of 0.005, while for the GES algorithm we use the mBDe [19]
(modified Bayesian Dirichlet equivalent) score, since it performs better than other score choices.
We consider node sizes of p = 10 in (a) and (b) and p = 100 in (c) and (d), since many of these
algorithms do not easily scale to larger node sizes. We consider two Hamming distance measures:
in (a) and (c), we only measure the Hamming distance to the skeleton of the true DAG, which is the
set of edges of the DAG without directions; for (b) and (d) we measure the Hamming distance for
the edges with directions.
Figure 4: Comparison of our ODS algorithm (black) and the PC, GES, MMHC, and SC algorithms in terms
of normalized Hamming distance (%) to skeletons and directed edges, plotted against sample size:
(a) skeletons, p = 10, d ≤ 3; (b) directed edges, p = 10, d ≤ 3; (c) skeletons, p = 100, d ≤ 3;
(d) directed edges, p = 100, d ≤ 3.
The reason we consider the skeleton is that the PC algorithm does not recover
all directions of the DAG. We normalize the Hamming distance by dividing by the total number of
possible edges, $\binom{p}{2}$ and $p(p-1)$ respectively, so that the overall score is a percentage. As we can see, our
ODS algorithm significantly outperforms the other algorithms. We can also see that as the sample
size n grows, our algorithm recovers the true DAG, which is consistent with our theoretical results.
It must be pointed out that the choice of DAG model is suited to our ODS algorithm, while these
state-of-the-art algorithms apply to more general classes of DAG models.
Now we consider the statistical performance for large-scale DAGs. Fig. 5 plots the statistical performance of ODS for large-scale DAGs in terms of (a) recovering the causal ordering; (b) Hamming distance to the true skeleton; (c) Hamming distance to the true DAG with directions. All
graphs in Fig. 5 have exactly 2 parents for each node, and accuracy varies as a function of n for
$n \in \{500, 1000, 2500, 5000, 10000\}$ and for different node sizes $p \in \{1000, 2500, 5000\}$. Fig. 5
shows that our ODS algorithm accurately recovers the causal ordering and the true DAG models even
in the high-dimensional setting, supporting our theoretical result, Theorem 4.2.
Figure 5: Performance of our ODS algorithm for large-scale DAGs with p = 1000, 2500, 5000,
plotted against sample size with d ≤ 3: (a) accuracy (%) of recovering the causal ordering;
(b) normalized Hamming distance (%) to skeletons; (c) normalized Hamming distance (%) to directed edges.
Fig. 6 shows the run-time of our ODS algorithm. We measure the running time (a) by varying the node size
p from 10 to 125 with fixed n = 100 and 2 parents; (b) by varying the sample size n from 100 to 2500 with
fixed p = 20 and 2 parents; (c) by varying the number of parents of each node |Pa| from 1 to 5 with
fixed n = 5000 and p = 20. Fig. 6 (a) and (b) support Section 4.1, where the time complexity
of our ODS algorithm is shown to be at most $O(np^2)$. Fig. 6 (c) shows that the running time is proportional to the
parent set size, which is a lower bound on the degree of the graph; this agrees with the $O(npd)$ time complexity of Step 2) of our
ODS algorithm. We can also see that the GLMLasso has the fastest run-time amongst all
algorithms that determine the candidate parent set.
Figure 6: Time complexity (running time in seconds) of our ODS algorithm with respect to (a) node size p,
from 10 to 125, with n = 100 and d ≤ 3; (b) sample size n, from 100 to 2500, with p = 20 and d ≤ 3;
(c) parent set size |Pa|, from 1 to 5, with n = 5000 and p = 20.
References
[1] E. Yang, G. Allen, Z. Liu, and P. K. Ravikumar, "Graphical models via generalized linear models," in Advances in Neural Information Processing Systems, 2012, pp. 1358–1366.
[2] P. Bonissone, M. Henrion, L. Kanal, and J. Lemmer, "Equivalence and synthesis of causal models," in Uncertainty in Artificial Intelligence, vol. 6, 1991, p. 255.
[3] P. Spirtes, C. Glymour, and R. Scheines, Causation, Prediction and Search. MIT Press, 2000.
[4] S. L. Lauritzen, Graphical Models. Oxford University Press, 1996.
[5] D. M. Chickering, "Learning Bayesian networks is NP-complete," in Learning from Data. Springer, 1996, pp. 121–130.
[6] C. Uhler, G. Raskutti, P. Bühlmann, and B. Yu, "Geometry of the faithfulness assumption in causal inference," The Annals of Statistics, vol. 41, no. 2, pp. 436–463, 2013.
[7] C. B. Dean, "Testing for overdispersion in Poisson and binomial regression models," Journal of the American Statistical Association, vol. 87, no. 418, pp. 451–457, 1992.
[8] T. Zheng, M. J. Salganik, and A. Gelman, "How many people do you know in prison? Using overdispersion in count data to estimate social structure in networks," Journal of the American Statistical Association, vol. 101, no. 474, pp. 409–423, 2006.
[9] J. Peters and P. Bühlmann, "Identifiability of Gaussian structural equation models with equal error variances," Biometrika, p. ast043, 2013.
[10] S. Shimizu, P. O. Hoyer, A. Hyvärinen, and A. Kerminen, "A linear non-Gaussian acyclic model for causal discovery," The Journal of Machine Learning Research, vol. 7, pp. 2003–2030, 2006.
[11] J. Peters, J. Mooij, D. Janzing et al., "Identifiability of causal graphs using functional models," arXiv preprint arXiv:1202.3757, 2012.
[12] I. Tsamardinos, L. E. Brown, and C. F. Aliferis, "The max-min hill-climbing Bayesian network structure learning algorithm," Machine Learning, vol. 65, no. 1, pp. 31–78, 2006.
[13] C. F. Aliferis, I. Tsamardinos, and A. Statnikov, "HITON: a novel Markov Blanket algorithm for optimal variable selection," in AMIA Annual Symposium Proceedings, vol. 2003. American Medical Informatics Association, 2003, p. 21.
[14] R. G. Cowell, P. A. Dawid, S. L. Lauritzen, and D. J. Spiegelhalter, Probabilistic Networks and Expert Systems. Springer-Verlag, 1999.
[15] I. Tsamardinos and C. F. Aliferis, "Towards principled feature selection: Relevancy, filters and wrappers," in Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics. Morgan Kaufmann Publishers: Key West, FL, USA, 2003.
[16] N. Friedman, I. Nachman, and D. Pe'er, "Learning Bayesian network structure from massive datasets: the sparse candidate algorithm," in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence. Morgan Kaufmann Publishers Inc., 1999, pp. 206–215.
[17] J. Friedman, T. Hastie, and R. Tibshirani, "glmnet: Lasso and elastic-net regularized generalized linear models," R package version, vol. 1, 2009.
[18] D. M. Chickering, "Optimal structure identification with greedy search," The Journal of Machine Learning Research, vol. 3, pp. 507–554, 2003.
[19] D. Heckerman, D. Geiger, and D. M. Chickering, "Learning Bayesian networks: The combination of knowledge and statistical data," Machine Learning, vol. 20, no. 3, pp. 197–243, 1995.
Jennifer Iglesias?
Carnegie Mellon University
Pittsburgh, PA
jiglesia@andrew.cmu.edu
Dan Alistarh
Microsoft Research
Cambridge, United Kingdom
dan.alistarh@microsoft.com
Milan Vojnovic
Microsoft Research
Cambridge, United Kingdom
milanv@microsoft.com
Abstract
In many applications, the data is of rich structure that can be represented by a
hypergraph, where the data items are represented by vertices and the associations
among items are represented by hyperedges. Equivalently, we are given an input
bipartite graph with two types of vertices: items, and associations (which we refer
to as topics). We consider the problem of partitioning the set of items into a given
number of components such that the maximum number of topics covered by a
component is minimized. This is a clustering problem with various applications,
e.g. partitioning of a set of information objects such as documents, images, and
videos, and load balancing in the context of modern computation platforms.
In this paper, we focus on the streaming computation model for this problem, in
which items arrive online one at a time and each item must be assigned irrevocably
to a component at its arrival time. Motivated by scalability requirements, we focus
on the class of streaming computation algorithms with memory limited to be at
most linear in the number of components. We show that a greedy assignment
strategy is able to recover a hidden co-clustering of items under a natural set of
recovery conditions. We also report results of an extensive empirical evaluation,
which demonstrate that this greedy strategy yields superior performance when
compared with alternative approaches.
1 Introduction
In a variety of applications, one needs to process data of rich structure that can be conveniently
represented by a hypergraph, where associations of the data items, represented by vertices, are represented by hyperedges, i.e. subsets of items. Such data structure can be equivalently represented
by a bipartite graph that has two types of vertices: vertices that represent items, and vertices that
represent associations among items, which we refer to as topics. In this bipartite graph, each item
is connected to one or more topics. The input can be seen as a graph with vertices belonging to
(overlapping) communities.
There has been significant work on partitioning a set of items into disjoint components such that
similar items are assigned to the same component, see, e.g., [8] for a survey. This problem arises in
the context of clustering of information objects such as documents, images or videos. For example,
the goal may be to partition given collection of documents into disjoint sub-collections such that
the maximum number of distinct topics covered by each sub-collection is minimized, resulting in a
* Work performed in part while an intern with Microsoft Research.
Figure 1: A simple example of a set of items with overlapping associations to topics.
Figure 2: An example of hidden co-clustering with five hidden clusters.
parsimonious summary. The same fundamental problem also arises in processing of complex data
workloads, including enterprise emails [10], online social networks [18], graph data processing and
machine learning computation platforms [20, 21, 2], and load balancing in modern streaming query
processing platforms [24]. In this context, the goal is to partition a set of data items over a given
number of servers to balance the load according to some given criteria.
Problem Definition. We consider the min-max hypergraph partitioning problem defined as follows.
The input to the problem is a set of items, a set of topics, a number of components to partition the
set of items, and a demand matrix that specifies which particular subset of topics is associated with
each individual item. Given a partitioning of the set of items, the cost of a component is defined
as the number of distinct topics that are associated with items of the given component. The cost of
a given partition is the maximum cost of a component. In other words, given an input hypergraph
and a partition of the set of vertices into a given number of disjoint components, the cost of a
component is defined to be the number of hyperedges that have at least one vertex assigned to this
component. For example, for the simple input graph in Figure 1, a partition of the set of items into
two components {1, 3} and {2, 4} gives each component a cost of 2; thus the cost of the partition is 2.
The cost of a component is a submodular function, as the distinct
topics associated with items of the component correspond to a neighborhood set in the input bipartite
graph.
In the streaming computation model that we consider, items arrive sequentially one at a time, and
each item needs to be assigned, irrevocably, to one component at its arrival time. This streaming
computation model allows for limited memory to be used at any time during the execution whose
size is restricted to be at most linear in the number of the components. Both these assumptions arise
as part of system requirements for deployment in web-scale services.
The min-max hypergraph partition problem is NP-hard. The streaming computation problem is even
more difficult, as less information is available to the algorithm when an item must be assigned.
Contribution. In this paper, we consider the streaming min-max hypergraph partitioning problem.
We identify a greedy item placement strategy which outperforms all alternative approaches considered on real-world datasets, and can be proven to have a non-trivial recovery property: it recovers
hidden co-clusters of items in probabilistic inputs subject to a recovery condition.
Specifically, we show that, given a set of hidden co-clusters to be placed onto k components, the
greedy strategy will tend to place items from the same hidden cluster onto the same component, with
high probability. In turn, this property implies that greedy will provide a constant factor approximation of the optimal partition on inputs satisfying the recovery property.
The probabilistic input model we consider is defined as follows. The set of topics is assumed to
be partitioned into a given number ? ? 1 of disjoint hidden clusters. Each item is connected to
topics according to a mixture probability distribution defined as follows. Each item first selects one
of the hidden clusters as a home hidden cluster by drawing an independent sample from a uniform
distribution over the hidden clusters. Then, it connects to each topic from its home hidden cluster
independently with probability p, and it connects to each topic from each other hidden cluster with
probability q ? p. This defines a hidden co-clustering of the input bipartite graph; see Figure 2 for
an example.
This model is similar in spirit to the popular stochastic block model of an undirected graph, and
it corresponds to a hidden co-clustering [6, 7, 17, 4] model of an undirected bipartite graph. We
consider asymptotically accurate recovery of this hidden co-clustering.
A hidden cluster is said to be asymptotically recovered if the portion of items from the given hidden
cluster assigned to the same partition goes to one asymptotically as the number of items observed
grows large. An algorithm guarantees balanced asymptotic recovery if, additionally, it ensures that
the cost of the most loaded partition is within a constant of the average partition load.
Our main analytical result shows that a simple greedy strategy provides balanced asymptotic
recovery of hidden clusters (Theorem 1). We prove that a sufficient condition for the recovery of
hidden clusters is that the number of hidden clusters $\ell$ is at least $k \log k$, where k is the number
of components, and that the gap between the probability parameters q and p is sufficiently large:
$q < \log r/(kr) < 2 \log r / r \leq p$, where r is the number of topics in a hidden cluster. Roughly
speaking, this means that if the mean number of topics with which an item is associated in its
home hidden cluster is at least twice as large as the mean number of topics with which it is
associated from other hidden clusters, then the simple greedy online algorithm
guarantees asymptotic recovery.
The proof is based on a coupling argument, where we first show that assigning an item to a partition based on the number of topics it has in common with each partition is similar to making the
assignment proportionally to the number of items from the same hidden cluster present
on each partition. In turn, this allows us to couple the assignment strategy with a Polya urn process [5] with "rich-get-richer" dynamics, which implies that the policy converges to assigning every
item from a hidden cluster to the same partition. Additionally, this phenomenon occurs "in parallel"
for each cluster. This recovery property implies that the strategy ensures a constant factor
approximation of the optimum assignment.
2 Problem Definition and Basic Results
In this section we provide a formal problem definition, and present some basic results on the computational hardness and lower bounds.
Input. The input is defined by a set of items $N = \{1, 2, \ldots, n\}$, a set of topics $M = \{1, 2, \ldots, m\}$,
and a given number of components k. Dependencies between items and topics are given by a demand
matrix $D = (d_{i,l}) \in \{0, 1\}^{n \times m}$, where $d_{i,l} = 1$ indicates that item i needs topic l, and $d_{i,l} = 0$ otherwise.¹
Alternatively, we can represent the input as a bipartite graph $G = (N, M, E)$ where there is an edge
$(i, l) \in E$ if and only if item i needs topic l, or as a hypergraph $H = (N, E)$ where a hyperedge
$e \in E$ consists of all items that use the same topic.
The Problem. An assignment of items to components is given by $x \in \{0, 1\}^{n \times k}$, where $x_{i,j} = 1$
if item i is assigned to component j, and $x_{i,j} = 0$ otherwise. Given an assignment of items to
components x, the cost of component j is defined to be equal to the minimum number of distinct
topics that are needed by this component to cover all the items assigned to it, i.e.,
$$c_j(x) = \sum_{l \in M} \min\Big\{ \sum_{i \in N} d_{i,l}\, x_{i,j},\; 1 \Big\}.$$
As defined, the cost of each component is a submodular function of the items assigned to it. We
consider the min-max hypergraph partitioning problem defined as follows:
$$\begin{array}{ll} \text{minimize} & \max\{c_1(x), c_2(x), \ldots, c_k(x)\} \\ \text{subject to} & \sum_{j \in [k]} x_{i,j} = 1 \quad \forall i \in [n] \\ & x \in \{0, 1\}^{n \times k} \end{array} \qquad (1)$$
We note that this problem is an instance of submodular load balancing, as defined in [23].
¹ The framework allows for a natural generalization to real-valued demands. In this paper we focus on {0, 1}-valued demands.
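As a small illustration of the objective, note that with {0, 1}-valued demands the inner minimum collapses to set membership, so the cost of a component is just the size of a union of topic sets; a sketch:

    def component_costs(assignment, item_topics, k):
        """Cost c_j(x) for each component j: the number of distinct topics
        needed by the items assigned to it.
        assignment: item -> component index in range(k)
        item_topics: item -> set of topics the item needs"""
        covered = [set() for _ in range(k)]
        for item, j in assignment.items():
            covered[j] |= item_topics[item]
        return [len(t) for t in covered]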
Basic Results. This problem is NP-Complete, by reduction from the subset sum problem.
Proposition 1. The min-max hypergraph partitioning problem is NP-Complete.
We now give a lower bound on the optimal value of the problem, using the observation that each
topic needs to be made available on at least one component.
Proposition 2. For every partition of the set of items in k components, the maximum cost of a
component is larger than or equal to m/k, where m is the number of topics.
We next analyze the performance of an algorithm which simply assigns each item independently to
a component chosen uniformly at random from the set of all components upon its arrival. Although
this is a popular strategy commonly deployed in practice (e.g. for load balancing in computation
platforms), the following result shows that it does not yield a good solution for the min-max hypergraph partitioning problem.
Proposition 3. The expected maximum load of a component under random assignment is at least
$\big(1 - \sum_{j=1}^{m} (1 - 1/k)^{n_j} / m\big) \cdot m$, where $n_j$ is the number of items associated with topic j.
For instance, if we assume that $n_j \geq k$ for each topic j, we obtain that the expected maximum load
is at least $(1 - 1/e)m$. This suggests that the performance of random assignment is poor: on
an input where m topics form k disjoint clusters, and each item subscribes to a single cluster, the
optimal solution has cost m/k, whereas, by the above claim, random assignment has approximate
cost 2m/3, yielding a competitive ratio that is linear in k.
Balanced Recovery of Hidden Co-Clusters. We relax the worst-case input requirements by defining a family of hidden co-clustering inputs. Our model is a generalization of the stochastic block
model of a graph to the case of hypergraphs.
We consider a set of topics R, partitioned into $\ell$ clusters $C_1, C_2, \ldots, C_\ell$, each of which contains
r topics. Given these hidden clusters, each item is associated with topics as follows. Each item is
first assigned a "home" cluster $C_h$, chosen uniformly at random among the hidden clusters. The
item then connects to topics inside its home cluster by picking each topic independently with fixed
probability p. Further, the item connects to topics from a fixed arbitrary "noise" set $Q_h$ of size at
most r/2 outside its home cluster $C_h$, where the item is connected to each topic in $Q_h$ uniformly at
random, with fixed probability q. (Sampling outside topics from the set of all possible topics would
in the limit lead to every partition containing all possible topics, which renders the problem trivial.
We do not impose this limitation in the experimental validation.)
Definition 1 (Hidden Co-Clustering). A bipartite graph is in $HC(n, r, \ell, p, q)$ if it is constructed
using the above process, with n items and $\ell$ clusters with r topics per cluster, where each item
subscribes to topics inside its randomly chosen home cluster with probability p, and to topics from
the noise set with probability q.
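A generator for such inputs can be sketched as follows. The model fixes arbitrary noise sets $Q_h$ of size at most r/2 outside the home cluster; the particular choice below (half of the next cluster's topics) is only illustrative.

    import numpy as np

    def hidden_cocluster_stream(n, r, ell, p, q, rng=None):
        """Yield n items from HC(n, r, ell, p, q); topics are 0..r*ell-1 and
        cluster h owns the contiguous block [h*r, (h+1)*r).
        Assumes ell >= 2 so that Q_h lies outside the home cluster."""
        rng = np.random.default_rng() if rng is None else rng
        # fixed noise sets Q_h: here, half of the next cluster's topics
        noise = [[((h + 1) % ell) * r + t for t in range(r // 2)] for h in range(ell)]
        for _ in range(n):
            h = int(rng.integers(ell))  # home cluster, chosen uniformly
            topics = {h * r + t for t in range(r) if rng.random() < p}
            topics |= {t for t in noise[h] if rng.random() < q}
            yield h, topics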
At each time step t, a new item is presented in the input stream of items, and is immediately assigned
to one of the k components, $S_1, S_2, \ldots, S_k$, according to some algorithm. Algorithms do not know
the number of hidden clusters or their size, but can examine previous assignments.
Definition 2 (Asymptotic Balanced Recovery). Given a hidden co-clustering $HC(n, r, \ell, p, q)$, we
say an algorithm asymptotically recovers the hidden clusters $C_1, C_2, \ldots, C_\ell$ if there exists a recovery time $t_R$ during its execution after which, for each hidden cluster $C_i$, there exists a component
$S_j$ such that each item with home cluster $C_i$ is assigned to component $S_j$ with probability that goes
to 1 as the number of items grows large. Moreover, the recovery is balanced if the ratio between
the maximum cost of a component and the average cost over components is upper bounded by a
constant $B > 0$.
3 Streaming Algorithm and the Recovery Guarantee
Recall that we consider the online problem, where we receive one item at a time together with all its
corresponding topics. The item must be immediately and irrevocably assigned to some component.
In the following, we describe the greedy strategy, specified in Algorithm 1.
4
Data: Hypergraph H = (V, E), received one item (vertex) at a time; k partitions; capacity bound c
Result: A partition of V into k parts
Set initial partitions $S_1, S_2, \ldots, S_k$ to be empty sets
while there are incoming items do
    Receive the next item t and its topics R
    $I \leftarrow \{i : |S_i| \leq \min_j |S_j| + c\}$    /* components not exceeding capacity */
    Compute $r_i = |S_i \cap R|$ for all $i \in I$    /* size of topic intersection */
    $j \leftarrow \arg\max_{i \in I} r_i$    /* if tied, choose a least loaded component */
    $S_j \leftarrow S_j \cup R$    /* item t and its topics are assigned to $S_j$ */
return $S_1, S_2, \ldots, S_k$
Algorithm 1: The greedy algorithm.
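A direct Python rendering of Algorithm 1 is sketched below; `stream` yields (item, topics) pairs, for example synthetic items from the hidden_cocluster_stream generator above.

    def greedy_partition(stream, k, c):
        """Streaming greedy assignment (Algorithm 1)."""
        S = [set() for _ in range(k)]           # topics covered per component
        placement = {}
        for item, topics in stream:
            floor = min(len(s) for s in S)
            eligible = [i for i in range(k) if len(S[i]) <= floor + c]
            # maximize topic intersection; break ties toward the least loaded
            j = max(eligible, key=lambda i: (len(S[i] & topics), -len(S[i])))
            S[j] |= topics
            placement[item] = j
        return placement, S

For instance, greedy_partition(enumerate(t for _, t in hidden_cocluster_stream(n, r, ell, p, q)), k, c) assigns synthetic items on the fly, with the enumeration index serving as the item identifier.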
This strategy places each incoming item onto the component whose incremental cost (after adding
the item and its topics) is minimized. The immediate goal is not balancing, but rather clustering
similar items. This could in theory lead to large imbalances; to prevent this, we add a balancing
constraint specifying the maximum load imbalance. If adding the item to the first candidate component would violate the balancing constraint, then the item is assigned to the first valid component,
in decreasing order of the intersection size.
3.1 The Recovery Theorem
In this section, we present our main theoretical result, which provides a sufficient condition for the
greedy strategy to guarantee balanced asymptotic recovery of hidden clusters.
Theorem 1 (The Recovery Theorem). For a random input consisting of a hidden co-cluster graph
G in $HC(n, r, \ell, p, q)$ to be partitioned across $k \geq 2$ components, if the number of clusters is $\ell \geq k \log k$, and the probabilities p and q satisfy $p \geq 2 \log r / r$ and $q \leq \log r / (rk)$, then the greedy
algorithm ensures balanced asymptotic recovery of the hidden clusters.
Remarks. Specifically, we prove that, under the given conditions, recovery occurs for each hidden
cluster by the time $r / \log r$ cluster items have been observed, with probability $1 - 1/r^c$, where $c \geq 1$
is a constant. Moreover, clusters are randomly distributed among the k components.
Together, these results can be used to bound the maximum cost of a partition to be at most a constant factor away from the lower bound of $r\ell/k$ given by Proposition 2. The extra cost comes from incorrect
assignments before the recovery time, and from the imperfect balancing of clusters over the components.
Corollary 1. The expected maximum load of a component is at most $2.4\, r\ell/k$.
3.2 Proof Overview
We now provide an overview of the main ideas of the proof, which is available in the full version of
the paper.
Preliminaries. We say that two random processes are coupled if their random choices are the
same. We say that an event occurs with high probability (w.h.p.) if it occurs with probability at least
$1 - 1/r^c$, where $c \geq 1$ is a constant. We make use of a Polya urn process [5], which is defined as
follows. We start each of $k \geq 2$ urns with one ball, and, at each step t, observe a new ball. We assign
the new ball to urn $i \in \{1, \ldots, k\}$ with probability proportional to $(b_i)^\gamma$, where $\gamma > 0$ is a fixed real
constant and $b_i$ is the number of balls in urn i at time t. We use the following classic result.
Lemma 1 (Polya Urn Convergence [5]). Consider a finite k-bin Polya urn process with exponent
$\gamma > 1$, and let $x_i^t$ be the fraction of balls in urn i at time t. Then, almost surely, the limit $X_i = \lim_{t \to \infty} x_i^t$ exists for each $1 \leq i \leq k$. Moreover, there exists an urn j such that
$X_j = 1$, and $X_i = 0$ for all $i \neq j$.
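A few lines of simulation make Lemma 1 tangible: for exponent $\gamma > 1$, one urn quickly accumulates essentially all of the balls. The parameters below are arbitrary illustrative choices.

    import numpy as np

    def polya_urn(k=2, gamma=1.5, steps=200_000, rng=None):
        """Simulate the k-bin Polya urn with assignment probability
        proportional to (b_i)^gamma; returns the final ball fractions."""
        rng = np.random.default_rng() if rng is None else rng
        balls = np.ones(k)
        for _ in range(steps):
            w = balls ** gamma
            balls[rng.choice(k, p=w / w.sum())] += 1
        return balls / balls.sum()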
Step 1: Recovering a Single Cluster. We first prove that, in the case of a single home cluster
for all items and two components (k = 2), the greedy algorithm with no balance constraints
converges to a monopoly, i.e., eventually assigns all the items from this cluster onto the same
component, w.h.p.
Dataset           | Items     | Topics       | # of Items | # of Topics | # edges
Book Ratings      | Readers   | Books        | 107,549    | 105,283     | 965,949
Facebook App Data | Users     | Apps         | 173,502    | 13,604      | 5,115,433
Retail Data       | Customers | Items bought | 74,333     | 16,470      | 947,940
Zune Podcast Data | Listeners | Podcasts     | 80,633     | 7,928       | 1,037,999
Figure 3: A table showing the data sets and information about the items and topics.
Formally, there exists some convergence time $t_R$ and
some component $S_i$ such that, after time $t_R$, all future items will be assigned to component $S_i$ with
probability at least $1 - 1/r^c$.
Our strategy will be to couple greedy assignment with a Polya urn process with exponent $\gamma > 1$, showing that the dynamics of the two processes are the same, w.h.p. There is one significant
technical challenge that one needs to address: while the Polya process assigns new balls based on
the ball counts of urns, greedy assigns items (and their respective topics) based on the number of
topic intersections between the item and the partition. We resolve this issue by taking a two-tiered
approach. Roughly, we first prove that, w.h.p., we can couple the number of items in a component
with the number of unique topics assigned to the same component. We then prove that this is enough
to couple the greedy assignment with a Polya urn process with exponent $\gamma > 1$. This will imply that
greedy converges to a monopoly, by Lemma 1.
We then extend this argument to a single cluster and $k \geq 3$ components, but with no load balancing constraints. The crux of the extension is that we can apply the k = 2 argument to pairs of
components to yield that some component achieves a monopoly.
Lemma 2. Given a single cluster instance in $HC(n, r, \ell, p, q)$ with $\ell = 1$, $p \geq 2 \log r / r$, and $q = 0$ to
be partitioned in k components, the greedy algorithm with no balancing constraints will eventually
place every item in the cluster onto the same component w.h.p.
Second Step: The General Case. We complete the proof of Theorem 1 by considering the general
case with $\ell \geq 2$ clusters and $q > 0$. We proceed in three sub-steps. We first show the recovery claim
for a general number of clusters $\ell \geq 2$, but q = 0 and no balance constraints. This follows since, for
q = 0, the algorithm's choices with respect to clusters and their respective topics are independent.
Hence clusters are assigned to components uniformly at random.
Second, we extend the proof to any value $q \leq \log r / (rk)$, by showing that the existence of "noise"
edges under this threshold only affects the algorithm's choices with very low probability. Finally,
we prove that the balance constraints are practically never violated for this type of input, as clusters
are distributed uniformly at random. We obtain the following.
Lemma 3. For a hidden co-cluster input, the greedy algorithm with q = 0 and without capacity
constraints can be coupled with a version of the algorithm with $q \leq \log r / (rk)$ and a constant
capacity constraint, w.h.p.
Final Argument. Putting together Lemmas 2 and 3, we obtain that greedy ensures balanced recovery for general inputs in $HC(n, r, \ell, p, q)$, for parameter values $\ell \geq k \log k$, $p \geq 2 \log r / r$, and
$q \leq \log r / (rk)$.
4 Experimental Results
Datasets and Evaluation. We first consider a set of real-world bipartite graph instances with a
summary provided in Table 3. All these datasets are available online, except for Zune podcast
subscriptions. We chose the consumer to be the item and the resource to be the topic. We provide an
experimental validation of the analysis on synthetic co-cluster inputs in the full version of our paper.
In our experiments, we considered partitioning of items onto k components for a range of values
going from two to ten components. We report the maximum number of topics in a component
normalized by the cost of a perfectly balanced solution m/k, where m is the total number of topics.
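In code, the reported quantity is simply the maximum component cost divided by m/k; a sketch, reusing component_costs from Section 2:

    def normalized_max_load(assignment, item_topics, k):
        m = len(set().union(*item_topics.values()))   # total number of topics
        costs = component_costs(assignment, item_topics, k)
        return max(costs) / (m / k)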
Figure 4: The normalized maximum load for various online assignment algorithms under different
input bipartite graphs, versus the number of components k: (a) Book Ratings, (b) Facebook App Data,
(c) Retail Data, (d) Zune Podcast Data. The algorithms compared are All on One, Proportional Greedy
(Decreasing Order), Balance Big, Prefer Big, Random, Greedy (Random Order), and Greedy (Decreasing Order).
Online Assignment Algorithms. We compared the following online assignment strategies:
- All-on-One: trivially assign all items and topics to one component.
- Random: assign each item independently to a component chosen uniformly at random from the set of all components.
- Balance Big: inspect the items in a random order and assign the large items to the least loaded component, and the small items according to greedy. An item is considered large if it subscribes to more than 100 topics, and small otherwise.
- Prefer Big: inspect the items in a random order, and keep a buffer of up to 100 small items; when receiving a large item, put it on the least loaded component; when the buffer is full, place all the small items according to greedy.
- Greedy: assign the items to the component they have the most topics in common with. We consider two variants: items arrive in random order, and items arrive in decreasing order of the number of topics. We allow a slack (parameter c) of up to 100 topics.
- Proportional Allocation: inspect the items in decreasing order of the number of topics; the probability an item is assigned to a component is proportional to the number of common topics.
Results. Greedy generally outperforms other online heuristics (see Figure 4). Also, its performance
is improved if items arrive in decreasing order of number of topics. Intuitively, items with larger
number of topics provide more information about the underlying structure of the bipartite graph than
the items with smaller number of topics. Interestingly, adding randomness to the greedy assignment
made it perform far worse; most times Proportional Assignment approached the worst case scenario.
Random assignment outperformed Proportional Assignment and regularly outperformed Prefer Big
and Balance Big item assignment strategies.
Offline methods. We also tested the streaming algorithm for a wide range of synthetic input bipartite graphs according to the model defined in this paper, and several offline approaches for the
problem including hMetis [11], label propagation, basic spectral methods, and PARSA [13]. We
found that label propagation and spectral methods are extremely time and memory intensive on our
inputs, due to the large number of topics and item-topic edges. hMetis returns within seconds; however, the assignments were not competitive, since hMetis provides balanced hypergraph cuts,
which are not necessarily a good solution to our problem.
Compared to PARSA on bipartite graph inputs, greedy provides assignments with up to 3x higher
max partition load. On social graphs, the performance difference can be as high as 5x. This discrepancy is natural since PARSA has the advantage of performing multiple passes through the input.
5
Related Work
The related problem of min-max multi-way graph cut problem, originally introduced in [23], is
defined as follows: given an input graph, the objective is to component the set of vertices such
that the maximum number of edges adjacent to a component is minimized. A similar problem was
recently studied, e.g. [1], with respect to expansion, defined as the ratio of the sum of weights of
edges adjacent to a component and the minimum between the sum of the weights of vertices within
and outside the given component. The balanced graph partition problem is a bi-criteria optimization
problem where the goal is to find a balanced partition of the set of vertices that minimizes the total
number of edges cut. The best known approximation ratio for this problem is poly-logarithmic in
the number of vertices [12]. The balanced graph partition problem was also considered for the set of
edges of a graph [2]. The related problem of community detection in an input graph data has been
commonly studied for the planted partition model, also well known as stochastic block model. Tight
conditions for recovery of hidden clusters are known from the recent work in [16] and [14], as well
as various approximation algorithms, e.g. see [3]. Some variants of hypergraph partition problems
were studied by the machine learning research community, including balanced cuts studied by [9]
using relaxations based on the concept of total variation, and the maximum likelihood identification
of hidden clusters [17]. The difference is that we consider the min-max multi-way cut problem for
a hypergraph in the streaming computation model. PARSA [13] considers the same problem in an
offline model, where the entire input is initially available to the algorithm, and provides an efficient
distributed algorithm for optimizing multiple criteria. A key component of PARSA is a procedure for
optimizing the order of examining vertices. By contrast, we focus on performance under arbitrary
arrival order, and provide analytic guarantees under a stochastic input model.
Streaming computation with limited memory was considered for various canonical problems such
as principal component analysis [15], community detection [22], balanced graph partition [20, 21],
and query placement [24]. For the class of (hyper)graph partition problems, most of the work is
restricted to studying various streaming heuristics using empirical evaluations with a few notable
exceptions. A first theoretical analysis of streaming algorithms for balanced graph partitioning was
presented in [19] using the framework similar to the one deployed in this paper. The paper gives
sufficient conditions for a greedy streaming strategy to recover clusters of vertices for the input graph
according to stochastic block model, which makes irrevocable assignments of vertices as they are
observed in the input stream and uses memory limited to grow linearly with the number of clusters.
As in our case, the argument uses a reduction to Polya urn processes. The two main differences with
our work is that we consider a different problem (min-max hypergraph partition) and this requires a
novel proof technique based on a two-step reduction to Polya urn processes. Streaming algorithms
for the recovery of clusters in a stochastic block model were also studied in [22], under a weaker
computation model, which does not require irrevocable assignments of vertices at instances they are
presented in the input stream and allows for memory polynomial in the number of vertices.
6 Conclusion
We studied the min-max hypergraph partitioning problem in the streaming computation model with
the size of memory limited to be at most linear in the number of the components of the partition.
We established first approximation guarantees for inputs according to a random bipartite graph with
hidden co-clusters, and evaluated performance on several real-world input graphs. There are several interesting open questions for future work. It is of interest to study the tightness of the given
recovery condition, and, in general, better understand the trade-off between the memory size and
the accuracy of the recovery. It is also of interest to consider the recovery problem for a wider set
of random bipartite graph models. Another question of interest is to consider dynamic graph inputs
with addition and deletion of items and topics.
References
[1] N. Bansal, U. Feige, R. Krauthgamer, K. Makarychev, V. Nagarajan, J. (Seffi) Naor, and R. Schwartz. Min-max graph partitioning and small set expansion. SIAM J. on Computing, 43(2):872–904, 2014.
[2] F. Bourse, M. Lelarge, and M. Vojnovic. Balanced graph edge partition. In Proc. of ACM KDD, 2014.
[3] Y. Chen, S. Sanghavi, and H. Xu. Clustering sparse graphs. In Proc. of NIPS, 2012.
[4] Y. Cheng and G. M. Church. Biclustering of expression data. In ISMB, volume 8, pages 93–103, 2000.
[5] F. Chung, S. Handjani, and D. Jungreis. Generalizations of Polya's urn problem. Annals of Combinatorics, (7):141–153, 2003.
[6] I. S. Dhillon. Co-clustering documents and words using bipartite spectral graph partitioning. In Proc. of ACM KDD, 2001.
[7] I. S. Dhillon, S. Mallela, and D. S. Modha. Information-theoretic co-clustering. In Proc. of ACM KDD, 2003.
[8] S. Fortunato. Community detection in graphs. Physics Reports, 486(75), 2010.
[9] M. Hein, S. Setzer, L. Jost, and S. S. Rangapuram. The total variation on hypergraphs - learning hypergraphs revisited. In Proc. of NIPS, 2013.
[10] T. Karagiannis, C. Gkantsidis, D. Narayanan, and A. Rowstron. Hermes: clustering users in large-scale e-mail services. In Proc. of ACM SoCC, 2010.
[11] G. Karypis and V. Kumar. Multilevel k-way hypergraph partitioning. VLSI Design, 11(3), 2000.
[12] R. Krauthgamer, J. S. Naor, and R. Schwartz. Partitioning graphs into balanced components. In Proc. of ACM-SIAM SODA, 2009.
[13] M. Li, D. G. Andersen, and A. J. Smola. Graph partitioning via parallel submodular approximation to accelerate distributed machine learning. arXiv preprint arXiv:1505.04636, 2015.
[14] L. Massoulié. Community detection thresholds and the weak Ramanujan property. In Proc. of ACM STOC, 2014.
[15] I. Mitliagkas, C. Caramanis, and P. Jain. Memory limited, streaming PCA. In Proc. of NIPS, 2013.
[16] E. Mossel, J. Neeman, and A. Sly. Reconstruction and estimation in the planted partition model. Probability Theory and Related Fields, pages 1–31, 2014.
[17] L. O'Connor and S. Feizi. Biclustering using message passing. In Proc. of NIPS, 2014.
[18] J. M. Pujol et al. The little engine(s) that could: Scaling online social networks. IEEE/ACM Trans. Netw., 20(4):1162–1175, 2012.
[19] I. Stanton. Streaming balanced graph partitioning algorithms for random graphs. In Proc. of ACM-SIAM SODA, 2014.
[20] I. Stanton and G. Kliot. Streaming graph partitioning for large distributed graphs. In Proc. of ACM KDD, 2012.
[21] C. E. Tsourakakis, C. Gkantsidis, B. Radunovic, and M. Vojnovic. FENNEL: streaming graph partitioning for massive scale graphs. In Proc. of ACM WSDM, 2014.
[22] S.-Y. Yun, M. Lelarge, and A. Proutiere. Streaming, memory limited algorithms for community detection. In Proc. of NIPS, 2014.
[23] Z. Svitkina and E. Tardos. Min-max multiway cut. In K. Jansen, S. Khanna, J. Rolim, and D. Ron, editors, Proc. of APPROX/RANDOM, pages 207–218, 2004.
[24] B. Zong, C. Gkantsidis, and M. Vojnovic. Herding small streaming queries. In Proc. of ACM DEBS, 2015.
Pratik Jawanpuria¹, Maksim Lapin², Matthias Hein¹ and Bernt Schiele²
¹ Saarland University, Saarbrücken, Germany
² Max Planck Institute for Informatics, Saarbrücken, Germany
Abstract
The paradigm of multi-task learning is that one can achieve better generalization
by learning tasks jointly and thus exploiting the similarity between the tasks rather
than learning them independently of each other. While previously the relationship
between tasks had to be user-defined in the form of an output kernel, recent approaches jointly learn the tasks and the output kernel. As the output kernel is a
positive semidefinite matrix, the resulting optimization problems are not scalable
in the number of tasks as an eigendecomposition is required in each step. Using
the theory of positive semidefinite kernels we show in this paper that for a certain
class of regularizers on the output kernel, the constraint of being positive semidefinite can be dropped as it is automatically satisfied for the relaxed problem. This
leads to an unconstrained dual problem which can be solved efficiently. Experiments on several multi-task and multi-class data sets illustrate the efficacy of our
approach in terms of computational efficiency as well as generalization performance.
1 Introduction
Multi-task learning (MTL) advocates sharing relevant information among several related tasks during the training stage. The advantage of MTL over learning tasks independently has been shown
theoretically as well as empirically [1, 2, 3, 4, 5, 6, 7].
The focus of this paper is the question how the task relationships can be inferred from the data.
It has been noted that naively grouping all the tasks together may be detrimental [8, 9, 10, 11].
In particular, outlier tasks may lead to worse performance. Hence, clustered multi-task learning
algorithms [10, 12] aim to learn groups of closely related tasks. The information is then shared only
within these clusters of tasks. This corresponds to learning the task covariance matrix, which we
denote as the output kernel in this paper. Most of these approaches lead to non-convex problems.
In this work, we focus on the problem of directly learning the output kernel in the multi-task learning
framework. The multi-task kernel on input and output is assumed to be decoupled as the product
of a scalar kernel and the output kernel, which is a positive semidefinite matrix [1, 13, 14, 15]. In
classical multi-task learning algorithms [1, 16], the degree of relatedness between distinct tasks is
set to a constant and is optimized as a hyperparameter. However, constant similarity between tasks
is a strong assumption and is unlikely to hold in practice. Thus recent approaches have tackled the
problem of directly learning the output kernel. [17] solves a multi-task formulation in the framework
of vector-valued reproducing kernel Hilbert spaces involving squared loss where they penalize the
Frobenius norm of the output kernel as a regularizer. They formulate an invex optimization problem that they solve optimally. In comparison, [18] recently proposed an efficient barrier method
to optimize a generic convex output kernel learning formulation. On the other hand, [9] proposes a
convex formulation to learn low rank output kernel matrix by enforcing a trace constraint. The above
approaches [9, 17, 18] solve the resulting optimization problem via alternate minimization between
task parameters and the output kernel. Each step of the alternate minimization requires an eigenvalue decomposition of a matrix having as size the number of tasks, and a problem corresponding to
learning all tasks independently.
In this paper we study a similar formulation as [17]. However, we allow arbitrary convex loss
functions and employ general p-norms for p ∈ (1, 2] (including the Frobenius norm) as regularizer
for the output kernel. Our problem is jointly convex over the task parameters and the output kernel.
Small p leads to sparse output kernels which allows for an easier interpretation of the learned task
relationships in the output kernel. Under certain conditions on p we show that one can drop the
constraint that the output kernel should be positive definite as it is automatically satisfied for the
unconstrained problem. This significantly simplifies the optimization and our result could also be of
interest in other areas where one optimizes over the cone of positive definite matrices. The resulting
unconstrained dual problem is amenable to efficient optimization methods such as stochastic dual
coordinate ascent [19], which scale well to large data sets. Overall we do not require any eigenvalue
decomposition operation at any stage of our algorithm and no alternate minimization is necessary,
leading to a highly efficient methodology. Furthermore, we show that this trick not only applies to
p-norms but also applies to a large class of regularizers for which we provide a characterization.
Our contributions are as follows: (a) we propose a generic p-norm regularized output kernel matrix
learning formulation, which can be extended to a large class of regularizers; (b) we show that the
constraint on the output kernel to be positive definite can be dropped as it is automatically satisfied,
leading to an unconstrained dual problem; (c) we propose an efficient stochastic dual coordinate
ascent based method for solving the dual formulation; (d) we empirically demonstrate the superiority
of our approach in terms of generalization performance as well as significant reduction in training
time compared to other methods learning the output kernel.
The paper is organized as follows. We introduce our formulation in Section 2. Our main technical
result is discussed in Section 3. The proposed optimization algorithm is described in Section 4. In
Section 5, we report the empirical results. All the proofs can be found in the supplementary material.
2 The Output Kernel Learning Formulation
We first introduce the setting considered in this paper. We denote the number of tasks by T. We assume that all tasks have a common input space X and a common positive definite kernel function k : X × X → R. We denote by φ(·) the feature map and by H_k the reproducing kernel Hilbert space (RKHS) [20] associated with k. The training data is (x_i, y_i, t_i)_{i=1}^n, where x_i ∈ X, t_i is the task the i-th instance belongs to and y_i is the corresponding label. Moreover, we have a positive semidefinite matrix Ω ∈ S_+^T on the set of tasks {1, ..., T}, where S_+^T is the set of T × T symmetric and positive semidefinite (p.s.d.) matrices.
If one arranges the predictions of all tasks in a vector one can see multi-task learning as learning a
vector-valued function in a RKHS [see 1, 13, 14, 15, 18, and references therein]. However, in this
paper we use the one-to-one correspondence between real-valued and matrix-valued kernels, see
[21], in order to limit the technical overhead. In this framework we define the joint kernel of input
space and the set of tasks M : (X × {1, ..., T}) × (X × {1, ..., T}) → R as

    M((x, s), (z, t)) = k(x, z) Ω(s, t),    (1)
We denote the corresponding RKHS of functions on X × {1, ..., T} as H_M and by ‖·‖_{H_M} the corresponding norm. We formulate the output kernel learning problem for multiple tasks as

    min_{Ω ∈ S_+^T, F ∈ H_M}  C Σ_{i=1}^n L(y_i, F(x_i, t_i)) + (1/2) ‖F‖²_{H_M} + λ V(Ω)    (2)

where L : R × R → R is the convex loss function (convex in the second argument), V(Ω) is a convex regularizer penalizing the complexity of the output kernel Ω and λ ∈ R_+ is the regularization parameter. Note that ‖F‖²_{H_M} implicitly depends also on Ω. In the following we show that (2) can be reformulated into a jointly convex problem in the parameters of the prediction function and the output kernel Ω. Using the standard representer theorem [20] (see the supplementary material) for fixed output kernel Ω, one can show that the optimal solution F* ∈ H_M of (2) can be written as

    F*(x, t) = Σ_{s=1}^T Σ_{i=1}^n γ_{is} M((x_i, s), (x, t)) = Σ_{s=1}^T Σ_{i=1}^n γ_{is} k(x_i, x) Ω(s, t).    (3)
With the explicit form of the prediction function one can rewrite the main problem (2) as
    min_{Ω ∈ S_+^T, γ ∈ R^{n×T}}  C Σ_{i=1}^n L(y_i, Σ_{s=1}^T Σ_{j=1}^n γ_{js} k_{ji} Ω_{s t_i}) + (1/2) Σ_{r,s=1}^T Σ_{i,j=1}^n γ_{ir} γ_{js} k_{ij} Ω_{rs} + λ V(Ω),    (4)

where Ω_{rs} = Ω(r, s) and k_{ij} = k(x_i, x_j). Unfortunately, problem (4) is not jointly convex in γ and Ω due to the product in the second term. A similar problem has been analyzed in [17]. They could show that for the squared loss and V(Ω) = ‖Ω‖²_F the corresponding optimization problem is
invex and directly optimize it. For an invex function every stationary point is globally optimal [22].
We follow a different path which leads to a formulation similar to the one of [2] used for learning
an input mapping (see also [9]). Our formulation for the output kernel learning problem is jointly
convex in the task kernel Ω and the task parameters. We present a derivation for the general RKHS H_k, analogous to the linear case presented in [2, 9]. We use the following variable transformation,

    β_{it} = Σ_{s=1}^T Ω_{ts} γ_{is},  i = 1, ..., n,  t = 1, ..., T,   resp.   γ_{is} = Σ_{t=1}^T Ω^{-1}_{st} β_{it}.

In the last expression Ω^{-1} has to be understood as the pseudo-inverse if Ω is not invertible. Note that this causes no problems as in case Ω is not invertible, we can without loss of generality restrict γ in (4) to the range of Ω. The transformation leads to our final problem formulation, where the prediction function F and its squared norm ‖F‖²_{H_M} can be written as

    F(x, t) = Σ_{i=1}^n β_{it} k(x_i, x),    ‖F‖²_{H_M} = Σ_{r,s=1}^T Σ_{i,j=1}^n Ω^{-1}_{sr} β_{is} β_{jr} k(x_i, x_j).    (5)

We get our final primal optimization problem

    min_{Ω ∈ S_+^T, β ∈ R^{n×T}}  C Σ_{i=1}^n L(y_i, Σ_{j=1}^n β_{j t_i} k_{ji}) + (1/2) Σ_{r,s=1}^T Σ_{i,j=1}^n Ω^{-1}_{sr} β_{is} β_{jr} k_{ij} + λ V(Ω).    (6)
Before we analyze the convexity of this problem, we want to illustrate the connection to the formulations in [9, 17]. With the task weight vectors w_t = Σ_{j=1}^n β_{jt} φ(x_j) ∈ H_k we get predictions as F(x, t) = ⟨w_t, φ(x)⟩ and one can rewrite

    ‖F‖²_{H_M} = Σ_{r,s=1}^T Σ_{i,j=1}^n Ω^{-1}_{sr} β_{is} β_{jr} k(x_i, x_j) = Σ_{r,s=1}^T Ω^{-1}_{sr} ⟨w_s, w_r⟩.

This identity is known for vector-valued RKHS, see [15] and references therein. When Ω is κ times the identity matrix, then ‖F‖²_{H_M} = (1/κ) Σ_{t=1}^T ‖w_t‖² and thus (2) is learning the tasks independently. As mentioned before, the convexity of the expression of ‖F‖²_{H_M} is crucial for the convexity of the full problem (6). The following result has been shown in [2] (see also [9]).
Lemma 1. Let R(Ω) denote the range of Ω ∈ S_+^T and let Ω† be the pseudoinverse. The extended function f : S_+^T × R^{n×T} → R ∪ {∞} defined as

    f(Ω, β) = Σ_{r,s=1}^T Σ_{i,j=1}^n Ω†_{sr} β_{is} β_{jr} k(x_i, x_j)   if β_{i·} ∈ R(Ω) for all i = 1, ..., n,   and   f(Ω, β) = ∞   otherwise,

is jointly convex.
The formulation in (6) is similar to [9, 17, 18]. [9] uses the constraint Trace(Ω) ≤ 1 instead of a regularizer V(Ω), enforcing low rank of the output kernel. On the other hand, [17] employs the squared Frobenius norm for V(Ω) with the squared loss function. [18] proposed an efficient algorithm for convex V(Ω). Instead we think that sparsity of Ω is better suited to avoid the emergence of spurious relations between tasks and also leads to output kernels which are easier to interpret. Thus we propose to use the following regularization functional for the output kernel Ω:

    V(Ω) = Σ_{t,t'=1}^T |Ω_{tt'}|^p = ‖Ω‖_p^p,
for p ∈ [1, 2]. Several approaches [9, 17, 18] employ an alternate minimization scheme, involving costly eigendecompositions of a T × T matrix per iteration (as Ω ∈ S_+^T). In the next section we show that for a certain set of values of p one can derive an unconstrained dual optimization problem which thus avoids the explicit minimization over the S_+^T cone. The resulting unconstrained dual problem can then be easily optimized by stochastic coordinate ascent. Having explicit expressions of the primal variables Ω and β in terms of the dual variables allows us to get back to the original problem.
3 Unconstrained Dual Problem Avoiding Optimization over S_+^T
The primal formulation (6) is a convex multi-task output kernel learning problem. The next lemma derives the Fenchel dual function of (6). This still involves the optimization over the primal variable Ω ∈ S_+^T. A main contribution of this paper is to show that this optimization problem over the S_+^T cone can be solved with an analytical solution for a certain class of regularizers V(Ω). In the following we denote by α^r := {α_i | t_i = r} the dual variables corresponding to task r and by K_rs the kernel matrix (k(x_i, x_j) | t_i = r, t_j = s) corresponding to the dual variables of tasks r and s.
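As a small illustration of this notation (our own NumPy sketch, not code from the paper; the Gaussian kernel is an arbitrary choice), the per-task index sets and the blocks K_rs can be materialized from the full Gram matrix as follows:

import numpy as np

def gram_blocks(X, tasks, gamma=1.0):
    """Return the full Gram matrix K, per-task index sets, and blocks K_rs.

    X: (n, d) inputs; tasks: (n,) integer task labels t_i.
    """
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                      # k(x_i, x_j), Gaussian kernel
    idx = {r: np.where(tasks == r)[0] for r in np.unique(tasks)}
    blocks = {(r, s): K[np.ix_(idx[r], idx[s])] for r in idx for s in idx}
    return K, idx, blocks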
Lemma 2. Let L*_i be the conjugate function of the loss L_i : R → R, u ↦ L(y_i, u). Then

    q : R^n → R,   q(α) = −C Σ_{i=1}^n L*_i(−α_i/C) − λ max_{Ω ∈ S_+^T} [ (1/(2λ)) Σ_{r,s=1}^T Ω_{rs} ⟨α^r, K_rs α^s⟩ − V(Ω) ]    (7)

is the dual function of (6), where α ∈ R^n are the dual variables. The primal variable β ∈ R^{n×T} in (6) and the prediction function F can be expressed in terms of Ω and α as β_{is} = α_i Ω_{s t_i} and F(x, s) = Σ_{j=1}^n α_j Ω_{s t_j} k(x_j, x) respectively, where t_j is the task of the j-th training example.
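The recovery of the primal variables in Lemma 2 is direct to implement; the following sketch (again our own illustration, reusing the task-label layout from the previous snippet) builds β and evaluates the prediction function F(x, s):

import numpy as np

def primal_from_dual(alpha, Omega, tasks):
    """beta_{is} = alpha_i * Omega[s, t_i] for all examples i and tasks s."""
    return alpha[:, None] * Omega[:, tasks].T    # shape (n, T)

def predict(k_col, alpha, Omega, tasks, s):
    """F(x, s) = sum_j alpha_j * Omega[s, t_j] * k(x_j, x).

    k_col: (n,) vector of kernel values k(x_j, x) for the new point x.
    """
    return np.sum(alpha * Omega[s, tasks] * k_col)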
We now focus on the remaining maximization problem in the dual function in (7),

    max_{Ω ∈ S_+^T}  (1/(2λ)) Σ_{r,s=1}^T Ω_{rs} ⟨α^r, K_rs α^s⟩ − V(Ω).    (8)

This is a semidefinite program which is computationally expensive to solve and thus prohibits scaling the output kernel learning problem to a large number of tasks. However, we show in the following that this problem has an analytical solution for a subset of the regularizers V(Ω) = (1/2) Σ_{r,s=1}^T |Ω_{rs}|^p for p ≥ 1. For better readability we defer a more general result towards the end of the section. The basic idea is to relax the constraint Ω ∈ S_+^T in (8) to Ω ∈ R^{T×T}, so that the problem becomes the computation of the conjugate V* of V. If the maximizer of the relaxed problem is positive semi-definite, one has found the solution of the original problem.
Theorem 3. Let k ∈ N and p = 2k/(2k−1). Then with ρ_{rs} = (1/(2λ)) ⟨α^r, K_rs α^s⟩ we have

    max_{Ω ∈ S_+^T}  Σ_{r,s=1}^T Ω_{rs} ρ_{rs} − (1/2) Σ_{r,s=1}^T |Ω_{rs}|^p  =  (1/(4k−2)) ((2k−1)/(2kλ))^{2k} Σ_{r,s=1}^T ⟨α^r, K_rs α^s⟩^{2k},    (9)

and the maximizer is given by the positive semi-definite matrix

    Ω*_{rs} = ((2k−1)/(2kλ))^{2k−1} ⟨α^r, K_rs α^s⟩^{2k−1},   r, s = 1, ..., T.    (10)
Plugging the result of the previous theorem into the dual function of Lemma 2, we get for k ∈ N and p = 2k/(2k−1) with V(Ω) = (1/2)‖Ω‖_p^p the following unconstrained dual of our main problem (6):

    max_{α ∈ R^n}  −C Σ_{i=1}^n L*_i(−α_i/C) − (λ/(4k−2)) ((2k−1)/(2kλ))^{2k} Σ_{r,s=1}^T ⟨α^r, K_rs α^s⟩^{2k}.    (11)
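Equations (10) and (11) translate directly into code. The sketch below (our own, reusing the blocks dictionary built earlier and treating k and λ as given) recovers Ω* from the dual variables and evaluates the regularization term of (11):

import numpy as np

def recover_Omega(alpha, blocks, tasks, lam, k=1):
    """Equation (10): Omega*_{rs} = ((2k-1)/(2k*lam) * <a^r, K_rs a^s>)^(2k-1)."""
    T = len(np.unique(tasks))
    idx = {r: np.where(tasks == r)[0] for r in range(T)}
    Omega = np.zeros((T, T))
    for r in range(T):
        for s in range(T):
            inner = alpha[idx[r]] @ blocks[(r, s)] @ alpha[idx[s]]
            Omega[r, s] = ((2 * k - 1) / (2 * k * lam) * inner) ** (2 * k - 1)
    return Omega

def dual_penalty(alpha, blocks, tasks, lam, k=1):
    """Second term of (11): lam/(4k-2) * ((2k-1)/(2k*lam))^(2k) * sum <a^r,K_rs a^s>^(2k)."""
    T = len(np.unique(tasks))
    idx = {r: np.where(tasks == r)[0] for r in range(T)}
    total = sum((alpha[idx[r]] @ blocks[(r, s)] @ alpha[idx[s]]) ** (2 * k)
                for r in range(T) for s in range(T))
    return lam / (4 * k - 2) * ((2 * k - 1) / (2 * k * lam)) ** (2 * k) * total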
Note that by a rescaling of the dual variables, α̃_i := α_i/C, we effectively have only one hyperparameter in (11). This allows us to cross-validate more efficiently. The range of admissible values for p in Theorem 3 lies in the interval (1, 2], where we get for k = 1 the value p = 2 and as k → ∞
Table 1: Examples of regularizers V(Ω) together with their generating function φ and the explicit form of Ω* in terms of the dual variables, ρ_{rs} = (1/(2λ)) ⟨α^r, K_rs α^s⟩. The optimal value of (8) is given in terms of ρ as max_{Ω ∈ R^{T×T}} ⟨Ω, ρ⟩ − V(Ω) = Σ_{r,s=1}^T φ(ρ_{rs}).

φ(z)                                        Ω*_{rs}          V(Ω)
z^{2k}/(2k),  k ∈ N                          ρ_{rs}^{2k−1}    ((2k−1)/(2k)) Σ_{r,s=1}^T |Ω_{rs}|^{2k/(2k−1)}
e^z = Σ_{k=0}^∞ z^k/k!                       e^{ρ_{rs}}       Σ_{r,s=1}^T (Ω_{rs} log(Ω_{rs}) − Ω_{rs})  if Ω_{rs} > 0 ∀ r, s;  ∞ otherwise
cosh(z) − 1 = Σ_{k=1}^∞ z^{2k}/(2k)!         sinh(ρ_{rs})     Σ_{r,s=1}^T (Ω_{rs} arcsinh(Ω_{rs}) − √(1 + Ω²_{rs})) + T²
we have p → 1. The regularizer for p = 2 together with the squared loss has been considered in the primal in [17, 18]. Our analytical expression of the dual is novel and allows us to employ stochastic dual coordinate ascent to solve the involved primal optimization problem. Please also note that by optimizing the dual, we have access to the duality gap and thus a well-defined stopping criterion. This is in contrast to the alternating scheme of [17, 18] for the primal problem, which involves costly matrix operations. Our runtime experiments show that our solver for (11) outperforms the solvers of [17, 18]. Finally, note that even for suboptimal dual variables α, the corresponding Ω matrix in (10) is positive semidefinite. Thus we always get a feasible set of primal variables.
Characterizing the set of convex regularizers V which allow an analytic expression for the dual function. The previous theorem raises the question for which class of convex, separable regularizers we can get an analytical expression of the dual function by explicitly solving the optimization problem (8) over the positive semidefinite cone. A key element in the proof of the previous theorem is the characterization of functions f : R → R which, when applied elementwise as f(A) = (f(a_{ij}))_{i,j=1}^T to a positive semidefinite matrix A ∈ S_+^T, result in a p.s.d. matrix, that is f(A) ∈ S_+^T. This set of functions has been characterized by Hiai [23].

Theorem 4 ([23]). Let f : R → R and A ∈ S_+^T. We denote by f(A) = (f(a_{ij}))_{i,j=1}^T the elementwise application of f to A. It holds for all T ≥ 2 that A ∈ S_+^T ⟹ f(A) ∈ S_+^T if and only if f is analytic and f(x) = Σ_{k=0}^∞ a_k x^k with a_k ≥ 0 for all k ≥ 0.
Note that in the previous theorem the condition on f is only necessary when we require the implication to hold for all T. If T is fixed, the set of functions is larger and includes even (large) fractional powers, see [24]. We use the stronger formulation as we want the result to hold without any restriction on the number of tasks T. Theorem 4 is the key element used in our following characterization of separable regularizers of Ω which allow an analytical expression of the dual function.

Theorem 5. Let φ : R → R be analytic on R and given as φ(z) = Σ_{k=0}^∞ (a_k/(k+1)) z^{k+1} where a_k ≥ 0 ∀ k ≥ 0. If φ is convex, then V(Ω) := Σ_{r,s=1}^T φ*(Ω_{rs}) is a convex function V : R^{T×T} → R and

    max_{Ω ∈ R^{T×T}} ⟨Ω, ρ⟩ − V(Ω) = V*(ρ) = Σ_{r,s=1}^T φ(ρ_{rs}),    (12)

where the global maximizer fulfills Ω* ∈ S_+^T if ρ ∈ S_+^T, and Ω*_{rs} = Σ_{k=0}^∞ a_k ρ_{rs}^k.

Table 1 summarizes examples of functions φ, the corresponding V(Ω) and the maximizer Ω* in (12).
4 Optimization Algorithm
The dual problem (11) can be efficiently solved via decomposition-based methods like the stochastic dual coordinate ascent algorithm (SDCA) [19]. SDCA enjoys low computational complexity per iteration and has been shown to scale effortlessly to large scale optimization problems.
Algorithm 1 Fast MTL-SDCA
Input: Gram matrix K, label vector y, regularization parameter λ and relative duality gap parameter ε
Output: α (Ω is computed from α using our result in (10))
Initialize α = α^(0)
repeat
    Randomly choose a dual variable α_i
    Solve for δ in (13) corresponding to α_i
    α_i ← α_i + δ
until relative duality gap is below ε
Our algorithm for learning the output kernel matrix and task parameters is summarized in Algorithm 1 (refer to the supplementary material for more details). At each step of the iteration we optimize the dual objective over a randomly chosen α_i variable. Let t_i = r be the task corresponding to α_i. We apply the update α_i ← α_i + δ. The optimization problem of solving (11) with respect to δ is as follows:

    min_{δ ∈ R}  L*_i(−(α_i + δ)/C) + η [ (aδ² + 2b_{rr}δ + c_{rr})^{2k} + 2 Σ_{s≠r} (b_{rs}δ + c_{rs})^{2k} + Σ_{s,z≠r} c_{sz}^{2k} ],    (13)

where a = k_{ii}, b_{rs} = Σ_{j: t_j = s} k_{ij} α_j ∀ s, c_{sz} = ⟨α^s, K_sz α^z⟩ ∀ s, z and η = (λ/(C(4k−2))) ((2k−1)/(2kλ))^{2k}.

This one-dimensional convex optimization problem is solved efficiently via the Newton method. The complexity of the proposed algorithm is O(T) per iteration. The proposed algorithm can also be employed for learning output kernels regularized by the generic V(Ω) discussed in the previous section.

Special case p = 2 (k = 1): For certain loss functions such as the hinge loss, the squared loss, etc., the term L*_i(−(α_i + δ)/C) yields a linear or a quadratic expression in δ. In such cases problem (13) reduces to finding the roots of a cubic equation, which has a closed form expression. Hence, our algorithm is highly efficient with the above loss functions when Ω is regularized by the squared Frobenius norm.
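To make the structure of Algorithm 1 concrete, here is a minimal sketch we wrote for the special case p = 2 (k = 1) under the assumption that the loss is L(y, u) = (1/2)(y − u)²; the subproblem (13) is then a quartic in δ whose stationarity condition is a cubic, matching the closed-form reduction noted above. The variable names and the incremental caching of ⟨α^s, K_sz α^z⟩ are our own choices, not the authors' implementation:

import numpy as np

def sdca_epoch(alpha, y, K, tasks, C, lam, rng):
    """One SDCA epoch for (11) with k = 1 and squared loss (didactic sketch).

    Assumes K[i, i] > 0. The cubic derivative of (13) is solved via np.roots.
    """
    T = int(tasks.max()) + 1
    idx = [np.where(tasks == s)[0] for s in range(T)]
    # c[s, z] = <alpha^s, K_sz alpha^z>, maintained incrementally below.
    c = np.array([[alpha[idx[s]] @ K[np.ix_(idx[s], idx[z])] @ alpha[idx[z]]
                   for z in range(T)] for s in range(T)])
    for i in rng.permutation(len(alpha)):
        r = tasks[i]
        a = K[i, i]
        b = np.array([K[i, idx[s]] @ alpha[idx[s]] for s in range(T)])
        off = np.delete(np.arange(T), r)
        # Coefficients of the cubic stationarity condition of (13).
        c3 = a ** 2 / (2 * lam)
        c2 = 3 * a * b[r] / (2 * lam)
        c1 = 1 / C + (2 * b[r] ** 2 + a * c[r, r] + (b[off] ** 2).sum()) / (2 * lam)
        c0 = alpha[i] / C - y[i] + (b[r] * c[r, r] + (b[off] * c[r, off]).sum()) / (2 * lam)
        roots = np.roots([c3, c2, c1, c0])
        delta = float(roots[np.argmin(np.abs(roots.imag))].real)
        # Update alpha and the cached pair products c[s, z].
        alpha[i] += delta
        c[r, :] += delta * b
        c[:, r] += delta * b
        c[r, r] += delta ** 2 * a
    return alpha

In practice one would wrap this epoch in the relative-duality-gap check of Algorithm 1 as the stopping criterion.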
5 Empirical Results
In this section, we present our results on benchmark data sets comparing our algorithm with existing
approaches in terms of generalization accuracy as well as computational efficiency. Please refer to
the supplementary material for additional results and details.
5.1 Multi-Task Data Sets
We begin with the generalization results in multi-task setups. The data sets are as follows: a) Sarcos:
a regression data set, aim is to predict 7 degrees of freedom of a robotic arm, b) Parkinson: a
regression data set, aim is to predict the Parkinson's disease symptom score for 42 patients, c) Yale:
a face recognition data with 28 binary classification tasks, d) Landmine: a data set containing binary
classifications from 19 different landmines, e) MHC-I: a bioinformatics data set having 10 binary
classification tasks, f) Letter: a handwritten letters data set with 9 binary classification tasks.
We compare the following algorithms: Single task learning (STL), multi-task methods learning the
output kernel matrix (MTL [16], CMTL [12], MTRL [9]) and approaches that learn both input and
output kernel matrices (MTFL [11], GMTL [10]). Our proposed formulation (11) is denoted by
FMTLp . We consider three different values for the p-norm: p = 2 (k = 1), p = 4/3 (k = 2) and
p = 8/7 (k = 4). Hinge and ε-SVR loss functions were employed for classification and regression
problems respectively. We follow the experimental protocol1 described in [11].
Table 2 reports the performance of the algorithms averaged over ten random train-test splits. The
proposed FMTLp attains the best generalization accuracy in general. It outperforms the baseline
MTL as well as MTRL and CMTL, which solely learns the output kernel matrix. Moreover, it
achieves an overall better performance than GMTL and MTFL. The FMTLp=4/3,8/7 give comparable generalization to p = 2 case, with the additional benefit of learning sparser and more interpretable output kernel matrix (see Figure 1).
¹ The performance of STL, MTL, CMTL and MTFL is reported from [11].
Table 2: Mean generalization performance and the standard deviation over ten train-test splits.

Regression data sets: Explained Variance (%)
Data set    STL        MTL        CMTL       MTFL       GMTL       MTRL       FMTL(p=2)  FMTL(p=4/3)  FMTL(p=8/7)
Sarcos      40.5±7.6   34.5±10.2  33.0±13.4  49.9±6.3   45.8±10.6  41.6±7.1   46.7±6.9   50.3±5.8     48.4±5.8
Parkinson    2.8±7.5    4.9±20.0   2.7±3.6   16.8±10.8  33.6±9.4   12.0±6.8   27.0±4.4   27.0±4.4     27.0±4.4

Classification data sets: AUC (%)
Data set    STL        MTL        CMTL       MTFL       GMTL       MTRL       FMTL(p=2)  FMTL(p=4/3)  FMTL(p=8/7)
Yale        93.4±2.3   96.4±1.6   95.2±2.1   97.0±1.6   91.9±3.2   96.1±2.1   97.0±1.2   97.0±1.4     96.8±1.4
Landmine    74.6±1.6   76.4±0.8   75.9±0.7   76.4±1.0   76.7±1.2   76.1±1.0   76.8±0.8   76.7±1.0     76.4±0.9
MHC-I       69.3±2.1   72.3±1.9   72.6±1.4   71.7±2.2   72.5±2.7   71.5±1.7   71.7±1.9   70.8±2.1     70.7±1.9
Letter      61.2±0.8   61.0±1.6   60.5±1.1   60.5±1.8   61.2±0.9   60.3±1.4   61.4±0.7   61.5±1.0     61.4±1.0

[Figure 1: three heatmaps of the learned |Ω| matrix on Landmine for p = 2, p = 4/3 and p = 8/7; the axis ticks 2-18 index the tasks.]
Figure 1: Plots of the |Ω| matrices (rescaled to [0, 1] and averaged over ten splits) computed by our solver FMTL_p for the Landmine data set for different p-norms, with cross-validated hyper-parameter values. The darker regions indicate higher values. Tasks (landmines) numbered 1-10 correspond to highly foliated regions and those numbered 11-19 correspond to bare earth or desert regions. Hence, we expect two groups of tasks (indicated by the red squares). We can observe that the learned Ω matrix at p = 2 depicts many more spurious task relationships than the ones at p = 4/3 and p = 8/7. Thus, our sparsifying regularizer improves interpretability.
Table 3: Mean accuracy and the standard deviation over five train-test splits.

Data set  STL       MTL-SDCA  GMTL      MTRL      FMTL-H(p=2)  FMTL-H(4/3)  FMTL-H(8/7)  FMTL-S(p=2)  FMTL-S(4/3)  FMTL-S(8/7)
MNIST     84.1±0.3  86.0±0.2  84.8±0.3  85.6±0.4  86.1±0.4     85.8±0.4     86.2±0.4     82.2±0.6     82.5±0.4     82.4±0.3
USPS      90.5±0.3  90.6±0.2  91.6±0.3  92.4±0.2  92.4±0.2     92.6±0.2     92.6±0.1     87.2±0.4     87.7±0.3     87.5±0.3

5.2 Multi-Class Data Sets
The multi-class setup is cast as T one-vs-all binary classification tasks, corresponding to T classes.
In this section we experimented with two loss functions: a) FMTL_p-H, the hinge loss employed in SVMs, and b) FMTL_p-S, the squared loss employed in OKL [17]. In these experiments, we also
compare our results with MTL-SDCA, a state-of-the-art multi-task feature learning method [25].
USPS & MNIST Experiments: We followed the experimental protocol detailed in [10]. Results
are tabulated in Table 3. Our approach FMTLp -H obtains better accuracy than GMTL, MTRL and
MTL-SDCA [25] on both data sets.
MIT Indoor67 Experiments: We report results on the MIT Indoor67 benchmark [26] which covers
67 indoor scene categories. We use the train/test split (80/20 images per class) provided by the
authors. FMTLp -S achieved the accuracy of 73.3% with p = 8/7. Note that this is better than the
ones reported in [27] (70.1%) and [26] (68.24%).
SUN397 Experiments: SUN397 [28] is a challenging scene classification benchmark [26] with 397
classes. We use m = 5, 50 images per class for training, 50 images per class for testing and report
the average accuracy over the 10 standard splits. We employed the CNN features extracted with the
Table 4: Mean accuracy and the standard deviation over ten train-test splits on SUN397.

m    STL       MTL       MTL-SDCA  FMTL-H(p=2)  FMTL-H(4/3)  FMTL-H(8/7)  FMTL-S(p=2)  FMTL-S(4/3)  FMTL-S(8/7)
5    40.5±0.9  42.0±1.4  41.2±1.3  41.5±1.1     41.6±1.3     41.6±1.2     44.1±1.3     44.1±1.1     44.0±1.2
50   55.0±0.4  57.0±0.2  54.8±0.3  55.1±0.2     55.6±0.3     55.1±0.3     58.6±0.1     58.5±0.1     58.6±0.2
[Figure 2: (a) runtime in seconds (log10 scale) of FMTL2-S, OKL and ConvexOKL as the number of tasks grows from 50 to about 400 on SUN397; (b) the speedup factor (time of baseline divided by time of FMTL2-S) over the cross-validation range log10(λ) ∈ [3, 7], with curves for OKL and ConvexOKL on MIT Indoor67 and SUN397.]
Figure 2: (a) Plot comparing the runtime of the various algorithms with a varying number of tasks on SUN397. Our approach FMTL2-S is 7 times faster than OKL [17] and 4.3 times faster than ConvexOKL [18] when the number of tasks is maximal. (b) Plot showing the factor by which FMTL2-S outperforms OKL and ConvexOKL over the hyper-parameter range on various data sets. On SUN397, we outperform OKL and ConvexOKL by factors of 5.2 and 7 respectively. On MIT Indoor67, we are better than OKL and ConvexOKL by factors of 8.4 and 2.4 respectively.
convolutional neural network (CNN) [26] using Places 205 database. The results are tabulated in
Table 4. The Ω matrices computed by FMTL_p-S are discussed in the supplementary material.
5.3 Scaling Experiment
We compare the runtime of our solver for FMTL2 -S with the OKL solver of [17] and the ConvexOKL solver of [18] on several data sets. All the three methods solve the same optimization problem.
Figure 2a shows the result of the scaling experiment where we vary the number of tasks (classes).
The parameters employed are the ones obtained via cross-validation. Note that both OKL and ConvexOKL algorithms do not have a well defined stopping criterion whereas our approach can easily
compute the relative duality gap (set as 10^{-3}). We terminate them when they reach the primal objective value achieved by FMTL2-S. Our optimization approach is 7 times and 4.3 times faster than
the alternate minimization based OKL and ConvexOKL, respectively, when the number of tasks is
maximal. The generic FMTLp=4/3,8/7 are also considerably faster than OKL and ConvexOKL.
Figure 2b compares the average runtime of our FMTL_p-S with OKL and ConvexOKL on the cross-validated range of hyper-parameter values. FMTL_p-S outperforms them on both MIT Indoor67 and
SUN397 data sets. On MNIST and USPS data sets, FMTLp -S is more than 25 times faster than
OKL, and more than 6 times faster than ConvexOKL. Additional details of the above experiments
are discussed in the supplementary material.
6 Conclusion
We proposed a novel formulation for learning the positive semi-definite output kernel matrix for
multiple tasks. Our main technical contribution is our analysis of a certain class of regularizers on the
output kernel matrix where one may drop the positive semi-definite constraint from the optimization
problem, but still solve the problem optimally. This leads to a dual formulation that can be efficiently
solved using stochastic dual coordinate ascent algorithm. Results on benchmark multi-task and
multi-class data sets demonstrates the effectiveness of the proposed multi-task algorithm in terms of
runtime as well as generalization accuracy.
Acknowledgments. P.J. and M.H. acknowledge the support by the Cluster of Excellence (MMCI).
8
References
[1] T. Evgeniou, C. A. Micchelli, and M. Pontil. Learning multiple tasks with kernel methods. JMLR,
6:615?637, 2005.
[2] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. ML, 73:243?272, 2008.
[3] K. Lounici, M. Pontil, A. B. Tsybakov, and S. van de Geer. Taking advantage of sparsity in multi-task
learning. In COLT, 2009.
[4] A. Jalali, P. Ravikumar, S. Sanghavi, and C. Ruan. A dirty model for multi-task learning. In NIPS, 2010.
[5] P. Jawanpuria and J. S. Nath. Multi-task multiple kernel learning. In SDM, 2011.
[6] A. Maurer, M. Pontil, and B. Romera-paredes. Sparse coding for multitask and transfer learning. In
ICML, 2013.
[7] P. Jawanpuria, J. S. Nath, and G. Ramakrishnan. Generalized hierarchical kernel learning. JMLR, 16:617?
652, 2015.
[8] R. Caruana. Multitask learning. ML, 28:41?75, 1997.
[9] Y. Zhang and D. Y. Yeung. A convex formulation for learning task relationships in multi-task learning.
In UAI, 2010.
[10] Z. Kang, K. Grauman, and F. Sha. Learning with whom to share in multi-task feature learning. In ICML,
2011.
[11] P. Jawanpuria and J. S. Nath. A convex feature learning formulation for latent task structure discovery. In
ICML, 2012.
[12] L. Jacob, F. Bach, and J. P. Vert. Clustered multi-task learning: A convex formulation. In NIPS, 2008.
[13] C. A. Micchelli and M. Pontil. Kernels for multitask learning. In NIPS, 2005.
[14] A. Caponnetto, C. A. Micchelli, M. Pontil, and Y. Ying. Universal multi-task kernels. JMLR, 9:1615?
1646, 2008.
?
[15] M. A. Alvarez,
L. Rosasco, and N. D. Lawrence. Kernels for vector-valued functions: a review. Foundations and Trends in Machine Learning, 4:195?266, 2012.
[16] T. Evgeniou and M. Pontil. Regularized multi?task learning. In KDD, 2004.
[17] F. Dinuzzo, C. S. Ong, P. Gehler, and G. Pillonetto. Learning output kernels with block coordinate descent.
In ICML, 2011.
[18] C. Ciliberto, Y. Mroueh, T. Poggio, and L. Rosasco. Convex learning of multiple tasks and their structure.
In ICML, 2015.
[19] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss. JMLR,
14(1):567?599, 2013.
[20] B. Sch?olkopf and A. Smola. Learning with Kernels. MIT Press, 2002.
[21] M. Hein and O. Bousquet. Kernels, associated structures and generalizations. Technical Report TR-127,
Max Planck Institute for Biological Cybernetics, 2004.
[22] A. Ben-Israel and B. Mond. What is invexity ? J. Austral. Math. Soc. Ser. B, 28:1?9, 1986.
[23] F. Hiai. Monotonicity for entrywise functions of matrices.
431(8):1125 ? 1146, 2009.
Linear Algebra and its Applications,
[24] R. A. Horn. The theory of infinitely divisible matrices and kernels. Trans. Amer. Math. Soc., 136:269?286,
1969.
[25] M. Lapin, B. Schiele, and M. Hein. Scalable multitask representation learning for scene classification. In
CVPR, 2014.
[26] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition
using places database. In NIPS, 2014.
[27] M. Koskela and J. Laaksonen. Convolutional network features for scene recognition. In Proceedings of
the ACM International Conference on Multimedia, 2014.
[28] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition
from abbey to zoo. In CVPR, 2010.
5,411 | 5,899 | Gradient Estimation Using Stochastic Computation Graphs
John Schulman¹,² (joschu@eecs.berkeley.edu), Nicolas Heess¹ (heess@google.com), Theophane Weber¹ (theophane@google.com), Pieter Abbeel² (pabbeel@eecs.berkeley.edu)
¹ Google DeepMind
² University of California, Berkeley, EECS Department
Abstract
In a variety of problems originating in supervised, unsupervised, and reinforcement learning, the loss function is defined by an expectation over a collection
of random variables, which might be part of a probabilistic model or the external world. Estimating the gradient of this loss function, using samples, lies at
the core of gradient-based learning algorithms for these problems. We introduce
the formalism of stochastic computation graphs (directed acyclic graphs that include both deterministic functions and conditional probability distributions) and describe how to easily and automatically derive an unbiased estimator of the loss function's gradient. The resulting algorithm for computing the gradient estimator
is a simple modification of the standard backpropagation algorithm. The generic
scheme we propose unifies estimators derived in a variety of prior work, along with
variance-reduction techniques therein. It could assist researchers in developing intricate models involving a combination of stochastic and deterministic operations,
enabling, for example, attention, memory, and control actions.
1 Introduction
The great success of neural networks is due in part to the simplicity of the backpropagation algorithm, which allows one to efficiently compute the gradient of any loss function defined as a
composition of differentiable functions. This simplicity has allowed researchers to search in the
space of architectures for those that are both highly expressive and conducive to optimization; yielding, for example, convolutional neural networks in vision [12] and LSTMs for sequence data [9].
However, the backpropagation algorithm is only sufficient when the loss function is a deterministic,
differentiable function of the parameter vector.
A rich class of problems arising throughout machine learning requires optimizing loss functions
that involve an expectation over random variables. Two broad categories of these problems are (1)
likelihood maximization in probabilistic models with latent variables [17, 18], and (2) policy gradients in reinforcement learning [5, 23, 26]. Combining ideas from those two perennial topics,
recent models of attention [15] and memory [29] have used networks that involve a combination of
stochastic and deterministic operations.
In most of these problems, from probabilistic modeling to reinforcement learning, the loss functions
and their gradients are intractable, as they involve either a sum over an exponential number of latent
variable configurations, or high-dimensional integrals that have no analytic solution. Prior work (see
Section 6) has provided problem-specific derivations of Monte-Carlo gradient estimators, however,
to our knowledge, no previous work addresses the general case.
Appendix C recalls several classic and recent techniques in variational inference [14, 10, 21] and reinforcement learning [23, 25, 15], where the loss functions can be straightforwardly described using
1
the formalism of stochastic computation graphs that we introduce. For these examples, the variancereduced gradient estimators derived in prior work are special cases of the results in Sections 3 and 4.
The contributions of this work are as follows:
? We introduce a formalism of stochastic computation graphs, and in this general setting, we derive
unbiased estimators for the gradient of the expected loss.
? We show how this estimator can be computed as the gradient of a certain differentiable function
(which we call the surrogate loss), hence, it can be computed efficiently using the backpropagation algorithm. This observation enables a practitioner to write an efficient implementation using
automatic differentiation software.
? We describe variance reduction techniques that can be applied to the setting of stochastic computation graphs, generalizing prior work from reinforcement learning and variational inference.
? We briefly describe how to generalize some other optimization techniques to this setting:
majorization-minimization algorithms, by constructing an expression that bounds the loss function; and quasi-Newton / Hessian-free methods [13], by computing estimates of Hessian-vector
products.
The main practical result of this article is that to compute the gradient estimator, one just needs
to make a simple modification to the backpropagation algorithm, where extra gradient signals are
introduced at the stochastic nodes. Equivalently, the resulting algorithm is just the backpropagation
algorithm, applied to the surrogate loss function, which has extra terms introduced at the stochastic
nodes. The modified backpropagation algorithm is presented in Section 5.
2 Preliminaries
2.1 Gradient Estimators for a Single Random Variable
This section will discuss computing the gradient of an expectation taken over a single random variable; the estimators described here will be the building blocks for more complex cases with multiple variables. Suppose that x is a random variable, f is a function (say, the cost), and we are interested in computing (∂/∂θ) E_x[f(x)]. There are a few different ways that the process for generating x could be parameterized in terms of θ, which lead to different gradient estimators.
• We might be given a parameterized probability distribution x ∼ p(·; θ). In this case, we can use the score function (SF) estimator [3]:

    (∂/∂θ) E_x[f(x)] = E_x[ f(x) (∂/∂θ) log p(x; θ) ].    (1)

This classic equation is derived as follows:

    (∂/∂θ) E_x[f(x)] = (∂/∂θ) ∫ dx p(x; θ) f(x) = ∫ dx (∂/∂θ) p(x; θ) f(x)
                     = ∫ dx p(x; θ) ((∂/∂θ) log p(x; θ)) f(x) = E_x[ f(x) (∂/∂θ) log p(x; θ) ].    (2)

This equation is valid if and only if p(x; θ) is a continuous function of θ; however, it does not need to be a continuous function of x [4].
• x may be a deterministic, differentiable function of θ and another random variable z, i.e., we can write x(z, θ). Then, we can use the pathwise derivative (PD) estimator, defined as follows:

    (∂/∂θ) E_z[f(x(z, θ))] = E_z[ (∂/∂θ) f(x(z, θ)) ].    (3)

This equation, which merely swaps the derivative and expectation, is valid if and only if f(x(z, θ)) is a continuous function of θ for all z [4].¹ That is not true if, for example, f is a step function.
¹ Note that for the pathwise derivative estimator, f(x(z, θ)) merely needs to be a continuous function of θ; it is sufficient that this function is almost-everywhere differentiable. A similar statement can be made about p(x; θ) and the score function estimator. See Glasserman [4] for a detailed discussion of the technical requirements for these gradient estimators to be valid.
• Finally, θ might appear both in the probability distribution and inside the expectation, e.g., in (∂/∂θ) E_{z∼p(·; θ)}[f(x(z, θ))]. Then the gradient estimator has two terms:

    (∂/∂θ) E_{z∼p(·; θ)}[f(x(z, θ))] = E_{z∼p(·; θ)}[ (∂/∂θ) f(x(z, θ)) + ((∂/∂θ) log p(z; θ)) f(x(z, θ)) ].    (4)
This formula can be derived by writing the expectation as an integral and differentiating, as in
Equation (2).
In some cases, it is possible to reparameterize a probabilistic model, moving θ from the distribution
to inside the expectation or vice versa. See [3] for a general discussion, and see [10, 21] for a recent
application of this idea to variational inference.
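To make the SF/PD distinction concrete, here is a small numerical sketch (our own NumPy illustration, not from the paper), assuming x ∼ N(θ, 1) and f(x) = x², so that E[f(x)] = θ² + 1 and the true gradient is 2θ:

import numpy as np

rng = np.random.default_rng(0)
theta, n = 1.5, 200000
z = rng.standard_normal(n)
x = theta + z                      # x ~ N(theta, 1), pathwise parameterization
f = x ** 2                         # cost; E[f] = theta^2 + 1, so d/dtheta = 2*theta

# Score function (SF) estimator: E[f(x) * d/dtheta log p(x; theta)];
# for a unit-variance Gaussian, d/dtheta log p(x; theta) = (x - theta).
sf_grad = np.mean(f * (x - theta))

# Pathwise derivative (PD) estimator: E[d/dtheta f(x(z, theta))] = E[2 x].
pd_grad = np.mean(2 * x)

print(sf_grad, pd_grad)  # both close to 2*theta = 3.0

Both estimates converge to 2θ, but the SF estimate is noticeably noisier, consistent with property 3 below; if θ also appeared inside f, Equation (4) says we would simply add the two terms.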
The SF and PD estimators are applicable in different scenarios and have different properties.
1. SF is valid under more permissive mathematical conditions than PD. SF can be used if f is
discontinuous, or if x is a discrete random variable.
2. SF only requires sample values f(x), whereas PD requires the derivatives f′(x). In the context
of control (reinforcement learning), SF can be used to obtain unbiased policy gradient estimators
in the ?model-free? setting where we have no model of the dynamics, we only have access to
sample trajectories.
3. SF tends to have higher variance than PD, when both estimators are applicable (see for instance
[3, 21]). The variance of SF increases (often linearly) with the dimensionality of the sampled
variables. Hence, PD is usually preferable when x is high-dimensional. On the other hand, PD
has high variance if the function f is rough, which occurs in many time-series problems due to
an 'exploding gradient problem' / 'butterfly effect'.
4. PD allows for a deterministic limit, SF does not. This idea is exploited by the deterministic policy
gradient algorithm [22].
Nomenclature. The methods of estimating gradients of expectations have been independently proposed in several different fields, which use differing terminology. What we call the score function
estimator (via [3]) is alternatively called the likelihood ratio estimator [5] and REINFORCE [26].
We chose this term because the score function is a well-known object in statistics. What we call
the pathwise derivative estimator (from the mathematical finance literature [4] and reinforcement
learning [16]) is alternatively called infinitesimal perturbation analysis and stochastic backpropagation [21]. We chose this term because pathwise derivative is evocative of propagating a derivative
through a sample path.
2.2 Stochastic Computation Graphs
The results of this article will apply to stochastic computation graphs, which are defined as follows:
Definition 1 (Stochastic Computation Graph). A directed, acyclic graph, with three types of
nodes:
1. Input nodes, which are set externally, including the parameters we differentiate with
respect to.
2. Deterministic nodes, which are functions of their parents.
3. Stochastic nodes, which are distributed conditionally on their parents.
Each parent v of a non-input node w is connected to it by a directed edge (v, w).
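As a concrete (hypothetical) rendering of this definition, such a graph could be represented in code with a type tag and parent lists; the sketch below is our own illustration, not an implementation from the article, and instantiates example (5) of Figure 1 below:

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                      # "input", "deterministic", or "stochastic"
    parents: list = field(default_factory=list)
    is_cost: bool = False          # cost nodes are deterministic and scalar-valued

# Example (5) of Figure 1: a parameterized Markov reward process.
theta = Node("theta", "input")
x0 = Node("x0", "input")
x1 = Node("x1", "stochastic", parents=[theta, x0])
x2 = Node("x2", "stochastic", parents=[theta, x1])
f1 = Node("f1", "deterministic", parents=[x1], is_cost=True)
f2 = Node("f2", "deterministic", parents=[x2], is_cost=True)
graph = [theta, x0, x1, x2, f1, f2]    # listed in topological order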
In the subsequent diagrams of this article, we will use circles to denote stochastic nodes and squares
to denote deterministic nodes, as illustrated below. The structure of the graph fully specifies what
estimator we will use: SF, PD, or a combination thereof. This graphical notation is shown below,
along with the single-variable estimators from Section 2.1.
[Notation figure: squares denote deterministic nodes and circles denote stochastic nodes; with inputs θ (and noise z where applicable), a chain θ → x → f with stochastic x gives the SF estimator, while a chain with deterministic x(z, θ) gives the PD estimator.]
2.3 Simple Examples
Several simple examples that illustrate the stochastic computation graph formalism are shown below.
The gradient estimators can be described by writing the expectations as integrals and differentiating,
as with the simpler estimators from Section 2.1. However, they are also implied by the general
results that we will present in Section 3.
Stochastic Computation Graph                                         Objective                     Gradient Estimator
(1) θ → x → y → f, with y stochastic given x                          E_y[f(y)]                     (∂x/∂θ) (∂/∂x) log p(y | x) f(y)
(2) θ → x → y → f, with x stochastic                                  E_x[f(y(x))]                  (∂/∂θ) log p(x | θ) f(y(x))
(3) θ → x → y → f, with x and y stochastic                            E_{x,y}[f(y)]                 (∂/∂θ) log p(x | θ) f(y)
(4) θ → x stochastic and θ → y deterministic, both feeding f          E_x[f(x, y(θ))]               (∂/∂θ) log p(x | θ) f(x, y(θ)) + (∂y/∂θ)(∂f/∂y)
(5) chain x0 → x1 → x2 with θ feeding x1 and x2; costs f1(x1), f2(x2)  E_{x1,x2}[f1(x1) + f2(x2)]   (∂/∂θ) log p(x1 | θ, x0) (f1(x1) + f2(x2)) + (∂/∂θ) log p(x2 | θ, x1) f2(x2)
Figure 1: Simple stochastic computation graphs
These simple examples illustrate several important motifs, where stochastic and deterministic nodes are arranged in series or in parallel. For example, note that in (2) the derivative of y does not appear in the estimator, since the path from θ to f is 'blocked' by x. Similarly, in (3), p(y | x) does not appear (this type of behavior is particularly useful if we only have access to a simulator of a system, but not access to the actual likelihood function). On the other hand, (4) has a direct path from θ to f, which contributes a term to the gradient estimator. (5) resembles a parameterized Markov reward process, and it illustrates that we'll obtain score function terms of the form grad log-probability × future costs; a small numerical sketch of this case follows.
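As a sketch of motif (5) (our own construction, assuming the Gaussian chain x1 ∼ N(θ + x0, 1), x2 ∼ N(θ + x1, 1) with costs f1 = x1² and f2 = x2²), the "grad log-probability × future costs" estimator reads:

import numpy as np

rng = np.random.default_rng(1)
theta, x0, n = 0.5, 0.0, 100000
x1 = theta + x0 + rng.standard_normal(n)   # x1 ~ N(theta + x0, 1)
x2 = theta + x1 + rng.standard_normal(n)   # x2 ~ N(theta + x1, 1)
f1, f2 = x1 ** 2, x2 ** 2

# grad log p(x1 | theta, x0) = x1 - (theta + x0); similarly for x2.
# Each score term multiplies the sum of costs downstream of that node.
grad_est = np.mean((x1 - theta - x0) * (f1 + f2) + (x2 - theta - x1) * f2)
print(grad_est)  # unbiased estimate of d/dtheta E[f1 + f2]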
The examples above all have one input θ, but the formalism accommodates models with multiple inputs, for example a stochastic neural network with multiple layers of weights and biases, which may influence different subsets of the stochastic and cost nodes. See Appendix C for nontrivial examples with stochastic nodes and multiple inputs. The accompanying figure shows a deterministic computation graph representing classification loss for a two-layer neural network, which has four parameters (W1, b1, W2, b2) (weights and biases): x → h1 → h2 → softmax → cross-entropy loss against y = label. Of course, this deterministic computation graph is a special type of stochastic computation graph.
3 Main Results on Stochastic Computation Graphs
3.1 Gradient Estimators
This section will consider a general stochastic computation graph, in which a certain set of nodes
are designated as costs, and we would like to compute the gradient of the sum of costs with respect
to some input node θ.
In brief, the main results of this section are as follows:
1. We derive a gradient estimator for an expected sum of costs in a stochastic computation graph.
This estimator contains two parts: (1) a score function part, which is a sum of terms of the form grad log-prob of variable × sum of costs influenced by variable; and (2) a pathwise derivative term, that
propagates the dependence through differentiable functions.
2. This gradient estimator can be computed efficiently by differentiating an appropriate 'surrogate'
objective function.
Let Θ denote the set of input nodes, D the set of deterministic nodes, and S the set of stochastic nodes. Further, we will designate a set of cost nodes C, which are scalar-valued and deterministic. (Note that there is no loss of generality in assuming that the costs are deterministic: if a cost is stochastic, we can simply append a deterministic node that applies the identity function to it.) We will use θ to denote an input node (θ ∈ Θ) that we differentiate with respect to. In the context of machine learning, we will usually be most concerned with differentiating with respect to a parameter vector (or tensor); however, the theory we present does not make any assumptions about what θ represents.
For the results that follow, we need to define the notion of 'influence', for which we will introduce two relations ≺ and ≺_D. The relation v ≺ w ('v influences w') means that there exists a sequence of nodes a1, a2, ..., aK, with K ≥ 0, such that (v, a1), (a1, a2), ..., (aK−1, aK), (aK, w) are edges in the graph. The relation v ≺_D w ('v deterministically influences w') is defined similarly, except that now we require that each ak is a deterministic node. For example, in Figure 1, diagram (5) above, θ influences {x1, x2, f1, f2}, but it only deterministically influences {x1, x2}.

Notation Glossary
Θ: input nodes; D: deterministic nodes; S: stochastic nodes; C: cost nodes.
v ≺ w: v influences w;  v ≺_D w: v deterministically influences w.
DEPS_v: 'dependencies', {w ∈ Θ ∪ S | w ≺_D v}.
Q̂_v: sum of cost nodes influenced by v.
v̂: denotes the sampled value of the node v.

Next, we will establish a condition that is sufficient for the existence of the gradient. Namely, we will stipulate that every edge (v, w) with w lying in the 'influenced' set of θ corresponds to a differentiable dependency: if w is deterministic, then the Jacobian ∂w/∂v must exist; if w is stochastic, then the probability mass function p(w | v, ...) must be differentiable with respect to v.
More formally:
Condition 1 (Differentiability Requirements). Given input node θ ∈ Θ, for all edges (v, w) which satisfy θ ≺_D v and θ ≺_D w, the following condition holds: if w is deterministic, the Jacobian ∂w/∂v exists, and if w is stochastic, then the derivative of the probability mass function (∂/∂v) p(w | PARENTS_w) exists.
Note that Condition 1 does not require that all the functions in the graph are differentiable. If the path from an input θ to deterministic node v is blocked by stochastic nodes, then v may be a nondifferentiable function of its parents. If a path from input θ to stochastic node v is blocked by other stochastic nodes, the likelihood of v given its parents need not be differentiable; in fact, it does not need to be known.²

² This fact is particularly important for reinforcement learning, allowing us to compute policy gradient estimates despite having a discontinuous dynamics function or reward function.
We need a few more definitions to state the main theorems. Let DEPS_v := {w ∈ Θ ∪ S | w ≺_D v}, the "dependencies" of node v, i.e., the set of nodes that deterministically influence it. Note the following:

- If v ∈ S, the probability mass function of v is a function of DEPS_v, i.e., we can write p(v | DEPS_v).
- If v ∈ D, v is a deterministic function of DEPS_v, so we can write v(DEPS_v).

Let Q̂_v := Σ_{c ∈ C: v ≺ c} ĉ, i.e., the sum of costs downstream of node v. These costs will be treated as constant, fixed to the values obtained during sampling. In general, we will use the hat symbol v̂ to denote a sample value of variable v, which will be treated as constant in the gradient formulae.
Now we can write down a general expression for the gradient of the expected sum of costs in a stochastic computation graph:

Theorem 1. Suppose that θ ∈ Θ satisfies Condition 1. Then the following two equivalent equations hold:

∂/∂θ E[ Σ_{c∈C} c ] = E[ Σ_{w∈S: θ≺_D w} ( ∂/∂θ log p(w | DEPS_w) ) Q̂_w + Σ_{c∈C: θ≺_D c} ∂/∂θ c(DEPS_c) ]   (5)

= E[ Σ_{c∈C} ĉ Σ_{w∈S: w≺c, θ≺_D w} ∂/∂θ log p(w | DEPS_w) + Σ_{c∈C: θ≺_D c} ∂/∂θ c(DEPS_c) ]   (6)

Proof: See Appendix A.
The estimator expressions above have two terms. The first term is due to the influence of θ on probability distributions. The second term is due to the influence of θ on the cost variables through a chain of differentiable functions. The first term in Equation (5) involves a sum of gradients times "downstream" costs, whereas the first term in Equation (6) involves a sum of costs times "upstream" gradients.
3.2 Surrogate Loss Functions
The next corollary lets us write down a "surrogate" objective L, which is a function of the inputs that we can differentiate to obtain an unbiased gradient estimator.

Corollary 1. Let L(Θ, S) := Σ_w log p(w | DEPS_w) Q̂_w + Σ_{c∈C} c(DEPS_c). Then differentiation of L gives us an unbiased gradient estimate: ∂/∂θ E[ Σ_{c∈C} c ] = E[ ∂/∂θ L(Θ, S) ].

One practical consequence of this result is that we can apply a standard automatic differentiation procedure to L to obtain an unbiased gradient estimator. In other words, we convert the stochastic computation graph into a deterministic computation graph, to which we can apply the backpropagation algorithm.
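To make this concrete, here is a minimal sketch of Corollary 1 (not from the paper; it assumes PyTorch, and the graph, distribution, and constants are illustrative choices) for the simplest graph θ → x → cost, with x ~ N(θ, 1) and cost x². The surrogate log p(x̂ | θ) Q̂ is built and then differentiated by standard automatic differentiation; there is no second term here because no cost is deterministically influenced by θ.

import torch

theta = torch.tensor(1.5, requires_grad=True)
dist = torch.distributions.Normal(theta, 1.0)

x = dist.sample((10_000,))                     # sampled stochastic node; sample() blocks gradients
q_hat = (x ** 2).detach()                      # Q-hat: downstream cost, fixed to sampled values

surrogate = (dist.log_prob(x) * q_hat).mean()  # L(theta, S), averaged over samples
surrogate.backward()                           # backpropagation through the surrogate
print(theta.grad)                              # close to d/dtheta E[x^2] = 2*theta = 3.0

Since E[x²] = θ² + 1 for x ~ N(θ, 1), the true gradient is 2θ, so the printed estimate should be near 3.0.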
Figure 2: Deterministic computation graphs obtained as surrogate loss functions of the stochastic computation graphs from Figure 1. For instance, diagram (2) becomes log p(x; θ) f̂, and diagram (5) becomes log p(x1 | x0, θ) (f̂1 + f̂2) + log p(x2 | x1, θ) f̂2.

There are several alternative ways to define the surrogate objective function that give the same gradient as L from Corollary 1. We could also write L(Θ, S) := Σ_w [ p(w | DEPS_w) / P̂_w ] Q̂_w + Σ_{c∈C} c(DEPS_c), where P̂_w is the probability p(ŵ | DEPS_w) obtained during sampling, which is viewed as a constant.
The surrogate objective from Corollary 1 is actually an upper bound on the true objective in the case that (1) all costs c ∈ C are negative, and (2) the costs are not deterministically influenced by the parameters Θ. This construction allows majorization-minimization algorithms (similar to EM) to be applied to general stochastic computation graphs. See Appendix B for details.
3.3 Higher-Order Derivatives
The gradient estimator for a stochastic computation graph is itself a stochastic computation graph. Hence, it is possible to compute the gradient yet again (for each component of the gradient vector), and get an estimator of the Hessian. For most problems of interest, it is not efficient to compute this dense Hessian. On the other hand, one can also differentiate the gradient-vector product to get a Hessian-vector product; this computation is usually not much more expensive than the gradient computation itself. The Hessian-vector product can be used to implement a quasi-Newton algorithm via the conjugate gradient algorithm [28]. A variant of this technique, called Hessian-free optimization [13], has been used to train large neural networks.
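A hedged sketch of the Hessian-vector product by double differentiation (assuming PyTorch; the test function and names are illustrative, not from the paper):

import torch

def hvp(f, x, v):
    # Differentiate the gradient-vector inner product: H v = d/dx (g . v)
    x = x.detach().requires_grad_(True)
    (g,) = torch.autograd.grad(f(x), x, create_graph=True)  # keep graph for second pass
    (hv,) = torch.autograd.grad(g @ v, x)
    return hv

f = lambda x: (x ** 4).sum()           # gradient 4x^3, Hessian diag(12x^2)
x = torch.tensor([1.0, 2.0])
v = torch.tensor([1.0, 0.0])
print(hvp(f, x, v))                    # tensor([12., 0.])

Each Hessian-vector product costs a small constant multiple of one gradient evaluation, which is what makes conjugate-gradient-based quasi-Newton methods practical.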
4 Variance Reduction
Consider estimating ∂/∂θ E_{x∼p(·;θ)}[f(x)]. Clearly this expectation is unaffected by subtracting a constant b from the integrand, which gives ∂/∂θ E_{x∼p(·;θ)}[f(x) − b]. Taking the score function estimator, we get ∂/∂θ E_{x∼p(·;θ)}[f(x)] = E_{x∼p(·;θ)}[ ( ∂/∂θ log p(x; θ) ) ( f(x) − b ) ]. Taking b ≈ E_x[f(x)] generally leads to substantial variance reduction; b is often called a baseline³ (see [6] for a more thorough discussion of baselines and their variance reduction properties).

³ The optimal baseline for scalar θ is in fact the weighted expectation E_x[f(x) s(x)²] / E_x[s(x)²], where s(x) = ∂/∂θ log p(x; θ).
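A small numerical check of this effect (a sketch assuming NumPy; the distribution and cost are arbitrary illustrative choices):

import numpy as np

rng = np.random.default_rng(0)
theta, n = 0.0, 100_000
x = rng.normal(theta, 1.0, n)          # x ~ N(theta, 1)
f = (x + 5.0) ** 2                     # true gradient: 2*(theta + 5) = 10
score = x - theta                      # d/dtheta log N(x; theta, 1)

for b in (0.0, f.mean()):              # no baseline vs. b ~ E[f(x)]
    est = score * (f - b)
    print(f"b={b:6.2f}  mean={est.mean():7.3f}  var={est.var():10.1f}")

Both settings of b give (up to sampling noise) the same mean, but the sample variance with the baseline is far smaller.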
We can make a general statement for the case of stochastic computation graphs: we can add a baseline to every stochastic node, which depends on all of the nodes it does not influence. Let NONINFLUENCED(v) := {w | v ⊀ w}.
Theorem 2.

∂/∂θ E[ Σ_{c∈C} c ] = E[ Σ_{v∈S} ( ∂/∂θ log p(v | PARENTS_v) ) ( Q̂_v − b(NONINFLUENCED(v)) ) + Σ_{c∈C: θ≺_D c} ∂/∂θ c(DEPS_c) ]

Proof: See Appendix A.

5 Algorithms
As shown in Section 3, the gradient estimator can be obtained by differentiating a surrogate objective
function L. Hence, this derivative can be computed by performing the backpropagation algorithm
on L. That is likely to be the most practical and efficient method, and can be facilitated by automatic
differentiation software.
Algorithm 1 shows explicitly how to compute the gradient estimator in a backwards pass through the stochastic computation graph. The algorithm will recursively compute g_v := ∂/∂v E[ Σ_{c∈C: v≺c} c ] at every deterministic and input node v.
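On the simplest stochastic computation graph θ → x ~ N(θ, 1) → f(x) = x², the algorithm reduces to the score function estimator. A hedged NumPy sketch of that special case (illustrative names and constants, not the authors' code); the full pseudocode follows.

import numpy as np

rng = np.random.default_rng(0)

def grad_estimate(theta, n_samples=100_000):
    x = rng.normal(theta, 1.0, n_samples)   # sample the single stochastic node
    q_hat = x ** 2                          # Q-hat_x: cost downstream of x
    score = x - theta                       # d/dtheta log N(x; theta, 1)
    return np.mean(score * q_hat)           # accumulated g_theta

print(grad_estimate(1.5))                   # close to d/dtheta E[x^2] = 2*theta = 3.0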
Algorithm 1 Compute Gradient Estimator for Stochastic Computation Graph

for v ∈ Graph do                                   ▷ Initialization at output nodes
    g_v = 1_{dim v} if v ∈ C, otherwise 0_{dim v}
end for
Compute Q̂_w for all nodes w ∈ Graph
for v in ReverseTopologicalSort(NonInputs) do      ▷ Reverse traversal
    for w ∈ PARENTS_v do
        if not IsStochastic(w) then
            if IsStochastic(v) then
                g_w += ( ∂/∂w log p(v | PARENTS_v) ) Q̂_v
            else
                g_w += ( ∂v/∂w )^T g_v
            end if
        end if
    end for
end for
return [g_θ]_{θ∈Θ}

6 Related Work

As discussed in Section 2, the score function and pathwise derivative estimators have been used in a variety of different fields, under different names. See [3] for a review of gradient estimation, mostly from the simulation optimization literature. Glasserman's textbook [4] provides an extensive treatment of various gradient estimators and Monte Carlo estimators in general. Griewank and Walther's textbook [8] is a comprehensive reference on computation graphs and automatic differentiation (of deterministic programs). The notation and nomenclature we use is inspired by Bayes nets and influence diagrams [19]. (In fact, a stochastic computation graph is a type of Bayes network, where the deterministic nodes correspond to degenerate probability distributions.)

The topic of gradient estimation has drawn significant recent interest in machine learning. Gradients for networks with stochastic units were investigated in Bengio et al. [2], though they are concerned
with differentiating through individual units and layers; not how to deal with arbitrarily structured
models and loss functions. Kingma and Welling [11] consider a similar framework, although only
with continuous latent variables, and point out that reparameterization can be used to convert
hierarchical Bayesian models into neural networks, which can then be trained by backpropagation.
The score function method is used to perform variational inference in general models (in the context
of probabilistic programming) in Wingate and Weber [27], and similarly in Ranganath et al. [20];
both papers mostly focus on mean-field approximations without amortized inference. It is used to
train generative models using neural networks with discrete stochastic units in Mnih and Gregor [14]
and Gregor et al. in [7]; both amortize inference by using an inference network.
Generative models with continuous-valued latent variables are trained (again using an
inference network) with the reparametrization method by Rezende, Mohamed, and Wierstra [21] and
by Kingma and Welling [10]. Rezende et al. also provide a detailed discussion of reparameterization,
including a discussion comparing the variance of the SF and PD estimators.
Bengio, Léonard, and Courville [2] have recently written a paper about gradient estimation in neural networks with stochastic units or non-differentiable activation functions, including Monte Carlo
estimators and heuristic approximations. The notion that policy gradients can be computed in multiple ways was pointed out in early work on policy gradients by Williams [26]. However, all of this
prior work deals with specific structures of the stochastic computation graph and does not address
the general case.
7 Conclusion

We have developed a framework for describing a computation with stochastic and deterministic operations, called a stochastic computation graph. Given a stochastic computation graph, we can automatically obtain a gradient estimator, given that the graph satisfies the appropriate conditions on differentiability of the functions at its nodes. The gradient can be computed efficiently in a backwards traversal through the graph: one approach is to apply the standard backpropagation algorithm to one of the surrogate loss functions from Section 3; another approach (which is roughly equivalent) is to apply a modified backpropagation procedure shown in Algorithm 1. The results we have presented are sufficiently general to automatically reproduce a variety of gradient estimators that have been derived in prior work in reinforcement learning and probabilistic modeling, as we show in Appendix C. We hope that this work will facilitate further development of interesting and expressive models.

8 Acknowledgements

We would like to thank Shakir Mohamed, Dave Silver, Yuval Tassa, Andriy Mnih, and others at DeepMind for insightful comments.
References
[1] J. Baxter and P. L. Bartlett. Infinite-horizon policy-gradient estimation. Journal of Artificial Intelligence Research, pages 319-350, 2001.
[2] Y. Bengio, N. Léonard, and A. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
[3] M. C. Fu. Gradient estimation. Handbooks in operations research and management science, 13:575-616, 2006.
[4] P. Glasserman. Monte Carlo methods in financial engineering, volume 53. Springer Science & Business Media, 2003.
[5] P. W. Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75-84, 1990.
[6] E. Greensmith, P. L. Bartlett, and J. Baxter. Variance reduction techniques for gradient estimates in reinforcement learning. The Journal of Machine Learning Research, 5:1471-1530, 2004.
[7] K. Gregor, I. Danihelka, A. Mnih, C. Blundell, and D. Wierstra. Deep autoregressive networks. arXiv preprint arXiv:1310.8499, 2013.
[8] A. Griewank and A. Walther. Evaluating derivatives: principles and techniques of algorithmic differentiation. SIAM, 2008.
[9] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.
[10] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. arXiv:1312.6114, 2013.
[11] D. P. Kingma and M. Welling. Efficient gradient-based inference through transformations between Bayes nets and neural nets. arXiv preprint arXiv:1402.0480, 2014.
[12] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
[13] J. Martens. Deep learning via Hessian-free optimization. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 735-742, 2010.
[14] A. Mnih and K. Gregor. Neural variational inference and learning in belief networks. arXiv:1402.0030, 2014.
[15] V. Mnih, N. Heess, A. Graves, and K. Kavukcuoglu. Recurrent models of visual attention. In Advances in Neural Information Processing Systems, pages 2204-2212, 2014.
[16] R. Munos. Policy gradient in continuous time. The Journal of Machine Learning Research, 7:771-791, 2006.
[17] R. M. Neal. Learning stochastic feedforward networks. Department of Computer Science, University of Toronto, 1990.
[18] R. M. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in graphical models, pages 355-368. Springer, 1998.
[19] J. Pearl. Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan Kaufmann, 2014.
[20] R. Ranganath, S. Gerrish, and D. M. Blei. Black box variational inference. arXiv preprint arXiv:1401.0118, 2013.
[21] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv:1401.4082, 2014.
[22] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller. Deterministic policy gradient algorithms. In ICML, 2014.
[23] R. S. Sutton, D. A. McAllester, S. P. Singh, Y. Mansour, et al. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pages 1057-1063. Citeseer, 1999.
[24] N. Vlassis, M. Toussaint, G. Kontes, and S. Piperidis. Learning model-free robot control by a Monte Carlo EM algorithm. Autonomous Robots, 27(2):123-130, 2009.
[25] D. Wierstra, A. Förster, J. Peters, and J. Schmidhuber. Recurrent policy gradients. Logic Journal of IGPL, 18(5):620-634, 2010.
[26] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229-256, 1992.
[27] D. Wingate and T. Weber. Automated variational inference in probabilistic programming. arXiv preprint arXiv:1301.1299, 2013.
[28] S. J. Wright and J. Nocedal. Numerical optimization, volume 2. Springer New York, 1999.
[29] W. Zaremba and I. Sutskever. Reinforcement learning neural Turing machines. arXiv preprint arXiv:1505.00521, 2015.
How Neural Nets Work
Alan Lapedes
Robert Farber
Theoretical Division
Los Alamos National Laboratory
Los Alamos, NM 87545
Abstract:
There is presently great interest in the abilities of neural networks to mimic "qualitative reasoning" by manipulating neural encodings of symbols. Less work has been performed on using neural networks to process floating point numbers, and it is sometimes stated that neural networks are somehow inherently inaccurate and therefore best suited for "fuzzy" qualitative reasoning. Nevertheless, the potential speed of massively parallel operations makes neural net "number crunching" an interesting topic to explore. In this paper we discuss some of our work in which we demonstrate that for certain applications neural networks can achieve significantly higher numerical accuracy than more conventional techniques. In particular, prediction of future values of a chaotic time series can be performed with exceptionally high accuracy. We analyze how a neural net is able to do this, and in the process show that a large class of functions from R^n → R^m may be accurately approximated by a backpropagation neural net with just two "hidden" layers. The network uses this functional approximation to perform either interpolation (signal processing applications) or extrapolation (symbol processing applications). Neural nets therefore use quite familiar methods to perform their tasks. The geometrical viewpoint advocated here seems to
be a useful approach to analyzing neural network operation and relates neural
networks to well studied topics in functional approximation.
1. Introduction
Although a great deal of interest has been displayed in neural networks' capabilities to perform a kind of qualitative reasoning, relatively little work has
been done on the ability of neural networks to process floating point numbers
in a massively parallel fashion. Clearly, this is an important ability. In this
paper we discuss some of our work in this area and show the relation between
numerical, and symbolic processing. We will concentrate on the the subject of
accurate prediction in a time series. Accurate prediction has applications in
many areas of signal processing. It is also a useful, and fascinating ability, when
dealing with natural, physical systems. Given some data from the past history of a system, can one accurately predict what it will do in the future?

Many conventional signal processing tests, such as correlation function analysis, cannot distinguish deterministic chaotic behavior from stochastic noise. Particularly difficult systems to predict are those that are nonlinear and chaotic. Chaos has a technical definition based on nonlinear dynamical systems theory, but intuitively means that the system is deterministic but "random," in
a rather similar manner to deterministic, pseudo random number generators
used on conventional computers. Examples of chaotic systems in nature include
turbulence in fluids (D. Ruelle, 1971; H. Swinney, 1978), chemical reactions (K.
Tomita, 1979), lasers (H. Haken, 1975), plasma physics (D. Russel, 1980) to
name but a few. Typically, chaotic systems also display the full range of nonlinear behavior (fixed points, limit cycles etc.) when parameters are varied, and
therefore provide a good testbed in which to investigate techniques of nonlinear
signal processing. Clearly, if one can uncover the underlying, deterministic algorithm from a chaotic time series, then one may be able to predict the future
time series quite accurately.
In this paper we review and extend our work (Lapedes and Farber, 1987) on predicting the behavior of a particular dynamical system, the Glass-Mackey equation. We feel that the method will be fairly general, and use the Glass-Mackey equation solely for illustrative purposes. The Glass-Mackey equation
has a strange attractor with fractal dimension controlled by a constant parameter appearing in the differential equation. We present results on a neural network's ability to predict this system at two values of this parameter, one value
corresponding to the onset of chaos, and the other value deeply in the chaotic
regime. We also present the results of more conventional predictive methods and
show that a neural net is able to achieve significantly better numerical accuracy.
This particular system was chosen because of D. Farmer's and J. Sidorowich's
(D. Farmer, J. Sidorowich, 1987) use of it in developing a new, non-neural net
method for predicting chaos. The accuracy of this non-neural net method, and
the neural net method, are roughly equivalent, with various advantages or disadvantages accruing to one method or the other depending on one's point of
view. We are happy to acknowledge many valuable discussions with Farmer and
Sidorowich that has led to further improvements in each method.
We also show that a neural net never needs more than two hidden layers to
solve most problems. This statement arises from a more general argument that
a neural net can approximate functions from R^n → R^m with only two hidden
layers, and that the accuracy of the approximation is controlled by the number
of neurons in each layer. The argument assumes that the global minimum to the
backpropagation minimization problem may be found, or that a local minimum very close in value to the global minimum may be found. This seems to be the case in the examples we considered, and in many examples considered by other researchers, but is never guaranteed. The conclusion of an upper bound of two hidden layers is related to a similar conclusion of R. Lippman (R. Lippman, 1987), who has previously analyzed the number of hidden layers needed to form arbitrary decision regions for symbolic processing problems. Related issues are discussed by J. Denker (J. Denker et al., 1987). It is easy to extend the argument
to draw similar conclusions about an upper bound of two hidden layers for
symbol processing and to place signal processing, and symbol processing in a
common theoretical framework.
2. Backpropagation
Backpropagation is a learning algorithm for neural networks that seeks to
find weights, T_ij, such that given an input pattern from a training set of pairs
of Input/Output patterns, the network will produce the Output of the training
set given the Input. Having learned this mapping between I and 0 for the
training set, one then applies a new, previously unseen Input, and takes the
Output as the "conclusion" drawn by the neural net based on having learned
fundamental relationships between Input and Output from the training set. A
popular configuration for backpropagation is a totally feedforward net (Figure
1) where Input feeds up through "hidden layers" to an Output layer.
Figure 1. A feedforward neural net. Arrows schematically indicate full feedforward connectivity.
Each neuron forms a weighted sum of the inputs from previous layers to
which it is connected, adds a threshold value, and produces a nonlinear function
of this sum as its output value. This output value serves as input to the future
layers to which the neuron is connected, and the process is repeated. Ultimately
a value is produced for the outputs of the neurons in the Output layer. Thus,
each neuron performs:

O_i = g( Σ_j T_ij O_j + θ_i )   (1)

where the sum runs over the outputs O_j of the previous-layer neurons to which neuron i is connected, T_ij are continuous valued, positive or negative weights, θ_i is a constant, and g(x) is a nonlinear function that is often chosen to be of a sigmoidal form.
For example, one may choose
g(z) = (1/2) (1 + tanh z)   (2)
where tanh is the hyperbolic tangent, although the exact formula of the sigmoid
is irrelevant to the results.
If t!") are the target output values for the pth Input pattern then ones trains
the network by minimizing
E
=L
p
L (t~P) - o!P)) 2
(3)
i
where t~p) is the target output values (taken from the training set) and O~pl
is the output of the network when the pth Input pattern of the training set is
presented on the Input layer. i indexes the number of neurons in the Output
layer.
An iterative procedure is used to minimize E. For example, the commonly used steepest descents procedure is implemented by changing T_ij and θ_i by ΔT_ij and Δθ_i, where

ΔT_ij = −ε ∂E/∂T_ij   (4a)

Δθ_i = −ε ∂E/∂θ_i   (4b)
This implies that ΔE < 0 and hence E will decrease to a local minimum. Use of the chain rule and the definition of some intermediate quantities allows the following expressions for ΔT_ij to be obtained (Rumelhart, 1987):

ΔT_ij = ε Σ_p δ_i^(p) O_j^(p)   (5a)

Δθ_i = ε Σ_p δ_i^(p)   (5b)

where

δ_i^(p) = O_i^(p) ( 1 − O_i^(p) ) ( t_i^(p) − O_i^(p) )   (6)

if i labels a neuron in the Output layer; and

δ_i^(p) = O_i^(p) ( 1 − O_i^(p) ) Σ_j T_ji δ_j^(p)   (7)

if i labels a neuron in the hidden layers. Therefore one computes δ_i^(p) for the Output layer first, then uses Eqn. (7) to compute δ_i^(p) for the hidden layers, and finally uses Eqn. (5) to make an adjustment to the weights (a numerical sketch appears below). We remark that
the steepest descents procedure in common use is extremely slow in simulation,
and that a better minimization procedure, such as the classic conjugate gradient
procedure (W. Press, 1986), can offer quite significant speedups. Many applications use bit representations (0,1) for symbols, and attempt to have a neural
net learn fundamental relationships between the symbols. This procedure has
been successfully used in converting text to speech (T. Sejnowski, 1986) and in
determining whether a given fragment of DNA codes for a protein or not (A.
Lapedes, R. Farber, 1987).
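For concreteness, here is a minimal NumPy sketch of Eqns. (1)-(7) (not from the paper; the layer sizes, learning rate, random seed, and the XOR task are arbitrary illustrative choices, and convergence may require more epochs or a different seed):

import numpy as np

rng = np.random.default_rng(0)
g = lambda z: 0.5 * (1.0 + np.tanh(z))        # Eqn. (2)

n_in, n_hid, n_out, eps = 2, 4, 1, 0.5
W1 = rng.normal(0, 1, (n_hid, n_in)); th1 = np.zeros(n_hid)
W2 = rng.normal(0, 1, (n_out, n_hid)); th2 = np.zeros(n_out)

def train_step(I, t):
    global W1, W2, th1, th2
    h = g(W1 @ I + th1)                       # Eqn. (1), hidden layer
    o = g(W2 @ h + th2)                       # Eqn. (1), output layer
    d_out = o * (1 - o) * (t - o)             # Eqn. (6), output-layer delta
    d_hid = h * (1 - h) * (W2.T @ d_out)      # Eqn. (7), hidden-layer delta
    W2 += eps * np.outer(d_out, h); th2 += eps * d_out   # Eqns. (5a), (5b)
    W1 += eps * np.outer(d_hid, I); th1 += eps * d_hid
    return ((t - o) ** 2).sum()               # this pattern's share of E, Eqn. (3)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)     # XOR training set
for _ in range(5000):
    E = sum(train_step(x, t) for x, t in zip(X, T))
print("final error E:", E)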
There is no fundamental reason, however, to use integers as values for Input and Output. If the Inputs and Outputs are instead a collection of floating point numbers, then the network, after training, yields a specific continuous function in n variables (for n inputs) involving g(x) (i.e. hyperbolic tanh's) that provides a type of nonlinear, least mean square interpolant formula for the discrete set of data points in the training set. Use of this formula, O = f(I_1, I_2, ..., I_n), when given a new input not in the training set, is then either interpolation or extrapolation.
Since the Output values, when assumed to be floating point numbers, may have a dynamic range greater than [0,1], one may modify the g(x) on the Output layer to be a linear function, instead of sigmoidal, so as to encompass the larger dynamic range. Dynamic range of the Input values is not so critical; however, we have found that numerical problems may be avoided by scaling the Inputs (and
also the Outputs) to [0,1], training the network, and then rescaling the T_ij, θ_i to encompass the original dynamic range. The point is that scale changes in I and O may, for feedforward networks, always be absorbed in the T_ij, θ_i, and vice versa. We use this procedure (backpropagation, conjugate gradient, linear outputs, and scaling) in the following section to predict points in a chaotic time series.
3. Prediction
Let us consider situations in Nature where a system is described by nonlinear differential equations. This is fairly generic. We choose a particular nonlinear equation that has an infinite dimensional phase space, so that it is similar to other infinite dimensional systems such as partial differential equations. A differential equation with an infinite dimensional phase space (i.e. an infinite number of values are necessary to describe the initial condition) is a delay differential equation. We choose to consider the time series generated by the Glass-Mackey equation:
ẋ(t) = a x(t − τ) / ( 1 + x(t − τ)^10 ) − b x(t)   (8)
This is a nonlinear delay differential equation with an initial condition specified by an initial function defined over a strip of width τ (hence the infinite dimensional phase space, i.e. initial functions, not initial constants, are required). Choosing this function to be a constant function, and a = 0.2, b = 0.1, and τ = 17, yields a time series, x(t) (obtained by integrating Eqn. (8)), that is chaotic with a fractal attractor of dimension 2.1. Increasing τ to 30 yields more complicated evolution and a fractal dimension of 3.5. The time series for 500 time steps for τ = 30 (time in units of τ) is plotted in Figure 2. The nonlinear evolution of the
system collapses the infinite dimensional phase space down to a low (approximately 2 or 3 dimensional) fractal, attracting set. Similar chaotic systems are
not uncommon in Nature.
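For readers who wish to reproduce such a series, a hedged sketch (first-order Euler integration with the delayed term read from a history buffer; the step size and constant initial value are illustrative choices, not taken from the paper):

import numpy as np

a, b, tau, dt, n_steps = 0.2, 0.1, 17.0, 0.1, 50_000
lag = int(tau / dt)                           # number of samples spanning the delay

x = np.empty(lag + n_steps)
x[:lag] = 1.2                                 # constant initial function on [-tau, 0]
for i in range(lag, lag + n_steps):
    x_tau = x[i - lag]                        # x(t - tau)
    x[i] = x[i - 1] + dt * (a * x_tau / (1.0 + x_tau ** 10) - b * x[i - 1])

A smaller dt (or a higher-order integrator) gives a more faithful trajectory; for the qualitative behavior discussed here, this simple scheme suffices.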
Figure 2. Example time series at τ = 30.
The goal is to take a set of values of xO at discrete times in some time
window containing times less than t, and use the values to accurately predict
x(t + P), where P is some prediction time step into the future. One may fix
P, collect statistics on accuracy for many prediction times t (by sliding the
window along the time series), and then increase P and again collect statistics
on accuracy. Thus one may observe how an average index of accuracy changes as
P is increased. In terms of Figure 2 we will select various prediction time steps,
P, that correspond to attempting to predict within a "bump," to predicting
a couple of "bumps" ahead. The fundamental nature of chaos dictates that
prediction accuracy will decrease as P is increased. This is due to inescapable
inaccuracies of finite precision in specifying the x( t) at discrete times in the past
that are used for predicting the future. Thus, all predictive methods will degrade
as P is increased - the question is "How rapidly does the error increase with
P?" We will demonstrate that the neural net method can be orders of magnitude
more accurate than conventional methods at large prediction time steps, P.
Our goal is to use backpropagation, and a neural net, to construct a function
O(t + P) = f( I_1(t), I_2(t − Δ), ..., I_m(t − mΔ) )   (9)

where O(t + P) is the output of a single neuron in the Output layer, and I_1 ... I_m are input neurons that take on values x(t), x(t − Δ), ..., x(t − mΔ), where Δ is a time delay. O(t + P) takes on the value x(t + P). We chose the network configuration of Figure 1.

We construct a training set by selecting a set of input values:

I_1 = x(t_p), I_2 = x(t_p − Δ), ..., I_m = x(t_p − mΔ)   (10)
with associated output values O = x(t_p + P), for a collection of discrete times that are labelled by t_p. Typically we used 500 I/O pairs in the training set, so that p ranged from 1 to 500. Thus we have a collection of 500 sets of {I_1^(p), I_2^(p), ..., I_m^(p); O^(p)} to use in training the neural net. This procedure of using delayed sampled values of x(t) can be implemented by using tapped delay lines, just as is normally done in linear signal processing applications (B. Widrow, 1985). Our prediction procedure is a straightforward nonlinear extension of the linear Widrow-Hoff algorithm. After training is completed, prediction is performed on a new set of times, t_p, not in the training set, i.e. for p > 500.
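A sketch of this training-set construction (assuming NumPy; the helper name and the exact tap convention x(t), x(t − Δ), ..., x(t − (m−1)Δ) are illustrative assumptions):

import numpy as np

def delay_embed(x, m=4, delta=6, P=6, n_pairs=500):
    t0 = (m - 1) * delta                      # earliest t with a full tap window
    ts = np.arange(t0, t0 + n_pairs)
    I = np.stack([x[ts - k * delta] for k in range(m)], axis=1)   # Eqn. (10) inputs
    O = x[ts + P]                             # targets x(t_p + P)
    return I, O

The array x must contain at least t0 + n_pairs + P samples; each row of I, paired with the corresponding entry of O, forms one I/O pair of the training set.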
We have not yet specified what m or Δ should be, nor given any indication why a formula like Eqn. (9) should work at all. An important theorem of Takens (Takens, 1981) states, for flows evolving to compact attracting manifolds of dimension d_A, that a functional relation like Eqn. (9) does exist, and that m lies in the range d_A < m + 1 < 2 d_A + 1. We therefore choose m = 4 for τ = 30. Takens provides no information on Δ, and we chose Δ = 6 for both cases. We found that a few different choices of m and Δ can affect accuracy by a factor of 2, a somewhat significant but not overwhelming sensitivity, in view of the fact that neural nets tend to be orders of magnitude more accurate than other methods. Takens' theorem gives no information on the form of f() in Eqn. (9). It therefore
is necessary to show that neural nets provide a robust approximating procedure
for continuous f(), which we do in the following section. It is interesting to note that attempts to predict future values of a time series using past values of x(t) from a tapped delay line is a common procedure in signal processing, and yet there is little, if any, reference to results of nonlinear dynamical systems theory showing why any such attempt is reasonable.

After training the neural net as described above, we used it to predict 500 new values of x(t) in the future and computed the average accuracy for these
points. The accuracy is defined to be the average root mean square error, divided
by a constant scale factor, which we took to be the standard deviation of the
data. It is necessary to remove the scale dependence of the data and dividing by
the standard deviation of the data provides a scale to use. Thus the resulting
"index of accuracy" is insensitive to the dynamic range of x( t).
As just described, if one wanted to use a neural net to continuously predict
x(t) values at, say, 6 time steps past the last observed value (i.e. wanted to
construct a net predicting x( t + 6)) then one would train one network, at P
= 6, to do this. If one wanted to always predict 12 time steps past the last
observed x( t) then a separate, P = 12, net would have to be trained. We, in
fact, trained separate networks for P ranging between 6 and 100 in steps of 6.
The index of accuracy for these networks (as obtained by computing the index
of accuracy in the prediction phase) is plotted as curve D in Figure 3. There
is however an alternate way to predict. If one wished to predict, say, x(t + 12)
using a P = 6 net, then one can iterate the P = 6 net. That is, one uses the
P
6 net to predict the x(t +6) values, and then feeds x(t +6) back into the
input line to predict x(t + 12) using the predicted x(t + 6) value instead of
the observed x(t + 6) value. in fact, one can't use the observed x(t +6) value,
because it hasn't been observed yet - the rule of the game is to use only data
occurring at time t and before, to predict x( t + 12). This procedure corresponds
to iterating the map given by Eqn. (9) to perform prediction at multiples of P.
Of course, the delays, ~, must be chosen commensurate with P.
This iterative method of prediction has potential dangers. Because (in our
example of iterating the P = 6 map) the predicted x(t + 6) is always made
with some error, then this error is compounded in iteration, because predicted,
and not observed values, are used on the input lines. However, one may predict more accurately for smaller P, so it may be the case that choosing a very
accurate small P prediction, and iterating, can ultimately achieve higher accuracy at the larger P's of interest. This tUrns out to be true, and the iterated
net method is plotted as curve E in Figure 3. It is the best procedure to use.
Curves A,B,C are alternative methods (iterated polynomial, Widrow-Hoff, and
non-iterated polynomial respectively. More information on these conventional
methods is in (Lapedes and Farber, 1987) ).
=
449
C B
A
1
D
E
1
I,
/'
,:
~
!I:
.8
/
I
/ " \f:J
:
/
I
I
I
I
I
I
I
I
I
I
I
I
I .. ,,:
.6
I
I
. .',
~
~
~
-=
,
.4
I
,
,
I
I
.2
o
o
P1-~~ictlon ~~.
P
Figure 3.
(T.U3~
30)
400
4. Why It Works
Consider writing out explicitly Eqn. (9) for a two hidden layer network where the output is assumed to be a linear neuron. We consider Input connects to Hidden Layer 1, Hidden Layer 1 to Hidden Layer 2, and Hidden Layer 2 to Output. Therefore:

O_t = Σ_k T_tk g( Σ_j T_kj g( Σ_i T_ji I_i + θ_j ) + θ_k ) + θ_t   (11)

Recall that the output neuron is a linear computing element, so that only two g()'s occur in formula (11), due to the two nonlinear hidden layers. For ease in later analysis, let us rewrite this formula as

O_t = Σ_{k ∈ H2} T_tk g( SUM_k + θ_k ) + θ_t   (12a)

where

SUM_k = Σ_{j ∈ H1} T_kj g( Σ_i T_ji I_i + θ_j )   (12b)
The T's and θ's are specific numbers specified by the training algorithm, so that after training is finished one has a relatively complicated formula (12a, 12b) that expresses the Output value as a specific, known, function of the Input values: O_t = f(I_1, I_2, ..., I_m). A functional relation of this form, when there is only one output, may be viewed as a surface in m + 1 dimensional space, in exactly the same manner one interprets the formula z = f(x, y) as a two dimensional surface in three dimensional space. The general structure of f() as determined by Eqn. (12a, 12b) is in fact quite simple. From Eqn. (12b) we see that one first forms a sum of g() functions (where g() is a sigmoidal function) and then from Eqn. (12a) one forms yet another sum involving g() functions. It may at first be thought that this special, simple form of f() restricts the type of surface that may be represented by O_t = f(I_i). This initial thought is wrong - the special form of
Eqn. (12) is actually a general representation for quite arbitrary surfaces.
To prove that Eqn. (12) is a reasonable representation for surfaces we
first point out that surfaces may be approximated by adding up a series of
"bumps" that are appropriately placed. An example of this occurs in familiar
Fourier analysis, where wave trains of suitable frequency and amplitude are
added together to approximate curves (or surfaces). Each half period of each
wave of fixed wavelength is a "bump," and one adds all the bumps together to
form the approximant. Let us now see how Eqn. (12) may be interpreted as adding together bumps of specified heights and positions. First consider SUM_k, which is a sum of g() functions. In Figure (4) we plot an example of such a g() function for the case of two inputs.
Figure 4. A sigmoidal surface.
The orientation of this sigmoidal surface is determined by the T_ji, the position by θ_j, and the height by T_kj. Now consider another g() function that occurs in SUM_k. The θ_j of the second g() function is chosen to displace it from the first, the T_ji is chosen so that it has the same orientation as the first, and T_kj is chosen to have opposite sign to the first. These two g() functions occur in SUM_k, and so to determine their contribution to SUM_k we sum them together and plot the result in Figure 5. The result is a ridged surface.
Figure 5. A ridge.
Since our goal is to obtain localized bumps, we select another pair of g() functions in SUM_k, add them together to get a ridged surface perpendicular to the first ridged surface, and then add the two perpendicular ridged surfaces together to see the contribution to SUM_k. The result is plotted in Figure (6).

Figure 6. A pseudo-bump.
We see that this almost worked, in so much as one obtains a local maximum by this procedure. However, there are also saddle-like configurations at the corners which corrupt the bump we were trying to obtain. Note that one way to fix this is to take g(SUM_k + θ_k), which will, if θ_k is chosen appropriately, depress the local minima and saddles to zero while simultaneously sending the central maximum towards 1. The result is plotted in Figure (7) and is the sought-after bump.
Figure 7. A bump.
Furthermore, note that the necessary g() function is supplied by Eqn. (12). Therefore Eqn. (12) is a procedure to obtain localized bumps of arbitrary height and position. For two inputs, the kth bump is obtained by using four g() functions from SUM_k (two g() functions for each ridged surface and two ridged surfaces per bump) and then taking g() of the result in Eqn. (12a). The height of the kth bump is determined by T_tk in Eqn. (12a), and the k bumps are added together by that equation as well. The general network architecture which corresponds to the above procedure of adding two g() functions together to form a ridge, two perpendicular ridges together to form a pseudo-bump, and the final g() to form the final bump is represented in Figure (8); a numerical sketch follows the figure. To obtain any number of bumps one adds more neurons to the hidden layers by repeatedly using the connectivity of Figure (8) as a template (i.e. four neurons per bump in Hidden Layer 1, and one neuron per bump in Hidden Layer 2).
Figure 8. Connectivity needed to obtain one bump. Add four more neurons to Hidden Layer 1, and one more neuron to Hidden Layer 2, for each additional bump.
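A numerical sketch of this construction (assuming NumPy; the slopes, offsets, and threshold are arbitrary illustrative values):

import numpy as np

g = lambda z: 0.5 * (1.0 + np.tanh(z))

xx, yy = np.meshgrid(np.linspace(-5, 5, 101), np.linspace(-5, 5, 101))

s = 3.0                                       # orientation/steepness weight
ridge_x = g(s * (xx + 1)) - g(s * (xx - 1))   # two opposed sigmoids: a ridge in x
ridge_y = g(s * (yy + 1)) - g(s * (yy - 1))   # perpendicular ridge in y
pseudo_bump = ridge_x + ridge_y               # Figure 6: central maximum plus saddles

theta = -1.5                                  # depresses the saddles toward zero
bump = g(4.0 * (pseudo_bump + theta))         # Figure 7: a clean localized bump
print(bump.max(), bump[0, 0])                 # ~1 at the center, ~0 in the corners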
One never needs more than two layers, or any other type of connectivity than that already schematically specified by Figure (8). The accuracy of the approximation depends on the number of bumps, which in turn is specified by the number of neurons per layer. This result is easily generalized to higher dimensions (more than two Inputs), where one needs 2m hiddens in the first hidden layer, and one hidden neuron in the second layer, for each bump.
The argument given above also extends to the situation where one is processing symbolic information with a neural net. In this situation, the Input information is coded into bits (say 0s and 1s), and similarly for the Output. Or, the Inputs may still be real valued numbers, in which case the binary output is attempting to group the real valued Inputs into separate classes. To make the Output values tend toward 0 and 1, one takes a third and final g() on the output layer, i.e. each output neuron is represented by g(O_t), where O_t is given in Eqn. (11). Recall that up until now we have used linear neurons on the
output layer. In typical backpropagation examples, one never actually achieves
a hard 0 or 1 on the output layers but achieves instead some value between 0.0
and 1.0. Then typically any value over 0.5 is called 1, and values under 0.5 are
called O. This "postprocessing" step is not really outside the framework of the
network formalism, because it may be performed by merely increasing the slope
of the sigmoidal function on the Output layer. Therefore the only effect of the
third and final gO function used on the Output layer in symbolic information
processing is to pass a hyperplane through the surface we have just been discussing. This plane cuts the surface, forming "decision regions," in which high
values are called 1 and low values are called 0. Thus we see that the heart of the
problem is to be able to form surfaces in a general manner, which is then cut
by a hyperplane into general decision regions. We are therefore able to conclude
that the network architecture consisting of just two hidden layers is sufficient for
learning any symbol processing training set. For Boolean symbol mappings one
need not use the second hidden layer to remove the saddles on the bump (c.f.
Fig. 6). The saddles are lower than the central maximum so one may choose
a threshold on the output layer to cut the bump at a point over the saddles to
yield the correct decision region. Whether this representation is a reasonable
one for subsequently achieving good prediction on a prediction set, as opposed
to "memorizing" a training set, is an issue that we address below.
We also note that use of Sigma-Pi units (Rummelhart, 1986) or high order
correlation nets (Y.-C. Lee, 1987) is an attempt to construct a surface by a
general polynomial expansion, which is then cut by a hyperplane into decision
regions, as in the above. Therefore the essential element of all these neural net
learning algorithms are identical (Le. surface construction), only the particular
method of parameterizing the surface varies from one algorithm to another. This
geometrical viewpoint, which provides a unifying framework for many neural net
algorithms, may provide a useful framework in which to attempt construction
of new algorithms.
Adding together bumps to approximate surfaces is a reasonable procedure
to use when dealing with real valued inputs. It ties in to general approximation
theory (c.f. Fourier series, or better yet, B splines), and can be quite successful
as we have seen. Clearly some economy is gained by giving the neural net bumps
to start with, instead of having the neural net form its own bumps from sigmoids.
One way to do this would be to use multidimensional Gaussian functions with
adjustable parameters.
The situation is somewhat different when processing symbolic (binary valued) data. When input symbols are encoded into N-bit strings, then one has
well defined input values in an N dimensional input space. As shown above, one
can learn the training set of input patterns by appropriately forming and placing
bump surfaces over this space. This is an effective method for memorizing the
training set, but a very poor method for obtaining correct predictions on new
input data. The point is that, in contrast to real valued inputs that come from,
say, a chaotic time series, the input points in symbolic processing problems are
widely separated and the bumps do not add together to form smooth surfaces.
Furthermore, each input bit string is a corner of a 2^N vertex hypercube, and
there is no sense in which one corner of a hypercube is surrounded by the other
corners. Thus the commonly used input representation for symbolic processing
problems requires that the neural net extrapolate the surface to make a new
prediction for a new input pattern (i.e. new corner of the hypercube) and not
interpolate, as is commonly the case for real valued inputs. Extrapolation is
a far more dangerous procedure than interpolation, and in view of the separated
bumps of the training set one might expect on the basis of this argument that
neural nets would fail dismally at symbol processing. This is not the case.
The solution to this apparent conundrum, of course, is that although it is
sufficient for a neural net to learn a symbol processing training set by forming
bumps it is not necessary for it to operate in this manner. The simplest example of this occurs in the XOR problem. One can implement the input/output
mapping for this problem by duplicating the hidden layer architecture of Figure
(8) appropiately for two bumps ( i.e. 8 hid dens in layer 1, 2 hid dens in layer 2).
As discussed above, for Boolean mappings, one can even eliminate the second
hidden layer. However the architecture of Figure (9) will also suffice.
OUTPUT
Figure 9. Connectivity for XOR
HIDDEN
INPUT
455
Plotting the output of this network, Figure (9), as a function of the two inputs yields a ridge oriented to run between (0,1) and (1,0) (Figure 10). Thus a neural net may learn a symbolic training set without using bumps, and a high dimensional version of this process takes place in more complex symbol processing tasks. Ridge/ravine representations of the training data are considerably more efficient than bumps (fewer hidden neurons and weights), and the extended nature of the surface allows reasonable predictions, i.e. extrapolations.
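A hedged sketch with hand-set (not trained) weights showing this ridge solution:

import numpy as np

g = lambda z: 0.5 * (1.0 + np.tanh(z))

def xor_net(x1, x2):
    h1 = g(10 * (x1 + x2) - 5)                # turns on when x1 + x2 > 0.5
    h2 = g(10 * (x1 + x2) - 15)               # turns on when x1 + x2 > 1.5
    return g(10 * (h1 - h2) - 5)              # high only between the two steps

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, round(float(xor_net(x1, x2))))

The output is high only on the diagonal band between (0,1) and (1,0), which is exactly the ridge of Figure 10.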
Figure 10. XOR surface.
5. Conclusion.
Neural nets, in contrast to popular misconception, are capable of quite
accurate number crunching, with an accuracy for the prediction problem we
considered that exceeds conventional methods by orders of magnitude. Neural
nets work by constructing surfaces in a high dimensional space, and their operation when performing signal processing tasks on real valued inputs, is closely
related to standard methods of functional approximation. One does not need
more than two hidden layers for processing real valued input data, and the accuracy of the approximation is controlled by the number of neurons per layer,
and not the number of layers. We emphasize that although two layers of hidden
neurons are sufficient they may not be efficient. Multilayer architectures may
provide very efficient networks (in the sense of number of neurons and number
of weights) that can perform accurately and with minimal cost.
Effective prediction for symbolic input data is achieved by a slightly different method than that used for real value inputs. Instead of forming localized
bumps (which would accurately represent the training data but would not predict well on new inputs) the network can use ridge/ravine like surfaces (and
generalizations thereof) to efficiently represent the scattered input data. While
neural nets generally perform prediction by interpolation for real valued data,
they must perform extrapolation for symbolic data if the usual bit representations are used. An outstanding problem is why tanh representations seem to extrapolate well in symbol processing problems. How do other functional bases do? How does the representation for symbolic inputs affect the ability to extrapolate? This geometrical viewpoint provides a unifying framework for examining
many neural net algorithms, for suggesting questions about neural net operation,
and for relating current neural net approaches to conventional methods.
Acknowledgment.
We thank Y. C. Lee, J. D. Farmer, and J. Sidorowich for a number of
valuable discussions.
References
C. Barnes, C. Burks, R. Farber, A. Lapedes, K. Sirotkin, "Pattern Recognition
by Neural Nets in Genetic Databases", manuscript in preparation
J. Denker et al., "Automatic Learning, Rule Extraction, and Generalization",
ATT, Bell Laboratories preprint, 1987
D. Farmer, J. Sidorowich, Phys. Rev. Lett., 59(8), p. 845, 1987
H. Haken, Phys. Lett. A53, p77 (1975)
A. Lapedes, R. Farber, "Nonlinear Signal Processing Using Neural Networks: Prediction and System Modelling", LA-UR-87-2662, 1987
Y.C. Lee, Physica 22D,(1986)
R. Lippman, IEEE ASSP Magazine, p. 4, 1987
D. Ruelle, F. Takens, Comm. Math. Phys. 20, p167 (1971)
D. Rummelhart, J. McClelland in "Parallel Distributed Processing" Vol. 1,
M.I.T. Press Cambridge, MA (1986)
D. Russel et al., Phys. Rev. Lett. 45, p1175 (1980)
T. Sejnowski et al., "Net Talk: A Parallel Network that Learns to Read Aloud,"
Johns Hopkins Univ. preprint (1986)
H. Swinney et al., Physics Today 31 (8), p41 (1978)
F. Takens, "Detecting Strange Attractor in Turbulence," Lecture Notes in Mathematics, D. Rand, L. Young (editors), Springer Berlin, p366 (1981)
K. Tomita et al., J. Stat. Phys. 21, p65 (1979)
continuous:3 iterative:2 why:4 lip:1 nature:5 learn:4 robust:1 inherently:1 obtaining:1 expansion:1 complex:1 constructing:1 arrow:1 noise:1 repeated:1 fig:1 tl:1 scattered:1 fashion:1 slow:1 precision:1 position:3 lie:1 trainin:1 third:2 learns:1 young:1 formula:8 down:1 theorem:2 specific:3 misconception:1 showing:1 symbol:13 sit:1 essential:1 adding:4 gained:1 magnitude:3 p41:1 sigmoids:1 occurring:1 suited:1 led:1 wavelength:1 explore:1 saddle:5 ijj:1 absorbed:1 forming:4 adjustment:1 applies:1 springer:1 corresponds:2 russel:2 ma:2 goal:3 viewed:1 towards:1 labelled:1 exceptionally:1 change:2 hard:1 infinite:6 determined:3 typical:1 hyperplane:3 called:4 pas:1 plasma:1 la:1 select:2 arises:1 outstanding:1 preparation:1 extrapolate:2 |
5,413 | 590 | Computing with Almost Optimal Size Neural
Networks
Kai-Yeung Siu
Dept. of Electrical & Compo Engineering
University of California, Irvine
Irvine, CA 92717
V wani Roychowdhury
School of Electrical Engineering
Purdue University
West Lafayette, IN 47907
Thomas Kailath
Information Systems Laboratory
Stanford University
Stanford, CA 94305
Abstract
Artificial neural networks are comprised of an interconnected collection
of certain nonlinear devices; examples of commonly used devices include
linear threshold elements, sigmoidal elements and radial-basis elements.
We employ results from harmonic analysis and the theory of rational approximation to obtain almost tight lower bounds on the size (i.e. number
of elements) of neural networks. The class of neural networks to which
our techniques can be applied is quite general; it includes any feedforward
network in which each element can be piecewise approximated by a low
degree rational function. For example, we prove that any depth-( d + 1)
network of sigmoidal units or linear threshold elements computing the parity function of n variables must have Ω(dn^{1/d−ε}) size, for any fixed ε > 0.
In addition, we prove that this lower bound is almost tight by showing
that the parity function can be computed with O(dn^{1/d}) sigmoidal units
or linear threshold elements in a depth-(d + 1) network. These almost
tight bounds are the first known complexity results on the size of neural
networks with depth more than two. Our lower bound techniques yield
a unified approach to the complexity analysis of various models of neural
networks with feedforward structures. Moreover, our results indicate that
in the context of computing highly oscillating symmetric Boolean functions, networks of continuous-output units such as sigmoidal elements do
not offer significant reduction in size compared with networks of linear
threshold elements of binary outputs.
1 Introduction
Recently, artificial neural networks have found wide applications in many areas
that require solutions to nonlinear problems. One reason for such success is the
existence of good "learning" or "training" algorithms such as Backpropagation [13]
that provide solutions to many problems for which traditional attacks have failed.
At a more fundamental level, the computational power of neural networks comes
from the fact that each basic processing element computes a nonlinear function
of its inputs. Networks of these nonlinear elements can yield solutions to highly
complex and nonlinear problems. On the other hand, because of the nonlinear
features, it is very difficult to study the fundamental limitations and capabilities of
neural networks. Undoubtedly, any significant progress in the applications of neural
networks must require a deeper understanding of their computational properties.
We employ classical tools such as harmonic analysis and rational approximation
to derive new results on the computational complexity of neural networks. The
class of neural networks to which our techniques can be applied is quite large; it
includes feedforward networks of sigmoidal elements, linear threshold elements, and
more generally, elements that can be piecewise approximated by low degree rational
functions.
1.1 Background, Related Work and Definitions
A widely accepted model of neural networks is the feedforward multilayer network
in which the basic processing element is a sigmoidal element. A sigmoidal element
computes a function f(X) of its input variables X = (x_1, . . . , x_n) such that

    f(X) = σ(F(X)) = 2/(1 + e^{−F(X)}) − 1 = (1 − e^{−F(X)})/(1 + e^{−F(X)})

where

    F(X) = Σ_{i=1}^{n} w_i · x_i + w_0.

The real-valued coefficients w_i are commonly referred to as the weights of the sigmoidal function. The case that is of most interest to us is when the inputs are
binary, i.e., X ∈ {1, −1}^n. We shall refer to this model as a sigmoidal network.
Another common feed forward multilayer model is one in which each basic processing
unit computes a binary linear threshold function sgn(F(X)), where F(X) is the
same as above, and

    sgn(F(X)) = +1 if F(X) ≥ 0, and −1 if F(X) < 0.
This model is often called the threshold circuit in the literature and recently has
been studied intensively in the field of computer science.
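For concreteness, the following minimal sketch (ours, not from the paper; Python, with arbitrary weights) evaluates one sigmoidal element and one threshold element on a binary input.

```python
import math

def sigmoidal_element(x, w, w0):
    # F(X) = sum_i w_i * x_i + w_0; output lies in (-1, 1)
    F = sum(wi * xi for wi, xi in zip(w, x)) + w0
    return 2.0 / (1.0 + math.exp(-F)) - 1.0

def threshold_element(x, w, w0):
    # sgn(F(X)): +1 if F(X) >= 0, -1 otherwise
    F = sum(wi * xi for wi, xi in zip(w, x)) + w0
    return 1 if F >= 0 else -1

x = [1, -1, 1]                      # binary input in {1, -1}^3
w, w0 = [0.5, -1.0, 2.0], 0.1       # arbitrary illustrative weights
print(sigmoidal_element(x, w, w0))  # a continuous value in (-1, 1)
print(threshold_element(x, w, w0))  # +1 or -1
```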
The size of a network/circuit is the number of elements. The depth of a network/circuit is the longest path from any input gate to the output gates. We can
arrange the gates in layers so that all gates in the same layer compute concurrently.
(A single element can be considered as a one-layer network.) Each layer costs a
unit delay in the computation. The depth of the network (which is the number of
layers) can therefore be interpreted as the time for (parallel) computation.
It has been established that threshold circuit is a very powerful model of computation. Many functions of common interest such as multiplication, division and sorting can be computed in polynomial-size threshold circuits of small constant depth
[19, 18, 21]. While many upper bound results for threshold circuits are known in
the literature, lower bound results have only been established for restricted cases of
threshold circuits. Most of the existing lower bound techniques [10, 17, 16] apply
only to depth-2 threshold circuits. In [16], novel techniques which utilized analytical tools from the theory of rational approximation were developed to obtain lower
bounds on the size of depth-2 threshold circuits that compute the parity function.
In [20], we generalized the methods of rational approximation and our earlier techniques based on harmonic analysis to obtain the first known almost tight lower
bounds on the size of threshold circuits with depth more than two. In this paper,
the techniques are further generalized to yield almost tight lower bounds on the
size of a more general class of neural networks in which each element computes a
continuous function.
The presentation of this paper will be divided into two parts. In the first part, we
shall focus on results concerning threshold circuits. In the second part, the lower
bound results presented in the first part are generalized and shown to be valid even
when the elements of the networks can assume continuous output values. The class
of networks for which such techniques can be applied include networks of sigmoidal
elements and radial basis elements. Due to space limitations, we shall only state
some of the important results; further results and detailed proofs will appear in an
extended paper.
Before we present our main results, we shall give formal definitions of the neural
network models and introduce some of the Boolean functions, which will be used
to explore the computational power of the various networks. To present our results
in a coherent fashion, we define throughout this paper a Boolean function as f :
{I, _l}n -+ {I, -I}, instead of using the usual {O, I} notation.
Definition 1
A threshold circuit is a Boolean circuit in which every gate computes a linear threshold function with an additional property: the weights are integers all bounded by a polynomial in n.
0
Remark 1
The assumption that the weights in the threshold circuits are integers
bounded by a polynomial is common in the literature. In fact, the best known lower
bound result on depth-2 threshold circuit [10] does not apply to the case where
exponentially large weights are allowed. On the other hand, such assumption does
not pose any restriction as far as constant-depth and polynomial-size is concerned.
In other words, the class of constant-depth polynomial-size threshold circuits (TeO)
remains the same when the weights are allowed to be arbitrary. This result was
implicit in [4] and was improved in [18] by showing that any depth-d threshold circuit
21
22
Siu, Roychowdhury, and Kailath
with arbitrary weights can be simulated by a depth-(2d + 1) threshold circuit of
polynomially bounded weights at the expense of a polynomial increase in size. More
recently, it has been shown that any polynomial-size depth-d threshold circuit with
arbitrary weights can be simulated by a polynomial-size depth-(2d + 1) threshold
circuit.
0
In addition to Boolean circuits, we shall also be interested in the computation of
Boolean functions by networks of continuous-valued elements. To formalize this
notion, we adopt the following definitions [12]:
Definition 2
Let 'Y : R - R. A 'Y element with weights WI, ... , Wm E Rand
threshold t is defined to be an element that computes the function 'Y(E~1 WiX; -t)
where (Xl. ... , xm) is the input. A 'Y-network is a feedforward network of'Y elements
with an additional property: the weights Wi are all bounded by a polynomial in n.
o
For example, when 'Y is the sigmoidal function O'(x), then we have a sigmoidal
network, a common model of neural network. In fact, a threshold circuit can also
be viewed as a special case of'Y network where 'Y is the sgn function.
Definition 3
A 'Y-network C is said to compute a Boolean function
f :
{I, -I} with separation (. > 0 if there is some tc E R such that
for any input X = (Xl, ... , Xm) to the network C, the output element of C outputs
{I,-l}n -
a value C(X) with the following property: If f(X) = 1, then C(X) ~ tc
f(X) = -1, then C(X) ~ tc - ?.
+ ?.
If
0
Remark 2
As pointed out in [12], computing with 'Y networks without separation
at the output element is less interesting because an infinitesimal change in the
output of any 'Y element may change the output bit. In this paper, we shall be
mainly interested in computations on 'Y networks Cn with separation at least O(n-k)
for some fixed k > o. This together with the assumption of polynomially bounded
weights makes the complexity class of constant-depth polynomial-size 'Y networks
quite robust and more interesting to study from a theoretical point of view (see
[12]).
0
Definition 4
The PARITY function of X = (x}, X2, .. . , xn) E {I, _l}n is defined to be -1 if the number of -1 in the variables x I, ... , Xn is odd and + 1 otherwise.
Note that this function can be represented as the product n~=l Xi.
0
Definition 5
following:
The Complete Quadratic (CQ) function [3] is defined to be the
=
CQ(X) (Xl" X2) EEl (Xl" X3) EEl ?.? EEl (Xn-l " xn)
i.e. CQ(X) is the sum modulo 2 of all AND's between the (~) pairs of distinct
variables. Note that it is also a symmetric function.
0
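A brute-force sketch of both functions (ours; Python), using the {1, −1} convention above with −1 playing the role of "true":

```python
def parity(x):
    # product of the inputs: -1 iff the number of -1's is odd
    prod = 1
    for v in x:
        prod *= v
    return prod

def cq(x):
    # map to 0/1 bits (-1 -> 1, 1 -> 0), XOR the ANDs of all distinct pairs
    bits = [(1 - v) // 2 for v in x]
    s = 0
    for i in range(len(bits)):
        for j in range(i + 1, len(bits)):
            s ^= bits[i] & bits[j]
    return -1 if s == 1 else 1

print(parity([1, -1, -1, 1]))  # two -1's (even) -> +1
print(cq([-1, -1, 1]))         # pairs give 1 ^ 0 ^ 0 = 1 -> -1
```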
2
Results for Threshold Circuits
Fo. the lower bound results on threshold circuits, a central idea of our proof is the
use of a result from the theory of rational approximation which states the following
Computing with Almost Optimal Size Neural Networks
[9]: the function sgn(x) can be approximated with an error of O(e-ck/log(l/??) by
a rational function of degree k for 0 < f < Ixl < 1. (In [16], they apply an
equivalent result [15] that gives an approximation to the function Ixl instead of
sgn(x).) This result allows us to approximate several layers of threshold gates by a
rational function oflow (i.e. logarithmic) degree when the size of the circuit is small.
Then by upper bounding the degree of the rational function that approximates the
PARITY function, we give a lower bound on the size of the circuit. We also give
similar lower bound on the Complete Quadratic (CQ) function using the same
degree argument. By generalizing the 'telescoping' techniques in [14], we show an
almost matching upper bound on the size of the circuits computing the PARITY
and the CQ functions. We also examine circuits in which additional gates other
than the threshold gates are allowed and generalize the lower bound results in this
model. For this purpose, we introduce tools from harmonic analysis of Boolean
functions [11, 3, 18, 17]. We define the class of functions called SP such that
every function in SP can be closely approximated by a sparse polynomial for all
inputs. For example, it can be shown that [18] the class SP contains functions
AND, OR, COMPARISON and ADDITION, and more generally, functions that
have polynomially bounded spectral norms.
The main results on threshold circuits can be summarized by the following theorems. First we present an explicit construction for implementing PARITY. This
construction applies to any 'periodic' symmetric function, such as the CQ function.
Theorem 1
For every d < logn, there exists a depth-(d + 1) threshold circuit
1
d
0
with O(dn / ) gates that computes the PARITY function.
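To get a feel for the depth-size tradeoff in Theorem 1, a back-of-the-envelope sketch (ours; the constant hidden by the O(·) is ignored):

```python
n = 2 ** 20  # number of input variables
for d in (1, 2, 4, 5, 10):  # any d < log n = 20
    size = d * n ** (1.0 / d)  # gate count of the construction, up to a constant
    print(f"depth {d + 1}: ~{size:.0f} gates")
# depth 2 needs ~n gates, while larger depths need dramatically fewer
```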
We next show that any depth-(d + 1) threshold circuit computing the PARITY
function or the CQ function must have size O(dnl/d-?) for any fixed f > o. This
result also holds for any function that has strong degree O(n).
Theorem 2
Any depth-(d + 1) threshold circuit computing the PARITY (CQ)
function must have size O(dnl/d / log:! n).
0
We also consider threshold circuits that approximate the PARITY and the CQ
functions when we have random inputs which are uniformly distributed. We derive
almost tight upper and lower bounds on the size of the approximating threshold
circuits.
We next consider threshold circuits with additional gates and prove the following
result.
Theorem 3
Suppose in addition to threshold gates, we have polynomially many
gates E SP in the first layer of a depth-2 threshold circuit that computes the CQ
function. Then the number of threshold gates required in the circuit is O(n/ log2 n).
o
This result can be extended to higher depth circuits when additional gates that
have low degree polynomial approximations are allowed.
Remark 3
Recently Beigel [2], using techniques similar to ours and the fact
23
24
Siu, Roychowdhury, and Kailath
that the PARITY function cannot be computed in polynomial-size constant-depth
circuits of AND, OR gates [7], has shown that any constant-depth threshold circuit
?
0(1)
With (2n ) AND, OR gates but only o(log n) threshold gates cannot compute the
PARITY function of n variables.
0
3
Results for ,-Networks
In the second part of the paper, we consider the computational power of networks
of continuous-output elements. A celebrated result in this area was obtained by
Cybenko [5]. It was shown in [5] that any continuous function over a compact
domain can be closely approximated by sigmoidal networks with two layers. More
recently, Barron [1] has significantly strengthened this result by showing that a
wide class of functions can be approximated with mean squared error of O( n -1 )
by tw<rlayer sigmoidal networks of only n elements. Here we are interested in
networks of continuous-output elements computing Boolean functions instead of
continuous functions. See Section 1.1 for a precise definition of computation of
Boolean functions by a "Y-network.
While quite a few techniques have been developed for deriving lower bound results
on the complexity of threshold circuits, an understanding of the power and the
limitation of networks of continuous elements such as sigmoidal networks, especially
as compared to threshold circuits, have not been explored. For example, we would
like to answer questions such as: how much added computational power does one
gain by using sigmoidal elements or other continuous elements to compute Boolean
functions? Can the size of the network be reduced by using sigmoidal elements
instead of threshold elements?
It was shown in [12] when the depth of the network is restricted to be two, then
there is a Boolean function of n variables that can be computed in a depth-2 sigmoidal network with a fixed number of elements, but requires a depth-2 threshold
circuit with size that increases at least logarithmic in n. In other words, in the
restricted case of depth-2 network, one can reduce the size of the network at least
a logarithmic factor by using continuous elements such as the sigmoidal elements
instead of threshold elements with binary output values. This result has been recently improved in [6], where it is shown that there exists an explicit function that
can be computed using only a constant number of sigmoidal gates, and that any
threshold circuit (irrespective of the depth) computing it must have size !l(log n).
These results motivate the following question: Can we characterize a class of functions for which the threshold circuits computing the functions have sizes at most a
logarithmic factor larger than the sizes of the sigmoidal networks computing them?
Because of the monotonicity of the sigmoidal functions, we do not expect that
there is substantial gain in the computational power over the threshold elements
for computing the class of highly oscillating functions.
It is natural to extend our techniques to sigmoidal networks by approximating
sigmoidal functions with rational functions. We derive a key lemma that yields
a single low degree rational approximation to any function that can be piecewise
approximated by low degree rational functions.
Computing with Almost Optimal Size Neural Networks
=
=
Lemma 1
Let f be a continuous function over A
[a, b]. Let Al
[a, c] and
A2 = [c,b], a < c < b. Denote II 9 II~,= sUP~e~ Ig(x)l. Suppose there are rational
functIOns rl and r2 such that
?
I
II / -
rj
lI~i ~
{
where { > O. Then for each l> 0 and 6 > 0, there is a rational function r such that
deg r ~ 2 deg rl
b- a
II /lII~)
+ 2 deg r2 + Gllog(e + -6-)
log(e +
where w(fj c5)~ is the modulus of continuity of / over A, G1 is a constant.
(1)
0
The above lemma is applied to show that both sigmoidal functions and radial basis
functions can be closely approximated by low degree rational functions. In fact
the above lemma can be generalized to show that if a continuous function can
be piecewise approximated by low degree rational functions over k
10gO(I) n
consecutive intervals, then it can be approximated by a single low degree rational
function over the union of these intervals.
=
These generalized approximation results enable us to show that many of our lower
bound results on threshold circuits can be carried over to sigmoidal networks. Prior
to our work, there was no nontrivial lower bound on the size of sigmoidal networks
with depth more than two. In fact, we can generalize our results to neural networks
whose elements can be piecewise approximated by low degree rational functions.
We show in this paper that for symmetric Boolean functions of large strong degree
(e.g. the parity function), any depth-d network whose elements can be piecewise
approximated by low degree rational functions requires almost the same size as a
depth-d threshold circuit computing the function.
In particular, if it is the class of polynomially bounded functions that are piecewise
continuous and can be piecewise approximated with low degree rational functions,
then we prove the following theorem.
Theorem 4
Let W be any depth-Cd + 1) neural network in which each element
Vj computes a function Ji (Li WiXi) where Ji E it and Li Iwi! ~ nOel) for each
element. If the network W computes the PARITY function of n variables with
separation 6, where 0 < 6 = n(n- k ) for some k > 0, then for any fixed { > 0, W
must have size n(dn 1 / d -().
0
References
[1] A. Barron. Universal Approximation Bounds for Superpositions of a Sigmoidal
Function. IEEE Transactions on In/ormation Theory, to appear.
[2] R. Beigel. Polylog( n) Majority or O(log log n) Symmetric Gates are Equivalent
to One. ACM Symposium on Theory of Computing (STOC), 1992.
[3] J. Bruck. Harmonic Analysis of Polynomial Threshold Functions. SIAM
Journal on Discrete Mathematics, pages 168-177, May 1990.
25
26
Siu, Roychowdhury, and Kailath
[4] A. K. Chandra, L. Stockmeyer, and U. Vishkin. Constant depth reducibility.
Siam J. Comput., 13:423-439, 1984.
[5] G. Cybenko.
Approximations by superpositions of a sigmoidal function.
Math. Control, Signals, Systems, vol. 2, pages 303-314, 1989.
[6] B. Dasgupta and G. Schnitger. Efficient Approximation with Neural Networks:
A Comparison of Gate Functions. In 5th Annual Conference on Neural Information Processing Systems - Natural and Synthetic (NIPS'92), 1992.
[7] M. Furst, J. B. Saxe, and M. Sipser. Parity, Circuits and the Polynomial-Time
Hierarchy. IEEE Symp. Found. Compo Sci., 22:260-270, 1981.
[8] M. Goldmann, J. Hastad, and A. Razborov. Majority Gates vs. General
Weighted Threshold Gates. Seventh Annual Conference on Structure in Complexity Theory, 1992.
[9] A. A. Goncar. On the rapidity of rational approximation of continuous functions with characteristic singularities. Mat. Sbornik, 2(4):561-568, 1967.
[10] A. Hajnal, W. Maass, P. Pudlak, M. Szegedy, and G. Turan. Threshold circuits
of bounded depth. IEEE Symp. Found. Compo Sci., 28:99-110, 1987.
[11] R. J. Lechner. Harmonic analysis of switching functions. In A. Mukhopadhyay,
editor, Recent Development in Switching Theory. Academic Press, 1971.
[12] W. Maass, G. Schnitger, and E. Sontag. On the computational power of
sigmoid versus boolean threshold circuits. IEEE Symp. Found. Compo Sci.,
October 1991.
[13] J. L. McClelland D. E. Rumelhart and the PDP Research Group. Parallel
Distributed Processing: Explorations in the Microstructure of Cognition, vol.
1. MIT Press, 1986.
[14] R. Minnick. Linear-Input Logic. IEEE Trans. on Electronic Computers, EC
10, 1961.
[15] D. J. Newman. Rational Approximation to Ixl. Michigan Math. Journal,
11:11-14, 1964.
[16] R. Paturi and M. Saks. On Threshold Circuits for Parity . IEEE Symp. Found.
Compo Sci., October 1990.
[17] V. P. Roychowdhury, K. Y. Siu, A. Orlitsky, and T. Kailath. A Geometric
Approach to Threshold Circuit Complexity. Workshop on Computational
Learning Theory (Colt'91), pp. 97-111, 1991.
[18] K. Y. Siu and J. Bruck. On the Power of Threshold Circuits with Small
Weights. SIAM J. Discrete Math, pp. 423-435, August 1991.
[19] K. Y. Siu and J . Bruck. Neural Computation of Arithmetic Functions. Proceedings of the IEEE, Special Issue on Neural Networks, pp. 1669-1675, October
1990.
[20] K. Y. Siu, V. P. Roychowdhury, and T. Kailath. Computing with Almost Optimal Size Threshold Circuits. IEEE International Symposium on Information
Theory, Budapest, Hungary, June 1991.
[21] K.-Y. Siu, J. Bruck, T. Kailath, and T. Hofmeister. Depth-Efficient Neural
Networks for Division and Related Problems. to appear in IEEE Trans. Information Theory, 1993.
5,414 | 5,900 | Lifted Inference Rules with Constraints
Happy Mittal, Anuj Mahajan
Dept. of Comp. Sci. & Engg.
I.I.T. Delhi, Hauz Khas
New Delhi, 110016, India
happy.mittal@cse.iitd.ac.in, anujmahajan.iitd@gmail.com

Vibhav Gogate
Dept. of Comp. Sci.
Univ. of Texas Dallas
Richardson, TX 75080, USA
vgogate@hlt.utdallas.edu

Parag Singla
Dept. of Comp. Sci. & Engg.
I.I.T. Delhi, Hauz Khas
New Delhi, 110016, India
parags@cse.iitd.ac.in
Abstract
Lifted inference rules exploit symmetries for fast reasoning in statistical relational models. Computational complexity of these rules is highly dependent on
the choice of the constraint language they operate on and therefore coming up
with the right kind of representation is critical to the success of lifted inference.
In this paper, we propose a new constraint language, called setineq, which allows
subset, equality and inequality constraints, to represent substitutions over the variables in the theory. Our constraint formulation is strictly more expressive than
existing representations, yet easy to operate on. We reformulate the three main
lifting rules: decomposer, generalized binomial and the recently proposed single
occurrence for MAP inference, to work with our constraint representation. Experiments on benchmark MLNs for exact and sampling based inference demonstrate
the effectiveness of our approach over several other existing techniques.
1 Introduction
Statistical relational models such as Markov logic [5] have the power to represent the rich relational
structure as well as the underlying uncertainty, both of which are the characteristics of several real
world application domains. Inference in these models can be carried out using existing probabilistic
inference techniques over the propositionalized theory (e.g., Belief propagation, MCMC sampling,
etc.). This approach can be sub-optimal since it ignores the rich underlying structure in the relational
representation, and as a result does not scale to even moderately sized domains in practice.
Lifted inference ameliorates the aforementioned problems by identifying indistinguishable atoms,
grouping them together and inferring directly over the groups instead of individual atoms. Starting
with the work of Poole [21], a number of lifted inference algorithms have been proposed. These
include lifted exact inference techniques such as lifted Variable Elimination (VE) [3, 17], lifted
approximate inference algorithms based on message passing such as belief propagation [23, 14, 24],
lifted sampling based algorithms [26, 12], lifted search [11], lifted variational inference [2, 20] and
lifted knowledge compilation [10, 6, 9]. There also has been some recent work which examines the
complexity of lifted inference independent of the specific algorithm used [13, 2, 8].
Just as probabilistic inference algorithms use various rules such as sum-out, conditioning and decomposition to exploit the problem structure, lifted inference algorithms use lifted inference rules
to exploit the symmetries. All of them work with an underlying constraint representation that specifies the allowed set of substitutions over variables appearing in the theory. Examples of various
constraint representations include weighted parfactors with constraints [3], normal form parfactors [17], hypercube based representations [24], tree based constraints [25] and the constraint free
normal form [13]. These formalisms differ from each other not only in terms of the underlying
constraint representation but also how these constraints are processed e.g., whether they require a
constraint solver, splitting as needed versus shattering [15], etc.
The choice of the underlying constraint language can have a significant impact on the time as well as
memory complexity of the inference procedure [15], and coming up with the right kind of constraint
representation is of prime importance for the success of lifted inference techniques. Although there
Approach | Constraint Type | Constraint Aggregation | Tractable Solver | Lifting Algorithm
Lifted VE [4] | eq/ineq, no subset | intersection, no union | no | lifted VE
CFOVE [17] | eq/ineq, no subset | intersection, no union | yes | lifted VE
GCFOVE [25] | subset (tree-based), no inequality | intersection, union | yes | lifted VE
Approx. LBP [24] | subset (hypercube), no inequality | intersection, union | yes | lifted message passing
Knowledge Compilation (KC) [10, 7] | eq/ineq, subset | intersection, no union | no | first-order knowledge compilation
Lifted Inference from Other Side [13] | normal forms (no constraints) | none | yes | lifting rules: decomposer, binomial
PTP [11] | eq/ineq, no subset | intersection, no union | no | lifted search & sampling: decomposer, binomial
Current Work | eq/ineq, subset | intersection, union | yes | lifted search & sampling: decomposer, binomial, single occurrence
Table 1: A comparison of constraint languages proposed in literature across four dimensions. The deficiencies/missing properties for each language have been highlighted in bold. Among the existing work, only KC
allows for a full set of constraints. GCFOVE (tree-based) and LBP (hypercubes) allow for subset constraints
but they do not explicitly handle inequality. PTP does not handle subset constraints. For constraint aggregation,
most approaches allow only intersection of atomic constraints. GCFOVE and LBP allow union of intersections
(DNF) but only deal with subset constraints. See footnote 4 in Broeck [7] regarding KC. Lifted VE, KC and
PTP use a general purpose constraint solver which may not be tractable. Our approach allows for all the features
discussed above and uses a tractable solver. We propose a constrained solution for lifted search and sampling.
Among earlier work, only PTP has looked at this problem (both search and sampling). However, it only allows
a very restrictive set of constraints.
has been some work studying this problem in the context of lifted VE [25], lifted BP [24], and lifted
knowledge compilation [10], existing literature lacks any systematic treatment of this issue in the
context of lifted search and sampling based algorithms. This paper focuses on addressing this issue.
Table 1 presents a detailed comparison of various constraint languages for lifted inference to date.
We make the following contributions. First, we propose a new constraint language called setineq,
which allows for subset (i.e., allowed values are constrained to be either inside a subset or outside
a subset), equality and inequality constraints (called atomic constraints) over substitutions of the
variables. The set of allowed constraints is expressed as a union over individual constraint tuples,
which in turn are conjunctions over atomic constraints. Our constraint language strictly subsumes
several of the existing constraint representations and yet allows for efficient constraint processing,
and more importantly does not require a separate constraint solver. Second, we extend the three main
lifted inference rules: decomposer and binomial [13], and single occurrence [18] for MAP inference,
to work with our proposed constraint language. We provide a detailed analysis of the lifted inference
rules in our constraint formalism and formally prove that the normal form representation is strictly
subsumed by our constraint formalism. Third, we show that evidence can be efficiently represented
in our constraint formulation and is a key benefit of our approach. Specifically, based on the earlier
work of Singla et al. [24], we provide an efficient (greedy) approach to convert the given evidence
in the database tuple form to our constraint representation. Finally, we demonstrate experimentally
that our new approach is superior to normal forms as well as many other existing approaches on
several benchmark MLNs for both exact and approximate inference.
2 Markov Logic
We will use a strict subset of first order logic [22], which is composed of constant, variable,
and predicate symbols. A term is a variable or a constant. A predicate represents a property
of or relation between terms, and takes a finite number of terms as arguments. A literal is a
predicate or its negation. A formula is recursively defined as follows: (1) a literal is a formula,
(2) negation of a formula is a formula, (3) if f1 and f2 are formulas then applying binary logical
operators such as ∧ and ∨ to f1 and f2 yields a formula, and (4) if x is a variable in a formula f,
then ∀x f and ∃x f are formulas. A first order theory (knowledge base (KB)) is a set of quantified
formulas. We will restrict our attention to function-free finite first order logic theory with Herbrand
interpretations [22], as done by most earlier work in this domain [5]. We will also restrict our
attention to the case of universally quantified variables. A ground atom is a predicate whose terms
do not contain any variable in them. Similarly, a ground formula is a formula that has no variables.
During the grounding of a theory, each formula is replaced by a conjunction over ground formulas
obtained by substituting the universally quantified variables by constants appearing in the theory.
A Markov logic network (MLN) [5] (or a Markov logic theory) is defined as a set of pairs {(f_i, w_i)}_{i=1}^m,
where f_i is a first-order formula and w_i is its weight, a real number. Given a finite set of constants
C, a Markov logic theory represents a Markov network that has one node for every ground atom
in the theory and a feature for every ground formula.
The probability distribution represented by the Markov network is given by

    P(ω) = (1/Z) exp( Σ_{i=1}^m w_i n_i(ω) ),

where n_i(ω) denotes the number of true groundings of the i-th formula under the assignment ω to the ground atoms (world) and

    Z = Σ_{ω'} exp( Σ_{i=1}^m w_i n_i(ω') )

is the normalization constant, called the partition function. It is
well known that the prototypical marginal inference task in MLNs, computing the marginal probability
of a ground atom given evidence, can be reduced to computing the partition function [11]. Another
key inference task is MAP inference in which the goal is to find an assignment to ground atoms that
has the maximum probability.
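As a sanity check on these definitions, the following brute-force sketch (ours; Python, with a made-up weight and a two-person domain) computes Z and a marginal for a one-formula MLN:

```python
import math
from itertools import product

people = ["A", "B"]
atoms = [("S", p) for p in people] + [("F", p, q) for p in people for q in people]
w = 1.2  # made-up weight for: Smokes(p1) ^ Friends(p1,p2) => Smokes(p2)

def num_true(world):
    # number of satisfied groundings of the single formula
    return sum(1 for p1 in people for p2 in people
               if not (world[("S", p1)] and world[("F", p1, p2)]) or world[("S", p2)])

Z, pr_smokes_A = 0.0, 0.0
for bits in product([False, True], repeat=len(atoms)):
    world = dict(zip(atoms, bits))
    weight = math.exp(w * num_true(world))
    Z += weight
    if world[("S", "A")]:
        pr_smokes_A += weight

print("Z =", Z)
print("P(Smokes(A)) =", pr_smokes_A / Z)
```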
In its standard form, a Markov logic theory is assumed to be constraint free, i.e., all possible substitutions of variables by constants are considered during the grounding process. In this paper, we
introduce the notion of a constrained Markov logic theory which is specified as a set of triplets
{(f_i, w_i, S_i^{x_i})}_{i=1}^m, where S_i^{x_i} specifies a set (union) of constraints defined over the variables x_i appearing in the formula f_i. During the grounding process, we restrict to those constant substitutions which
satisfy the constraint set associated with a formula. The probability distribution is now defined
using the restricted set of groundings allowed by the respective constraint sets over the formulas in
the theory. Although we focus on MLNs in this paper, our results can be easily generalized to other
representations including weighted parfactors [3] and probabilistic knowledge bases [11].
3 Constraint Language
In this section, we formally define our constraint language and its canonical form. We also define
two operators, join and project, for our language. The various features, operators, and properties of
the constraint language presented in this section will be useful when we formally extend the various lifted
inference rules to the constrained Markov logic theory in the next section (sec. 4).
Language Specification. For simplicity of exposition, we assume that all logical variables take values from the same domain C. Let x = {x1 , x2 , . . . , xk } be a set of logical variables. Our constraint
language called setineq contains three types of atomic constraints: (1) Subset Constraints (setct),
of the form x_i ∈ C (setinct) or x_i ∉ C (setoutct); (2) equality constraints (eqct), of the form
x_i = x_j; and (3) inequality constraints (ineqct), of the form x_i ≠ x_j. We will denote an atomic
constraint over set x by A^x. A constraint tuple over x, denoted by T^x, is a conjunction of atomic
constraints over x, and a constraint set over x, denoted by S^x, is a disjunction of constraint tuples
over x. An example of a constraint set over a pair of variables x = {x1, x2} is S^x = T_1^x ∨ T_2^x, where
T_1^x = [x1 ∈ {A, B} ∧ x1 ≠ x2 ∧ x2 ∈ {B, D}], and T_2^x = [x1 ∉ {A, B} ∧ x1 = x2 ∧ x2 ∈ {B, D}].
An assignment v to the variables in x is a solution of T^x if all constraints in T^x are satisfied by v.
Since S^x is a disjunction, by definition, v is also a solution of S^x.
Next, we define a canonical representation for our constraint language. We require this definition
because symmetries can be easily identified when constraints are expressed in this representation.
We begin with some required definitions. The support of a subset constraint is the set of values
in C that satisfies the constraint. Two subset constraints A^{x_1} and A^{x_2} are called value identical
if V1 = V2, and value disjoint if V1 ∩ V2 = ∅, where V1 and V2 are the supports of A^{x_1} and A^{x_2},
respectively. A constraint tuple T^x is transitive over equality if it contains the transitive closure of
all its equality constraints. A constraint tuple T^x is transitive over inequality if for every constraint
of the form x_i = x_j in T^x, whenever T^x contains x_i ≠ x_k, it also contains x_j ≠ x_k.
Definition 3.1. A constraint tuple T^x is in canonical form if the following three conditions are
satisfied: (1) for each variable x_i ∈ x, there is exactly one subset constraint in T^x, (2) all equality
and inequality constraints in T^x are transitive, and (3) all pairs of variables x1, x2 that participate
either in an equality or an inequality constraint have identical supports. A constraint set S^x is in
canonical form if all of its constituent constraint tuples are in canonical form.
We can easily express a constraint set in an equivalent canonical form by enforcing the three conditions, one by one, on each of its tuples. In our running example, T_1^x can be converted into canonical
form by splitting it into four constraint tuples {T_11^x, T_12^x, T_13^x, T_14^x}, where T_11^x = [x1 ∈ {B} ∧ x1 ≠ x2 ∧ x2 ∈ {B}], T_12^x = [x1 ∈ {B} ∧ x2 ∈ {D}], T_13^x = [x1 ∈ {A} ∧ x2 ∈ {B}],
and T_14^x = [x1 ∈ {A} ∧ x2 ∈ {D}]. Similarly for T_2^x. We include the conversion algorithm in the
supplement due to lack of space. The following theorem summarizes its time complexity.
Theorem 3.1. * Given a constraint set S^x, each constraint tuple T^x in it can be converted to canonical form in time O(mk + k^3), where m is the total number of constants appearing in any of the
subset constraints in T^x and k is the number of variables in x.
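To make the representation concrete, here is a minimal sketch (ours; the dict-based encoding is an assumption for illustration, not the paper's implementation) of a setineq constraint tuple and a solution check:

```python
# One subset constraint per variable (as in canonical form), plus sets of
# equality and inequality pairs.
T1 = {
    "in":  {"x1": {"A", "B"}, "x2": {"B", "D"}},  # x_i in C constraints
    "eq":  set(),                                  # pairs (xi, xj) with xi = xj
    "neq": {("x1", "x2")},                         # pairs (xi, xj) with xi != xj
}

def is_solution(assignment, T):
    ok_subset = all(assignment[v] in dom for v, dom in T["in"].items())
    ok_eq = all(assignment[a] == assignment[b] for a, b in T["eq"])
    ok_neq = all(assignment[a] != assignment[b] for a, b in T["neq"])
    return ok_subset and ok_eq and ok_neq

print(is_solution({"x1": "A", "x2": "B"}, T1))  # True
print(is_solution({"x1": "B", "x2": "B"}, T1))  # False: violates x1 != x2
```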
We define the following two operations in our constraint language.
Join: Join operation lets us combine a set of constraints (possibly defined over different sets of
variables) into a single constraint. It will be useful when constructing formulas given constrained
predicates (refer Section 4). Let T^x and T^y be constraint tuples over sets of variables x and y,
respectively, and let z = x ∪ y. The join operation, written as T^x ⋈ T^y, results in a constraint
tuple T^z which has the conjunction of all the constraints present in T^x and T^y. Given the constraint
tuple T_1^x in our running example and T^y = [x1 ≠ y ∧ y ∈ {E, F}], T_1^x ⋈ T^y results in [x1 ∈
{A, B} ∧ x1 ≠ x2 ∧ x1 ≠ y ∧ x2 ∈ {B, D} ∧ y ∈ {E, F}]. The complexity of the join operation is
linear in the size of the constraint tuples being joined.
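Continuing the same sketch, a join conjoins all atomic constraints of the two tuples; a variable shared by both keeps the intersection of its supports, since x ∈ C1 ∧ x ∈ C2 is equivalent to x ∈ C1 ∩ C2 (ours, same dict encoding as above):

```python
def join(Ta, Tb):
    # conjunction of all atomic constraints present in Ta and Tb
    out = {"in": dict(Ta["in"]), "eq": Ta["eq"] | Tb["eq"], "neq": Ta["neq"] | Tb["neq"]}
    for v, dom in Tb["in"].items():
        out["in"][v] = out["in"][v] & dom if v in out["in"] else set(dom)
    return out

Ty = {"in": {"y": {"E", "F"}}, "eq": set(), "neq": {("x1", "y")}}
print(join(T1, Ty))  # T1 is the tuple from the previous sketch
```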
Project: The project operation lets us eliminate a variable from a given constraint tuple. This is a key
operation required in the application of the Binomial rule (refer Section 4). Let T^x be a constraint tuple.
Given x_i ∈ x, let x_{−i} = x \ {x_i}. The project operation, written as π_{x_{−i}} T^x, results in a constraint
tuple T^{x_{−i}} which contains those constraints in T^x not involving x_i. We refer to T^{x_{−i}} as the projected
constraint for the variables x_{−i}. Given a solution x_{−i} = v_{−i} to T^{x_{−i}}, the extension count for v_{−i} is
defined as the number of unique assignments x_i = v_i such that x_{−i} = v_{−i}, x_i = v_i is a solution for T^x.
T^{x_{−i}} is said to be count preserving if each of its solutions has the same extension count. We require
a tuple to be count preserving in order to correctly maintain the count of the number of solutions
during the project operation (also refer Section 4.3).
Lemma 3.1. * Let T^x be a constraint tuple in its canonical form. If x_i ∈ x is a variable which is
either involved only in a subset constraint or is involved in at least one equality constraint, then the
projected constraint T^{x_{−i}} is count preserving. In the former case, the extension count is given by the
size of the support of x_i. In the latter case, it is equal to 1.
When dealing with inequality constraints, the extension count for each solution v_{−i} to the projected
constraint T^{x_{−i}} may not be the same, and we need to split the constraint first in order to apply the
project operation. For example, consider the constraint [x1 ≠ x2 ∧ x1 ≠ x3 ∧ x1, x2, x3 ∈
{A, B, C}]. Then, the extension count for the solution x2 = A, x3 = B to the projected constraint T^{x_{−1}} is 1, whereas the extension count for the solution x2 = x3 = A is 2. In such cases, we need to
split the tuple T^x into multiple constraints such that the extension count property is preserved in each
split. Let x_{−i} be the set of variables over which a constraint tuple T^x needs to be projected. Let y ⊆ x
be the set of variables with which x_i is involved in an inequality constraint in T^x. Then, the tuple T^x
can be broken into an equivalent constraint set by considering each possible division of y into a set
of equivalence classes, where variables in the same equivalence class are constrained to be equal and
variables in different equivalence classes are constrained to be not equal to each other. The number of such divisions is given by the Bell number [15]. The divisions inconsistent with the already
existing constraints over variables in y can be ignored. The projection operation has linear time complexity once the extension count property has been ensured using splitting as described above (see
the supplement for details).
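A projection sketch for the count-preserving cases of Lemma 3.1 (ours, same encoding; the inequality-splitting step described above is omitted):

```python
def project(T, xi):
    # drop all constraints mentioning xi; valid as-is only in the
    # count-preserving cases of Lemma 3.1
    out = {
        "in":  {v: d for v, d in T["in"].items() if v != xi},
        "eq":  {p for p in T["eq"] if xi not in p},
        "neq": {p for p in T["neq"] if xi not in p},
    }
    in_equality = any(xi in p for p in T["eq"])
    ext_count = 1 if in_equality else len(T["in"][xi])
    return out, ext_count

T = {"in": {"x1": {"A", "B", "C"}, "x2": {"B", "D"}}, "eq": set(), "neq": set()}
print(project(T, "x1"))  # extension count 3: x1 ranges freely over {A, B, C}
```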
4 Extending Lifted Inference Rules
We extend three key lifted inference rules: decomposer [13], binomial [13] and the single occurrence [18] (for MAP) to work with our constraint formulation. Exposition for Single Occurrence
has been moved to the supplement due to lack of space. We begin by describing some important definitions and assumptions. Let M be a constrained MLN theory represented by a set of triplets
{(f_i, w_i, S_i^{x_i})}_{i=1}^m. We make three assumptions. First, we assume that each constraint set S_i^{x_i} is
specified using setineq and is in canonical form. Second, we assume that each formula in the MLN
is constant free. This can be achieved by replacing the appearance of a constant by a variable and
introducing appropriate constraint over the new variable (e.g., replacing A by a variable x and a
4
constraint x ? {A}). Third, we assume that the variables have been standardized apart, i.e., each
formula has a unique set of variables associated with it. In the following, x will denote the set of
all the (logical) variables appearing in M. x_i will denote the set of variables in f_i. Similar to
earlier work [13, 18], we divide the variables into a set of equivalence classes. Two variables are
Tied to each other if they appear as the same argument of a predicate. We take the transitive closure of the Tied relation to obtain the variable equivalence classes. For example, given the theory:
P(x) ∨ Q(x, y); Q(u, v) ∨ R(v); R(w) ∨ T(w, z), the variable equivalence classes are {x, u},
{y, v, w} and {z}. We will use the notation x̄ to denote the equivalence class to which x belongs.
4.1 Motivation and Key Operations
The key intuition behind our approach is as follows. Let x be a variable appearing in a formula
f_i. Let T^{x_i} be an associated constraint tuple and V denote the support for x in T^{x_i}. Then, since
constraints are in canonical form, for any other variable x' ∈ x_i involved in an (in)equality constraint
with x, with V' as the support, we have V = V'. Therefore, every pair of values v_i, v_j ∈ V behaves
identically with respect to the constraint tuple T^{x_i}, and hence the values are symmetric to each other. Now, we
could extend this notion to other constraints in which x appears, provided the support sets {V_l}_{l=1}^r
of x in all such constraints are either identical or disjoint. We could then treat each support set V_l for x as
a symmetric group of constants which can be argued about in unison. In an unconstrained theory,
there is a single disjoint partition of constants, i.e., the entire domain, such that the constants behave
identically. Our approach generalizes this idea to groups of constants which behave identically
with each other. Towards this end, we define the following two key operations over the theory, which will
be used over and again during the application of the lifted inference rules.
Partitioning Operation: We require the support sets of a variable (or sets of variables) over which
a lifted rule is being applied to be either identical or disjoint. We say that a theory M defined over a
set of (logical) variables x is partitioned with respect to the variables in the set y ⊆ x if for every
pair of subset constraints A^{x_1} and A^{x_2}, x_1, x_2 ∈ y, appearing in tuples of S^x, the supports of A^{x_1}
and A^{x_2} are either identical or disjoint (but not both). Given a partitioned theory with respect to
variables y, we use V^y = {V_l^y}_{l=1}^r to denote the set of various supports of variables in y. We refer
to the set V^y as the partition of y values in M. Our partitioning algorithm considers all the support
sets for variables in y and splits them such that all the splits are identical or disjoint. The constraint
tuples can then be split and represented in terms of these fine-grained support sets. We refer the
reader to the supplement section for a detailed description of our partitioning algorithm.
Restriction Operation: Once the values of a set of variables y have been partitioned into a set
{V_l^y}_{l=1}^r, while applying the lifted inference rules we will often need to argue about those formula
groundings which are obtained by restricting y values to those in a particular set V_l^y (since values
in each such support set behave identically to each other). Given x ∈ y, let A_l^x denote a subset
constraint over x with V_l^y as its support. Given a formula f_i, we define its restriction to the set
V_l^y as the formula obtained by replacing its associated constraint tuple T^{x_i} with a new constraint
tuple of the form T^{x_i} ∧ (⋀_j A_l^{x_j}), where the conjunction is taken over each variable x_j ∈ y which also
appears in f_i. The restriction of an MLN M to the set V_l^y, denoted by M_l^y, is the MLN obtained
by restricting each formula in M to the set V_l^y. The restriction operation can be implemented in a
straightforward manner by taking the conjunction with the subset constraints having the desired support
set for variables in y. We next define the formulation of our lifting rules in a constrained theory.
4.2 Decomposer
Let M be an MLN theory. Let x denote the set of variables appearing in M. Let Z(M) denote the
partition function for M. We say that an equivalence class x̄ is a decomposer [13] of M if a) if x ∈ x̄
occurs in a formula f ∈ F, then x appears in every predicate in f, and b) if x_i, x_j ∈ x̄, then x_i, x_j
do not appear as different arguments of any predicate P. Let x̄ be a decomposer for M. Let M' be
a new theory in which the domain of all the variables belonging to the equivalence class x̄ has been
reduced to a single constant. The decomposer rule [13] states that the partition function Z(M) can
be re-written using Z(M') as Z(M) = (Z(M'))^m, where m = |Dom(x̄)| in M. The proof follows
from the fact that since x̄ is a decomposer, the theory can be decomposed into m independent but
identical (up to the renaming of a constant) theories which do not share any random variables [13].
Next, we will extend the decomposer rule above to work with constrained theories. We will
assume that the theory has been partitioned with respect to the set of variables appearing in the
decomposer x̄. Let the partition of x̄ values in M be given by V^x̄ = {V_l^x̄}_{l=1}^r. Now, we define the
decomposer rule for a constrained theory using the following theorem.
Theorem 4.1. * Let M be a partitioned theory with respect to the decomposer x̄. Let M_l^x̄ denote
the restriction of M to the partition element V_l^x̄. Let M'_l^x̄ further restrict M_l^x̄ to a singleton {v},
where v ∈ V_l^x̄ is some element in the set V_l^x̄. Then, the partition function Z(M) can be written as

    Z(M) = ∏_{l=1}^{r} Z(M_l^x̄) = ∏_{l=1}^{r} Z(M'_l^x̄)^{|V_l^x̄|}
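A quick numeric check of the rule in its unconstrained special case (ours; Python, made-up weight): for the single-formula theory w : P(x) with |Dom(x)| = m, the variable x is a decomposer, and the rule gives Z = (1 + e^w)^m.

```python
import math
from itertools import product

w, m = 0.7, 5  # made-up weight and domain size

# brute force over all 2^m truth assignments to P(c_1), ..., P(c_m)
Z_brute = sum(math.exp(w * sum(bits)) for bits in product([0, 1], repeat=m))

# decomposer: restrict the domain to one constant (Z' = 1 + e^w), raise to m
Z_decomp = (1 + math.exp(w)) ** m

print(Z_brute, Z_decomp)  # the two agree
```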
4.3 Binomial
Let M be an unconstrained MLN theory and P be a unary predicate. Let x_j denote the set of
variables appearing as the first argument of P. Let Dom(x_j) = {c_i}_{i=1}^n, ∀x_j ∈ x_j. Let M_k^P be the
theory obtained from M as follows. Given a formula f_i with weight w_i in which P appears, wlog
let x_j denote the argument of P in f_i. Then, for every such formula f_i, we replace it by two new
formulas, f_i^t and f_i^f, obtained by a) substituting true and false for the occurrence of P(x_j) in f_i,
respectively, and b) when x_j occurs in f_i^t or f_i^f, reducing the domain of x_j to {c_i}_{i=1}^k in f_i^t and
{c_i}_{i=k+1}^n in f_i^f, where n = |Dom(x_j)|. The weight w_i^t of f_i^t is equal to w_i if it has an occurrence
of x_j, and w_i · k otherwise. Similarly for f_i^f. The Binomial rule [13] states that the partition function
Z(M) can be written as:

    Z(M) = Σ_{k=0}^{n} C(n, k) · Z(M_k^P),

where C(n, k) is the binomial coefficient. The proof follows from the fact that the calculation of Z can be divided into n + 1 cases, where each case corresponds to considering the C(n, k)
equivalent possibilities for k of the P groundings being true and n − k being false, k ranging
from 0 to n.
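An analogous numeric check of the binomial sum (ours; Python 3.8+ for math.comb, made-up weight): take the theory w : P(x) ⇒ Q with |Dom(x)| = n and a propositional Q. Conditioning on k true groundings of P gives Z(M_k^P) = e^{wn} + e^{w(n−k)}, and the binomial sum matches brute-force enumeration.

```python
import math
from itertools import product

w, n = 0.4, 4  # made-up weight; theory: w : P(x) => Q

# brute force over Q and all 2^n assignments to the groundings of P
Z_brute = 0.0
for Q in (False, True):
    for bits in product([0, 1], repeat=n):
        true_groundings = n if Q else sum(1 - b for b in bits)
        Z_brute += math.exp(w * true_groundings)

# binomial rule: condition on k = number of true groundings of P
Z_binom = sum(math.comb(n, k) * (math.exp(w * n) + math.exp(w * (n - k)))
              for k in range(n + 1))

print(Z_brute, Z_binom)  # the two agree
```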
Next, we extend the above rule to a constrained theory M. Let P be a singleton predicate and x_j be the
set of variables appearing as first arguments of P as before. Let M be partitioned with respect to
x_j, and let V^{x_j} = {V_l^{x_j}}_{l=1}^r denote the partition of x_j values in M. Let F^P denote the set of formulas
in which P appears. For every formula f_i ∈ F^P in which x_j appears only in P(x_j), assume that
the projections over the set x_{−j} are count preserving. Then, we obtain a new MLN M_{l,k}^P from M
in the following manner. Given a formula f_i ∈ F^P with weight w_i in which P appears, do the
following steps: 1) restrict f_i to the set of values {v | v ∉ V_l^{x_j}} for variable x_j; 2) for the remaining
tuples (i.e., where x_j takes values from the set V_l^{x_j}), create two new formulas f_i^t and f_i^f, obtained
by restricting f_i^t to the set {v_{l1}, . . . , v_{lk}} and f_i^f to the set {v_{l(k+1)}, . . . , v_{ln_l}}, respectively, where
v_{l1}, . . . , v_{ln_l} enumerate the elements of V_l^{x_j} and n_l = |V_l^{x_j}|; 3) canonicalize the constraints in f_i^t and f_i^f; 4) substitute true and false
for P in f_i^t and f_i^f, respectively; 5) if x_j appears in f_i^t (after the substitution), its weight w_i^t is equal
to w_i; otherwise, split f_i^t into {f_{id}^t}_{d=1}^D such that the projection over x_{−j} in each tuple of f_{id}^t is count
preserving with extension count given by e_{ld}. The weight of each f_{id}^t is w_i · e_{ld}. Similarly for f_i^f.
We are now ready to define the Binomial formulation for a constrained theory:
Theorem 4.2.* Let M be an MLN theory partitioned with respect to variable x_j, and let P(x_j) be a singleton predicate. Let the projections T^{x_{−j}} of the tuples associated with the formulas in which x_j appears only in P(x_j) be count preserving. Let V^{x_j} = {V_l^{x_j}}_{l=1}^r denote the partition of x_j values in M, and let n_l = |V_l^{x_j}|. Then, the partition function Z(M) can be computed using the recursive application of the following rule for each l:

Z(M) = Σ_{k=0}^{n_l} (n_l choose k) Z(M_{l,k}^P)
We apply Theorem 4.2 recursively for each partition component in turn to eliminate P(x_j) completely from the theory. The Binomial application as described above involves ∏_{l=1}^{r} (n_l + 1) computations of Z, whereas a direct grounding method would involve 2^{Σ_l n_l} computations (two possibilities for each grounding of P(x_j) in turn). See the supplement for an example.
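The counting argument behind the (unconstrained) binomial rule can likewise be verified numerically. The sketch below uses the toy theory w : P(x) => Q with a propositional Q (our own illustrative choice, not from the paper) and confirms that grouping worlds by the number k of true groundings of P, each case weighted by (n choose k), reproduces the ground partition function while replacing 2^n cases by n + 1:

```python
import itertools
from math import comb, exp

w, n = 0.7, 5  # illustrative weight of  P(x) => Q  and |Dom(x)|

def z_ground():
    # Enumerate Q and all 2^n truth assignments to the groundings of P.
    z = 0.0
    for q in (False, True):
        for p in itertools.product((False, True), repeat=n):
            sat = sum(1 for p_i in p if (not p_i) or q)
            z += exp(w * sat)
    return z

def z_binomial():
    # n + 1 cases: exactly k groundings of P are true, with multiplicity
    # comb(n, k); the satisfied count depends only on k and Q.
    z = 0.0
    for k in range(n + 1):
        for q in (False, True):
            sat = n if q else n - k
            z += comb(n, k) * exp(w * sat)
    return z

assert abs(z_ground() - z_binomial()) < 1e-9 * z_ground()
```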
4.4 Normal Forms and Evidence Processing
Normal Forms: Normal form representation [13] is an unconstrained representation which requires that (a) there are no constants in any formula f_l ∈ F, and (b) the domains of variables belonging to an equivalence class x̄ are identical to each other. An (unconstrained) MLN theory with evidence can be converted into normal form by a series of mechanical operations in time polynomial in the size of the theory and the evidence [13, 18]. Any variable value appearing as a constant in a formula or in evidence is split apart from the rest of the domain, and a new variable with a singleton domain is created for it. Constrained theories can be normalized in a similar manner by (1) splitting apart those variables appearing in any subset constraints, (2) simple variable substitution for equality, and (3) introducing explicit evidence predicates for inequality. We can now state the following theorem.

Theorem 4.3.* Let M be a constrained MLN theory. The application of the modified lifting rules over this constrained theory can be exponentially more efficient than first converting the theory into normal form and then applying the original formulation of the lifting rules.

Table 2: Dataset details (var: domain size varied; '+': a separate weight is learned for each grounding).

- Friends & Smokers (FS). Source: Alchemy [5]. Rules: Smokes(p) => Cancer(p); Smokes(p1) ^ Friends(p1,p2) => Smokes(p2). Types (# of const.): person (var). Evidence: Smokes, Cancer.
- WebKB. Source: Alchemy [25],[24]. Rules: PageClass(p1,+c1) ^ PageClass(p2,+c2) => Links(p1,p2). Types: page (271), class (5). Evidence: PageClass.
- IMDB. Source: Alchemy [16]. Rules: Director(p) => !WorksFor(p1,p2); Actor(p) => !Director(p); Movie(m,p1) ^ WorksFor(p1,p2) => Movie(m,p2). Types: person (278), movie (20). Evidence: Actor, Director, Movie.
Evidence Processing: Given a predicate P_j(x_1, ..., x_k), let E_j denote its associated evidence. Further, let E_j^t (E_j^f) denote the set of ground atoms of P_j which are assigned true (false) in the evidence, and let E_j^u denote the set of groundings which are unknown (neither true nor false); note that the set E_j^u is specified only implicitly. The first step in processing evidence is to convert the sets E_j^t and E_j^f into the constraint representation form for every predicate P_j. This is done by using the hypercube representation [24] over the set of variables appearing in predicate P_j. A hypercube over a set of variables can be seen as a constraint tuple specifying a subset constraint over each variable in the set; a union of hypercubes represents a constraint set corresponding to the union of the underlying constraint tuples. Finding a minimal hypercube decomposition is NP-hard, and we employ the greedy top-down hypercube construction algorithm proposed by Singla et al. [24] (Algorithm 2). The constraint representation for the implicit set E_j^u can be obtained by eliminating the set E_j^t ∪ E_j^f from its bounding hypercube (i.e., one which includes all the groundings in the set) and then calling the hypercube construction algorithm over the remaining set. Once the constraint representation has been created for every set of evidence (and non-evidence) atoms, we join them together to obtain the constrained representation. The join over constraints is implemented as described in Section 3.
5 Experiments
In our experiments, we compared the performance of our constrained formulation of the lifting rules against normal forms on the task of calculating the partition function Z. We refer to our approach as SetInEq and to normal forms as Normal. We also compared with PTP [11], available in Alchemy 2, and with the GCFOVE system [25].¹ Both our systems and GCFOVE are implemented in Java; PTP is implemented in C++. We experimented on four benchmark MLN domains for calculating the partition function using exact as well as approximate inference. Table 2 shows the details of our datasets; details for one of the domains, Professor and Students (PS) [11], are presented in the supplement due to lack of space. Evidence was the only type of constraint considered in our experiments. The experiments on all the datasets except WebKB were carried out on a machine with a 2.20 GHz Intel Core i3 CPU and 4 GB RAM; WebKB is a much larger dataset and we ran those experiments on a 2.20 GHz Xeon(R) E5-2660 v2 server with 10 cores and 128 GB RAM.

¹ Alchemy-2: code.google.com/p/alchemy-2; GCFOVE: https://dtai.cs.kuleuven.be/software/gcfove
5.1 Exact Inference

We compared the performance of the various algorithms using exact inference on two of the domains: FS and PS. We do not compare the value of Z since, under exact inference, all algorithms compute the same value. In the following, r% evidence on a type means that r% of the constants of the type are randomly selected, and the evidence predicate groundings in which these constants appear are randomly set to true or false; the remaining evidence groundings are set to unknown. The y-axis is plotted on a log scale in the following 3 graphs. Figure 1a shows the results as the domain size of person is varied from 100 to 800 with 40% evidence in the FS domain. We timed out an algorithm after 1 hour. PTP failed to scale even to size 100 and is not shown in the figure. The time taken by Normal grows very fast, and it times out beyond size 500. SetInEq and GCFOVE have a much slower growth rate, and SetInEq is about an order of magnitude faster than GCFOVE on all domain sizes. Figure 1b shows the time taken by the three algorithms as we vary the evidence on person with a fixed domain size of 500. For all the algorithms, the time first increases with evidence and then drops. SetInEq is up to an order of magnitude faster than GCFOVE and up to 3 orders of magnitude faster than Normal. Figure 1c plots the number of nodes expanded by Normal and SetInEq; the GCFOVE code did not provide any equivalent value. As expected, we see a much larger growth rate for Normal compared to SetInEq.
[Figure 1: Results for exact inference on FS. (a) domain size vs. time (sec); (b) evidence % vs. time (sec); (c) domain size vs. number of nodes expanded. Curves: SetInEq, Normal, GCFOVE in (a) and (b); SetInEq, Normal in (c). Time and node axes are on a log scale.]

5.2 Approximate Inference
For approximate inference, we could only compare Normal with SetInEq. GCFOVE does not have an approximate variant for computing marginals or the partition function, and PTP using importance sampling is not fully implemented in Alchemy 2. For approximate inference in both Normal and SetInEq, we used the unbiased importance sampling scheme described by Gogate & Domingos [11]. We collected a total of 1000 samples for each estimate and averaged the Z values. In all our experiments below, the log(Z) values calculated by the two algorithms were within 1% of each other; hence, the estimates are comparable. We compared the performance of the two algorithms on two real-world datasets, IMDB and WebKB (see Table 2). For WebKB, we experimented with the 5 most frequent page classes in the Univ. of Texas fold; it had close to 2.5 million ground clauses. IMDB has 5 equal-sized folds with close to 15K groundings in each, and the results presented are averaged over the folds. Figure 2a (y-axis on log scale) shows the time taken by the two algorithms as we vary the subset of pages in our data from 0 to 270. The scaling behavior is similar to that observed earlier. Figure 2b plots the timing of the two algorithms as we vary the evidence % on IMDB. SetInEq is able to exploit symmetries with increasing evidence, whereas Normal's performance degrades.

[Figure 2: Results using approximate inference on WebKB and IMDB. (a) WebKB: size vs. time (sec); (b) IMDB: evidence % vs. time (sec). Curves: SetInEq, Normal.]
6 Conclusion and Future Work

In this paper, we proposed a new constraint language called SetInEq for relational probabilistic models. Our constraint formalism subsumes most existing formalisms. We defined efficient operations over our language using a canonical form representation, and extended three key lifting rules, namely decomposer, binomial, and single occurrence, to work with our constraint formalism. Experiments on benchmark MLNs validate the efficacy of our approach. Directions for future work include exploiting our constraint formalism to facilitate approximate lifting of the theory.
7 Acknowledgements

Happy Mittal was supported by the TCS Research Scholar Program. Vibhav Gogate was partially supported by the DARPA Probabilistic Programming for Advanced Machine Learning Program under AFRL prime contract number FA8750-14-C-0005. Parag Singla is supported by a Google travel grant to attend the conference. We thank Somdeb Sarkhel for helpful discussions.
References
[1] Udi Apsel, Kristian Kersting, and Martin Mladenov. Lifting relational MAP-LPs using cluster signatures. In Proc. of AAAI-14, pages 2403–2409, 2014.
[2] H. Bui, T. Huynh, and S. Riedel. Automorphism groups of graphical models and lifted variational inference. In Proc. of UAI-13, pages 132–141, 2013.
[3] R. de Salvo Braz, E. Amir, and D. Roth. Lifted first-order probabilistic inference. In Proc. of IJCAI-05, pages 1319–1325, 2005.
[4] R. de Salvo Braz, E. Amir, and D. Roth. Lifted first-order probabilistic inference. In L. Getoor and B. Taskar, editors, Introduction to Statistical Relational Learning. MIT Press, 2007.
[5] P. Domingos and D. Lowd. Markov Logic: An Interface Layer for Artificial Intelligence. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2009.
[6] G. Van den Broeck. On the completeness of first-order knowledge compilation for lifted probabilistic inference. In Proc. of NIPS-11, pages 1386–1394, 2011.
[7] G. Van den Broeck. Lifted Inference and Learning in Statistical Relational Models. PhD thesis, KU Leuven, 2013.
[8] G. Van den Broeck. On the complexity and approximation of binary evidence in lifted inference. In Proc. of NIPS-13, 2013.
[9] G. Van den Broeck and J. Davis. Conditioning in first-order knowledge compilation and lifted probabilistic inference. In Proc. of AAAI-12, 2012.
[10] G. Van den Broeck, N. Taghipour, W. Meert, J. Davis, and L. De Raedt. Lifted probabilistic inference by first-order knowledge compilation. In Proc. of IJCAI-11, 2011.
[11] V. Gogate and P. Domingos. Probabilistic theorem proving. In Proc. of UAI-11, pages 256–265, 2011.
[12] V. Gogate, A. Jha, and D. Venugopal. Advances in lifted importance sampling. In Proc. of AAAI-12, pages 1910–1916, 2012.
[13] A. K. Jha, V. Gogate, A. Meliou, and D. Suciu. Lifted inference seen from the other side: The tractable features. In Proc. of NIPS-10, pages 973–981, 2010.
[14] K. Kersting, B. Ahmadi, and S. Natarajan. Counting belief propagation. In Proc. of UAI-09, pages 277–284, 2009.
[15] J. Kisyński and D. Poole. Constraint processing in lifted probabilistic inference. In Proc. of UAI-09, 2009.
[16] L. Mihalkova and R. Mooney. Bottom-up learning of Markov logic network structure. In Proceedings of the Twenty-Fourth International Conference on Machine Learning, pages 625–632, 2007.
[17] B. Milch, L. S. Zettlemoyer, K. Kersting, M. Haimes, and L. P. Kaelbling. Lifted probabilistic inference with counting formulas. In Proc. of AAAI-08, 2008.
[18] H. Mittal, P. Goyal, V. Gogate, and P. Singla. New rules for domain independent lifted MAP inference. In Proc. of NIPS-14, pages 649–657, 2014.
[19] M. Mladenov, A. Globerson, and K. Kersting. Lifted message passing as reparametrization of graphical models. In Proc. of UAI-14, pages 603–612, 2014.
[20] M. Mladenov and K. Kersting. Equitable partitions of concave free energies. In Proc. of UAI-15, 2015.
[21] D. Poole. First-order probabilistic inference. In Proc. of IJCAI-03, pages 985–991, 2003.
[22] S. J. Russell and P. Norvig. Artificial Intelligence: A Modern Approach (3rd edition). Pearson Education, 2010.
[23] P. Singla and P. Domingos. Lifted first-order belief propagation. In Proc. of AAAI-08, pages 1094–1099, 2008.
[24] P. Singla, A. Nath, and P. Domingos. Approximate lifted belief propagation. In Proc. of AAAI-14, pages 2497–2504, 2014.
[25] N. Taghipour, D. Fierens, J. Davis, and H. Blockeel. Lifted variable elimination with arbitrary constraints. In Proc. of AISTATS-12, Canary Islands, Spain, 2012.
[26] D. Venugopal and V. Gogate. On lifting the Gibbs sampling algorithm. In Proc. of NIPS-12, pages 1664–1672, 2012.
5,415 | 5,901 | Sparse PCA via Bipartite Matchings
Megasthenis Asteris
The University of Texas at Austin
megas@utexas.edu
Dimitris Papailiopoulos
University of California, Berkeley
dimitrisp@berkeley.edu
Anastasios Kyrillidis
The University of Texas at Austin
anastasios@utexas.edu
Alexandros G. Dimakis
The University of Texas at Austin
dimakis@austin.utexas.edu
Abstract
We consider the following multi-component sparse PCA problem: given a set of
data points, we seek to extract a small number of sparse components with disjoint
supports that jointly capture the maximum possible variance. Such components
can be computed one by one, repeatedly solving the single-component problem
and deflating the input data matrix, but this greedy procedure is suboptimal. We
present a novel algorithm for sparse PCA that jointly optimizes multiple disjoint
components. The extracted features capture variance that lies within a multiplicative factor arbitrarily close to 1 from the optimal. Our algorithm is combinatorial
and computes the desired components by solving multiple instances of the bipartite maximum weight matching problem. Its complexity grows as a low order
polynomial in the ambient dimension of the input data, but exponentially in its
rank. However, it can be effectively applied on a low-dimensional sketch of the
input data. We evaluate our algorithm on real datasets and empirically demonstrate that in many cases it outperforms existing, deflation-based approaches.
1 Introduction
Principal Component Analysis (PCA) reduces data dimensionality by projecting it onto principal
subspaces spanned by the leading eigenvectors of the sample covariance matrix. It is one of the
most widely used algorithms, with applications ranging from computer vision and document clustering to network anomaly detection (see, e.g., [1, 2, 3, 4, 5]). Sparse PCA is a useful variant that offers
higher data interpretability [6, 7, 8], a property that is sometimes desired even at the cost of statistical
fidelity [5]. Furthermore, when the obtained features are used in subsequent learning tasks, sparsity
potentially leads to better generalization error [9].
Given a real n × d data matrix S representing n centered data points in d variables, the first sparse principal component is the sparse vector that maximizes the explained variance:

x_* ≜ arg max_{‖x‖₂ = 1, ‖x‖₀ = s} xᵀAx,   (1)

where A = (1/n)·SᵀS is the d × d empirical covariance matrix. Unfortunately, the directly enforced
sparsity constraint makes the problem NP-hard and hence computationally intractable in general. A
significant volume of prior work has focused on various algorithms for approximately solving this
optimization problem [3, 5, 7, 8, 10, 11, 12, 13, 14, 15, 16, 17], while some theoretical results have
also been established under statistical or spectral assumptions on the input data.
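For intuition, problem (1) can be solved exactly at toy sizes by enumerating supports: for a fixed support I of cardinality s, the best unit vector is the leading eigenvector of the s × s principal submatrix A_{I,I}. The following is a minimal illustrative sketch of that brute force (our own, exponential in d, useful only as a reference point):

```python
import numpy as np
from itertools import combinations

def sparse_pc_exact(A, s):
    """Exact solution of (1) by support enumeration; feasible only for tiny d."""
    d = A.shape[0]
    best_val, best_x = -np.inf, None
    for I in combinations(range(d), s):
        vals, vecs = np.linalg.eigh(A[np.ix_(I, I)])
        if vals[-1] > best_val:           # top eigenvalue of the submatrix
            best_val = vals[-1]
            best_x = np.zeros(d)
            best_x[list(I)] = vecs[:, -1]  # corresponding unit eigenvector
    return best_x, best_val
```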
In most cases one is not interested in finding only the first sparse eigenvector, but rather the first k, where k is the reduced dimension onto which the data will be projected. Contrary to the single-component
problem, there has been very limited work on computing multiple sparse components. The scarcity is partially attributed to conventional wisdom stemming from PCA: multiple components can be computed one by one, repeatedly solving the single-component sparse PCA problem (1) and deflating [18] the input data to remove information captured by previously extracted components. In fact, multi-component sparse PCA is not a uniquely defined problem in the literature. Deflation-based approaches can lead to different outputs depending on the type of deflation [18]; extracted components may or may not be orthogonal, while they may have disjoint or overlapping supports. In the statistics literature, where the objective is typically to recover a 'true' principal subspace, a branch of work has focused on 'subspace row sparsity' [19], an assumption that leads to sparse components all supported on the same set of variables, while [20] discusses an alternative perspective on the fundamental objective of the sparse PCA problem.
We focus on the multi-component sparse PCA problem with disjoint supports, i.e., the problem of computing a small number of sparse components with non-overlapping supports that jointly maximize the explained variance:

X_* ≜ arg max_{X ∈ X_k} Tr(XᵀAX),   (2)

X_k ≜ { X ∈ ℝ^{d×k} : ‖X_j‖₂ = 1, ‖X_j‖₀ = s, supp(X_i) ∩ supp(X_j) = ∅, ∀ j ∈ [k], i < j },
with Xj denoting the jth column of X. The number k of the desired components is considered a
small constant. Contrary to the greedy sequential approach that repeatedly uses deflation, our algorithm jointly computes all the vectors in X and comes with theoretical approximation guarantees.
Note that even if we could solve the single-component sparse PCA problem (1) exactly, the greedy
approach could be highly suboptimal. We show this with a simple example in Sec. 7 of the appendix.
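Both the feasible set X_k of (2) and its objective are straightforward to evaluate programmatically; the following small helpers (our own, purely illustrative) check membership in X_k and compute the captured variance:

```python
import numpy as np

def in_X_k(X, s, tol=1e-8):
    """True iff X has unit-norm, s-sparse columns with pairwise-disjoint supports."""
    supports = []
    for col in X.T:
        supp = set(np.flatnonzero(np.abs(col) > tol))
        if len(supp) > s or abs(np.linalg.norm(col) - 1.0) > tol:
            return False
        supports.append(supp)
    return all(a.isdisjoint(b)
               for i, a in enumerate(supports) for b in supports[i + 1:])

def explained_variance(X, A):
    return np.trace(X.T @ A @ X)  # the objective of problem (2)
```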
Our Contributions:
1. We develop an algorithm that provably approximates the solution to the sparse PCA problem (2)
within a multiplicative factor arbitrarily close to optimal. Our algorithm is the first that jointly
optimizes multiple components with disjoint supports and operates by recasting the sparse PCA
problem into multiple instances of the bipartite maximum weight matching problem.
2. The computational complexity of our algorithm grows as a low order polynomial in the ambient
dimension d, but is exponential in the intrinsic dimension of the input data, i.e., the rank of A.
To alleviate the impact of this dependence, our algorithm can be applied on a low-dimensional
sketch of the input data to obtain an approximate solution to (2). This extra level of approximation introduces an additional penalty in our theoretical approximation guarantees, which
naturally depends on the quality of the sketch and, in turn, the spectral decay of A.
3. We empirically evaluate our algorithm on real datasets, and compare it against state-of-the-art
methods for the single-component sparse PCA problem (1) in conjunction with the appropriate
deflation step. In many cases, our algorithm significantly outperforms these approaches.
2 Our Sparse PCA Algorithm
We present a novel algorithm for the sparse PCA problem with multiple disjoint components. Our
algorithm approximately solves the constrained maximization (2) on a d × d rank-r Positive Semidefinite (PSD) matrix A within a multiplicative factor arbitrarily close to 1. It operates by recasting
the maximization into multiple instances of the bipartite maximum weight matching problem. Each
instance ultimately yields a feasible solution to the original sparse PCA problem; a set of k s-sparse
components with disjoint supports. Finally, the algorithm exhaustively determines and outputs the
set of components that maximizes the explained variance, i.e., the quadratic objective in (2).
The computational complexity of our algorithm grows as a low-order polynomial in the ambient
dimension d of the input, but exponentially in its rank r. Despite the unfavorable dependence on
the rank, it is unlikely that a substantial improvement can be achieved in general [21]. However,
decoupling the dependence on the ambient and the intrinsic dimension of the input has an interesting
ramification; instead of the original input A, our algorithm can be applied on a low-rank surrogate to
obtain an approximate solution, alleviating the dependence on r. We discuss this in Section 3. In the
sequel, we describe the key ideas behind our algorithm, leading up to its guarantees in Theorem 1.
Let A = UΛUᵀ denote the truncated eigenvalue decomposition of A; Λ is a diagonal r × r matrix whose ith diagonal entry is equal to the ith largest eigenvalue of A, while the columns of U coincide with the corresponding eigenvectors. By the Cauchy–Schwarz inequality, for any x ∈ ℝ^d,

xᵀAx = ‖Λ^{1/2}Uᵀx‖₂² ≥ ⟨Λ^{1/2}Uᵀx, c⟩²,  ∀ c ∈ ℝ^r : ‖c‖₂ = 1.   (3)

In fact, equality in (3) can always be achieved for c collinear to Λ^{1/2}Uᵀx ∈ ℝ^r, and in turn

xᵀAx = max_{c ∈ S₂^{r−1}} ⟨x, UΛ^{1/2}c⟩²,

where S₂^{r−1} denotes the ℓ₂-unit sphere in r dimensions. More generally, for any X ∈ ℝ^{d×k},

Tr(XᵀAX) = Σ_{j=1}^{k} X_jᵀ A X_j = max_{C : C_j ∈ S₂^{r−1} ∀j} Σ_{j=1}^{k} ⟨X_j, UΛ^{1/2}C_j⟩².   (4)
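The equality case of (3) and (4), attained for c collinear with Λ^{1/2}Uᵀx, can be sanity-checked numerically; the sketch below is a toy verification (our own) and not part of the algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 3
U, _ = np.linalg.qr(rng.standard_normal((d, r)))  # orthonormal columns
lam = np.sort(rng.random(r))[::-1]                # eigenvalues of A
A = U @ np.diag(lam) @ U.T                        # rank-r PSD matrix
x = rng.standard_normal(d)
x /= np.linalg.norm(x)

y = np.sqrt(lam) * (U.T @ x)   # y = Lambda^{1/2} U^T x
c = y / np.linalg.norm(y)      # the maximizing unit vector
lhs = x @ A @ x
rhs = (x @ (U @ (np.sqrt(lam) * c))) ** 2  # <x, U Lambda^{1/2} c>^2
assert np.isclose(lhs, rhs)
```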
Under the variational characterization of the trace objective in (4), the sparse PCA problem (2) can be re-written as a joint maximization over the variables X and C as follows:

max_{X ∈ X_k} Tr(XᵀAX) = max_{X ∈ X_k} max_{C : C_j ∈ S₂^{r−1} ∀j} Σ_{j=1}^{k} ⟨X_j, UΛ^{1/2}C_j⟩².   (5)

The alternative formulation of the sparse PCA problem in (5) may be seemingly more complicated than the original one in (2). However, it takes a step towards decoupling the dependence of the optimization on the ambient and intrinsic dimensions d and r, respectively. The motivation behind the introduction of the auxiliary variable C will become more clear in the sequel.
For a given C, the value of X ∈ X_k that maximizes the objective in (5) for that C is

X̂ ≜ arg max_{X ∈ X_k} Σ_{j=1}^{k} ⟨X_j, W_j⟩²,   (6)

where W ≜ UΛ^{1/2}C is a real d × k matrix. The constrained, non-convex maximization (6) plays a central role in our developments. We will later describe a combinatorial O(d·(s·k)²) procedure to efficiently compute X̂, reducing the maximization to an instance of the bipartite maximum weight matching problem. For now, however, let us assume that such a procedure exists.
Let X_*, C_* be the pair that attains the maximum in (5); in other words, X_* is the desired solution to the sparse PCA problem. If the optimal value C_* of the auxiliary variable were known, then we would be able to recover X_* by solving the maximization (6) for C = C_*. Of course, C_* is not known, and it is not possible to exhaustively consider all possible values in the domain of C. Instead, we examine only a finite number of possible values of C over a fine discretization of its domain. In particular, let N_{ε/2}(S₂^{r−1}) denote a finite ε/2-net of the r-dimensional ℓ₂-unit sphere; for any point in S₂^{r−1}, the net contains a point within an ε/2 radius from the former. There are several ways to construct such a net. Further, let [N_{ε/2}(S₂^{r−1})]^{×k} ⊆ ℝ^{r×k} denote the kth Cartesian power of the aforementioned ε/2-net. By construction, this collection of points contains a matrix C that is column-wise close to C_*. In turn, it can be shown using the properties of the net that the candidate solution X ∈ X_k obtained through (6) at that point C will be approximately as good as the optimal X_* in terms of the quadratic objective in (2).

All the above observations yield a procedure for approximately solving the sparse PCA problem (2). The steps are outlined in Algorithm 1. Given the desired number of components k and an accuracy parameter ε ∈ (0, 1), the algorithm generates a net [N_{ε/2}(S₂^{r−1})]^{×k} and iterates over its points. At each point C, it computes a feasible solution for the sparse PCA problem (a set of k s-sparse components) by solving maximization (6) via a procedure (Alg. 2) that will be described in the sequel. The algorithm collects the candidate solutions identified at the points of the net. The best among them achieves an objective in (2) that provably lies close to optimal. More formally,
Theorem 1. For any real d × d rank-r PSD matrix A, desired number of components k, number s of nonzero entries per component, and accuracy parameter ε ∈ (0, 1), Algorithm 1 outputs X ∈ X_k such that

Tr(XᵀAX) ≥ (1 − ε) · Tr(X_*ᵀAX_*),

where X_* ≜ arg max_{X ∈ X_k} Tr(XᵀAX), in time T_SVD(r) + O((4/ε)^{r·k} · d · (s·k)²).
Algorithm 1 Sparse PCA (Multiple disjoint components)
input: PSD d × d rank-r matrix A, ε ∈ (0, 1), k ∈ Z+.
output: X ∈ X_k   {Theorem 1}
1: 𝒞 ← {}   {𝒞 collects the candidate solutions}
2: [U, Λ] ← EIG(A)
3: for each C ∈ [N_{ε/2}(S₂^{r−1})]^{×k} do
4:   W ← UΛ^{1/2}C   {W ∈ ℝ^{d×k}}
5:   X̂ ← arg max_{X ∈ X_k} Σ_{j=1}^{k} ⟨X_j, W_j⟩²   {Alg. 2}
6:   𝒞 ← 𝒞 ∪ {X̂}
7: end for
8: X ← arg max_{X ∈ 𝒞} Tr(XᵀAX)

Algorithm 1 is the first nontrivial algorithm that provably approximates the solution of the sparse PCA problem (2). According to Theorem 1, it achieves an objective value that lies within a multiplicative factor from the optimal, arbitrarily close to 1. Its complexity grows as a low-order polynomial in the dimension d of the input, but exponentially in the intrinsic dimension r. Note, however, that it can be substantially better compared to the O(d^{s·k}) brute-force approach that exhaustively considers all candidate supports for the k sparse components. The complexity of our algorithm follows from the cardinality of the net and the complexity of Algorithm 2, the subroutine that solves the constrained maximization (6). The latter is a key ingredient of our algorithm, and is discussed in detail in the next subsection. A formal proof of Theorem 1 is provided in Section 9.2.
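A compact sketch of Algorithm 1's main loop follows. One simplification is flagged explicitly: random unit vectors stand in for the deterministic ε/2-net, so the loop samples the same space but the guarantee of Theorem 1 does not formally apply to it. The subroutine compute_candidate, which solves (6), is sketched after Algorithm 2 in Section 2.1:

```python
import numpy as np

def sparse_pca_bipart(A, k, s, num_points=2000, seed=0):
    """Sketch of Algorithm 1 with a randomized stand-in for the eps/2-net."""
    rng = np.random.default_rng(seed)
    lam, U = np.linalg.eigh(A)
    keep = lam > 1e-10 * lam.max()      # truncate to the rank-r part
    lam, U = lam[keep], U[:, keep]
    B = U * np.sqrt(lam)                # B = U Lambda^{1/2}, shape d x r
    best_X, best_val = None, -np.inf
    for _ in range(num_points):
        C = rng.standard_normal((lam.size, k))
        C /= np.linalg.norm(C, axis=0)  # each column on the unit sphere
        X = compute_candidate(B @ C, s) # feasible point of X_k (Alg. 2)
        val = np.trace(X.T @ A @ X)
        if val > best_val:
            best_X, best_val = X, val
    return best_X, best_val
```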
2.1 Sparse Components via Bipartite Matchings
In the core of Alg. 1 lies a procedure that solves the constrained maximization (6) (Alg. 2). The latter breaks down the maximization into two stages. First, it identifies the support of the optimal solution X̂ by solving an instance of the maximum weight matching problem on a bipartite graph G. Then, it recovers the exact values of its nonzero entries based on the Cauchy–Schwarz inequality. In the sequel, we provide a brief description of Alg. 2, leading up to its guarantees in Lemma 2.1.

Let I_j ≜ supp(X̂_j) be the support of the jth column of X̂, j = 1, ..., k. The objective in (6) becomes

Σ_{j=1}^{k} ⟨X̂_j, W_j⟩² = Σ_{j=1}^{k} ( Σ_{i∈I_j} X̂_{ij} · W_{ij} )² ≤ Σ_{j=1}^{k} Σ_{i∈I_j} W_{ij}².   (7)

The inequality is due to Cauchy–Schwarz and the constraint ‖X_j‖₂ = 1 ∀ j ∈ {1, ..., k}. In fact, if an oracle reveals the supports I_j, j = 1, ..., k, the upper bound in (7) can always be achieved by setting the nonzero entries of X̂ as in Algorithm 2 (Line 6). Therefore, the key in solving (6) is determining the collection of supports that maximizes the right-hand side of (7).
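The tightness claim is easy to verify numerically: for any fixed support I, placing w_I/‖w_I‖₂ on I attains ⟨x, w⟩² = Σ_{i∈I} w_i². A toy check (our own):

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal(10)
I = [0, 3, 7]                       # an arbitrary support of size s = 3
x = np.zeros(10)
x[I] = w[I] / np.linalg.norm(w[I])  # the choice made in Line 6 of Alg. 2
assert np.isclose((x @ w) ** 2, np.sum(w[I] ** 2))
```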
By constraint, the sets I_j must be pairwise disjoint, each with cardinality s. Consider a weighted bipartite graph G = (U = {U_1, ..., U_k}, V, E) constructed as follows¹ (Fig. 1):

- V is a set of d vertices v_1, ..., v_d, corresponding to the d variables, i.e., the d rows of X̂.
- U is a set of k·s vertices, conceptually partitioned into k disjoint subsets U_1, ..., U_k, each of cardinality s. The jth subset, U_j, is associated with the support I_j; the s vertices u_α^{(j)}, α = 1, ..., s in U_j serve as placeholders for the variables/indices in I_j.
- Finally, the edge set is E = U × V. The edge weights are determined by the d × k matrix W in (6). In particular, the weight of edge (u_α^{(j)}, v_i) is equal to W_{ij}². Note that all vertices in U_j are effectively identical; they all share a common neighborhood and edge weights.

[Figure 1: The graph G generated by Alg. 2; it is used to determine the support of the solution X̂ in (6).]

¹ The construction is formally outlined in Algorithm 4 in Section 8.
Any feasible support {I_j}_{j=1}^{k} corresponds to a perfect matching in G and vice-versa. Recall that a matching is a subset of the edges containing no two edges incident to the same vertex, while a perfect matching, in the case of an unbalanced bipartite graph G = (U, V, E) with |U| ≤ |V|, is a matching that contains at least one incident edge for each vertex in U. Given a perfect matching M ⊆ E, the disjoint neighborhoods of the U_j's under M yield a support {I_j}_{j=1}^{k}. Conversely, any valid support yields a unique perfect matching in G (taking into account that all vertices in U_j are isomorphic). Moreover, due to the choice of weights in G, the right-hand side of (7) for a given support {I_j}_{j=1}^{k} is equal to the weight of the matching M in G induced by the former, i.e., Σ_{j=1}^{k} Σ_{i∈I_j} W_{ij}² = Σ_{(u,v)∈M} w(u, v). It follows that determining the support of the solution in (6) reduces to solving the maximum weight matching problem on the bipartite graph G.

Algorithm 2 Compute Candidate Solution
input: Real d × k matrix W
output: X̂ = arg max_{X ∈ X_k} Σ_{j=1}^{k} ⟨X_j, W_j⟩²
1: G({U_j}_{j=1}^{k}, V, E) ← GENBIGRAPH(W)   {Alg. 4}
2: M ← MAXWEIGHTMATCH(G)   {M ⊆ E}
3: X̂ ← 0_{d×k}
4: for j = 1, ..., k do
5:   I_j ← {i ∈ {1, ..., d} : (u, v_i) ∈ M, u ∈ U_j}
6:   [X̂_j]_{I_j} ← [W_j]_{I_j} / ‖[W_j]_{I_j}‖₂
7: end for

Algorithm 2 readily follows. Given W ∈ ℝ^{d×k}, the algorithm generates a weighted bipartite graph G as described, and computes its maximum weight matching. Based on the latter, it first recovers the desired support of X̂ (Line 5), and subsequently the exact values of its nonzero entries (Line 6). The running time is dominated by the computation of the matching, which can be done in O(|E||U| + |U|² log |U|) using a variant of the Hungarian algorithm [22]. Hence,

Lemma 2.1. For any W ∈ ℝ^{d×k}, Algorithm 2 computes the solution to (6), in time O(d·(s·k)²).

A more formal analysis and proof of Lemma 2.1 is available in Sec. 9.1. This completes the description of our sparse PCA algorithm (Alg. 1) and the proof sketch of Theorem 1.
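Since the s placeholder vertices inside each U_j are interchangeable, the maximum weight matching on G can be solved as a rectangular assignment problem: replicate the jth column of the profit matrix W² s times and give each slot a distinct variable. The sketch below uses SciPy's assignment solver; the reduction is our restatement of the construction and assumes k·s ≤ d:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def compute_candidate(W, s):
    """Sketch of Algorithm 2: solve (6) for a given d x k matrix W."""
    d, k = W.shape
    assert k * s <= d, "need k*s <= d for disjoint supports of size s"
    profits = np.repeat((W ** 2).T, s, axis=0)  # (k*s) x d profit matrix
    slot_idx, var_idx = linear_sum_assignment(profits, maximize=True)
    X = np.zeros((d, k))
    for slot, i in zip(slot_idx, var_idx):
        X[i, slot // s] = W[i, slot // s]       # slot // s = owning column j
    norms = np.linalg.norm(X, axis=0)
    X[:, norms > 0] /= norms[norms > 0]         # Line 6 of Algorithm 2
    return X
```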
3 Sparse PCA on Low-Dimensional Sketches
Algorithm 1 approximately solves the sparse PCA problem (2) on a d × d rank-r PSD matrix A in time that grows as a low-order polynomial in the ambient dimension d, but depends exponentially on r. This dependence can be prohibitive in practice. To mitigate its effect, we can apply our sparse PCA algorithm on a low-rank sketch of A. Intuitively, the quality of the extracted components should depend on how well that low-rank surrogate approximates the original input.

Algorithm 3 Sparse PCA on Low Dim. Sketch
input: Real n × d S, r ∈ Z+, ε ∈ (0, 1), k ∈ Z+.
output: X_(r) ∈ X_k.   {Thm. 2}
1: S̄ ← SKETCH(S, r)
2: Ā ← S̄ᵀS̄
3: X_(r) ← ALGORITHM 1(Ā, ε, k).

More formally, let S be the real n × d data matrix representing n (potentially centered) datapoints in d variables, and A the corresponding d × d covariance matrix. Further, let S̄ be a low-dimensional sketch of the original data; an n × d matrix whose rows lie in an r-dimensional subspace, with r being an accuracy parameter. Such a sketch can be obtained in several ways, including for example exact or approximate SVD, or online sketching methods [23]. Finally, let Ā = (1/n)·S̄ᵀS̄ be the covariance matrix of the sketched data. Then, instead of A, we can approximately solve the sparse PCA problem by applying Algorithm 1 on the low-rank surrogate Ā. The above are formally outlined in Algorithm 3. We note that the covariance matrix Ā does not need to be explicitly computed; Algorithm 1 can operate directly on the (sketched) input data matrix.

Theorem 2. For any n × d input data matrix S, with corresponding empirical covariance matrix A = (1/n)·SᵀS, any desired number of components k, and accuracy parameters ε ∈ (0, 1) and r, Algorithm 3 outputs X_(r) ∈ X_k such that

Tr(X_(r)ᵀ A X_(r)) ≥ (1 − ε) · Tr(X_*ᵀAX_*) − 2 · k · ‖A − Ā‖₂,

where X_* ≜ arg max_{X ∈ X_k} Tr(XᵀAX), in time T_SKETCH(r) + T_SVD(r) + O((4/ε)^{r·k} · d · (s·k)²).
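A sketch of Algorithm 3 follows, with an exact rank-r truncated SVD playing the role of the SKETCH step (any other sketching method could be substituted) and reusing the sparse_pca_bipart sketch from Section 2:

```python
import numpy as np

def sparse_pca_sketched(S, r, k, s, **kwargs):
    """Sketch of Algorithm 3: run Alg. 1 on a rank-r surrogate of the data."""
    Us, sv, Vt = np.linalg.svd(S, full_matrices=False)
    S_r = (Us[:, :r] * sv[:r]) @ Vt[:r]      # rank-r sketch of S
    A_r = S_r.T @ S_r / S.shape[0]           # covariance of the sketched data
    return sparse_pca_bipart(A_r, k, s, **kwargs)
```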
The error term ‖A − Ā‖₂, and in turn the tightness of the approximation guarantees, hinges on the quality of the sketch. Roughly, higher values of the parameter r should allow for a sketch that more accurately represents the original data, leading to tighter guarantees. That is the case, for example, when the sketch is obtained through exact SVD. In that sense, Theorem 2 establishes a natural trade-off between the running time of Algorithm 3 and the quality of the approximation guarantees. (See [24] for additional results.) A formal proof of Theorem 2 is provided in Appendix Section 9.3.
4 Related Work
A significant volume of work has focused on the single-component sparse PCA problem (1); we only scratch the surface here and refer the reader to the works below and citations therein. Representative examples range from early heuristics in [7], to the LASSO-based techniques in [8], the elastic-net ℓ1-regression in [5], ℓ1- and ℓ0-regularized optimization methods such as GPower in [10], a greedy branch-and-bound technique in [11], and semidefinite programming approaches [3, 12, 13]. Many focus on a statistical analysis that pertains to specific data models and the recovery of a 'true' sparse component. In practice, the most competitive results in terms of the maximization in (1) seem to be achieved by (i) the simple and efficient truncated power (TPower) iteration of [14], (ii) the approach of [15] stemming from an expectation-maximization (EM) formulation, and (iii) the SpanSPCA framework of [16], which solves the sparse PCA problem through low-rank approximations based on [17].

We are not aware of any algorithm that explicitly addresses the multi-component sparse PCA problem (2). Multiple components can be extracted by repeatedly solving (1) with one of the aforementioned methods; to ensure disjoint supports, variables 'selected' by a component are removed from the dataset. However, this greedy approach can result in a highly suboptimal objective value (see Sec. 7). More generally, there has been relatively limited work on the estimation of principal subspaces or multiple components under sparsity constraints. Non-deflation-based algorithms include extensions of the diagonal [25] and iterative thresholding [26] approaches, while [27] and [28] propose methods that rely on the 'row sparsity for subspaces' assumption of [19]. These methods yield components supported on a common set of variables, and hence solve a problem different from (2). In [20], the authors discuss the multi-component sparse PCA problem, propose an alternative objective function, and obtain interesting theoretical guarantees for that problem. In [29], a structured variant of sparse PCA is considered, where higher-order structure is encoded by an atomic norm regularization. Finally, [30] develops a framework for sparse matrix factorization problems based on an atomic norm. Their framework captures sparse PCA (although not explicitly the constraint of disjoint supports), but the resulting optimization problem, albeit convex, is NP-hard.
5 Experiments
We evaluate our algorithm on a series of real datasets, and compare it to deflation-based approaches
for sparse PCA using TPower [14], EM [15], and SpanSPCA [16]. The latter are representative
of the state of the art for the single-component sparse PCA problem (1). Multiple components are
computed one by one. To ensure disjoint supports, the deflation step effectively amounts to removing
from the dataset all variables used by previously extracted components. For algorithms that are
randomly initialized, we depict best results over multiple random restarts. Additional experimental
results are listed in Section 11 of the appendix.
Our experiments are conducted in a Matlab environment. Due to its nature, our algorithm is easily
parallelizable; its prototypical implementation utilizes the Parallel Pool Matlab feature to exploit
multicore (or distributed cluster) capabilities. Recall that our algorithm operates on a low-rank approximation of the input data. Unless otherwise specified, it is configured for a rank-4 approximation
obtained via truncated SVD. Finally, we note that our algorithm is slower than the deflation-based
methods. We set a barrier on the execution time of our algorithm at the cost of the theoretical approximation guarantees; the algorithm returns the best result at the time of termination. This 'early termination' can only hurt the performance of our algorithm.
Leukemia Dataset. We evaluate our algorithm on the Leukemia dataset [31]. The dataset comprises 72 samples, each consisting of expression values for 12582 probe sets. We extract k = 5
sparse components, each active on s = 50 features. In Fig. 2(a), we plot the cumulative explained
variance versus the number of components. Deflation-based approaches are greedy: the leading
components capture high values of variance, but subsequent ones contribute less. On the contrary, our algorithm jointly optimizes the k = 5 components and achieves higher total cumulative variance; one cannot identify a single top component. We repeat the experiment for multiple values of k. Fig. 2(b) depicts the total cumulative variance captured by each method, for each value of k.

[Figure 2: Cumulative variance captured by k s-sparse extracted components on the Leukemia dataset [31], with s = 50 nonzero entries per component. (a) Cumulative variance vs. the number of components, for k = 5: deflation-based approaches are greedy; the first components capture high variance, but subsequent ones contribute less, while our algorithm jointly optimizes the k components and achieves a higher objective (+8.82% for SPCABiPart). (b) Total cumulative variance for k = 2, ..., 10. Curves: TPower, EM-SPCA, SpanSPCA, SPCABiPart.]
Additional Datasets. We repeat the experiment on multiple datasets, arbitrarily selected from [31].
Table 1 lists the total cumulative variance captured by k = 5 components, each with s = 40 nonzero
entries, extracted using the four methods. Our algorithm achieves the highest values in most cases.
Bag of Words (BoW) Dataset. [31] This is a collection of text corpora stored under the 'bag-of-words' model. For each text corpus, a vocabulary of d words is extracted upon tokenization and the removal of stopwords and of words appearing fewer than ten times in total. Each document is then represented as a vector in that d-dimensional space, with the ith entry corresponding to the number of appearances of the ith vocabulary entry in the document.

We solve the sparse PCA problem (2) on the word-by-word co-occurrence matrix, and extract k = 8 sparse components, each with cardinality s = 10. We note that the co-occurrence matrix is not explicitly constructed; our algorithm can operate directly on the input word-by-document matrix. Table 2 lists the variance captured by each method; our algorithm consistently outperforms the other approaches.

Finally, note that here each sparse component effectively selects a small set of words. In turn, the k extracted components can be interpreted as a set of well-separated topics. In Table 3, we list the topics extracted from the NY Times corpus (part of the Bag of Words dataset). The corpus consists of 3 × 10⁵ news articles and a vocabulary of d = 102660 words.
Table 1: Total cumulative variance captured by k = 5 40-sparse extracted components on various datasets [31]. For each dataset, we list the size (#samples × #variables) and the value of variance captured by each method. Our algorithm operates on a rank-4 sketch in all cases.

Dataset (size)                 TPower     EM sPCA    SpanSPCA   SPCABiPart
AMZN COM REV (1500×10000)      7.31e+03   7.32e+03   7.31e+03   7.79e+03
ARCENCE TRAIN (100×10000)      1.08e+07   1.02e+07   1.08e+07   1.10e+07
CBCL FACE TRAIN (2429×361)     5.06e+00   5.18e+00   5.23e+00   5.29e+00
ISOLET-5 (1559×617)            3.31e+01   3.43e+01   3.34e+01   3.51e+01
LEUKEMIA (72×12582)            5.00e+09   5.03e+09   4.84e+09   5.37e+09
PEMS TRAIN (267×138672)        3.94e+00   3.58e+00   3.89e+00   3.75e+00
MFEAT PIX (2000×240)           5.00e+02   5.27e+02   5.08e+02   5.47e+02
Table 2: Total variance captured by k = 8 extracted components, each with s = 15 nonzero entries, on the Bag of Words dataset [31]. For each corpus, we list the size (#documents × #vocabulary-size) and the explained variance. Our algorithm operates on a rank-5 sketch in all cases.

Corpus (size)                  TPower     EM sPCA    SpanSPCA   SPCABiPart
BOW:NIPS (1500×12419)          2.51e+03   2.57e+03   2.53e+03   3.34e+03 (+29.98%)
BOW:KOS (3430×6906)            4.14e+01   4.24e+01   4.21e+01   6.14e+01 (+44.57%)
BOW:ENRON (39861×28102)        2.11e+02   2.00e+02   2.09e+02   2.38e+02 (+12.90%)
BOW:NYTIMES (300000×102660)    4.81e+01   -          4.81e+01   5.31e+01 (+10.38%)
6 Conclusions
We considered the sparse PCA problem for multiple components with disjoint supports. Existing
methods for the single component problem can be used along with an appropriate deflation step to
compute multiple components one by one, leading to potentially suboptimal results. We presented
a novel algorithm for jointly computing multiple sparse and disjoint components with provable approximation guarantees. Our algorithm is combinatorial and exploits interesting connections between the sparse PCA and the bipartite maximum weight matching problems. Its running time grows
as a low-order polynomial in the ambient dimension of the input data, but depends exponentially on
its rank. To alleviate this dependency, we can apply the algorithm on a low-dimensional sketch of
the input, at the cost of an additional error in our theoretical approximation guarantees. Empirical
evaluation showed that in many cases our algorithm outperforms deflation-based approaches.
Acknowledgments
DP is generously supported by NSF awards CCF-1217058 and CCF-1116404 and MURI AFOSR
grant 556016. This research has been supported by NSF Grants CCF 1344179, 1344364, 1407278,
1422549 and ARO YIP W911NF-14-1-0258.
References
[1] A. Majumdar, "Image compression by sparse PCA coding in curvelet domain," Signal, Image and Video Processing, vol. 3, no. 1, pp. 27–34, 2009.
[2] Z. Wang, F. Han, and H. Liu, "Sparse principal component analysis for high dimensional multivariate time series," in Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, pp. 48–56, 2013.
[3] A. d'Aspremont, L. El Ghaoui, M. Jordan, and G. Lanckriet, "A direct formulation for sparse PCA using semidefinite programming," SIAM Review, vol. 49, no. 3, pp. 434–448, 2007.
Table 3: BOW:NYTIMES dataset [31]. For each of the k = 8 extracted components (topics), we list the words corresponding to its s = 10 nonzero entries; words corresponding to higher-magnitude entries appear earlier in each list.

Topic 1: percent, million, money, high, program, number, need, part, problem, com
Topic 2: zzz_united_states, zzz_u_s, zzz_american, attack, military, palestinian, war, administration, zzz_white_house, games
Topic 3: zzz_bush, official, government, president, group, leader, country, political, american, law
Topic 4: company, companies, market, stock, business, billion, analyst, firm, sales, cost
Topic 5: team, game, season, player, play, point, run, right, home, won
Topic 6: cup, minutes, add, tablespoon, oil, teaspoon, water, pepper, large, food
Topic 7: school, student, children, women, show, book, family, look, hour, small
Topic 8: zzz_al_gore, zzz_george_bush, campaign, election, plan, tax, public, zzz_washington, member, nation
[4] R. Jiang, H. Fei, and J. Huan, "Anomaly localization for network data streams with graph joint sparse PCA," in Proceedings of the 17th ACM SIGKDD, pp. 886–894, ACM, 2011.
[5] H. Zou, T. Hastie, and R. Tibshirani, "Sparse principal component analysis," Journal of Computational and Graphical Statistics, vol. 15, no. 2, pp. 265–286, 2006.
[6] H. Kaiser, "The varimax criterion for analytic rotation in factor analysis," Psychometrika, vol. 23, no. 3, pp. 187–200, 1958.
[7] I. Jolliffe, "Rotation of principal components: choice of normalization constraints," Journal of Applied Statistics, vol. 22, no. 1, pp. 29–35, 1995.
[8] I. Jolliffe, N. Trendafilov, and M. Uddin, "A modified principal component technique based on the LASSO," Journal of Computational and Graphical Statistics, vol. 12, no. 3, pp. 531–547, 2003.
[9] C. Boutsidis, P. Drineas, and M. Magdon-Ismail, "Sparse features for PCA-like linear regression," in Advances in Neural Information Processing Systems, pp. 2285–2293, 2011.
[10] M. Journée, Y. Nesterov, P. Richtárik, and R. Sepulchre, "Generalized power method for sparse principal component analysis," The Journal of Machine Learning Research, vol. 11, pp. 517–553, 2010.
[11] B. Moghaddam, Y. Weiss, and S. Avidan, "Spectral bounds for sparse PCA: Exact and greedy algorithms," NIPS, vol. 18, p. 915, 2006.
[12] A. d'Aspremont, F. Bach, and L. E. Ghaoui, "Optimal solutions for sparse principal component analysis," The Journal of Machine Learning Research, vol. 9, pp. 1269–1294, 2008.
[13] Y. Zhang, A. d'Aspremont, and L. Ghaoui, "Sparse PCA: Convex relaxations, algorithms and applications," Handbook on Semidefinite, Conic and Polynomial Optimization, pp. 915–940, 2012.
[14] X.-T. Yuan and T. Zhang, "Truncated power method for sparse eigenvalue problems," The Journal of Machine Learning Research, vol. 14, no. 1, pp. 899–925, 2013.
[15] C. D. Sigg and J. M. Buhmann, "Expectation-maximization for sparse and non-negative PCA," in Proceedings of the 25th International Conference on Machine Learning, ICML '08, (New York, NY, USA), pp. 960–967, ACM, 2008.
[16] D. Papailiopoulos, A. Dimakis, and S. Korokythakis, "Sparse PCA through low-rank approximations," in Proceedings of The 30th International Conference on Machine Learning, pp. 747–755, 2013.
[17] M. Asteris, D. S. Papailiopoulos, and G. N. Karystinos, "The sparse principal component of a constant-rank matrix," Information Theory, IEEE Transactions on, vol. 60, pp. 2281–2290, April 2014.
[18] L. Mackey, "Deflation methods for sparse PCA," NIPS, vol. 21, pp. 1017–1024, 2009.
[19] V. Vu and J. Lei, "Minimax rates of estimation for sparse PCA in high dimensions," in International Conference on Artificial Intelligence and Statistics, pp. 1278–1286, 2012.
[20] M. Magdon-Ismail and C. Boutsidis, "Optimal sparse linear auto-encoders and sparse PCA," arXiv preprint arXiv:1502.06626, 2015.
[21] M. Magdon-Ismail, "NP-hardness and inapproximability of sparse PCA," CoRR, vol. abs/1502.05675, 2015.
[22] L. Ramshaw and R. E. Tarjan, "On minimum-cost assignments in unbalanced bipartite graphs," HP Labs, Palo Alto, CA, USA, Tech. Rep. HPL-2012-40R1, 2012.
[23] N. Halko, P.-G. Martinsson, and J. A. Tropp, "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions," SIAM Review, vol. 53, no. 2, pp. 217–288, 2011.
[24] M. Asteris, D. Papailiopoulos, A. Kyrillidis, and A. G. Dimakis, "Sparse PCA via bipartite matchings," arXiv preprint arXiv:1508.00625, 2015.
[25] I. M. Johnstone and A. Y. Lu, "On consistency and sparsity for principal components analysis in high dimensions," Journal of the American Statistical Association, vol. 104, no. 486, 2009.
[26] Z. Ma, "Sparse principal component analysis and iterative thresholding," The Annals of Statistics, vol. 41, no. 2, pp. 772–801, 2013.
[27] V. Q. Vu, J. Cho, J. Lei, and K. Rohe, "Fantope projection and selection: A near-optimal convex relaxation of sparse PCA," in NIPS, pp. 2670–2678, 2013.
[28] Z. Wang, H. Lu, and H. Liu, "Nonconvex statistical optimization: minimax-optimal sparse PCA in polynomial time," arXiv preprint arXiv:1408.5352, 2014.
[29] R. Jenatton, G. Obozinski, and F. Bach, "Structured sparse principal component analysis," in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, AISTATS, pp. 366–373, 2010.
[30] E. Richard, G. R. Obozinski, and J.-P. Vert, "Tight convex relaxations for sparse matrix factorization," in Advances in Neural Information Processing Systems, pp. 3284–3292, 2014.
[31] M. Lichman, "UCI machine learning repository," 2013.
9
| 5901 |@word repository:1 compression:1 polynomial:8 norm:2 termination:2 seek:1 covariance:6 decomposition:2 sepulchre:1 liu:2 contains:2 series:2 united:1 lichman:1 denoting:1 document:5 outperforms:4 existing:2 ketch:1 ka:2 discretization:1 com:1 written:1 must:1 readily:1 stemming:2 subsequent:3 recasting:2 analytic:1 remove:1 plot:1 depict:1 v:1 mackey:1 greedy:8 prohibitive:1 selected:2 fewer:1 intelligence:3 xk:14 ith:4 ial:1 core:1 alexandros:1 characterization:1 iterates:1 contribute:2 attack:1 zhang:2 stopwords:1 along:1 constructed:2 direct:1 become:1 yuan:1 consists:1 dimen:1 pairwise:1 hardness:1 market:1 roughly:1 examine:1 multi:5 mzn:1 gpower:1 company:2 food:1 election:1 cardinality:4 becomes:1 provided:2 psychometrika:1 moreover:1 maximizes:3 alto:1 follows1:1 interpreted:1 substantially:1 eigenvector:1 dimakis:4 finding:2 guarantee:11 berkeley:2 mitigate:1 fantope:1 nation:1 gore:1 exactly:1 k2:3 schwartz:1 brute:1 unit:2 uk:3 grant:2 sale:1 appear:1 positive:1 despite:1 jiang:1 approximately:6 therein:1 collect:1 conversely:1 limited:2 campaign:1 factorization:1 range:1 unique:1 acknowledgment:1 atomic:2 practice:2 vu:2 procedure:6 asteris:3 empirical:3 nnz:2 maxx:4 significantly:1 vert:1 matching:14 projection:1 word:13 onto:1 close:6 cannot:1 selection:1 applying:1 conventional:1 expl:2 nron:1 convex:5 focused:3 recovery:1 eukemia:1 spanned:1 datapoints:1 hurt:1 president:1 papailiopoulos:4 annals:1 construction:2 play:2 target:1 alleviating:1 anomaly:2 exact:5 programming:2 us:1 lanckriet:1 muri:1 role:1 preprint:3 wang:2 capture:6 imes:2 wj:6 news:1 richt:1 trade:1 removed:1 highest:1 substantial:1 environment:1 complexity:6 cooccurrence:1 nesterov:1 exhaustively:3 ultimately:1 depend:1 solving:11 colinear:1 tight:1 serve:1 upon:1 bipartite:14 localization:1 matchings:3 drineas:1 kxj:3 joint:2 easily:1 k0:1 stock:1 various:3 represented:1 separated:1 describe:2 artificial:3 majumdar:1 neighborhood:2 firm:1 whose:2 heuristic:1 widely:1 solve:4 encoded:1 tightness:1 otherwise:1 statistic:8 jointly:8 seemingly:1 online:1 eigenvalue:3 rr:2 net:9 propose:2 aro:1 deflating:2 uci:1 ramification:1 bow:1 tax:1 sixteenth:1 ismail:3 description:2 billion:1 cluster:1 r1:1 perfect:3 depending:1 develop:1 multicore:1 ij:12 school:1 solves:5 auxiliary:2 hungarian:1 come:1 radius:1 subsequently:1 centered:2 public:1 government:1 generalization:1 alleviate:2 tighter:1 extension:1 con5:1 considered:2 cbcl:1 achieves:5 early:2 estimation:2 bag:4 combinatorial:3 palo:1 utexas:3 schwarz:2 largest:1 vice:1 wi12:2 establishes:1 weighted:2 generously:1 always:2 arik:1 modified:1 rather:1 season:1 sion:2 conjunction:1 ax:15 focus:2 improvement:1 consistently:1 rank:20 tech:1 political:1 sigkdd:1 attains:1 sense:1 dim:1 el:1 typically:1 unlikely:1 journ:1 wij:2 subroutine:1 interested:1 selects:1 provably:3 sketched:2 arg:8 fidelity:1 aforementioned:2 among:1 development:1 plan:1 art:2 constrained:4 ak2:2 yip:1 tokenization:1 equal:3 construct:1 aware:1 washington:1 identical:1 represents:1 look:1 icml:1 leukemia:3 uddin:1 np:3 develops:1 richard:1 randomly:1 consisting:1 psd:4 ab:1 detection:1 highly:2 evaluation:1 introduces:1 semidefinite:4 behind:2 ambient:7 moghaddam:1 edge:7 huan:1 orthogonal:1 raph:1 unless:1 initialized:1 desired:8 re:1 theoretical:6 instance:6 column:4 military:1 w911nf:1 assignment:1 maximization:13 cost:5 vertex:7 entry:12 subset:3 conducted:1 stored:1 dependency:1 encoders:1 cho:1 fundamental:1 international:5 siam:2 sequel:4 probabilistic:1 off:1 pool:1 
sketching:1 fect:1 central:1 woman:1 book:1 american:3 leading:6 return:1 supp:3 account:1 curvelet:1 sec:3 coding:1 student:1 configured:1 explicitly:4 depends:3 vi:3 stream:1 multiplicative:3 later:1 break:1 lab:1 competitive:1 recover:2 complicated:1 parallel:1 capability:1 contribution:1 om:1 accuracy:4 variance:21 efficiently:1 yield:5 wisdom:1 identify:1 conceptually:1 accurately:1 lu:2 randomness:1 parallelizable:1 against:1 boutsidis:2 pp:23 naturally:1 proof:4 attributed:1 recovers:2 associated:1 dataset:11 recall:2 subsection:1 dimensionality:1 cj:4 jenatton:1 higher:7 restarts:1 wei:1 april:1 formulation:3 done:1 furthermore:1 stage:1 hpl:1 d:1 sketch:15 hand:2 tropp:1 overlapping:2 eig:1 quality:4 lei:2 grows:6 atch:1 oil:1 effect:1 usa:2 true:2 lgorithm:1 ccf:3 former:2 hence:3 equality:1 regularization:1 nonzero:8 white:1 game:2 uniquely:1 won:1 criterion:1 generalized:1 demonstrate:1 percent:1 ranging:1 variational:1 wise:1 novel:3 tpower:6 image:2 common:2 rotation:2 empirically:2 exponentially:4 volume:2 million:1 discussed:1 martinsson:1 approximates:2 association:1 significant:2 refer:1 zzz:8 versa:1 cup:1 rd:7 outlined:3 consistency:1 hp:1 ramshaw:1 han:1 surface:1 money:1 add:1 multivariate:1 showed:1 perspective:1 optimizes:4 nonconvex:1 inequality:3 rep:1 arbitrarily:5 palestinian:1 captured:7 minimum:1 additional:5 george:1 kxk0:1 determine:1 maximize:2 signal:1 ii:1 branch:2 multiple:18 reduces:2 anastasios:2 ing:2 match:1 offer:1 sphere:2 bach:2 award:1 impact:1 variant:3 regression:2 ko:1 avidan:1 vision:1 expectation:2 arxiv:6 iteration:1 sometimes:1 normalization:1 achieved:5 fine:1 thirteenth:1 completes:1 country:1 wij2:2 extra:1 operate:2 sr:6 induced:1 member:1 contrary:3 seem:1 jordan:1 ee:1 near:1 spca:4 iii:1 xj:4 pepper:1 hastie:1 identified:1 suboptimal:4 lasso:2 idea:1 kyrillidis:2 texas:3 administration:1 expression:1 pca:56 war:1 penalty:1 york:1 repeatedly:4 matlab:2 useful:1 generally:2 clear:1 eigenvectors:2 listed:1 amount:1 ten:1 rcence:1 reduced:1 kck2:1 nsf:2 disjoint:16 per:2 tibshirani:1 vol:16 tsvd:2 group:1 key:3 four:1 v1:2 graph:8 relaxation:3 enforced:1 run:1 family:1 reader:1 utilizes:1 home:1 appendix:3 bound:3 quadratic:2 oracle:1 constraint:6 fei:1 dominated:1 generates:2 u1:4 sponds:1 relatively:1 structured:2 according:1 em:7 partitioned:1 sigg:1 projecting:1 explained:5 intuitively:1 ghaoui:3 computationally:1 previously:2 discus:3 turn:5 deflation:14 jolliffe:2 end:2 available:1 magdon:3 eight:1 apply:2 probe:1 spectral:3 appropriate:2 appearing:1 alternative:3 slower:1 original:6 denotes:1 clustering:1 running:3 ensure:2 include:1 top:1 rain:3 hinge:1 graphical:2 placeholder:1 exploit:2 uj:7 objective:12 kaiser:1 ofwords:1 dependence:6 diagonal:3 surrogate:3 ow:5 kth:1 subspace:6 dp:1 vd:2 topic:12 cauchy:3 considers:1 water:1 provable:1 analyst:1 megasthenis:1 index:1 unfortunately:1 potentially:3 trace:1 negative:1 implementation:1 upper:1 observation:1 datasets:6 finite:2 varimax:1 truncated:4 team:1 thm:1 tarjan:1 tive:1 pair:1 specified:1 connection:1 california:1 established:1 hour:1 nip:4 address:1 able:1 dimitris:1 ev:1 sparsity:6 program:1 interpretability:1 max:9 including:1 video:1 power:4 natural:1 force:1 regularized:1 rely:1 business:1 buhmann:1 representing:2 imates:1 minimax:2 brief:1 identifies:1 conic:1 aspremont:3 extract:3 auto:1 kj:4 text:2 prior:1 literature:2 review:2 removal:1 determining:2 afosr:1 law:1 interesting:3 prototypical:1 versus:1 ingredient:1 incident:2 article:1 thresholding:2 share:1 
austin:4 row:4 course:1 supported:4 repeat:2 jth:3 formal:3 side:2 allow:1 johnstone:1 taking:1 barrier:1 face:1 sparse:83 distributed:1 dimension:12 vocabulary:4 valid:1 cumulative:7 computes:5 author:2 collection:3 projected:1 coincide:1 taining:1 transaction:1 approximate:4 citation:1 feat:1 tains:1 active:1 reveals:1 handbook:1 corpus:5 xi:1 leader:1 iterative:2 table:7 nature:1 ca:1 decoupling:2 elastic:1 alg:8 zou:1 constructing:1 domain:3 official:1 aistats:1 pk:2 s2:2 motivation:1 child:1 fig:5 representative:2 en:1 depicts:3 ny:2 comprises:1 exponential:1 lie:5 kxk2:1 candidate:4 house:1 ix:1 bij:1 theorem:9 down:1 removing:1 minute:1 specific:1 rohe:1 list:6 decay:1 intractable:1 intrinsic:4 exists:1 albeit:1 sequential:1 effectively:4 corr:1 magnitude:1 execution:1 cartesian:1 halko:1 appearance:1 ux:1 partially:1 inapproximability:1 trendafilov:1 determines:1 extracted:14 acm:3 ma:1 obozinski:2 cumul:3 towards:1 feasible:3 hard:2 determined:1 operates:5 reducing:1 principal:15 lemma:3 total:7 isomorphic:1 svd:3 experimental:1 unfavorable:1 player:1 formally:4 support:23 latter:5 unbalanced:2 pertains:1 bush:2 scarcity:1 evaluate:4 scratch:1 ex:1 |
5,416 | 5,902 | Weighted Theta Functions and Embeddings with
Applications to Max-Cut, Clustering and
Summarization
Fredrik D. Johansson
Computer Science & Engineering
Chalmers University of Technology
Göteborg, SE-412 96, Sweden
frejohk@chalmers.se
Ankani Chattoraj∗
Brain & Cognitive Sciences
University of Rochester
Rochester, NY 14627-0268, USA
achattor@ur.rochester.edu
Chiranjib Bhattacharyya
Computer Science and Automation
Indian Institute of Science
Bangalore 560012, Karnataka, India
chiru@csa.iisc.ernet.in
Devdatt Dubhashi
Computer Science & Engineering
Chalmers University of Technology
Göteborg, SE-412 96, Sweden
dubhashi@chalmers.se
∗ This work was performed when the author was affiliated with CSE at Chalmers University of Technology.
Abstract
We introduce a unifying generalization of the Lovász theta function, and the associated geometric embedding, for graphs with weights on both nodes and edges. We show how it can be computed exactly by semidefinite programming, and how to approximate it using SVM computations. We show how the theta function can be interpreted as a measure of diversity in graphs, and use this idea, and the graph embedding, in algorithms for Max-Cut, correlation clustering and document summarization, all of which are well represented as problems on weighted graphs.
1 Introduction
Embedding structured data, such as graphs, in geometric spaces, is a central problem in machine learning. In many applications, graphs are attributed with weights on the nodes and edges, information that needs to be well represented by the embedding. Lovász introduced a graph embedding together with the famous theta function in the seminal paper [19], giving his celebrated solution to the problem of computing the Shannon capacity of the pentagon. Indeed, Lovász's embedding is a very elegant and powerful representation of unweighted graphs, that has come to play a central role in information theory, graph theory and combinatorial optimization [10, 8]. However, despite there being at least eight different formulations of θ(G) for unweighted graphs, see for example [20], there does not appear to be a version that applies to graphs with weights on the edges. This is surprising, as it has a natural interpretation in the information theoretic problem of the original definition [19].
A version of the Lovász number for edge-weighted graphs, and a corresponding geometrical representation, could open the way to new approaches to learning problems on data represented as similarity matrices. Here we propose such a generalization for graphs with weights on both nodes and edges, by combining a few key observations. Recently, Jethava et al. [14] discovered an interesting connection between the original theta function and a central problem in machine learning, namely the one-class Support Vector Machine (SVM) formulation [14]. This kernel-based method gives yet another equivalent characterization of the Lovász number. Crucially, it is easily modified to yield an equivalent characterization of the closely related Delsarte version of the Lovász number
introduced by Schrijver [24], which is more flexible and often more convenient to work with. Using this kernel characterization of the Delsarte version of the Lovász number, we define a theta function and embedding of weighted graphs, suitable for learning with data represented as similarity matrices.
The original theta function is limited to applications on small graphs, because of its formulation as a semidefinite program (SDP). In [14], Jethava et al. showed that their kernel characterization can be used to compute a number and an embedding of a graph that are often good approximations to the theta function and embedding, and that can be computed fast, scaling to very large graphs. Here we give the analogous approximate method for weighted graphs. We use this approximation to solve the weighted maximum cut problem faster than the classical SDP relaxation.
Finally, we show that our edge-weighted theta function has a natural interpretation as a measure of diversity in graphs. We use this intuition to define a centroid-based correlation clustering algorithm that automatically chooses the number of clusters and initializes the centroids. We also show how to use the support vectors, computed in the kernel characterization with both node and edge weights, to perform extractive document summarization.
To summarize our main contributions:
• We introduce a unifying generalization of the famous Lovász number applicable to graphs with weights on both nodes and edges.
• We show that via our characterization, we can compute a good approximation to our weighted theta function and the corresponding embeddings using SVM computations.
• We show that the weighted version of the Lovász number can be interpreted as a measure of diversity in graphs, and we use this to define a correlation clustering algorithm dubbed θ-means that automatically a) chooses the number of clusters, and b) initializes centroids.
• We apply the embeddings corresponding to the weighted Lovász numbers to solve weighted maximum cut problems faster than the classical SDP methods, with similar accuracy.
• We apply the weighted kernel characterization of the theta function to document summarization, exploiting both node and edge weights.
2 Extensions of Lovász and Delsarte numbers for weighted graphs
Background  Consider embeddings of undirected graphs G = (V, E). Lovász introduced an elegant embedding, implicit in the definition of his celebrated theta function θ(G) [19], famously an upper bound on the Shannon capacity and sandwiched between the independence number and the chromatic number of the complement graph.

  θ(G) = min_{{u_i}, c} max_i 1/(c^⊤u_i)²,  u_i^⊤u_j = 0 ∀(i,j) ∉ E,  ‖u_i‖ = ‖c‖ = 1.   (1)

The vectors {u_i}, c are so-called orthonormal representations or labellings, the dimension of which is determined by the optimization. We refer to both {u_i}, and the matrix U = [u_1, ..., u_n], as an embedding of G, and use the two notations interchangeably. Jethava et al. [14] introduced a characterization of the Lovász θ function that established a close connection with the one-class support vector machine [23]. They showed that, for an unweighted graph G = (V, E),
  θ(G) = min_{K ∈ K(G)} ω(K), where                                         (2)
  K(G) := {K ⪰ 0 | K_ii = 1, K_ij = 0, ∀(i,j) ∉ E},                          (3)
  ω(K) := max_{α_i ≥ 0} f(α; K),  f(α; K) := 2 ∑_i α_i − ∑_{i,j} K_ij α_i α_j   (4)

is the dual formulation of the one-class SVM problem, see [16]. Note that the conditions on K only refer to the non-edges of G. In the sequel, ω(K) and f(α; K) always refer to the definitions in (4).
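To make (4) concrete, the following is a minimal sketch, assuming NumPy, of evaluating ω(K) for a fixed K by projected gradient ascent on the one-class SVM dual. It is our own illustration, not the authors' code; the iteration budget is an arbitrary assumption, and in practice a dedicated QP solver (cf. [12]) would be used.

import numpy as np

def omega(K, n_iter=2000):
    # Approximate omega(K) = max_{alpha >= 0} 2*sum(alpha) - alpha^T K alpha, eq. (4),
    # by projected gradient ascent (illustrative only; a QP solver is preferable).
    n = K.shape[0]
    lr = 1.0 / (2.0 * np.linalg.norm(K, 2) + 1e-12)  # step size from the curvature bound
    alpha = np.ones(n)
    for _ in range(n_iter):
        grad = 2.0 - 2.0 * K @ alpha                 # gradient of f(alpha; K)
        alpha = np.maximum(alpha + lr * grad, 0.0)   # project onto alpha >= 0
    return 2.0 * alpha.sum() - alpha @ K @ alpha, alpha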
2.1 New weighted versions of θ(G)
A key observation in proving (2) is that the set of valid orthonormal representations is equivalent to the set of kernels K. This equivalence can be preserved in a natural way when generalizing the definition to weighted graphs: any constraint on the inner product u_i^⊤u_j may be represented as constraints on the elements K_ij of the kernel matrix.
To define weighted extensions of the theta function, we need to first pass to the closely related Delsarte version of the Lovász number introduced by Schrijver [24]. In the Delsarte version, the orthogonality constraint for non-edges is relaxed to u_i^⊤u_j ≤ 0, (i,j) ∉ E. With reference to the formulation (2) it is easy to observe that the Delsarte version is given by

  θ₁(G) = min_{K ∈ K₁(G)} ω(K), where K₁(G) := {K ⪰ 0 | K_ii = 1, K_ij ≤ 0, ∀(i,j) ∉ E}   (5)

In other words, the Lovász number corresponds to orthogonal labellings of G with orthogonal vectors on the unit sphere assigned to non-adjacent nodes, whereas the Delsarte version corresponds to obtuse labellings, i.e. the vectors corresponding to non-adjacent nodes are vectors on the unit sphere meeting at obtuse angles. In both cases, the corresponding number is essentially the half-angle of the smallest spherical cap containing all the vectors assigned to the nodes. Comparing (2) and (5) it follows that θ₁(G) ≤ θ(G). In the sequel, we will use the Delsarte version and obtuse labellings to define weighted generalizations of the theta function.
We observe in passing, that for any K ∈ K₁, and for any independent set I in the graph, taking α_i = 1 if i ∈ I and 0 otherwise,

  ω(K) ≥ 2 ∑_i α_i − ∑_{i,j} α_i α_j K_ij = ∑_i α_i − ∑_{i≠j} α_i α_j K_ij ≥ ∑_i α_i = |I|   (6)

since for each term in the second sum, either (i,j) is an edge, in which case either α_i or α_j is zero, or (i,j) is a non-edge in which case K_ij ≤ 0. Thus, like θ(G), the Delsarte version θ₁(G) is also an upper bound on the stability or independence number α(G).
Kernel characterization of theta functions on node-weighted graphs  The Lovász number has a classical extension to graphs with node weights σ = [σ₁, ..., σₙ]^⊤, see for example [17]. The generalization, in the Delsarte version (note the inequality constraint), is the following

  θ(G, σ) = min_{{u_i}, c} max_i σ_i/(c^⊤u_i)²,  u_i^⊤u_j ≤ 0 ∀(i,j) ∉ E,  ‖u_i‖ = ‖c‖ = 1.   (7)

By passing to the dual of (7), see section 2.1 and [16], we may, as for unweighted graphs, characterize θ(G, σ) by a minimization over the set of kernels,

  K(G, σ) := {K ⪰ 0 | K_ii = 1/σ_i, K_ij ≤ 0, ∀(i,j) ∉ E}   (8)

and, just like in the unweighted case, θ₁(G, σ) = min_{K ∈ K(G,σ)} ω(K). When σ_i = 1, ∀i ∈ V, this reduces to the unweighted case.
We also note that for any K ∈ K(G, σ) and for any independent set I in the graph, taking α_i = σ_i if i ∈ I and 0 otherwise,

  ω(K) ≥ 2 ∑_i α_i − ∑_{i,j} α_i α_j K_ij = 2 ∑_{i∈I} σ_i − ∑_{i∈I} σ_i²/σ_i − ∑_{i≠j} α_i α_j K_ij ≥ ∑_{i∈I} σ_i,   (9)

since K_ij ≤ 0 ∀(i,j) ∉ E. Thus, θ₁(G, σ) = min_K ω(K) is an upper bound on the weight of the maximum-weight independent set.
Extension to edge-weighted graphs  The kernel characterization of θ₁(G) allows one to define a natural extension to data given as similarity matrices represented in the form of a weighted graph G = (V, S). Here, S is a similarity function on (unordered) node pairs, and S(i,j) ∈ [0,1] with +1 representing complete similarity and 0 complete dissimilarity. The obtuse labellings corresponding to the Delsarte version are somewhat more flexible even for unweighted graphs, but are particularly well suited for weighted graphs. We define

  θ₁(G, S) := min_{K ∈ K(G,S)} ω(K), where K(G, S) := {K ⪰ 0 | K_ii = 1, K_ij ≤ S_ij}   (10)

In the case of an unweighted graph, where S_ij ∈ {0, 1}, this reduces exactly to (5).
Table 1: Characterizations of weighted theta functions. The first row gives characterizations following the original definition; the second gives kernel characterizations; the bottom row gives versions of the LS-labelling [14]. In all cases, ‖u_i‖ = ‖c‖ = 1. A refers to the adjacency matrix of G.

Unweighted:
  min_{{u_i},c} max_i 1/(c^⊤u_i)²,  u_i^⊤u_j = 0, ∀(i,j) ∉ E
  K_G = {K ⪰ 0 | K_ii = 1, K_ij = 0, ∀(i,j) ∉ E}
  K_LS = A/|λ_n(A)| + I

Node-weighted:
  min_{{u_i},c} max_i σ_i/(c^⊤u_i)²,  u_i^⊤u_j ≤ 0, ∀(i,j) ∉ E
  K_{G,σ} = {K ⪰ 0 | K_ii = 1/σ_i, K_ij ≤ 0, ∀(i,j) ∉ E}
  K_LS^σ = A/(σ_max|λ_n(A)|) + diag(σ)⁻¹

Edge-weighted:
  min_{{u_i},c} max_i 1/(c^⊤u_i)²,  u_i^⊤u_j ≤ S_ij, i ≠ j
  K_{G,S} = {K ⪰ 0 | K_ii = 1, K_ij ≤ S_ij, i ≠ j}
  K_LS^S = S/|λ_n(S)| + I
Unifying weighted generalization  We may now combine both node and edge weights to form a fully general extension to the Delsarte version of the Lovász number,

  θ₁(G, σ, S) = min_{K ∈ K(G,σ,S)} ω(K),  K(G, σ, S) := {K ⪰ 0 | K_ii = 1/σ_i, K_ij ≤ S_ij/√(σ_i σ_j)}   (11)

It is easy to see that for unweighted graphs, S_ij ∈ {0, 1}, σ_i = 1, the definition reduces to the Delsarte version of the theta function in (5). θ₁(G, σ, S) is hence a strict generalization of θ₁(G). All the proposed weighted extensions are defined by the same objective, ω(K). The only difference is the set K, specialized in various ways, over which the minimum, min_{K∈K} ω(K), is computed.
It is also important to note that with the generalization of the theta function comes an implicit generalization of the geometric representation of G. Specifically, for any feasible K in (11), there is an embedding U = [u_1, ..., u_n] such that K = U^⊤U with the properties u_i^⊤u_j √(σ_i σ_j) ≤ S_ij, ‖u_i‖² = 1/σ_i, which can be retrieved using matrix decomposition. Note that u_i^⊤u_j √(σ_i σ_j) is exactly the cosine similarity between u_i and u_j, which is a very natural choice when S_ij ∈ [0, 1].
The original definition of the (Delsarte) theta function and its extensions, as well as their kernel characterizations, can be seen in Table 1. We can prove the equivalence of the embedding (top) and kernel characterizations (middle) using the following result.
Proposition 2.1. For any embedding U ∈ R^{d×n} with K = U^⊤U, and f in (4), the following holds

  min_{c ∈ S^{d−1}} max_i 1/(c^⊤u_i)² = max_{α_i ≥ 0} f(α; K).   (12)

Proof. The result is given as part of the proof of Theorem 3 in Jethava et al. [14]. See also [16].
As we have already established in section 2 that any set of geometric embeddings has a characterization as a set of kernel matrices, it follows that minimizing the LHS in (12) over a (constrained) set of orthogonal representations, {u_i}, is equivalent to minimizing the RHS over a kernel set K.
3 Computation and fixed-kernel approximation
The weighted generalization of the theta function, θ₁(G, σ, S), defined in the previous section, may be computed as a semidefinite program. In fact θ₁(G, σ, S) = 1/(t*)² for t*, the solution to the following problem. For details, see the supplementary material [16].

  maximize t  subject to  X ⪰ 0, X ∈ R^{(n+1)×(n+1)}
    X_{i,n+1} ≥ t,  X_ii = 1/σ_i,           i ∈ [n]
    X_ij ≤ S_ij/√(σ_i σ_j),                 i ≠ j, i, j ∈ [n]   (13)
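For illustration, (13) translates almost line by line into a convex modeling language. The sketch below is ours (not the authors' code) and uses CVXPY; it makes the unit-norm constraint on the handle vector c (X_{n+1,n+1} = 1), implicit in the derivation from (7), explicit, and it assumes σ and S are given as NumPy arrays.

import cvxpy as cp
import numpy as np

def theta1_sdp(sigma, S):
    # Solve the SDP (13); returns theta_1(G, sigma, S) = 1 / (t*)^2.
    n = len(sigma)
    X = cp.Variable((n + 1, n + 1), PSD=True)
    t = cp.Variable()
    scale = np.sqrt(np.outer(sigma, sigma))
    constraints = [X[n, n] == 1]  # handle vector c has unit norm (assumption made explicit)
    for i in range(n):
        constraints += [X[i, n] >= t, X[i, i] == 1.0 / sigma[i]]
        for j in range(i + 1, n):
            constraints.append(X[i, j] <= S[i, j] / scale[i, j])
    cp.Problem(cp.Maximize(t), constraints).solve()
    return 1.0 / (t.value ** 2)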
While polynomial in time complexity [13], solving the SDP is too slow in many cases. To address this, Jethava et al. [14] introduced a fast approximation to (the unweighted) θ(G), dubbed SVM-theta. They showed that in some cases, the minimization over K in (2) can be replaced by a fixed choice of K, while causing just a constant-factor error. Specifically, for unweighted graphs with adjacency matrix A, Jethava et al. [14] defined the so-called LS-labelling, K_LS(G) = A/|λ_n(A)| + I, and showed that for large families of graphs θ(G) ≤ ω(K_LS(G)) ≤ γθ(G), for a constant γ.
We extend the LS-labelling to weighted graphs. For graphs with edge weights, represented by a similarity matrix S, the original definition may be used, with S substituted for A. For node-weighted graphs we also must satisfy the constraint K_ii = 1/σ_i, see (8). A natural choice, still ensuring positive semidefiniteness, is

  K_LS(G, σ) = A/(σ_max|λ_n(A)|) + diag(σ)⁻¹   (14)

where diag(σ)⁻¹ is the diagonal matrix with elements 1/σ_i, and σ_max = max_{i=1,...,n} σ_i. Both weighted versions of the LS-labelling are presented in Table 1. The fully generalized labelling, for graphs with weights on both nodes and edges, K_LS(G, σ, S), can be obtained by substituting S for A in (14). As with the exact characterization, we note that K_LS(G, σ, S) reduces to K_LS(G) for the uniform case, S_ij ∈ {0, 1}, σ_i = 1. For all versions of the LS-labelling of G, as with the exact characterization, a geometric embedding U may be obtained from K_LS using matrix decomposition.
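As a small illustration of (14) with S substituted for A, the generalized LS-labelling can be assembled in a few lines. The sketch is ours and assumes S is a symmetric NumPy array with zero diagonal.

import numpy as np

def ls_labelling(S, sigma=None):
    # Generalized LS-labelling: S / (sigma_max * |lambda_n(S)|) + diag(sigma)^{-1}, cf. (14).
    n = S.shape[0]
    sigma = np.ones(n) if sigma is None else np.asarray(sigma, dtype=float)
    lam_min = np.linalg.eigvalsh(S)[0]   # smallest eigenvalue lambda_n(S)
    return S / (sigma.max() * abs(lam_min)) + np.diag(1.0 / sigma)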
3.1 Computational complexity
Solving the full problem in the kernel characterization (11) is not faster than computing the SDP characterization (13). However, for a fixed K, the one-class SVM can be solved in O(n²) time [12]. Retrieving the embedding U : K = U^⊤U may be done using Cholesky or singular value decomposition (SVD). In general, algorithms for these problems have complexity O(n³). However, in many cases, a rank-d approximation to the decomposition is sufficient, see for example [9]. A thin (or truncated) SVD corresponding to the top d singular values may be computed in O(n²d) time [5] for d = O(√n). The remaining issue is the computation of K. The complexity of computing the LS-labelling discussed in the previous section is dominated by the computation of the minimum eigenvalue λ_n(A). This can be done approximately in Õ(m) time, where m is the number of edges of the graph [1]. Overall, the complexity of computing both the embedding U and ω(K) is O(dn²).
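To make the embedding step concrete, the following sketch (ours) recovers a rank-d embedding U with K ≈ U^⊤U from the top-d eigenpairs of K; scipy's eigsh is one standard routine for this.

import numpy as np
from scipy.sparse.linalg import eigsh

def embedding_from_kernel(K, d):
    # Rank-d embedding U (d x n) with K ~ U.T @ U, from the top-d eigenpairs of K.
    vals, vecs = eigsh(K, k=d, which='LA')   # largest algebraic eigenvalues
    vals = np.maximum(vals, 0.0)             # guard against tiny negative values
    return (vecs * np.sqrt(vals)).T          # U = diag(sqrt(vals)) @ vecs.T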
4 The theta function as diversity in graphs: θ-means clustering
In section 2, we defined extensions of the Delsarte version of the Lovász number, θ₁(G), and the associated geometric embedding, for weighted graphs. Now we wish to show how both θ(G) and the geometric embedding are useful for solving common machine learning tasks. We build on an intuition of θ(G) as a measure of diversity in graphs, illustrated here by a few simple examples. For complete graphs K_n, it is well known that θ(K_n) = 1, and for empty graphs K̄_n, θ(K̄_n) = n. We may interpret these graphs as having 1 and n clusters respectively. Graphs with several disjoint clusters make a natural middle-ground. For a graph G that is a union of k disjoint cliques, θ(G) = k.
Now, consider the analogue of (6) for graphs with edge weights S_ij. For any K ∈ K(G, S) and for any subset H of nodes, let α_i = 1 if i ∈ H and 0 otherwise. Then, since K_ij ≤ S_ij,

  2 ∑_i α_i − ∑_{ij} α_i α_j K_ij = ∑_i α_i − ∑_{i≠j} α_i α_j K_ij ≥ |H| − ∑_{i≠j, i,j∈H} S_ij.

Maximizing this expression may be viewed as the trade-off of finding a subset of nodes that is both large and diverse; the objective function is the size of the set subjected to a penalty for non-diversity.
In general support vector machines, non-zero support values α_i correspond to support vectors, defining the decision boundary. As a result, nodes i ∈ V with high values α_i may be interpreted as an important and diverse set of nodes.
4.1 θ-means clustering
A common problem related to diversity in graphs is correlation clustering [3]. In correlation clustering, the task is to cluster a set of items V = {1, ..., n}, based on their similarity, or correlation, S : V × V → R (an n × n matrix), without specifying the number of clusters beforehand. This is naturally posed as a problem of clustering the nodes of an edge-weighted graph. In a variant called overlapping correlation clustering [4], items may belong to several, overlapping, clusters. The usual formulation of correlation clustering is an integer linear program [3]. Making use of geometric embeddings, we may convert the graph clustering problem to the more standard problem of clustering a set of points {u_i}_{i=1}^n ⊂ R^d, allowing the use of an arsenal of established techniques, such as k-means clustering. However, we remind ourselves of two common problems with existing clustering algorithms.

Algorithm 1 θ-means clustering
1: Input: Graph G, with weight matrix S and node weights σ.
2: Compute kernel K ∈ K(G, σ, S)
3: α* ← argmax_α f(α; K), as in (4)
4: Sort alphas according to j_i such that α_{j_1} ≥ α_{j_2} ≥ ... ≥ α_{j_n}
5: Let k = ⌈ω̂⌉ where ω̂ = ω(K) = f(α*; K)
6: either a)
7:    Initialize labels Z_i ← argmax_{j ∈ {j_1,...,j_k}} K_ij
8:    Output: result of kernel k-means with kernel K, k = ⌈ω̂⌉ and Z as initial labels
9: or b)
10:   Compute U : K = U^⊤U, with columns U_i, and let C ← {U_{j_i} : i ≤ k}
11:   Output: result of k-means with k = ⌈ω̂⌉ and C as initial cluster centroids
Problem 1: Number of clusters  Many clustering algorithms rely on the user making a good choice of k, the number of clusters. As this choice can have a dramatic effect on both the accuracy and speed of the algorithm, heuristics for choosing k, such as Pham et al. [22], have been proposed.
Problem 2: Initialization  Popular clustering algorithms such as Lloyd's k-means, or expectation-maximization for Gaussian mixture models, require an initial guess of the parameters. As a result, these algorithms are often run repeatedly with different random initializations.
We propose solutions to both problems based on θ₁(G). To solve Problem 1, we choose k = ⌈θ₁(G)⌉. This is motivated by θ₁(G) being a measure of diversity. For Problem 2, we propose initializing parameters based on the observation that the non-zero α_i are support vectors. Specifically, we let the initial clusters be represented by the set of k nodes, I ⊆ V, with the largest α_i. In k-means clustering, this corresponds to letting the initial centroids be {u_i}_{i∈I}. We summarize these ideas in Algorithm 1, comprising both θ-means and kernel θ-means clustering.
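A compact sketch of variant (b) of Algorithm 1, reusing the helper functions sketched earlier and scikit-learn's k-means, might look as follows; this is our illustrative rendering, not the authors' implementation, and the default embedding rank is an assumption borrowed from Section 5.1.

import numpy as np
from sklearn.cluster import KMeans

def theta_means(S, sigma=None, d=None):
    # theta-means, variant (b): k = ceil(omega_hat), centroids from the top-alpha nodes.
    n = S.shape[0]
    if d is None:
        d = int(np.ceil(np.sqrt(2 * n)))
    K = ls_labelling(S, sigma)                  # fixed kernel K in K(G, sigma, S)
    omega_hat, alpha = omega(K)                 # omega_hat = f(alpha*; K)
    k = max(1, int(np.ceil(omega_hat)))
    U = embedding_from_kernel(K, d)             # d x n embedding, K ~ U.T @ U
    top = np.argsort(-alpha)[:k]                # nodes with the largest alpha_i
    km = KMeans(n_clusters=k, init=U[:, top].T, n_init=1).fit(U.T)
    return km.labels_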
In section 3.1, we showed that computing the approximate weighted theta function and embedding can be done in O(dn²) time for a rank d = O(√n) approximation to the SVD. As is well known, Lloyd's algorithm has a very high worst-case complexity and will dominate the overall complexity.
5 Experiments
5.1 Weighted Maximum Cut
The maximum cut problem (Max-Cut), a fundamental problem in graph algorithms with applications in machine learning [25], has famously been solved using geometric embeddings defined by semidefinite programs [9]. Here, given a graph G, we compute an embedding U ∈ R^{d×n}, the SVM-theta labelling in [15], using the LS-labelling, K_LS. To reduce complexity, while preserving accuracy [9], we use a rank d = √(2n) truncated SVD, see section 3.1. We apply the Goemans-Williamson random hyperplane rounding [9] to partition the embedding into two sets of points, representing the cut. The rounding was repeated 5000 times, and the maximum cut is reported.
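To illustrate the rounding step, here is a minimal sketch (ours) of Goemans-Williamson random hyperplane rounding applied to a d × n embedding U, where W denotes the symmetric edge-weight matrix.

import numpy as np

def hyperplane_rounding(U, W, n_rounds=5000, seed=0):
    # Random hyperplane rounding: keep the best of n_rounds random cuts.
    rng = np.random.default_rng(seed)
    d, n = U.shape
    best = -np.inf
    for _ in range(n_rounds):
        side = np.sign(rng.standard_normal(d) @ U)            # side of the hyperplane
        cut = 0.25 * np.sum(W * (1 - np.outer(side, side)))   # weight across the cut
        best = max(best, cut)
    return best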
Helmberg & Rendl [11] constructed a set of 54 graphs, 24 of which are weighted, that has since often been used as a benchmark for Max-Cut. We use the six of the weighted graphs for which there are multiple published results [6, 21].
Table 2: Weighted maximum cut. c is the weight of the produced cut.

          SDP [6]           SVM-θ             Best known [21]
Graph     c      Time       c      Time       c      Time
G11       528    165s       522    3.13s      564    171.8s
G12       522    145s       518    2.94s      556    241.5s
G13       542    145s       540    2.97s      580    227.5s
G32       1280   1318s      1286   35.5s      1398   900.6s
G33       1248   1417s      1260   36.4s      1376   925.6s
G34       1264   1295s      1268   37.9s      1372   925.6s
Table 3: Clustering of the (mini) newsgroup dataset. Average (and std. deviation) over 5 splits. k̂ is the average number of clusters predicted. The true number is k = 16.

Method           F1            k̂     Time
VOTE/BOEM        31.29 ± 4.0   124    8.7m
PIVOT/BOEM       30.07 ± 3.4   120    14m
BEST/BOEM        29.67 ± 3.4   112    13m
FIRST/BOEM       26.76 ± 3.8   109    14m
k-MEANS+RAND     17.31 ± 1.3     2    15m
k-MEANS+INIT     20.06 ± 6.8     3    5.2m
θ-MEANS+RAND     35.60 ± 4.3    25    45s
θ-MEANS          36.20 ± 4.9    25    11s
Our approach is closest to that of the SDP-relaxation, which has time complexity O(mn log² n / ε³) [2]. In comparison, our method takes O(n^2.5) time, see section 3.1. The results are presented in Table 2. For all graphs, the SVM approximation is comparable to or better than the SDP solution, and considerably faster than the best known method [21].¹
5.2 Correlation clustering
We evaluate several different versions of Algorithm 1 in the task of correlation clustering, see section 4.1. We consider a) the full version (θ-MEANS), b) one with k = ⌈ω̂⌉ but random initialization of centroids (θ-MEANS+RAND), c) one with θ-based initialization but choosing k according to Pham et al. [22] (k-MEANS+INIT) and d) k according to [22] and random initialization (k-MEANS+RAND). For the randomly initialized versions, we use 5 restarts of k-means++. In all versions, we cluster the points of the embedding defined by the fixed kernel (LS-labelling) K = K_LS(G, S).
Elsner & Schudy [7] constructed five affinity matrices for a subset of the classical 20-newsgroups dataset. Each matrix, corresponding to a different split of the data, represents the similarity between messages in 16 different newsgroups. The task is to cluster the messages by their respective newsgroup. We run Algorithm 1 on every split and compute the F1-score [7], reporting the average and standard deviation over all splits, as well as the predicted number of clusters, k̂. We compare our results to several greedy methods described by Elsner & Schudy [7], see Table 3. We only compare to their logarithmic weighting schema, as the difference to using additive weights was negligible [7].
The results are presented in Table 3. We observe that the full θ-means method achieves the highest F1-score, followed by the version with random initialization (instead of using embeddings of nodes with highest α_i, see Algorithm 1). We note also that choosing k by the method of Pham et al. [22] consistently results in too few clusters, and with the greedy search methods, far too many.
5.3 Overlapping Correlation Clustering
Bonchi et al. [4] constructed a benchmark for overlapping correlation clustering based on two datasets for multi-label classification, Yeast and Emotion. The datasets consist of 2417 and 593 items belonging to one or more of 14 and 6 overlapping clusters respectively. Each set can be represented as an n × k binary matrix L, where k is the number of clusters and n is the number of items,
¹ Note that the timing results for the SDP method are from the original paper, published in 2001.
Table 4: Clustering of the Yeast and Emotion datasets. † The total time for finding the best solution. The times for OCC-ISECT for a single k were 2.21s and 80.4s respectively.

                        Emotion                         Yeast
                        Prec.  Rec.  F1    Time         Prec.  Rec.  F1    Time
OCC-ISECT [4]           0.98   1     0.99  12.1†        0.99   1.00  1.00  716s†
θ-means (no k-means)    1      1     1     0.34s        0.94   1     0.97  6.67s
such that L_ic = 1 iff item i belongs to cluster c. From L, a weight matrix S is defined such that S_ij is the Jaccard coefficient between rows i and j of L. S is often sparse, as many of the pairs do not share a single cluster. The correlation clustering task is to reconstruct L from S.
Here, we use only the centroids C = {u_{j_1}, ..., u_{j_k}} produced by Algorithm 1, without running k-means. We let each centroid c = 1, ..., k represent a cluster, and assign a node i ∈ V to that cluster, i.e. L̂_ic = 1, iff u_i^⊤u_{j_c} > 0. We compute the precision and recall following Bonchi et al. [4]. For comparison with Bonchi et al. [4], we run their algorithm called OCC-ISECT with the parameter k̄, bounding the number of clusters, in the interval 1, ..., 16 and select the one resulting in lowest cost.
The results are presented in Table 4. For Emotion and Yeast, θ-means estimated the number of clusters k to be 6 (the correct number) and 8 respectively. For OCC-ISECT, the k with the lowest cost were 10 and 13. We note that while very similar in performance, the θ-means algorithm is considerably faster than OCC-ISECT, especially when k is unknown.
5.4 Document summarization
Finally, we briefly examine the idea of using α_i to select a both relevant and diverse set of items, in a very natural application of the weighted theta function: extractive summarization [18]. In extractive summarization, the goal is to automatically summarize a text by picking out a small set of sentences that best represents the whole text. We may view the sentences of a text as the nodes of a graph, with edge weights S_ij, the similarity between sentences, and node weights σ_i representing the relevance of the sentence to the text as a whole. The trade-off between brevity and relevance described above can then be viewed as finding a set of nodes that has both high total weight and high diversity. This is naturally accomplished using our framework by computing [α*₁, ..., α*ₙ]^⊤ = argmax_{α_i ≥ 0} f(α; K) for fixed K = K_LS(G, σ, S) and picking the sentences with the highest α*_i.
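A minimal sketch (ours) of this selection step, reusing the earlier helpers; the sentence similarity matrix S is assumed precomputed.

import numpy as np

def summarize(sentences, S, n_pick=5):
    # Pick the sentences with the largest support values alpha_i.
    sigma = S.sum(axis=1) ** 2                  # node weights sigma_i = (sum_j S_ij)^2
    K = ls_labelling(S, sigma)
    _, alpha = omega(K)
    top = np.argsort(-alpha)[:n_pick]
    return [sentences[i] for i in sorted(top)]  # keep original sentence order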
We apply this method to the multi-document summarization task of DUC-04². We let S_ij be the TF-IDF sentence similarity described by Lin & Bilmes [18], and let σ_i = (∑_j S_ij)². State-of-the-art systems, purpose-built for summarization, achieve around 0.39 in recall and F1 score [18]. Our method achieves a score of 0.33 on both measures, which is about the same as the basic version of [18]. This is likely possible to improve by tuning the trade-off between relevance and diversity, such as making a more sophisticated choice of S and σ. However, we leave this to future work.
6 Conclusions
We have introduced a unifying generalization of Lovász's theta function and the corresponding geometric embedding to graphs with node and edge weights, characterized as a minimization over a constrained set of kernel matrices. This allows an extension of a fast approximation of the Lovász number to weighted graphs, defined by an SVM problem for a fixed kernel matrix. We have shown that the theta function has a natural interpretation as a measure of diversity in graphs, a useful function in several machine learning problems. Exploiting these results, we have defined algorithms for weighted maximum cut, correlation clustering and document summarization.
Acknowledgments
This work is supported in part by the Swedish Foundation for Strategic Research (SSF).
² http://duc.nist.gov/duc2004/
References
[1] S. Arora, E. Hazan, and S. Kale. Fast algorithms for approximate semidefinite programming using the multiplicative weights update method. In Foundations of Computer Science, 2005. FOCS 2005. 46th Annual IEEE Symposium on, pages 339–348. IEEE, 2005.
[2] S. Arora, E. Hazan, and S. Kale. The multiplicative weights update method: a meta-algorithm and applications. Theory of Computing, 8(1):121–164, 2012.
[3] N. Bansal, A. Blum, and S. Chawla. Correlation clustering. Machine Learning, 56(1-3):89–113, 2004.
[4] F. Bonchi, A. Gionis, and A. Ukkonen. Overlapping correlation clustering. Knowledge and Information Systems, 35(1):1–32, 2013.
[5] M. Brand. Fast low-rank modifications of the thin singular value decomposition. Linear Algebra and its Applications, 415(1):20–30, 2006.
[6] S. Burer and R. D. Monteiro. A projected gradient algorithm for solving the maxcut SDP relaxation. Optimization Methods and Software, 15(3-4):175–200, 2001.
[7] M. Elsner and W. Schudy. Bounding and comparing methods for correlation clustering beyond ILP. In Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing, pages 19–27. Association for Computational Linguistics, 2009.
[8] M. X. Goemans. Semidefinite programming in combinatorial optimization. Math. Program., 79:143–161, 1997.
[9] M. X. Goemans and D. P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM (JACM), 42(6):1115–1145, 1995.
[10] M. Grötschel, L. Lovász, and A. Schrijver. Geometric Algorithms and Combinatorial Optimization, volume 2 of Algorithms and Combinatorics. Springer, 1988.
[11] C. Helmberg and F. Rendl. A spectral bundle method for semidefinite programming. SIAM Journal on Optimization, 10(3):673–696, 2000.
[12] D. Hush, P. Kelly, C. Scovel, and I. Steinwart. QP algorithms with guaranteed accuracy and run time for support vector machines. Journal of Machine Learning Research, 7:733–769, 2006.
[13] G. Iyengar, D. J. Phillips, and C. Stein. Approximating semidefinite packing programs. SIAM Journal on Optimization, 21(1):231–268, 2011.
[14] V. Jethava, A. Martinsson, C. Bhattacharyya, and D. Dubhashi. Lovász θ function, SVMs and finding dense subgraphs. The Journal of Machine Learning Research, 14(1):3495–3536, 2013.
[15] V. Jethava, J. Sznajdman, C. Bhattacharyya, and D. Dubhashi. Lovász θ, SVMs and applications. In Information Theory Workshop (ITW), 2013 IEEE, pages 1–5. IEEE, 2013.
[16] F. D. Johansson, A. Chattoraj, C. Bhattacharyya, and D. Dubhashi. Supplementary material, 2015.
[17] D. E. Knuth. The sandwich theorem. Electr. J. Comb., 1, 1994.
[18] H. Lin and J. Bilmes. A class of submodular functions for document summarization. In Proc. of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pages 510–520. Association for Computational Linguistics, 2011.
[19] L. Lovász. On the Shannon capacity of a graph. IEEE Transactions on Information Theory, 25(1):1–7, 1979.
[20] L. Lovász and K. Vesztergombi. Geometric representations of graphs. Paul Erdős and his Mathematics, 1999.
[21] R. Martí, A. Duarte, and M. Laguna. Advanced scatter search for the max-cut problem. INFORMS Journal on Computing, 21(1):26–38, 2009.
[22] D. T. Pham, S. S. Dimov, and C. Nguyen. Selection of k in k-means clustering. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 219(1):103–119, 2005.
[23] B. Schölkopf, J. C. Platt, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson. Estimating the support of a high-dimensional distribution. Neural Computation, 13(7):1443–1471, 2001.
[24] A. Schrijver. A comparison of the Delsarte and Lovász bounds. Information Theory, IEEE Transactions on, 25(4):425–429, 1979.
[25] J. Wang, T. Jebara, and S.-F. Chang. Semi-supervised learning using greedy max-cut. The Journal of Machine Learning Research, 14(1):771–800, 2013.
5,417 | 5,903 | Online Rank Elicitation for Plackett-Luce:
A Dueling Bandits Approach
Balázs Szörényi
Technion, Haifa, Israel /
MTA-SZTE Research Group on
Artificial Intelligence, Hungary
szorenyibalazs@gmail.com
Róbert Busa-Fekete, Adil Paul, Eyke Hüllermeier
Department of Computer Science
University of Paderborn
Paderborn, Germany
{busarobi,adil.paul,eyke}@upb.de
Abstract
We study the problem of online rank elicitation, assuming that rankings of a set of alternatives obey the Plackett-Luce distribution. Following the setting of the dueling bandits problem, the learner is allowed to query pairwise comparisons between alternatives, i.e., to sample pairwise marginals of the distribution in an online fashion. Using this information, the learner seeks to reliably predict the most probable ranking (or top-alternative). Our approach is based on constructing a surrogate probability distribution over rankings based on a sorting procedure, for which the pairwise marginals provably coincide with the marginals of the Plackett-Luce distribution. In addition to a formal performance and complexity analysis, we present first experimental studies.
1 Introduction
Several variants of learning-to-rank problems have recently been studied in an online setting, with preferences over alternatives given in the form of stochastic pairwise comparisons [6]. Typically, the learner is allowed to select (presumably most informative) alternatives in an active way. Making a connection to multi-armed bandits, where single alternatives are chosen instead of pairs, this is also referred to as the dueling bandits problem [28].
Methods for online ranking can mainly be distinguished with regard to the assumptions they make about the probabilities p_{i,j} that, in a direct comparison between two alternatives i and j, the former is preferred over the latter. If these probabilities are not constrained at all, a complexity that grows quadratically in the number M of alternatives is essentially unavoidable [27, 8, 9]. Yet, by exploiting (stochastic) transitivity properties, which are quite natural in a ranking context, it is possible to devise algorithms with better performance guarantees, typically of the order M log M [29, 28, 7].
The idea of exploiting transitivity in preference-based online learning establishes a natural connection to sorting algorithms. Naively, for example, one could simply apply an efficient sorting algorithm such as MergeSort as an active sampling scheme, thereby producing a random order of the alternatives. What can we say about the optimality of such an order? The problem is that the probability distribution (on rankings) induced by the sorting algorithm may not be well attuned with the original preference relation (i.e., the probabilities p_{i,j}).
In this paper, we will therefore combine a sorting algorithm, namely QuickSort [15], and a stochastic preference model that harmonize well with each other, in a technical sense to be detailed later on. This harmony was first presented in [1], and our main contribution is to show how it can be exploited for online rank elicitation. More specifically, we assume that pairwise comparisons obey the marginals of a Plackett-Luce model [24, 19], a widely used parametric distribution over rankings (cf. Section 5). Despite the quadratic worst-case complexity of QuickSort, we succeed in developing its budgeted version (presented in Section 6) with a complexity of O(M log M). While only returning partial orderings, this version allows us to devise PAC-style algorithms that find, respectively, a close-to-optimal item (Section 7) and a close-to-optimal ranking of all items (Section 8), both with high probability.
2 Related Work
Several studies have recently focused on preference-based versions of the multi-armed bandit setup, also known as dueling bandits [28, 6, 30], where the online learner is only able to compare arms in a pairwise manner. The outcome of the pairwise comparisons essentially informs the learner about pairwise preferences, i.e., whether or not an option is preferred to another one. A first group of papers, including [28, 29], assumes the probability distributions of pairwise comparisons to possess certain regularity properties, such as strong stochastic transitivity. A second group does not make assumptions of that kind; instead, a target ("ground-truth") ranking is derived from the pairwise preferences, for example using the Copeland, Borda count and Random Walk procedures [9, 8, 27].
Our work is obviously closer to the first group of methods. In particular, the study presented in this paper is related to [7], which investigates a similar setup for the Mallows model.
There are several approaches to estimating the parameters of the Plackett-Luce (PL) model, including standard statistical methods such as likelihood estimation [17] and Bayesian parameter estimation [14]. Pairwise marginals are also used in [26], in connection with the method-of-moments approach; nevertheless, the authors assume that full rankings are observed from a PL model.
Algorithms for noisy sorting [2, 3, 12] assume a total order over the items, and that the comparisons are representative of that order (if i precedes j, then the probability of option i being preferred to j is bigger than some fixed value greater than 1/2). In [25], the data is assumed to consist of pairwise comparisons generated by a Bradley-Terry model; however, comparisons are not chosen actively but according to some fixed probability distribution.
Pure exploration algorithms for the stochastic multi-armed bandit problem sample the arms a certain number of times (not necessarily known in advance), and then output a recommendation, such as the best arm or the m best arms [4, 11, 5, 13]. While our algorithms can be viewed as pure exploration strategies, too, we do not assume that numerical feedback can be generated for individual options; instead, our feedback is qualitative and refers to pairs of options.
3 Notation
A set of alternatives/options/items to be ranked is denoted by I. To keep the presentation simple, we assume that items are identified by natural numbers, so I = [M] = {1, ..., M}. A ranking is a bijection r on I, which can also be represented as a vector r = (r₁, ..., r_M) = (r(1), ..., r(M)), where r_j = r(j) is the rank of the jth item. The set of rankings can be identified with the symmetric group S_M of order M. Each ranking r naturally defines an associated ordering o = (o₁, ..., o_M) ∈ S_M of the items, namely the inverse o = r⁻¹ defined by o_{r(j)} = j for all j ∈ [M].
For a permutation r, we write r(i,j) for the permutation in which r_i and r_j, the ranks of items i and j, are replaced with each other. We denote by L(r_i = j) = {r ∈ S_M | r_i = j} the subset of permutations for which the rank of item i is j, and by L(r_j > r_i) = {r ∈ S_M | r_j > r_i} those for which the rank of j is higher than the rank of i, that is, item i is preferred to j, written i ≻ j. We write i ≻_r j to indicate that i is preferred to j with respect to ranking r.
We assume S_M to be equipped with a probability distribution P : S_M → [0, 1]; thus, for each ranking r, we denote by P(r) the probability to observe this ranking. Moreover, for each pair of items i and j, we denote by

  p_{i,j} = P(i ≻ j) = ∑_{r ∈ L(r_j > r_i)} P(r)   (1)

the probability that i is preferred to j (in a ranking randomly drawn according to P). These pairwise probabilities are called the pairwise marginals of the ranking distribution P. We denote the matrix composed of the values p_{i,j} by P = [p_{i,j}]_{1 ≤ i,j ≤ M}.
4 Preference-based Approximations
Our learning problem essentially consists of making good predictions about properties of P. Concretely, we consider two different goals of the learner, depending on whether the application calls for the prediction of a single item or a full ranking of items:
In the first problem, which we call PAC-Item or simply PACI, the goal is to find an item that is almost as good as the optimal one, with optimality referring to the Condorcet winner. An item i* is a Condorcet winner if p_{i*,i} > 1/2 for all i ≠ i*. Then, we call an item j a PAC-item, if it is beaten by the Condorcet winner with at most an ε-margin: |p_{i*,j} − 1/2| < ε. This setting coincides with those considered in [29, 28]. Obviously, it requires the existence of a Condorcet winner, which is indeed guaranteed in our approach, thanks to the assumption of a Plackett-Luce model.
The second problem, called AMPR, is defined as finding the most probable ranking [7], that is, r* = argmax_{r ∈ S_M} P(r). This problem is especially challenging for ranking distributions for which the order of two items is hard to elicit (because many entries of P are close to 1/2). Therefore, we again relax the goal of the learner and only require it to find a ranking r with the following property: there is no pair of items 1 ≤ i, j ≤ M, such that r*_i < r*_j, r_i > r_j and p_{i,j} > 1/2 + ε. Put in words, the ranking r is allowed to differ from r* only for those items whose pairwise probabilities are close to 1/2. Any ranking r satisfying this property is called an approximately most probable ranking (AMPR).
Both goals are meant to be achieved with probability at least 1 − δ, for some δ > 0.
operates in an online setting. In each iteration, it is allowed to gather information by asking for a
single pairwise comparison between two items?or, using the dueling bandits jargon, to pull two
arms. Thus, it selects two items i and j, and then observes either preference i
j or j
i; the
former occurs with probability pi,j as defined in (1), the latter with probability pj,i = 1 pi,j . Based
on this observation, the learner updates its estimates and decides either to continue the learning
process or to terminate and return its prediction. What we are mainly interested in is the sample
complexity of the learner, that is, the number of pairwise comparisons it queries prior to termination.
Before tackling the problems introduced above, we need some additional notation. The pair of items chosen by the learner in the t-th comparison is denoted (i^t, j^t), where i^t < j^t, and the feedback received is defined as o^t = 1 if i^t ≻ j^t and o^t = 0 if j^t ≻ i^t. The set of steps among the first t iterations in which the learner decides to compare items i and j is denoted by I^t_{i,j} = {ℓ ∈ [t] | (i^ℓ, j^ℓ) = (i, j)}, and the size of this set by n^t_{i,j} = #I^t_{i,j}. The proportion of "wins" of item i against item j up to iteration t is then given by

    p̂^t_{i,j} = (1 / n^t_{i,j}) Σ_{ℓ ∈ I^t_{i,j}} o^ℓ.

Since our samples are independent and identically distributed (i.i.d.), the relative frequency p̂^t_{i,j} is a reasonable estimate of the pairwise probability (1).
5 The Plackett-Luce Model

The Plackett-Luce (PL) model is a widely-used probability distribution on rankings [24, 19]. It is parameterized by a "skill" vector v = (v_1, . . . , v_M) ∈ R^M_+ and mimics the successive construction of a ranking by selecting items position by position, each time choosing one of the remaining items i with a probability proportional to its skill v_i. Thus, with o = r⁻¹, the probability of a ranking r is

    P(r | v) = ∏_{i=1}^{M} v_{o_i} / (v_{o_i} + v_{o_{i+1}} + · · · + v_{o_M}).    (2)
As an appealing property of the PL model, we note that the marginal probabilities (1) are very easy to calculate [21], as they are simply given by

    p_{i,j} = v_i / (v_i + v_j).    (3)

Likewise, the most probable ranking r* can be obtained quite easily, simply by sorting the items according to their skill parameters, that is, r*_i < r*_j iff v_i > v_j. Moreover, the PL model satisfies strong stochastic transitivity, i.e., p_{i,k} ≥ max(p_{i,j}, p_{j,k}) whenever p_{i,j} ≥ 1/2 and p_{j,k} ≥ 1/2 [18].
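As a concrete illustration of the two facts above, the following short Python sketch (our own code, not part of the paper; names are hypothetical) samples rankings by the successive-selection construction and compares the empirical frequency of i ≻ j against the closed form (3):

import numpy as np

def sample_pl_ranking(v, rng):
    # Draw an ordering (best-to-worst) from a PL model with skill vector v.
    remaining = list(range(len(v)))
    order = []
    while remaining:
        w = np.array([v[i] for i in remaining])
        k = rng.choice(len(remaining), p=w / w.sum())  # pick proportionally to skill
        order.append(remaining.pop(k))
    return order

rng = np.random.default_rng(0)
v = np.array([4.0, 2.0, 1.0])
wins_01, n = 0, 20000
for _ in range(n):
    o = sample_pl_ranking(v, rng)
    wins_01 += o.index(0) < o.index(1)       # item 0 ranked above item 1
print(wins_01 / n, v[0] / (v[0] + v[1]))     # both close to 2/3, as predicted by (3)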
6 Ranking Distributions based on Sorting

In the classical sorting literature, the outcome of pairwise comparisons is deterministic and determined by an underlying total order of the items, namely the order the sorting algorithm seeks to find. Now, if the pairwise comparisons are stochastic, the sorting algorithm can still be run; however, the result it will return is a random ranking. Interestingly, this is another way to define a probability distribution over the rankings: P(r) = P(r | P) is the probability that r is returned by the algorithm if stochastic comparisons are specified by P. Obviously, this view is closely connected to the problem of noisy sorting (see the related work section).

¹We omit the index t if there is no danger of confusion.
In a recent work by Ailon [1], the well-known QuickSort algorithm is investigated in a stochastic setting, where the pairwise comparisons are drawn from the pairwise marginals of the Plackett-Luce model. Several interesting properties are shown about the ranking distribution based on QuickSort, notably the property of pairwise stability. We denote the QuickSort-based ranking distribution by P_QS(· | P), where the matrix P contains the marginals (3) of the Plackett-Luce model. Then, it can be shown that P_QS(· | P) obeys the property of pairwise stability, which means that it preserves the marginals, although the distributions themselves might not be identical, i.e., P_QS(· | P) ≠ P(· | v).

Theorem 1 (Theorem 4.1 in [1]). Let P be given by the pairwise marginals (3), i.e., p_{i,j} = v_i / (v_i + v_j). Then, p_{i,j} = P_QS(i ≻ j | P) = Σ_{r ∈ L(r_j > r_i)} P_QS(r | P).

One drawback of the QuickSort algorithm is its complexity: to generate a random ranking, it compares O(M²) items in the worst case. Next, we shall introduce a budgeted version of the QuickSort algorithm, which terminates if the algorithm compares too many pairs, namely, more than O(M log M). Upon termination, the modified QuickSort algorithm only returns a partial order. Nevertheless, we will show that it still preserves the pairwise stability property.
6.1 The Budgeted QuickSort-based Algorithm
Algorithm 1 shows a budgeted version of the QuickSort-based random ranking generation process described in the previous section. It works in a way quite similar to the standard QuickSort-based algorithm, with the notable difference of terminating as soon as the number of pairwise comparisons exceeds the budget B, which is a parameter assumed as an input. Obviously, the BQS algorithm run with A = [M] and B = ∞ (or B > M²) recovers the original QuickSort-based sampling algorithm as a special case.

Algorithm 1 BQS(A, B)
Require: A, the set to be sorted, and a budget B
Ensure: (r, B′′), where B′′ is the remaining budget, and r is the (partial) order that was constructed based on B − B′′ samples
 1: Initialize r to be the empty partial order over A
 2: if B ≤ 0 or |A| ≤ 1 then return (r, 0)
 3: pick an element i ∈ A uniformly at random
 4: for all j ∈ A \ {i} do
 5:   draw a random sample o_{i,j} according to the PL marginal (3)
 6:   update r accordingly
 7: A0 = {j ∈ A | j ≠ i & o_{i,j} = 0}
 8: A1 = {j ∈ A | j ≠ i & o_{i,j} = 1}
 9: (r′, B′) = BQS(A0, B − |A| + 1)
10: (r′′, B′′) = BQS(A1, B′)
11: update r based on r′ and r′′
12: return (r, B′′)

A run of BQS(A, ∞) can be represented quite naturally as a random tree τ: the root is labeled [M], and whenever a call to BQS(A, B) initiates a recursive call BQS(A′, B′), a child node with label A′ is added to the node with label A. Note that each such tree determines a ranking, which is denoted by r_τ, in a natural way.
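A direct Python transcription of Algorithm 1 is given below as a sketch (our own code, not the authors'; the partial order is represented simply as a list of buckets, best to worst, with incomparable items sharing a bucket), followed by a quick Monte Carlo check in the spirit of Proposition 2 below:

import random

def bqs(A, B, v, rng=random):
    # Budgeted QuickSort sampler: returns (buckets, B_remaining).
    # buckets is a list of sets ordered best to worst; items in one set
    # are left incomparable (partial order).
    A = set(A)
    if B <= 0 or len(A) <= 1:
        return ([A] if A else [], 0)
    i = rng.choice(sorted(A))                     # pivot, uniformly at random
    above, below = set(), set()
    for j in A - {i}:
        # o_{i,j} = 1 iff i beats j, drawn from the PL marginal (3)
        if rng.random() < v[i] / (v[i] + v[j]):
            below.add(j)
        else:
            above.add(j)
    r1, B1 = bqs(above, B - len(A) + 1, v, rng)   # |A| - 1 comparisons spent
    r2, B2 = bqs(below, B1, v, rng)
    return (r1 + [{i}] + r2, B2)

v = {0: 4.0, 1: 2.0, 2: 1.0, 3: 0.5}
wins = comp = 0
for _ in range(20000):
    buckets, _ = bqs(v.keys(), len(v) - 1, v)     # budget M - 1, as used by PLPAC
    pos = {x: k for k, b in enumerate(buckets) for x in b}
    if pos[0] != pos[1]:                          # 0 and 1 comparable in this run
        comp += 1
        wins += pos[0] < pos[1]
print(wins / comp, v[0] / (v[0] + v[1]))          # both close to 2/3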
The random ranking generated by BQS(A, ∞) for some subset A ⊆ [M] was analyzed by Ailon [1], who showed that it gives back the same marginals as the original Plackett-Luce model (as recalled in Theorem 1). Now, for B > 0, denote by τ^B the tree the algorithm would have returned for the budget B instead of ∞.² Additionally, let T^B denote the set of all possible outcomes of τ^B, and for two distinct indices i and j, let T^B_{i,j} denote the set of all trees T ∈ T^B in which i and j are incomparable in the associated ranking (i.e., some leaf of T is labelled by a superset of {i, j}).

The main result of this section is that BQS does not introduce any bias in the marginals (3), i.e., Theorem 1 also holds for the budgeted version of BQS.

Proposition 2. For any B > 0, any set A ⊆ I and any indices i, j ∈ A, the partial order r = r_{τ^B} generated by BQS(A, B) satisfies P(i ≻_r j | τ^B ∈ T^B \ T^B_{i,j}) = v_i / (v_i + v_j).

That is, whenever two items i and j are comparable by the partial ranking r generated by BQS, i ≻_r j with probability exactly v_i / (v_i + v_j). The basic idea of the proof (deferred to the appendix) is to show that, conditioned on the event that i and j are incomparable by r, i ≻_r j would have been obtained with probability v_i / (v_i + v_j) in case execution of BQS had been continued (see Claim 6). The result then follows by combining this with Theorem 1.

²Put differently, τ is obtained from τ^B by continuing the execution of BQS ignoring the stopping criterion B ≤ 0.
7 The PAC-Item Problem and its Analysis

Our algorithm for finding the PAC item is based on the sorting-based sampling technique described in the previous section. The pseudocode of the algorithm, called PLPAC, is shown in Algorithm 2. In each iteration, we generate a ranking, which is partial (line 6), and translate this ranking into pairwise comparisons that are used to update the estimates of the pairwise marginals. Based on these estimates, we apply a simple elimination strategy, which consists of eliminating an item i if it is significantly beaten by another item j, that is, p̂_{i,j} + c_{i,j} < 1/2 (lines 9-11). Finally, the algorithm terminates when it finds a PAC-item for which, by definition, |p_{i*,i} − 1/2| < ε. To identify an item i as a PAC-item, it is enough to guarantee that i is not beaten by any j ∈ A with a margin bigger than ε, that is, p_{i,j} > 1/2 − ε for all j ∈ A. This sufficient condition is implemented in line 12. Since we only have empirical estimates of the p_{i,j} values, the test of the condition does of course also take the confidence intervals into account.

Algorithm 2 PLPAC(δ, ε)
 1: for i, j = 1 → M do                               ▷ Initialization
 2:   p̂_{i,j} = 0                                     ▷ P̂ = [p̂_{i,j}]_{M×M}
 3:   n_{i,j} = 0                                     ▷ N = [n_{i,j}]_{M×M}
 4: Set A = {1, . . . , M}
 5: repeat
 6:   r = BQS(A, a − 1) where a = #A                  ▷ Sorting-based random ranking
 7:   update the entries of P̂ and N corresponding to A based on r
 8:   set c_{i,j} = sqrt((1 / (2 n_{i,j})) log(4 M² n²_{i,j} / δ)) for all i ≠ j
 9:   for (i, j ∈ A) ∧ (i ≠ j) do
10:     if p̂_{i,j} + c_{i,j} < 1/2 then
11:       A = A \ {i}                                 ▷ Discard
12:   C = {i ∈ A | (∀j ∈ A \ {i}) p̂_{i,j} − c_{i,j} > 1/2 − ε}
13: until (#C ≥ 1)
14: return C

Note that v_i = v_j, i ≠ j, implies p_{i,j} = 1/2. In this case, it is not possible to decide whether p_{i,j} is above 1/2 or not on the basis of a finite number of pairwise comparisons. The ε-relaxation of the goal to be achieved provides a convenient way to circumvent this problem.
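To make the elimination loop concrete, here is a compact Python sketch of PLPAC (our own illustration, not the authors' code). Since BQS is called with budget a − 1, each iteration reduces to comparing one random pivot against every surviving item, which is what the sketch does directly; the confidence radius follows line 8 above.

import math, random

def plpac(v, delta, eps, rng=random):
    M = len(v)
    A = set(range(M))
    wins = [[0] * M for _ in range(M)]   # wins[i][j]: #times i beat j
    n = [[0] * M for _ in range(M)]      # n[i][j]: #comparisons of (i, j)
    while True:
        i = rng.choice(sorted(A))        # pivot: BQS with budget #A - 1
        for j in A - {i}:
            o = rng.random() < v[i] / (v[i] + v[j])
            n[i][j] += 1; n[j][i] += 1
            wins[i][j] += o; wins[j][i] += 1 - o
        def p_hat(i, j): return wins[i][j] / n[i][j]
        def c(i, j):     # confidence radius, line 8 of Algorithm 2
            return math.sqrt(math.log(4 * M * M * n[i][j] ** 2 / delta) / (2 * n[i][j]))
        for i in sorted(A):              # discard significantly beaten items
            if any(n[i][j] > 0 and p_hat(i, j) + c(i, j) < 0.5 for j in A - {i}):
                A.discard(i)
        C = {i for i in A
             if all(n[i][j] > 0 and p_hat(i, j) - c(i, j) > 0.5 - eps for j in A - {i})}
        if C:
            return C

Setting v = [1/(c0 + k) for k in range(1, M + 1)] reproduces the synthetic instances used in Section 9.1.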
7.1 Sample Complexity Analysis of PLPAC

First, let r^t denote the (partial) ordering produced by BQS in the t-th iteration. Note that each of these (partial) orderings defines a bucket order: the indices are partitioned into different classes (buckets) in such a way that none of the pairs are comparable within one class, but pairs from different classes are; thus, if i and i′ belong to some class and j and j′ belong to some other class, then either i ≻_{r^t} j and i′ ≻_{r^t} j′, or j ≻_{r^t} i and j′ ≻_{r^t} i′. More specifically, the BQS algorithm with budget a − 1 (line 6) always results in a bucket order containing only two buckets, since no recursive call is carried out with this budget. Then one might show that the optimal arm i* and an arbitrary arm i (≠ i*) fall into different buckets "often enough". This observation allows us to upper-bound the number of pairwise comparisons taken by PLPAC with high probability. The proof of the next theorem is deferred to Appendix B.
Theorem 3. Set Δ_i = (1/2) max{ε, p_{i*,i} − 1/2} = (1/2) max{ε, (v_{i*} − v_i) / (2(v_{i*} + v_i))} for each index i ≠ i*. With probability at least 1 − δ, after O(max_{i≠i*} (1/Δ_i²) log(M / (Δ_i δ))) calls for BQS with budget M − 1, PLPAC terminates and outputs an ε-optimal arm. Therefore, the total number of samples is O(M max_{i≠i*} (1/Δ_i²) log(M / (Δ_i δ))).
In Theorem 3, the dependence on M is of order M log M. It is easy to show that Ω(M log M) is a lower bound, therefore our result is optimal from this point of view.

Our model assumptions based on the PL model imply some regularity properties for the pairwise marginals, such as strong stochastic transitivity and stochastic triangle inequality (see Appendix A of [28] for the proof). Therefore, the Interleaved Filter [28] and Beat The Mean [29] algorithms can be directly applied in our online framework. Both algorithms achieve a similar sample complexity of order M log M. Yet, our experimental study in Section 9.1 clearly shows that, provided our model assumptions on pairwise marginals are valid, PLPAC outperforms both algorithms in terms of empirical sample complexity.
8 The AMPR Problem and its Analysis

For strictly more than two elements, the sorting-based surrogate distribution and the PL distribution are in general not identical, although their mode rankings coincide [1]. The mode r* of a PL model is the ranking that sorts the items in decreasing order of their skill values: r_i < r_j iff v_i > v_j for any i ≠ j. Moreover, since v_i > v_j implies p_{i,j} > 1/2, sorting based on the Copeland score b_i = #{1 ≤ j ≤ M | (i ≠ j) ∧ (p_{i,j} > 1/2)} yields a most probable ranking r*.
Our algorithm is based on estimating the Copeland score of the items. Its pseudo-code is shown in Algorithm 3 in Appendix C. As a first step, it generates rankings based on sorting, which are used to update the pairwise probability estimates P̂. Then, it computes a lower bound b̲_i and an upper bound b̄_i for each of the scores b_i. The lower bound is given as b̲_i = #{j ∈ [M] \ {i} | p̂_{i,j} − c > 1/2}, which is the number of items that are beaten by item i based on the current empirical estimates of pairwise marginals. Similarly, the upper bound is given as b̄_i = b̲_i + s_i, where s_i = #{j ∈ [M] \ {i} | 1/2 ∈ [p̂_{i,j} − c, p̂_{i,j} + c]}. Obviously, s_i is the number of pairs for which, based on the current empirical estimates, it cannot be decided whether p_{i,j} is above or below 1/2.

As an important observation, note that there is no need to generate a full ranking based on sorting in every case, because if [b̲_i, b̄_i] ∩ [b̲_j, b̄_j] = ∅, then we already know the order of items i and j with respect to r*. Motivated by this observation, consider the interval graph G = ([M], E) based on the intervals [b̲_i, b̄_i], where E = {(i, j) ∈ [M]² | [b̲_i, b̄_i] ∩ [b̲_j, b̄_j] ≠ ∅}. Denote the connected components of this graph by C_1, . . . , C_k ⊆ [M]. Obviously, if two items belong to different components, then they do not need to be compared anymore. Therefore, it is enough to call the sorting-based sampling with the connected components.

Finally, the algorithm terminates if the goal is achieved (line 20). More specifically, it terminates if there is no pair of items i and j for which the ordering with respect to r* is not elicited yet, i.e., [b̲_i, b̄_i] ∩ [b̲_j, b̄_j] ≠ ∅, and whose pairwise probability is close to 1/2, i.e., |p_{i,j} − 1/2| < ε.
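The component decomposition is only a few lines of code. The sketch below (our own, with hypothetical variable names) computes the score bounds b̲_i, b̄_i from interval estimates of the pairwise marginals and then groups items whose intervals overlap into connected components via union-find:

def copeland_components(p_hat, c):
    # p_hat[i][j]: empirical marginals; c[i][j]: confidence radii.
    # Returns (lo, hi, components) with lo/hi the Copeland score bounds.
    M = len(p_hat)
    lo = [sum(1 for j in range(M) if j != i and p_hat[i][j] - c[i][j] > 0.5)
          for i in range(M)]
    s  = [sum(1 for j in range(M) if j != i
              and p_hat[i][j] - c[i][j] <= 0.5 <= p_hat[i][j] + c[i][j])
          for i in range(M)]
    hi = [lo[i] + s[i] for i in range(M)]
    parent = list(range(M))                        # union-find over items
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]; x = parent[x]
        return x
    for i in range(M):
        for j in range(i + 1, M):
            if lo[i] <= hi[j] and lo[j] <= hi[i]:  # score intervals overlap
                parent[find(i)] = find(j)
    comps = {}
    for i in range(M):
        comps.setdefault(find(i), set()).add(i)
    return lo, hi, list(comps.values())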
8.1 Sample Complexity Analysis of PLPAC-AMPR

Denote by q_M the expected number of comparisons of the (standard) QuickSort algorithm on M elements, namely, q_M = 2M log M + O(log M) (see e.g., [22]). Thanks to the concentration property of the performance of the QuickSort algorithm, there is no pair of items that falls into the same bucket "too often" in the bucket order output by BQS. This observation allows us to upper-bound the number of pairwise comparisons taken by PLPAC-AMPR with high probability. The proof of the next theorem is deferred to Appendix D.
Theorem 4. Set Δ′_(i) = (1/2) max{ε, (v_(i) − v_(i+1)) / (2(v_(i+1) + v_(i)))} for each 1 ≤ i ≤ M − 1, where v_(i) denotes the i-th largest skill parameter. With probability at least 1 − δ, after O(max_{1≤i≤M−1} (1/Δ′_(i)²) log(M / (Δ′_(i) δ))) calls for BQS with budget (3/2) q_M, the algorithm PLPAC-AMPR terminates and outputs an approximately most probable ranking. Therefore, the total number of samples is O((M log M) max_{1≤i≤M−1} (1/Δ′_(i)²) log(M / (Δ′_(i) δ))).
Remark 5. The RankCentrality algorithm proposed in [23] converts the empirical pairwise marginals P̂ into a row-stochastic matrix Q̂. Then, considering Q̂ as a transition matrix of a Markov chain, it ranks the items based on its stationary distribution. In [25], the authors show that if the pairwise marginals obey a PL distribution, this algorithm produces the mode of this distribution if the sample size is sufficiently large. In their setup, the learning algorithm has no influence on the selection of pairs to be compared; instead, comparisons are sampled using a fixed underlying distribution over the pairs. For any sampling distribution, their PAC bound is of order at least M³, whereas our sample complexity bound in Theorem 4 is of order M log² M.
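For comparison, a minimal RankCentrality sketch is easy to write (our own simplification; the normalization by the maximum degree M − 1 below follows the common construction in the literature and is an assumption on our part rather than a detail stated here):

import numpy as np

def rank_centrality(p_hat):
    # Rank items by the stationary distribution of a chain built from
    # empirical marginals p_hat (p_hat[i, j]: fraction of wins of i over j).
    M = p_hat.shape[0]
    Q = np.zeros((M, M))
    d = M - 1                                   # normalize by max degree
    for i in range(M):
        for j in range(M):
            if i != j:
                Q[i, j] = p_hat[j, i] / d       # move toward items that beat i
        Q[i, i] = 1.0 - Q[i].sum()              # make each row sum to one
    w, V = np.linalg.eig(Q.T)                   # stationary distribution of Q
    pi = np.real(V[:, np.argmax(np.real(w))])
    pi = np.abs(pi) / np.abs(pi).sum()
    return np.argsort(-pi)                      # ranking, best item first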
9 Experiments

Our approach strongly exploits the assumption of a data generating process that can be modeled by means of a PL distribution. The experimental studies presented in this section are mainly aimed at showing that it is doing so successfully, namely, that it has advantages compared to other approaches in situations where this model assumption is indeed valid. To this end, we work with synthetic data. Nevertheless, in order to get an idea of the robustness of our algorithm toward violation of the model assumptions, some first experiments on real data are presented in Appendix I.³
9.1 The PAC-Item Problem

We compared our PLPAC algorithm with other preference-based algorithms applicable in our setting, namely Interleaved Filter (IF) [28], Beat The Mean (BTM) [29], and MallowsMPI [7]. While each of these algorithms follows a successive elimination strategy and discards items one by one, they differ with regard to the sampling strategy they follow. Since the time horizon must be given in advance for IF, we run it with T ∈ {100, 1000, 10000}, subsequently referred to as IF(T). The BTM algorithm can be accommodated into our setup as is (see Algorithm 3 in [29]). The MallowsMPI algorithm assumes a Mallows model [20] instead of PL as an underlying probability distribution over rankings, and it seeks to find the Condorcet winner; it can be applied in our setting, too, since a Condorcet winner does exist for PL. Since the baseline methods, except for BTM, are not able to handle ε-approximation, we run our algorithm with ε = 0 (and made sure that v_i ≠ v_j for all 1 ≤ i ≠ j ≤ M).
[Figure 1 shows three plots of sample complexity (×10⁴) versus the number of arms, comparing PLPAC, IF(100), IF(1000), IF(10000), BTM, and MallowsMPI; panels: (a) c = 0, (b) c = 2, (c) c = 5.]

Figure 1: The sample complexity for M ∈ {5, 10, 15}, δ = 0.1, ε = 0. The results are averaged over 100 repetitions.
We tested the learning algorithm by setting the parameters of PL to v_i = 1/(c + i) with c ∈ {0, 1, 2, 3, 5}. The parameter c controls the complexity of the rank elicitation task, since the gaps between pairwise probabilities and 1/2 are of the form |p_{i,j} − 1/2| = |1/2 − 1/(1 + (i + c)/(j + c))|, which converges to zero as c → ∞. We evaluated the algorithm on this test case with varying numbers of items M ∈ {5, 10, 15} and with various values of parameter c, and plotted the sample complexities, that is, the number of pairwise comparisons taken by the algorithms prior to termination. The results are shown in Figure 1 (only for c ∈ {0, 2, 5}; the rest of the plots are deferred to Appendix E). As can be seen, the PLPAC algorithm significantly outperforms the baseline methods if the pairwise comparisons match the model assumption, namely, they are drawn from the marginals of a PL distribution. MallowsMPI achieves a performance that is slightly worse than PLPAC for M = 5, and its performance is among the worst ones for M = 15. This can be explained by the elimination strategy of MallowsMPI, which heavily relies on the existence of a gap min_{i≠j} |p_{i,j} − 1/2| > 0 between all pairwise probabilities and 1/2; in our test case, the minimal gap p_{M−1,M} − 1/2 = 1/(2 − 1/(c + M)) − 1/2 > 0 is getting smaller with increasing M and c. The poor performance of BTM for large c and M can be explained by the same argument.
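The shrinking-gap phenomenon is easy to verify numerically. The following short Python check (our own illustration) evaluates the minimal gap for the synthetic instances v_i = 1/(c + i):

# Gaps |p_{i,j} - 1/2| for the synthetic PL instances v_i = 1/(c + i).
def min_gap(M, c):
    v = [1.0 / (c + i) for i in range(1, M + 1)]
    gaps = [abs(v[i] / (v[i] + v[j]) - 0.5)
            for i in range(M) for j in range(M) if i != j]
    return min(gaps)

for c in (0, 2, 5):
    for M in (5, 10, 15):
        print(c, M, round(min_gap(M, c), 4))  # gap shrinks as M and c grow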
9.2 The AMPR Problem

Since the RankCentrality algorithm produces the most probable ranking if the pairwise marginals obey a PL distribution and the sample size is sufficiently large (cf. Remark 5), it was taken as a baseline. Using the same test case as before, input data of various sizes was generated for RankCentrality based on uniform sampling of pairs to be compared. Its performance is shown by the black lines in Figure 2 (the results for c ∈ {1, 3, 4} are again deferred to Appendix F). The accuracy in a single run of the algorithm is 1 if the output of RankCentrality is identical with the most probable ranking, and 0 otherwise; this accuracy was averaged over 100 runs.
³In addition, we conducted some experiments to assess the impact of parameter ε and to test our algorithms based on Clopper-Pearson confidence intervals. These experiments are deferred to Appendix H and G due to lack of space.
[Figure 2 shows three plots of optimal recovery fraction versus sample size (log scale, 10² to 10⁶) for RankCentrality (M = 5, 10, 15) and PLPAC-AMPR (M = 5, 10, 15); panels: (a) c = 0, (b) c = 2, (c) c = 5.]

Figure 2: Sample complexity for finding the approximately most probable ranking (AMPR) with parameters M ∈ {5, 10, 15}, δ = 0.05, ε = 0. The results are averaged over 100 repetitions.
We also ran our PLPAC-AMPR algorithm and determined the number of pairwise comparisons it takes prior to termination. The horizontal lines in Figure 2 show the empirical sample complexity achieved by PLPAC-AMPR with ε = 0. In accordance with Theorem 4, the accuracy of PLPAC-AMPR was always significantly higher than 1 − δ (actually equal to 1 in almost every case).

As can be seen, RankCentrality slightly outperforms PLPAC-AMPR in terms of sample complexity, that is, it achieves an accuracy of 1 for a smaller number of pairwise comparisons. Keep in mind, however, that PLPAC-AMPR only terminates when its output is correct with probability at least 1 − δ. Moreover, it computes the confidence intervals for the statistics it uses based on the Chernoff-Hoeffding bound, which is known to be very conservative. As opposed to this, RankCentrality is an offline algorithm without any performance guarantee if the sample size is not sufficiently large (see Remark 5). Therefore, it is not surprising that, asymptotically, its empirical sample complexity shows a better behavior than the complexity of our online learner.
As a final remark, ranking distributions can in principle be defined based on any sorting algorithm, for example MergeSort. However, to the best of our knowledge, pairwise stability has not yet been shown for any sorting algorithm other than QuickSort. We empirically tested the MergeSort algorithm in our experimental study, simply by using it in place of budgeted QuickSort in the PLPAC-AMPR algorithm. We found MergeSort inappropriate for the PL model, since the accuracy of PLPAC-AMPR, when used with MergeSort instead of QuickSort, drastically drops on complex tasks; for details, see Appendix J. The question of pairwise stability of different sorting algorithms for various ranking distributions, such as the Mallows model, is an interesting research avenue to be explored.
10 Conclusion and Future Work

In this paper, we studied different problems of online rank elicitation based on pairwise comparisons under the assumption of a Plackett-Luce model. Taking advantage of this assumption, our idea is to construct a surrogate probability distribution over rankings based on a sorting procedure, namely QuickSort, for which the pairwise marginals provably coincide with the marginals of the PL distribution. In this way, we manage to exploit the (stochastic) transitivity properties of PL, which is at the origin of the efficiency of our approach, together with the idea of replacing the original QuickSort with a budgeted version of this algorithm. In addition to a formal performance and complexity analysis of our algorithms, we also presented first experimental studies showing the effectiveness of our approach.

Needless to say, in addition to the problems studied in this paper, there are many other interesting problems that can be tackled within the preference-based framework of online learning. For example, going beyond a single item or ranking, we may look for a good estimate P̂ of the entire distribution P, for example, an estimate with small Kullback-Leibler divergence: KL(P, P̂) < ε. With regard to the use of sorting algorithms, another interesting open question is the following: Is there any sorting algorithm with a worst case complexity of order M log M, which preserves the marginal probabilities? This question might be difficult to answer since, as we conjecture, the MergeSort and the InsertionSort algorithms, which are both well-known algorithms with an M log M complexity, do not satisfy this property.
References
[1] Nir Ailon. Reconciling real scores with binary comparisons: A new logistic based model for ranking. In Advances in Neural Information Processing Systems 21, pages 25-32, 2008.
[2] M. Braverman and E. Mossel. Noisy sorting without resampling. In Proceedings of the nineteenth annual ACM-SIAM Symposium on Discrete Algorithms, pages 268-276, 2008.
[3] M. Braverman and E. Mossel. Sorting from noisy information. CoRR, abs/0910.1191, 2009.
[4] S. Bubeck, R. Munos, and G. Stoltz. Pure exploration in multi-armed bandits problems. In Proceedings of the 20th ALT (ALT'09), pages 23-37, Berlin, Heidelberg, 2009. Springer-Verlag.
[5] S. Bubeck, T. Wang, and N. Viswanathan. Multiple identifications in multi-armed bandits. In Proceedings of the 30th ICML, pages 258-265, 2013.
[6] R. Busa-Fekete and E. Hüllermeier. A survey of preference-based online learning with bandit algorithms. In Algorithmic Learning Theory (ALT), volume 8776, pages 18-39, 2014.
[7] R. Busa-Fekete, E. Hüllermeier, and B. Szörényi. Preference-based rank elicitation using statistical models: The case of Mallows. In ICML, volume 32 (2), pages 1071-1079, 2014.
[8] R. Busa-Fekete, B. Szörényi, and E. Hüllermeier. PAC rank elicitation through adaptive sampling of stochastic pairwise preferences. In AAAI, pages 1701-1707, 2014.
[9] R. Busa-Fekete, B. Szörényi, P. Weng, W. Cheng, and E. Hüllermeier. Top-k selection based on adaptive sampling of noisy preferences. In Proceedings of the 30th ICML, JMLR W&CP, volume 28, 2013.
[10] C. J. Clopper and E. S. Pearson. The use of confidence or fiducial limits illustrated in the case of the binomial. Biometrika, 26(4):404-413, 1934.
[11] E. Even-Dar, S. Mannor, and Y. Mansour. PAC bounds for multi-armed bandit and Markov decision processes. In Proceedings of the 15th COLT, pages 255-270, 2002.
[12] Uriel Feige, Prabhakar Raghavan, David Peleg, and Eli Upfal. Computing with noisy information. SIAM J. Comput., 23(5):1001-1018, October 1994.
[13] V. Gabillon, M. Ghavamzadeh, A. Lazaric, and S. Bubeck. Multi-bandit best arm identification. In NIPS 24, pages 2222-2230, 2011.
[14] J. Guiver and E. Snelson. Bayesian inference for Plackett-Luce ranking models. In Proceedings of the 26th ICML, pages 377-384, 2009.
[15] C. A. R. Hoare. Quicksort. Comput. J., 5(1):10-15, 1962.
[16] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58:13-30, 1963.
[17] D. R. Hunter. MM algorithms for generalized Bradley-Terry models. The Annals of Statistics, 32(1):384-406, 2004.
[18] R. Luce and P. Suppes. Handbook of Mathematical Psychology, chapter Preference, Utility and Subjective Probability, pages 249-410. Wiley, 1965.
[19] R. D. Luce. Individual choice behavior: A theoretical analysis. Wiley, 1959.
[20] C. Mallows. Non-null ranking models. Biometrika, 44(1):114-130, 1957.
[21] John I. Marden. Analyzing and Modeling Rank Data. Chapman & Hall, 1995.
[22] C. J. H. McDiarmid and R. B. Hayward. Large deviations for Quicksort. Journal of Algorithms, 21(3):476-507, 1996.
[23] S. Negahban, S. Oh, and D. Shah. Iterative ranking from pairwise comparisons. In Advances in Neural Information Processing Systems, pages 2483-2491, 2012.
[24] R. Plackett. The analysis of permutations. Applied Statistics, 24:193-202, 1975.
[25] Arun Rajkumar and Shivani Agarwal. A statistical convergence perspective of algorithms for rank aggregation from pairwise data. In ICML, pages 118-126, 2014.
[26] H. A. Soufiani, W. Z. Chen, D. C. Parkes, and L. Xia. Generalized method-of-moments for rank aggregation. In Advances in Neural Information Processing Systems (NIPS), pages 2706-2714, 2013.
[27] T. Urvoy, F. Clerot, R. Féraud, and S. Naamane. Generic exploration and k-armed voting bandits. In Proceedings of the 30th ICML, JMLR W&CP, volume 28, pages 91-99, 2013.
[28] Y. Yue, J. Broder, R. Kleinberg, and T. Joachims. The k-armed dueling bandits problem. Journal of Computer and System Sciences, 78(5):1538-1556, 2012.
[29] Y. Yue and T. Joachims. Beat the mean bandit. In Proceedings of the ICML, pages 241-248, 2011.
[30] M. Zoghi, S. Whiteson, R. Munos, and M. Rijke. Relative upper confidence bound for the k-armed dueling bandit problem. In ICML, pages 10-18, 2014.
Segregated Graphs and Marginals of Chain Graph Models
Ilya Shpitser
Department of Computer Science
Johns Hopkins University
ilyas@cs.jhu.edu
Abstract
Bayesian networks are a popular representation of asymmetric (for example
causal) relationships between random variables. Markov random fields (MRFs)
are a complementary model of symmetric relationships used in computer vision,
spatial modeling, and social and gene expression networks. A chain graph model
under the Lauritzen-Wermuth-Frydenberg interpretation (hereafter a chain graph
model) generalizes both Bayesian networks and MRFs, and can represent asymmetric and symmetric relationships together.
As in other graphical models, the set of marginals from distributions in a chain
graph model induced by the presence of hidden variables forms a complex model.
One recent approach to the study of marginal graphical models is to consider a
well-behaved supermodel. Such a supermodel of marginals of Bayesian networks,
defined only by conditional independences, and termed the ordinary Markov
model, was studied at length in [6].
In this paper, we show that special mixed graphs which we call segregated graphs
can be associated, via a Markov property, with supermodels of marginals of chain
graphs defined only by conditional independences. Special features of segregated
graphs imply the existence of a very natural factorization for these supermodels, and imply many existing results on the chain graph model, and the ordinary
Markov model carry over. Our results suggest that segregated graphs define an
analogue of the ordinary Markov model for marginals of chain graph models.
We illustrate the utility of segregated graphs for analyzing outcome interference
in causal inference via simulated datasets.
1 Introduction
Graphical models are a flexible and widely used tool for modeling and inference in high dimensional
settings. Directed acyclic graph (DAG) models, also known as Bayesian networks [11, 8], are often
used to model relationships with an inherent asymmetry, perhaps induced by a temporal order on
variables, or cause-effect relationships. Models represented by undirected graphs (UGs), such as
Markov random fields (MRFs), are used to model symmetric relationships, for instance proximity in
social graphs, expression co-occurrence in gene networks, coinciding magnetization of neighboring
atoms, or similar colors of neighboring pixels in an image.
Some graphical models can represent both symmetric and asymmetric relationships together. One
such model is the chain graph model under the Lauritzen-Wermuth-Frydenberg interpretation, which
we will shorten to "the chain graph model." We will not consider the chain graph model under the
Andersen-Madigan-Perlman (AMP) interpretation, or other chain graph models [22, 1] discussed in
[4] in this paper. Just as the DAG models and MRFs, the chain graph model has a set of equivalent
(under some assumptions) definitions via a set of Markov properties, and a factorization.
Modeling and inference in multivariate settings is complicated by the presence of hidden, yet relevant variables. Their presence motivates the study of marginal graphical models. Marginal DAG
models are complicated objects, inducing not only conditional independence constraints, but also
more general equality constraints such as the "Verma constraint" [21], and inequality constraints
such as the instrumental variable inequality [3], and the Bell inequality in quantum mechanics [2].
One approach to studying marginal DAG models has therefore been to consider tractable supermodels defined by some easily characterized set of constraints, and represented by a mixed graph. One
such supermodel, defined only by conditional independence constraints induced by the underlying
hidden variable DAG on the observed margin, is the ordinary Markov model, studied in depth in [6].
Another supermodel, defined by generalized independence constraints including the Verma constraint [21] as a special case, is the nested Markov model [16]. There is a rich literature on Markov
properties of mixed graphs, and corresponding independence models. See for instance [15, 14, 7].
In this paper, we adapt a similar approach to the study of marginal chain graph models. Specifically,
we consider a supermodel defined only by conditional independences on observed variables of a
hidden variable chain graph, and ignore generalized equality constraints and inequalities. We show
that we can associate this supermodel with special mixed graphs which we call segregated graphs via
a global Markov property. Special features of segregated graphs imply the existence of a convenient
factorization, which we show is equivalent to the Markov property for positive distributions. This
equivalence, along with properties of the factorization, implies many existing results on the chain
graph model, and the ordinary Markov model carry over.
The paper is organized as follows. Section 2 describes a motivating example from causal inference
for the use of hidden variable chain graphs, with details deferred until section 6. In section 3, we introduce the necessary background on graphs and probability theory, define segregated graphs (SGs)
and an associated global Markov property, and show that the global Markov properties for DAG
models, chain graph models, and the ordinary Markov model induced by hidden variable DAGs are
special cases. In section 4, we define the model of conditional independence induced by hidden variable chain graphs, and show it can always be represented by a SG via an appropriate global Markov
property. In section 5, we define segregated factorization and show that under positivity, the global
Markov property in section 4 and segregated factorization are equivalent. In section 6, we introduce
causal inference and interference analysis as an application domain for hidden variable chain graph
models, and thus for SGs, and discuss a simulation study that illustrates our results and shows how
parameters of the model represented by a SG can directly encode parameters representing outcome
interference in the underlying hidden variable chain graph. Section 7 contains our conclusions. We
will provide outlines of arguments for our claims below, but will generally defer detailed proofs to
the supplementary material.
2 Motivating Example: Interference in Causal Inference
Consider a dataset obtained from a placebo-controlled vaccination trial, described in [20], consisting of mother/child pairs where the children were vaccinated against pertussis. We suspect that, though mothers were not vaccinated directly, because the children were vaccinated and each mother will generally only contract pertussis from her child, the child's vaccine may have had a protective effect on the mother. At the same time, if only the mothers but not the children were vaccinated, we would expect the same protective effect to operate in reverse. This is an example of interference, an effect of treatment on experimental units other than those to which the treatment was administered. The relationship between the outcomes of mother and child due to interference in this case has some features of a causal relationship, but is symmetric.
We model this study by a chain graph shown in Fig. 1 (a); see section 6 for a justification of this model. Here B1 is the vaccine (or placebo) given to children, and Y1 is the children's outcomes. B2 is the treatment given to mothers (in our case no treatment), and Y2 is the mothers' outcomes. Directed edges represent the direct causal effect of treatment on unit, and the undirected edge represents the interference relationship among the mother/child outcome pair. In this model (B1 ⊥⊥ B2) (mother and child treatment are assigned independently), and (Y1 ⊥⊥ B2 | B1, Y2), (Y2 ⊥⊥ B1 | B2, Y1) (mother's outcome is independent of child's treatment if we know child's outcome and mother's treatment, and vice versa). Since treatments in this study were randomly assigned, there are no unobserved confounders.
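To make the symmetric outcome relationship concrete, the chain graph model for Fig. 1 (a) factorizes as p(B1, B2, Y1, Y2) = p(B1) p(B2) p(Y1, Y2 | B1, B2), with the block term proportional to a product of clique potentials: φ(Y1, B1) φ(Y2, B2) φ(Y1, Y2). The sketch below (our own illustration, with made-up potential values) samples binary data from such a model by enumerating the block, which is one way to generate data exhibiting this kind of interference.

import itertools, random

phi1  = lambda y1, b1: 2.0 if y1 == b1 else 1.0   # treatment effect on child
phi2  = lambda y2, b2: 2.0 if y2 == b2 else 1.0   # treatment effect on mother
phi12 = lambda y1, y2: 3.0 if y1 == y2 else 1.0   # symmetric interference

def sample_unit(rng=random):
    b1, b2 = rng.randint(0, 1), rng.randint(0, 1)   # randomized treatments
    outcomes = list(itertools.product((0, 1), repeat=2))
    weights = [phi1(y1, b1) * phi2(y2, b2) * phi12(y1, y2) for y1, y2 in outcomes]
    y1, y2 = rng.choices(outcomes, weights=weights)[0]
    return b1, b2, y1, y2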
[Figure 1 depicts four graphs over the variables B1, B2, A, W, U, Y1, Y2.]

Figure 1: (a) A chain graph representing the mother/child vaccination example in [20]. (b) A more complex vaccination example with a followup booster shot. (c) A naive generalization of the latent projection idea applied to (b), where ↔ and − edges meet. (d) A segregated graph preserving conditional independences in (b) not involving U.
Consider, however, a more complex example, where both mother and child are given the initial vaccine A, but possibly based on results W of a followup visit, children are given a booster B1, and we consider the child's (Y1) and the mother's (Y2) outcomes, where the same kind of interference relationship is operating. We model the child's unobserved health status, which influences both W and Y1, by a (possibly very high dimensional) hidden variable U. The result is a hidden variable chain graph in Fig. 1 (b). Since U is unobserved and possibly very complex, modeling it directly may lead to model misspecification. An alternative, explored for instance in [13, 6, 16], is to consider a model defined by conditional independences induced by the hidden variable model in Fig. 1 (b) on observed variables A, B1, W, Y1, Y2.

A simple approach that directly generalizes what had been done in DAG models is to encode conditional independences via a path separation criterion on a mixed graph constructed from a hidden variable chain graph via a latent projection operation [21]. The difficulty with this approach is that simple generalizations of latent projections to the chain graph case may yield graphs where ↔ and − edges meet, as happens in Fig. 1 (c). This is an undesirable feature of a graphical representation, since existing factorization and parameterization results for chain graphs or ordinary Markov models, which decompose the joint distribution into pieces corresponding to sets connected by ↔ or − edges, do not generalize.

In the remainder of the paper, we show that for any hidden variable chain graph it is always possible to construct a (not necessarily unique) mixed graph called a segregated graph (SG) where ↔ and − edges do not meet, and which preserves all conditional independences on the observed variables. One SG for our example is shown in Fig. 1 (d). Conditional independences implied by this graph are B1 ⊥⊥ A | W and Y2 ⊥⊥ W, B1 | A, Y1. Properties of SGs imply existing results on chain graphs and the ordinary Markov model carry over with little change. For example, we may directly apply the parameterization in [6], and the fitting algorithm in [5], to the model corresponding to Fig. 1 (d) if the state spaces are discrete, as we illustrate in section 6.1. The construction we give for SGs may replace undirected edges by directed edges in a way that may break the symmetry of the underlying interference relationship. Thus, directed edges in a SG do not have a straightforward causal interpretation.
3 Background and Preliminaries

We will consider mixed graphs with three types of edges: undirected (−), directed (→), and bidirected (↔), where a pair of vertices is connected either by a single edge, or a pair of edges one of which is directed and one bidirected. We will denote an edge as an ordered pair of vertices with a subscript indicating the type of edge, for example (AB)→. We will suppress the subscript if edge orientation is not important. An alternating sequence of nodes and edges of the form A_1, (A_1A_2), A_2, (A_2A_3), A_3, . . . , A_{i−1}, (A_{i−1}A_i), A_i, where we allow A_i = A_j if i ≠ j ± 1, is called a walk (in some references also a route). We will denote walks by lowercase Greek letters. A walk with non-repeating edges is called a trail. A trail with non-repeating vertices is called a path. A directed cycle is a trail of the form A_1, (A_1A_2)→, A_2, . . . , A_i, (A_iA_1)→, A_1. A partially directed cycle is a trail with →, − edges, and at least one → edge, where there exists a way to orient the − edges to create a directed cycle. We will sometimes write a path from A to B where intermediate vertices are not important, but edge orientation is, as, for example, A → ◦ − · · · − ◦ ← B.

A mixed graph with no − and ↔ edges, and no directed cycles, is called a directed acyclic graph (DAG). A mixed graph with no − edges, and no directed cycles, is called an acyclic directed mixed graph (ADMG). A mixed graph with no ↔ edges, and no partially directed cycles, is called a chain graph (CG). A segregated graph (SG) is a mixed graph with no partially directed cycles where no path of the form A_i (A_iA_j)↔ A_j (A_jA_k)− A_k exists. DAGs are special cases of ADMGs and CGs, which are special cases of SGs.
We consider sets of distributions over a set V defined by independence constraints linked to the above types of graphs via (global) Markov properties. We will refer to V as either vertices in a graph or random variables in a distribution; it will be clear from context what we mean.

A Markov model of a graph G defined via a global Markov property has the general form

    P(G) ≡ { p(V) | (∀A, B, C ⊆ V), (A ⊥ B | C)_G ⇒ (A ⊥⊥ B | C)_{p(V)} },

where the consequent means "A is independent of B conditional on C in p(V)," and the antecedent means "A is separated from B given C according to a certain walk separation property in G." Since DAGs, ADMGs, and CGs are special cases of SGs, we will define the appropriate path separation property for SGs, which will recover known separation properties in DAGs, ADMGs and CGs as special cases.

A walk μ contained in another walk ν is called a subwalk of ν. A maximal subwalk in ν where all edges are undirected is called a section of ν. A section may consist of a single node and no edges. We say a section σ of a walk ν is a collider section if the edges in ν immediately preceding and following σ contain arrowheads into σ. Otherwise, σ is a non-collider section. A walk ν from A to B is said to be s-separated by a set C in a SG G if there exists a collider section σ that does not contain an element of C, or a non-collider section that does (such a section is called blocked). A is said to be s-separated from B given C in a SG G if every walk from a vertex in A to a vertex in B is s-separated by C, and is s-connected given C otherwise.

Lemma 3.1 The Markov properties defined by superactive routes (walks) [17] in CGs, m-separation [14] in ADMGs, and d-separation [11] in DAGs are special cases of the Markov property defined by s-separation in SGs.
4 A Segregated Graph Representation of CG Independence Models

For a SG G, and W ⊆ V, define the model P(G)_W to be the set of distributions where all conditional independences in Σ_W p(V) implied by G hold. That is,

    P(G)_W ≡ { p(V \ W) | (∀A, B, C ⊆ V \ W), (A ⊥ B | C)_G ⇒ (A ⊥⊥ B | C)_{p(V)} }.

P(G_1)_{W_1} may equal P(G_2)_{W_2} even if G_1, W_1 and G_2, W_2 are distinct. If W is empty, P(G)_W simply reduces to the Markov model defined by s-separation on the entire graph.
We are going to show that there is always a SG that represents the conditional independences that define P(G)_W, using a special type of vertex we call sensitive. A vertex V in an SG G is sensitive if for any other vertex W, if W → ◦ − · · · − ◦ − V exists in G, then W → V exists in G. We first show that if V is sensitive, we can orient all undirected edges away from V, and this results in a new SG that gives the same set of conditional independences via s-separation. This is Lemma 4.1. Next, we show that for any V with a child Z with adjacent undirected edges, if Z is not sensitive, we can make it sensitive by adding appropriate edges, and this results in a new SG that preserves all conditional independences that do not involve V. This is Lemma 4.3. Given the above, for any vertex V in a SG G, we can construct a new SG that preserves all conditional independences in G that do not involve V, and where no children of V have adjacent undirected edges. This is Lemma 4.4. We then "project out V" to get another SG that preserves all conditional independences not involving V in G. This is Theorem 4.1. We are then done; Corollary 4.1 states that there is always a (not necessarily unique) SG for the conditional independence structure of a marginal of a CG.

Lemma 4.1 For V sensitive in a SG G, let G⟨V⟩ be the graph obtained from G by replacing all − edges adjacent to V by → edges pointing away from V. Then G⟨V⟩ is an SG, and P(G) = P(G⟨V⟩).
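The sensitivity test and the reorientation of Lemma 4.1 are both purely graph-level operations. The following Python sketch (our own encoding, not from the paper; a mixed graph is stored as edge sets di, bi, un for →, ↔, − edges) checks whether a vertex is sensitive and builds G⟨V⟩:

def block_of(v, un):
    # Vertices reachable from v along undirected (-) edges: v's block.
    block, stack = {v}, [v]
    while stack:
        x = stack.pop()
        for a, b in un:
            if x in (a, b):
                y = b if x == a else a
                if y not in block:
                    block.add(y)
                    stack.append(y)
    return block

def is_sensitive(v, di, un):
    # v is sensitive iff every parent of its block is also a parent of v itself.
    blk = block_of(v, un)
    parents_of_block = {w for (w, z) in di if z in blk and w not in blk}
    return all((w, v) in di for w in parents_of_block)

def orient_away(v, di, bi, un):
    # Lemma 4.1: replace undirected edges at a sensitive v by edges out of v.
    assert is_sensitive(v, di, un)
    new_un = {e for e in un if v not in e}
    new_di = set(di) | {(v, b if a == v else a) for (a, b) in un if v in (a, b)}
    return new_di, set(bi), new_un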
The intuition here is that directed edges differ from undirected edges due to collider bias induced by the former. That is, dependence between parents of a block is created by conditioning on variables in the block. But a sensitive vertex in a block is already dependent on all the parents in the block, so orienting undirected edges away from such a vertex and making it a block parent does not change the set of advertised independences.

Lemma 4.2 Let G be an SG, and G′ a graph obtained by adding an edge W → V for two nonadjacent vertices W, V where W → ◦ − · · · − ◦ − V exists in G. Then G′ is an SG.

Lemma 4.3 For any V in an SG G, let G^V be obtained from G by adding W → Z whenever W → ◦ − · · · − ◦ − Z ← V exists in G. Then G^V is an SG, and P(G)_V = P(G^V)_V.

This lemma establishes that two graphs, one an edge supergraph of the other, agree on the conditional independences not involving V. Certainly the subgraph advertises at least as many constraints as the supergraph. To see the converse, note that the definition of s-separation, coupled with our inability to condition on V, can always be used to create dependence between W and Z, the vertices joined by an edge in the supergraph explicitly. This dependence can be created regardless of the conditioning set, either via the path W → ◦ − · · · − ◦ − Z, or via the walk W → ◦ − · · · − ◦ − Z ← V → Z. It can thus be shown that adding these edges does not remove any independences.

Lemma 4.4 Let V be a vertex in a SG G with at least two vertices. Then there exists an SG G^V where no configuration V → ◦ − ◦ exists, and P(G)_V = P(G^V)_V.

Proof: This follows by an inductive application of Lemmas 4.1, 4.2, and 4.3.
Note that Lemma 4.4 does not guarantee that the graph G^V is unique. In fact, depending on the order in which we apply the induction, we may obtain different SGs with the required property.

Theorem 4.1 If G is an SG with at least 2 vertices V, and V ∈ V, there exists an SG G^V with vertices V \ {V} such that P(G)_V = P(G^V)_V.

This theorem exploits previous results to construct a graph which agrees with G on all independences not involving V and which does not contain children of V that are a part of a block with size greater than two. Given a graph with this structure, we can adapt the latent projection construction to yield a SG that preserves all independences.

Corollary 4.1 Let G be an SG with vertices V. Then for any W ⊆ V, there exists an SG G* with vertices V \ W such that P(G)_W = P(G*).
5 Segregated Factorization

We now show that, for positive distributions, the Markov property we defined and a certain factorization for SGs give the same model.

A set of vertices that forms a connected component in a graph obtained from G by dropping all edges except ↔, and where no vertex is adjacent to a − edge in G, is called a district in G. A non-trivial block is a set of vertices forming a connected component of size two or more in a graph obtained from G by dropping all edges except −. We denote the set of districts and non-trivial blocks in G by D(G) and B*(G), respectively. It is trivial to show that in a SG G with vertices V, D(G) and B*(G) partition V.

For a vertex set S in G, define pa^s_G(S) ≡ {W ∉ S | (WV)→ is in G, V ∈ S}, and pa*_G(S) ≡ pa^s_G(S) ∪ S. For A ⊆ V in G, let G_A be the subgraph of G containing only vertices in A and edges between them. The anterior of a set S, denoted by ant_G(S), is the set of vertices V with a partially directed path into a node in S. A set A ⊆ V is called anterial in G if whenever V ∈ A, ant_G(V) ⊆ A. We denote the set of non-empty anterial subsets of V in G by A(G). Let D^a(G) ≡ ∪_{A ∈ A(G)} D(G_A). A clique in an UG G is a maximal complete subgraph. The set of cliques in an UG G will be denoted by C(G). A vertex ordering ≺ is topological for a SG G if whenever V ≺ W, W ∉ ant_G(V). For a vertex V in G, and a topological ≺, define pre_{G,≺}(V) ≡ {W ≠ V | W ≺ V}.
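Districts and non-trivial blocks are straightforward to compute from the edge sets. A small Python sketch (our own encoding, with bi and un the sets of ↔ and − edges as before):

def components(vertices, edges):
    # Connected components of an undirected graph given as a set of pairs.
    comps, seen = [], set()
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    for v in vertices:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x); stack.extend(adj[x])
        seen |= comp; comps.append(comp)
    return comps

def districts_and_blocks(vertices, bi, un):
    touched_by_un = {x for e in un for x in e}
    districts = [c for c in components(vertices, bi)
                 if not (c & touched_by_un)]       # no vertex meets a - edge
    blocks = [c for c in components(vertices, un) if len(c) >= 2]
    return districts, blocks

In an SG no vertex has both an incident ↔ and − edge, so the filter in districts_and_blocks simply implements the definition, and together the returned sets partition V.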
[Figure 2 depicts four causal DAGs over variables such as C, A, Y, A1, A2, Y11, Y12, Y21, Y22.]

Figure 2: (a) A simple causal DAG model. (b), (c) Causal DAG models for interference. (d) A causal DAG representing a Markov chain with an equilibrium distribution in the chain graph model in Fig. 1 (a).
Given an SG G, define the augmented graph G^a to be an undirected graph with the same vertex set as G, where A, B share an undirected edge in G^a if A, B are connected by a walk consisting exclusively of collider sections in G (note that this trivially includes all A, B that share an edge). We say p(V) satisfies the augmented global Markov property with respect to an SG G if for any A ∈ A(G), p(A) satisfies the UG global Markov property with respect to (G_A)^a. We denote the model, that is, the set of p(V) satisfying this property with respect to G, by P^a(G).
By analogy with the ordinary Markov model and the chain graph model, we say that p(V) obeys the segregated factorization with respect to an SG G if there exists a set of kernels [8] {f_S(S | pas_G(S)) : S ∈ D^a(G) ∪ B*(G)} such that for every A ∈ A(G), p(A) = ∏_{S ∈ D(G_A) ∪ B*(G_A)} f_S(S | pas_G(S)), and for every S ∈ B*(G), f_S(S | pas_G(S)) = ∏_{C ∈ C((G_{pa*_G(S)})^a)} φ_C(C), where φ_C(C) is a mapping from values of C to non-negative reals.
Lemma 5.1 If p(V) factorizes with respect to G, then f_S(S | pas_G(S)) = p(S | pas_G(S)) for every S ∈ B*(G), and f_S(S | pas_G(S)) = ∏_{V ∈ S} p(V | pre_{G,≺}(V) ∩ ant_G(S)) for every S ∈ D^a(G) and any topological ordering ≺ on G.
Theorem 5.1 If p(V) factorizes with respect to an SG G, then p(V) ∈ P^a(G).
Lemma 5.2 If there exists a walk μ in G between A ∈ A, B ∈ B with all non-collider sections not intersecting C, and all collider sections in ant_G(A ∪ B ∪ C), then there exist A′ ∈ A, B′ ∈ B such that A′ and B′ are s-connected given C in G.
Theorem 5.2 P(G) = P^a(G).
Theorem 5.3 For an SG G, if p(V) ∈ P(G) and is positive, then p(V) factorizes with respect to G.
Corollary 5.1 For any SG G, if p(V) is positive, then p(V) ∈ P(G) if and only if p(V) factorizes with respect to G.
6 Causal Inference and Interference Analysis
In this section we briefly describe interference analysis in causal inference, as a motivation for the
use of SGs. Causal inference is concerned with using observational data to infer cause-effect relationships as encoded by interventions (setting variable values from 'outside the model'). Causal
DAGs are often used as a tool, where directed arrows represent causal relationships, not just statistical relevance. See [12] for an extensive discussion of causal inference. Much of recent work
on interference in causal inference, see for instance [10, 19], has generalized causal DAG models
to settings where an intervention given to one subject affects other subjects. A classic example is herd immunity in epidemiology: vaccinating a subset of subjects can render all subjects, even those
who were not vaccinated, immune. Interference is typically encoded by having vertices in a causal
diagram represent not response variability in a population, but responses of individual units, or appropriately defined groups of units, where interference only occurs between groups, not within a
[Figure 3 plots: panels (a)-(c) with scatter and histogram data; axis tick labels not reproducible in text.]
Figure 3: (a) χ² density with 14 degrees of freedom (red) and a histogram of observed deviances of ordinary Markov models of Fig. 1 (d) fitted with data sampled from a randomly sampled model of Fig. 1 (b). (b) Y axis: values of the parameters p(Y5 = 0 | Y4 = 0, A = 0) (red) and p(Y5 = 0 | Y4 = 1, A = 0) (green) in the fitted nested Markov model of Fig. 1 (d). X axis: value of the interaction parameter λ45 (and 3·λ145) in the underlying chain graph model for Fig. 1. (c) Same plot with p(Y5 = 0 | Y4 = 0, A = 1) (yellow) and p(Y5 = 0 | Y4 = 1, A = 1) (blue).
group. For example, the DAG in Fig. 2 (b) represents a generalization of the model in Fig. 2 (a) to
a setting with unit pairs where assigning a vaccine to one unit may also influence another unit, as
was the case in the example in Section 2. Furthermore, we may consider more involved examples
of interference if we record responses over time, as is shown in Fig. 2 (c). Extensive discussions on
this type of modeling approach can be found in [18, 10].
We consider an alternative approach to encoding interference between responses using chain graph
models. We give two justifications for the use of chain graphs. First, we may assume that interference arises as a dependence between responses Y1 and Y2 in equilibrium of a Markov chain
where transition probabilities represent the causal influence of Y1 on Y2 , and vice versa, at multiple points in time before equilibrium is reached. Under certain assumptions [9], it can be shown
that such an equilibrium distribution obeys the Markov property of a chain graph. For example, the DAG shown in Fig. 2 (d) encodes transition probabilities p(Y1^{t+1}, Y2^{t+1} | Y1^t, Y2^t, a1, a2) = p(Y2^{t+1} | Y1^{t+1}, a1, a2) p(Y1^{t+1} | Y2^t, a1, a2), for particular values a1, a2. For suitably chosen conditional
distributions, these transition probabilities lead to an equilibrium distribution that lies in the model
corresponding to the chain graph in Fig. 1 (a) [9]. Second, we may consider certain independence
assumptions in our problem as reasonable, and sometimes such assumptions lead naturally to a chain
graph model. For example, we may study the effect of a marketing intervention in a social network,
and consider it reasonable that we can predict the response of any person only knowing the treatment for that person and responses of all friends of this person in a social network (in other words,
the treatments on everyone else are irrelevant given this information). These assumptions result in a
response model that is a chain graph with directed arrows from treatment to every person?s response,
and undirected edges between friends only.
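To illustrate the first justification numerically, the sketch below simulates the alternating two-response chain and checks that the equilibrium joint of (Y1, Y2) given the treatments is not a product of its marginals. The logistic transition kernels and all coefficients are arbitrary assumptions made only for this demo.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def step(y1, y2, a1, a2):
        # Assumed kernels: Y1' depends on the previous Y2 and a1;
        # Y2' depends on the new Y1' and a2, matching the DAG in Fig. 2 (d).
        y1_new = rng.random() < sigmoid(1.5 * y2 - 0.5 + 0.8 * a1)
        y2_new = rng.random() < sigmoid(1.5 * y1_new - 0.5 + 0.8 * a2)
        return int(y1_new), int(y2_new)

    def equilibrium_joint(a1, a2, burn=1000, n=100000):
        y1, y2, counts = 0, 0, np.zeros((2, 2))
        for t in range(burn + n):
            y1, y2 = step(y1, y2, a1, a2)
            if t >= burn:
                counts[y1, y2] += 1
        return counts / counts.sum()

    p = equilibrium_joint(a1=1, a2=0)
    # Dependence of Y1 and Y2 given treatments: joint vs product of marginals.
    indep = np.outer(p.sum(axis=1), p.sum(axis=0))
    print(np.round(p, 3))
    print("max |p - p1*p2| =", np.abs(p - indep).max())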
6.1 An Example of Interference Analysis Using Segregated Graph Models
Given the ubiquity of unobserved confounding variables in causal inference, and our choice of chain
graphs for modeling interference, we use models represented by SGs to avoid having to deal with
a hidden variable chain graph model directly, due to the possibility of misspecifying the likely high
dimensional hidden variables involved. We briefly describe a simulation we performed to illustrate
how SGs may be used for interference analysis.
As a running example, we used a model shown in Fig. 1 (b), with A, W, B1 , Y1 , Y2 binary, and U
15-valued. We first considered the following family of parameterizations. In all members of this
family, A was assigned via a fair coin, p(W | A, U ) was a logistic model with no interactions, B1
was randomly assigned via a fair coin given no complications (W = 1), otherwise B1 was heavily
weighted (0.8 probability) towards treatment assignment. The distribution p(Y1 , Y2 | U, B1 , A) was
obtained from a joint distribution p(Y1, Y2, U, B1, A) in a log-linear model over an undirected graph G of the form (1/Z) exp( Σ_C (−1)^{‖x_C‖₁} λ_C ), where C ranges over all cliques in G, ‖·‖₁ is the L1-norm, the λ_C are interaction parameters, and Z is a normalizing constant. In our case G was an undirected graph over A, B1, U, Y1, Y2 where the edges from Y2 to B1 and U were missing, and all other edges were present. The parameters λ_C were generated from N(0, 0.3). It is not difficult to show that all elements in our family lie in the chain graph model in Fig. 1 (b).
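A minimal sketch of this construction follows, with two simplifications flagged in the comments: U is made binary rather than 15-valued so the full joint can be enumerated, and a parameter is attached to every complete subset rather than only to maximal cliques.

    import itertools
    import numpy as np

    rng = np.random.default_rng(1)
    names = ['A', 'B1', 'U', 'Y1', 'Y2']
    # Undirected graph from the text: all pairs except Y2-B1 and Y2-U.
    missing = {frozenset({'Y2', 'B1'}), frozenset({'Y2', 'U'})}
    edges = {frozenset(e) for e in itertools.combinations(names, 2)} - missing

    def is_complete(S):
        return all(frozenset(e) in edges for e in itertools.combinations(S, 2))

    # Simplification: a lambda for every non-empty complete subset
    # (a superset of the maximal cliques); U is binary here, not 15-valued.
    terms = [S for r in range(1, 6) for S in itertools.combinations(names, r)
             if is_complete(S)]
    lam = {S: rng.normal(0, 0.3) for S in terms}

    states = list(itertools.product([0, 1], repeat=5))
    def logpot(x):
        d = dict(zip(names, x))
        return sum((-1) ** sum(d[v] for v in S) * lam[S] for S in terms)

    w = np.array([np.exp(logpot(x)) for x in states])
    p = w / w.sum()                      # joint p(A, B1, U, Y1, Y2)
    print("normalizes to", p.sum())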
Since all observed variables in our example are binary, the saturated model has 2^5 − 1 = 31 parameters, and the model corresponding to Fig. 1 (d) is missing 14 of them. Two are missing because
p(B1 | W, A) does not depend on A, and 12 are missing because p(Y2 | Y1 , B1 , W, A) does not
depend on W, B1 . If our results on SGs are correct, we would expect the ordinary Markov model
[6] of a graph in Fig. 1 (b) to be a good fit for the data generated from our hidden variable chain
graph family, where we omit the values of U . In particular, we would expect the observed deviances
of our models fitted to data generated from our family to closely follow a χ² distribution with 14
degrees of freedom. We generated 1000 members of our family described above, used each member
to generate 5000 samples, and fitted the ordinary Markov model using an approach described in [5].
The resulting deviances, plotted against the appropriate χ² distribution, are shown in Fig. 3 (a),
which looks as we expect. We did not vary the parameters for A, W, B1 . This is because models for
Fig. 1 (b) and Fig. 1 (d) will induce the same marginal model for p(A, B1 , W ) by construction.
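For reference, once a model has been fitted, the deviance computation and χ² comparison are short. The counts and fitted probabilities below are placeholders standing in for the output of the fitting procedure of [5].

    import numpy as np
    from scipy.stats import chi2

    def deviance(counts, fitted_probs):
        # G^2 = 2 * sum observed * log(observed / expected); empty cells skipped.
        n = counts.sum()
        obs = counts / n
        mask = counts > 0
        return 2 * n * np.sum(obs[mask] * np.log(obs[mask] / fitted_probs[mask]))

    counts = np.array([1200, 800, 1500, 1500])    # placeholder cell counts
    fitted = np.array([0.25, 0.15, 0.30, 0.30])   # placeholder fitted model
    G2 = deviance(counts, fitted)
    print("deviance:", G2, " p-value:", chi2.sf(G2, df=14))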
In addition, we wanted to illustrate that we can encode interaction parameters directly via parameters
in a SG. To this end, we generated a set of distributions p(Y1 , Y2 | A, U, B1 ) via the binary log-linear
model as described above, where all λ_C parameters were fixed, except we constrained λ_{Y1,Y2} to equal 3 · λ_{A,Y1,Y2}, and varied λ_{Y1,Y2} from −0.3 to 0.3. These parameters represent the two-way
interaction of Y1 and Y2 , and three-way interaction of A, Y1 and Y2 , and thus directly encode the
strength of the interference relationship between responses. Since the SG in Fig. 1 (d) 'breaks the symmetry' by replacing the undirected edge between Y1 and Y2 by a directed edge, the strength
of interaction is represented by the degree of dependence of Y2 and Y1 conditional on A. As can
be seen in Fig. 3 (b), (c), we obtain independence precisely when λ_{Y4,Y5} and λ_{A,Y4,Y5} in the underlying hidden variable chain graph model are 0, as expected.
Our simulations did not require the modification of the fitting procedure in [5], since Fig. 1 (d) is
an ADMG. In general, a SG will have undirected blocks. However, the special property of SGs
allows for a trivial modification of the fitting procedure. Since the likelihood decomposes into
pieces corresponding to districts and blocks of the SG, we can simply fit each district piece using
the approach in [5], and each block piece using any of the existing fitting procedures for discrete
chain graph models.
7 Discussion and Conclusions
In this paper we considered a graphical representation of the ordinary Markov chain graph model,
the set of distributions defined by conditional independences implied by a marginal of a chain graph
model. We show that this model can be represented by segregated graphs via a global Markov property which generalizes Markov properties in chain graphs, DAGs, and mixed graphs representing
marginals of DAG models. Segregated graphs have the property that bidirected and undirected edges
are never adjacent. Under positivity, this global Markov property is equivalent to segregated factorization which decomposes the joint distribution into pieces that correspond either to sections of the
graph containing bidirected edges, or sections of the graph containing undirected edges, but never
both together. The convenient form of this factorization implies many existing results on chain graph
and ordinary Markov models, in particular parameterizations and fitting algorithms, carry over. We
illustrated the utility of segregated graphs for interference analysis in causal inference via simulated
datasets.
Acknowledgements
The author would like to thank Thomas Richardson for suggesting mixed graphs where ↔ and − edges do not meet as interesting objects to think about, and Elizabeth Ogburn and Eric Tchetgen
Tchetgen for clarifying discussions of interference. This work was supported in part by an NIH
grant R01 AI104459-01A1.
References
[1] S. A. Andersson, D. Madigan, and M. D. Perlman. A characterization of Markov equivalence classes for acyclic digraphs. Annals of Statistics, 25:505–541, 1997.
[2] J. Bell. On the Einstein Podolsky Rosen paradox. Physics, 1(3):195–200, 1964.
[3] Z. Cai, M. Kuroki, J. Pearl, and J. Tian. Bounds on direct effects in the presence of confounded intermediate variables. Biometrics, 64:695–701, 2008.
[4] M. Drton. Discrete chain graph models. Bernoulli, 15(3):736–753, 2009.
[5] R. J. Evans and T. S. Richardson. Maximum likelihood fitting of acyclic directed mixed graphs to binary data. In Proceedings of the Twenty Sixth Conference on Uncertainty in Artificial Intelligence, volume 26, 2010.
[6] R. J. Evans and T. S. Richardson. Markovian acyclic directed mixed graphs for discrete data. Annals of Statistics, pages 1–30, 2014.
[7] J. T. A. Koster. Marginalizing and conditioning in graphical models. Bernoulli, 8(6):817–840, 2002.
[8] S. L. Lauritzen. Graphical Models. Oxford, U.K.: Clarendon, 1996.
[9] S. L. Lauritzen and T. S. Richardson. Chain graph models and their causal interpretations (with discussion). Journal of the Royal Statistical Society: Series B, 64:321–361, 2002.
[10] E. L. Ogburn and T. J. VanderWeele. Causal diagrams for interference. Statistical Science, 29(4):559–578, 2014.
[11] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Mateo, 1988.
[12] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2nd edition, 2009.
[13] T. Richardson and P. Spirtes. Ancestral graph Markov models. Annals of Statistics, 30:962–1030, 2002.
[14] T. S. Richardson. Markov properties for acyclic directed mixed graphs. Scandinavian Journal of Statistics, 30(1):145–157, 2003.
[15] K. Sadeghi and S. Lauritzen. Markov properties for mixed graphs. Bernoulli, 20(2):676–696, 2014.
[16] I. Shpitser, R. J. Evans, T. S. Richardson, and J. M. Robins. Introduction to nested Markov models. Behaviormetrika, 41(1):3–39, 2014.
[17] M. Studeny. Bayesian networks from the point of view of chain graphs. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI-98), pages 496–503. Morgan Kaufmann, San Francisco, CA, 1998.
[18] M. J. van der Laan. Causal inference for networks. Working paper, 2012.
[19] M. J. van der Laan. Causal inference for a population of causally connected units. Journal of Causal Inference, 2(1):13–74, 2014.
[20] T. J. VanderWeele, E. J. T. Tchetgen, and M. E. Halloran. Components of the indirect effect in vaccine trials: identification of contagion and infectiousness effects. Epidemiology, 23(5):751–761, 2012.
[21] T. S. Verma and J. Pearl. Equivalence and synthesis of causal models. Technical Report R-150, Department of Computer Science, University of California, Los Angeles, 1990.
[22] N. Wermuth. Probability distributions with summary graph structure. Bernoulli, 17(3):845–879, 2011.
5,419 | 5,905 | Approximating Sparse PCA from Incomplete Data
Abhisek Kundu ?
Petros Drineas ?
Malik Magdon-Ismail ?
Abstract
We study how well one can recover sparse principal components of a data matrix using a sketch formed from a few of its elements. We show that for a wide
class of optimization problems, if the sketch is close (in the spectral norm) to the
original data matrix, then one can recover a near optimal solution to the optimization problem by using the sketch. In particular, we use this approach to obtain
sparse principal components and show that for m data points in n dimensions,
O(?2 k? max{m, n}) elements gives an -additive approximation to the sparse
PCA problem (k? is the stable rank of the data matrix). We demonstrate our algorithms extensively on image, text, biological and financial data. The results show
that not only are we able to recover the sparse PCAs from the incomplete data, but
by using our sparse sketch, the running time drops by a factor of five or more.
1 Introduction
Principal components analysis constructs a low dimensional subspace of the data such that projection
of the data onto this subspace preserves as much information as possible (or equivalently maximizes
the variance of the projected data). The earliest reference to principal components analysis (PCA)
is in [15]. Since then, PCA has evolved into a classic tool for data analysis. A challenge for the
interpretation of the principal components (or factors) is that they can be linear combinations of all
the original variables. When the original variables have direct physical significance (e.g. genes in
biological applications or assets in financial applications) it is desirable to have factors which have
loadings on only a small number of the original variables. These interpretable factors are sparse
principal components (SPCA).
The question we address is not how to better perform sparse PCA; rather, it is whether one can perform sparse PCA on incomplete data and be assured some degree of success. (i.e., can we do sparse
PCA when we have a small sample of data points and those data points have missing features?).
Incomplete data is a situation that one is confronted with all too often in machine learning. For
example, with user-recommendation data, one does not have all the ratings of any given user. Or in
a privacy preserving setting, a client may not want to give us all entries in the data matrix. In such a
setting, our goal is to show that if the samples that we do get are chosen carefully, the sparse PCA
features of the data can be recovered within some provable error bounds. A significant part of this
work is to demonstrate our algorithms on a variety of data sets.
More formally, the data matrix is A ∈ R^{m×n} (m data points in n dimensions). Data matrices often have low effective rank. Let A_k be the best rank-k approximation to A; in practice, it is often possible to choose a small value of k for which ‖A − A_k‖₂ is small. The best rank-k approximation A_k is obtained by projecting A onto the subspace spanned by its top-k principal components V_k, which is the n × k matrix containing the top-k right singular vectors of A. These top-k principal
* Department of Computer Science, Rensselaer Polytechnic Institute, Troy, NY, kundua2@rpi.edu.
† Department of Computer Science, Rensselaer Polytechnic Institute, Troy, NY, drinep@cs.rpi.edu.
‡ Department of Computer Science, Rensselaer Polytechnic Institute, Troy, NY, magdon@cs.rpi.edu.
components are the solution to the variance maximization problem:
V_k = argmax_{V ∈ R^{n×k}, VᵀV = I} trace(VᵀAᵀAV).
We denote the maximum variance attainable by OPT_k, which is the sum of squares of the top-k singular values of A. To get sparse principal components, we add a sparsity constraint to the optimization problem: every column of V should have at most r non-zero entries (the sparsity parameter r is an input),
S_k = argmax_{V ∈ R^{n×k}, VᵀV = I, ‖V^(i)‖₀ ≤ r} trace(VᵀAᵀAV).   (1)
The sparse PCA problem is itself a very hard problem that is not only NP-hard, but also inapproximable [12]. There are many heuristics for obtaining sparse factors [2, 18, 20, 5, 4, 14, 16], including some approximation algorithms with provable guarantees [1]. The existing research typically addresses the task of getting just the top principal component (k = 1) (some exceptions are [11, 3, 19, 9]). While the sparse PCA problem is hard and interesting, it is not the focus of this work.
We address the question: What if we do not know A, but only have a sparse sampling of some of the entries in A (incomplete data)? The sparse sampling is used to construct a sketch of A, denoted Ã. There is not much else to do but solve the sparse PCA problem with the sketch à instead of the full data A to get S̃_k,
S̃_k = argmax_{V ∈ R^{n×k}, VᵀV = I, ‖V^(i)‖₀ ≤ r} trace(VᵀÃᵀÃV).   (2)
We study how S̃_k performs as an approximation to S_k with respect to the objective that we are trying to optimize, namely trace(SᵀAᵀAS); the quality of approximation is measured with respect to the true A. We show that the quality of approximation is controlled by how well ÃᵀÃ approximates AᵀA, as measured by the spectral norm of the deviation AᵀA − ÃᵀÃ. This is a general result that does not rely on how one constructs the sketch Ã.
Theorem 1 (Sparse PCA from a Sketch) Let S_k be a solution to the sparse PCA problem that solves (1), and S̃_k a solution to the sparse PCA problem for the sketch à which solves (2). Then,
trace(S̃_kᵀAᵀAS̃_k) ≥ trace(S_kᵀAᵀAS_k) − 2k‖AᵀA − ÃᵀÃ‖₂.
Theorem 1 says that if we can closely approximate A with Ã, then we can compute, from Ã, sparse components which capture almost as much variance as the optimal sparse components computed from the full data A.
In our setting, the sketch à is computed from a sparse sampling of the data elements in A (incomplete data). To determine which elements to sample, and how to form the sketch, we leverage some recent results in elementwise matrix completion ([8]). In a nutshell, if one samples larger data elements with higher probability than smaller data elements, then, for the resulting sketch Ã, the error ‖AᵀA − ÃᵀÃ‖₂ will be small. The details of the sampling scheme and how the error depends on the number of samples are given in Section 2.1. Combining the bound on ‖A − Ã‖₂ from Theorem 4 in Section 2.1 with Theorem 1, we get our main result:
Theorem 2 (Sampling Complexity for Sparse PCA) Sample s data elements from A ∈ R^{m×n} to form the sparse sketch à using Algorithm 1. Let S_k be a solution to the sparse PCA problem that solves (1), and let S̃_k, which solves (2), be a solution to the sparse PCA problem for the sketch à formed from the s sampled data elements. Suppose the number of samples s satisfies
s ≥ 2k²ε⁻²(ρ² + γε/(3k)) log((m + n)/δ)
(ρ² and γ are dimensionless quantities that depend only on A). Then, with probability at least 1 − δ,
trace(S̃_kᵀAᵀAS̃_k) ≥ trace(S_kᵀAᵀAS_k) − 2(2ε + ε²/k)‖A‖₂².
The dependence of ρ² and γ on A is given in Section 2.1. Roughly speaking, we can ignore the term with γ since it is multiplied by ε/k, and ρ² = O(k̃ max{m, n}), where k̃ is the stable (numerical) rank of A. To paraphrase Theorem 2, when the stable rank is a small constant, with O(k² max{m, n}) samples one can recover almost as good sparse principal components as with all the data (the price being a small fraction of the optimal variance, since OPT_k ≥ ‖A‖₂²). As far as we know, the only prior work related to the problem we consider here is [10], which proposed a specific method to construct sparse PCA from incomplete data. However, we develop a general tool that can be used with any existing sparse PCA heuristic. Moreover, we derive much simpler bounds (Theorems 1 and 2) using matrix concentration inequalities, as opposed to ε-net arguments in [10].
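As a rough illustration of the sampling budget, the following computes the simplified bound, using ρ² ≈ k̃ max{m, n} and dropping the γ term as suggested above; the constants are therefore indicative only.

    import numpy as np

    def sample_budget(A, k, eps, delta):
        # Simplified reading of Theorem 2: s ~ 2 k^2 eps^-2 * k_stable
        # * max(m, n) * log((m + n)/delta), with the gamma term dropped.
        m, n = A.shape
        k_stable = np.linalg.norm(A, 'fro') ** 2 / np.linalg.norm(A, 2) ** 2
        return int(np.ceil(2 * k ** 2 / eps ** 2 * k_stable * max(m, n)
                           * np.log((m + n) / delta)))

    A = np.random.default_rng(6).normal(size=(300, 100))
    print(sample_budget(A, k=1, eps=0.5, delta=0.1))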
We also give an application of Theorem 1 to running sparse PCA after 'denoising' the data using a
greedy thresholding algorithm that sets the small elements to zero (see Theorem 3). Such denoising
is appropriate when the observed matrix has been element-wise perturbed by small noise, and the
uncontaminated data matrix is sparse and contains large elements. We show that if an appropriate
fraction of the (noisy) data is set to zero, one can still recover sparse principal components. This
gives a principled approach to regularizing sparse PCA in the presence of small noise when the data
is sparse.
Not only do our algorithms preserve the quality of the sparse principal components, but iterative algorithms for sparse PCA, whose running time is proportional to the number of non-zero entries in the input matrix, benefit from the sparsity of Ã. Our experiments show about five-fold speed gains while producing near-comparable sparse components using less than 10% of the data.
Discussion. In summary, we show that one can recover sparse PCA from incomplete data while
gaining computationally at the same time. Our result holds for the optimal sparse components from A versus from Ã. One cannot efficiently find these optimal components (since the problem is NP-hard to even approximate), so one runs a heuristic, in which case the approximation error of the
heuristic would have to be taken into account. Our experiments show that using the incomplete data
with the heuristics is just as good as those same heuristics with the complete data.
In practice, one may not be able to sample the data, but rather the samples are given to you. Our
result establishes that if the samples are chosen with larger values being more likely, then one can
recover sparse PCA. In practice one has no choice but to run the sparse PCA on these sampled
elements and hope. Our theoretical results suggest that the outcome will be reasonable. This is
because, while we do not have specific control over what samples we get, the samples are likely to
represent the larger elements. For example, with user-recommendation data, users are more likely
to rate items they either really like (large positive value) or really dislike (large negative value).
Notation. We use bold uppercase (e.g., X) for matrices and bold lowercase (e.g., x) for column vectors. The i-th row of X is X_(i), and the i-th column of X is X^(i). Let [n] denote the set {1, 2, . . . , n}. E(X) is the expectation of a random variable X; for a matrix, E(X) denotes the element-wise expectation. For a matrix X ∈ R^{m×n}, the Frobenius norm ‖X‖_F is given by ‖X‖_F² = Σ_{i,j=1}^{m,n} X_ij², and the spectral (operator) norm ‖X‖₂ is ‖X‖₂ = max_{‖y‖₂=1} ‖Xy‖₂. We also have the ℓ1 and ℓ0 norms: ‖X‖_{ℓ1} = Σ_{i,j=1}^{m,n} |X_ij| and ‖X‖₀ (the number of non-zero entries in X). The k-th largest singular value of X is σ_k(X), and log x is the natural logarithm of x.
2 Sparse PCA from a Sketch
In this section, we will prove Theorem 1 and give a simple application to zeroing small fluctuations as a way to regularize against noise. In the next section, we will use a more sophisticated way to select
the elements of the matrix allowing us to tolerate a sparser matrix (more incomplete data) but still
recovering sparse PCA to reasonable accuracy.
Theorem 1 will be a corollary of a more general result, for a class of optimization problems involving
a Lipschitz-like objective function over an arbitrary (not necessarily convex) domain. Let f (V, X)
be a function that is defined for a matrix variable V and a matrix parameter X. The optimization
variable V is in some feasible set S which is arbitrary. The parameter X is also arbitrary. We assume
that f is locally Lipschitz in X, that is,
|f(V, X) − f(V, X̃)| ≤ λ(X)‖X − X̃‖₂   for all V ∈ S.
(Note we allow the 'Lipschitz constant' to depend on the fixed matrix X but not on the variables V, X̃; this is more general than a globally Lipschitz objective.) The next lemma is the key tool we need to prove Theorem 1, and it may be of independent interest in other optimization settings. We are interested in maximizing f(V, X) w.r.t. V to obtain V*. But we only have an approximation X̃ for X, and so we maximize f(V, X̃) to obtain Ṽ*, which will be a suboptimal solution with respect to X. We wish to bound f(V*, X) − f(Ṽ*, X), which quantifies how suboptimal Ṽ* is w.r.t. X.
Lemma 1 (Surrogate optimization bound) Let f(V, X) be λ-locally Lipschitz w.r.t. X over the domain V ∈ S. Define V* = argmax_{V∈S} f(V, X) and Ṽ* = argmax_{V∈S} f(V, X̃). Then,
f(V*, X) − f(Ṽ*, X) ≤ 2λ(X)‖X − X̃‖₂.
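For k = 1 and small n, the lemma can be checked numerically by brute force on the sparse PCA instance defined just below: enumerate all supports of size r, solve each restricted eigenproblem exactly, and compare the suboptimality gap to 2λ‖X − X̃‖₂ with λ = k = 1. The sizes and perturbation level are arbitrary demo choices.

    import itertools
    import numpy as np

    rng = np.random.default_rng(2)
    n, r = 8, 3

    def sparse_pca_1(X):
        # Exact r-sparse top component by enumerating supports (k = 1 only).
        # For each support S, max of v'Xv over unit v supported on S is the
        # top eigenvalue of the principal submatrix X[S, S].
        best_val, best = -np.inf, None
        for S in itertools.combinations(range(n), r):
            S = list(S)
            w, Q = np.linalg.eigh(X[np.ix_(S, S)])
            if w[-1] > best_val:
                best_val, best = w[-1], (S, Q[:, -1])
        v = np.zeros(n)
        v[best[0]] = best[1]
        return v

    A = rng.normal(size=(20, n))
    X = A.T @ A
    Xt = X + 0.1 * rng.normal(size=(n, n))
    Xt = (Xt + Xt.T) / 2          # keep the perturbed matrix symmetric

    v_star = sparse_pca_1(X)
    v_tilde = sparse_pca_1(Xt)
    gap = v_star @ X @ v_star - v_tilde @ X @ v_tilde
    bound = 2 * np.linalg.norm(X - Xt, 2)
    print(gap, "<=", bound, gap <= bound + 1e-9)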
In the lemma, the function f and the domain S are arbitrary. In our setting, X ∈ R^{n×n}, the domain S = {V ∈ R^{n×k} : VᵀV = I_k, ‖V^(j)‖₀ ≤ r}, and f(V, X) = trace(VᵀXV). We first show that f is Lipschitz w.r.t. X with λ = k (a constant independent of X). Let the representation of V by its columns be V = [v1, . . . , vk]. Then,
|trace(VᵀXV) − trace(VᵀX̃V)| = |trace((X − X̃)VVᵀ)| ≤ Σ_{i=1}^k σ_i(X − X̃) ≤ k‖X − X̃‖₂,
where σ_i(A) is the i-th largest singular value of A (we used von Neumann's trace inequality and the fact that VVᵀ is a k-dimensional projection). Now, by Lemma 1, trace(V*ᵀXV*) − trace(Ṽ*ᵀXṼ*) ≤ 2k‖X − X̃‖₂. Theorem 1 follows by setting X = AᵀA and X̃ = ÃᵀÃ.¹
Greedy thresholding. We give the simplest scenario of incomplete data where Theorem 1 gives
some reassurance that one can compute good sparse principal components. Suppose the smallest
data elements have been set to zero. This can happen, for example, if only the largest elements are
measured, or in a noisy setting if the small elements are treated as noise and set to zero. So
Ã_ij = A_ij if |A_ij| ≥ δ, and Ã_ij = 0 if |A_ij| < δ.
Recall k̃ = ‖A‖_F²/‖A‖₂² (the stable rank of A), and define ‖A_δ‖_F² = Σ_{|A_ij|<δ} A_ij². Let A = Ã + Δ.
By construction, ‖Δ‖_F² = ‖A_δ‖_F². Then,
‖AᵀA − ÃᵀÃ‖₂ = ‖AᵀΔ + ΔᵀA − ΔᵀΔ‖₂ ≤ 2‖A‖₂‖Δ‖₂ + ‖Δ‖₂².   (3)
Suppose the zeroing of elements only loses a fraction of the energy in A, i.e., δ is selected so that ‖A_δ‖_F² ≤ ε²‖A‖_F²/k̃; that is, an ε²/k̃ fraction of the total variance in A has been lost in the unmeasured (or zero) data. Then ‖Δ‖₂ ≤ ‖Δ‖_F ≤ ε‖A‖_F/√k̃ = ε‖A‖₂.
Theorem 3 Suppose that à is created from A by zeroing all elements that are less than δ, and δ is such that the truncated norm satisfies ‖A_δ‖₂² ≤ ε²‖A‖_F²/k̃. Then the sparse PCA solution Ṽ* satisfies
trace(Ṽ*ᵀAᵀAṼ*) ≥ trace(V*ᵀAᵀAV*) − 2k‖A‖₂²(2ε + ε²).
Theorem 3 shows that it is possible to recover sparse PCA after setting small elements to zero. This is appropriate when most of the elements in A are small noise and a few of the elements in A contain large data elements. For example, if the data consists of sparse O(√(nm)) large elements (of magnitude, say, 1) and many nm − O(√(nm)) small elements whose magnitude is o(1/√(nm)) (a high signal-to-noise setting), then ‖A_δ‖₂²/‖A‖₂² → 0, and with just a sparse sampling of the O(√(nm)) large elements (very incomplete data), we recover near optimal sparse PCA.
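A small numeric illustration of this regime (the matrix, sizes, and noise level are arbitrary demo choices): choose the largest threshold δ whose truncated energy stays within ε²‖A‖_F²/k̃, zero the small entries, and compare the spectral error to ε‖A‖₂.

    import numpy as np

    rng = np.random.default_rng(3)
    m, n, eps = 200, 100, 0.25

    # Sparse large signal plus small noise, the regime discussed above.
    A = 0.01 * rng.normal(size=(m, n))
    idx = rng.choice(m * n, size=400, replace=False)
    A.flat[idx] += rng.choice([-1.0, 1.0], size=400)

    k_stable = np.linalg.norm(A, 'fro') ** 2 / np.linalg.norm(A, 2) ** 2
    budget = eps ** 2 * np.linalg.norm(A, 'fro') ** 2 / k_stable

    # Largest threshold keeping the truncated (zeroed) energy within budget.
    vals = np.sort(np.abs(A).ravel())
    cum = np.cumsum(vals ** 2)
    delta = vals[np.searchsorted(cum, budget)]

    A_sketch = np.where(np.abs(A) >= delta, A, 0.0)
    err = np.linalg.norm(A - A_sketch, 2)
    print("nnz kept:", (A_sketch != 0).sum(), "of", m * n)
    print("spectral error:", err, " eps*||A||_2:", eps * np.linalg.norm(A, 2))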
Greedily keeping only the large elements of the matrix requires a particular structure in A to work,
and it is based on a crude Frobenius-norm bound for the spectral error. In Section 2.1, we use recent
results in element-wise matrix sparsification to choose the elements in a randomized way, with a bias
toward large elements. With high probability, one can directly bound the spectral error and hence
get better performance.
¹Theorem 1 can also be proved as follows: trace(V*ᵀXV*) − trace(Ṽ*ᵀXṼ*) = trace(V*ᵀXV*) − trace(V*ᵀX̃V*) + trace(V*ᵀX̃V*) − trace(Ṽ*ᵀXṼ*) ≤ k‖X − X̃‖₂ + trace(Ṽ*ᵀX̃Ṽ*) − trace(Ṽ*ᵀXṼ*) ≤ k‖X − X̃‖₂ + k‖X̃ − X‖₂ = 2k‖X − X̃‖₂, where the first inequality uses trace(V*ᵀX̃V*) ≤ trace(Ṽ*ᵀX̃Ṽ*), the optimality of Ṽ* for X̃.
Algorithm 1 Hybrid (ℓ1, ℓ2)-Element Sampling
Input: A ∈ R^{m×n}; # samples s; probabilities {p_ij}.
1: Set à = 0_{m×n}.
2: for t = 1 . . . s (i.i.d. trials with replacement) do
3:   Randomly sample indices (i_t, j_t) ∈ [m] × [n] with P[(i_t, j_t) = (i, j)] = p_ij.
4:   Update Ã: Ã_{i_t j_t} ← Ã_{i_t j_t} + A_{i_t j_t}/(s · p_{i_t j_t}).
5: return à (with at most s non-zero entries).
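A direct Python transcription of Algorithm 1, with the probabilities of Eq. (4) computed inside; the mixing parameter α is taken as an input, since we do not reproduce the α* selection routine (Algorithm 1 of [8]) here.

    import numpy as np

    def hybrid_element_sampling(A, s, alpha, rng=None):
        """Algorithm 1: sample s entries i.i.d. from the hybrid (l1, l2)
        distribution of Eq. (4) and return the unbiased sparse sketch."""
        rng = rng or np.random.default_rng()
        m, n = A.shape
        absA = np.abs(A).ravel()
        p = alpha * absA / absA.sum() + (1 - alpha) * absA ** 2 / (absA ** 2).sum()
        draws = rng.choice(m * n, size=s, replace=True, p=p)
        sketch = np.zeros(m * n)
        for t in draws:
            sketch[t] += A.flat[t] / (s * p[t])
        return sketch.reshape(m, n)

    # Sanity check of unbiasedness: averaging many sketches recovers A.
    A = np.arange(12, dtype=float).reshape(3, 4) - 5
    est = np.mean([hybrid_element_sampling(A, 40, 0.5,
                                           np.random.default_rng(i))
                   for i in range(2000)], axis=0)
    print(np.round(est - A, 2))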
2.1 An (ℓ1, ℓ2)-Sampling Based Sketch
In the previous section, we created the sketch by deterministically setting the small data elements to zero. Instead, we could randomly select the data elements to keep. It is natural to bias this random sampling toward the larger elements. Therefore, we define sampling probabilities for each data element A_ij which are proportional to a mixture of the absolute value and square of the element:
p_ij = α · |A_ij|/‖A‖_{ℓ1} + (1 − α) · A_ij²/‖A‖_F²,   (4)
where α ∈ (0, 1] is a mixing parameter. Such a sampling probability was used in [8] to sample data elements in independent trials to get a sketch Ã. We repeat the prototypical algorithm for element-wise matrix sampling in Algorithm 1.
Note that unlike with the deterministic zeroing of small elements, in this sampling scheme one samples the element A_ij with probability p_ij and then rescales it by 1/p_ij. To see the intuition for this rescaling, consider the expected outcome for a single sample: E[Ã_ij] = p_ij · (A_ij/p_ij) + (1 − p_ij) · 0 = A_ij; that is, Ã is a sparse but unbiased estimate for A. This unbiasedness holds for any choice of the sampling probabilities p_ij defined over the elements of A in Algorithm 1. However, for an appropriate choice of the sampling probabilities, we get much more than unbiasedness; we can control the spectral norm of the deviation, ‖A − Ã‖₂. In particular, the hybrid-(ℓ1, ℓ2) distribution in (4) was analyzed in [8], where they suggest an optimal choice for the mixing parameter α* which minimizes the theoretical bound on ‖A − Ã‖₂. The algorithm to choose α* is summarized in Algorithm 1 of [8].
Using the probabilities in (4) to create the sketch à via Algorithm 1, with α* selected using Algorithm 1 of [8], one can prove a bound for ‖A − Ã‖₂. We state a simplified version of the bound from [8] in Theorem 4.
Theorem 4 ([8]) Let A ∈ R^{m×n} and let ε > 0 be an accuracy parameter. Define probabilities p_ij as in (4) with α* chosen using Algorithm 1 of [8]. Let à be the sparse sketch produced using Algorithm 1 with a number of samples s ≥ 2ε⁻²(ρ² + γε/3) log((m + n)/δ), where
ρ² = k̃ · max{m, n} · (α* · √k̃ · ‖A‖₂/‖A‖_{ℓ1} + (1 − α*))⁻¹,  and  γ = 1 + √(mn k̃)/α*.
Then, with probability at least 1 − δ,
‖A − Ã‖₂ ≤ ε‖A‖₂.
3 Experiments
We show the experimental performance of sparse PCA from a sketch using several real data matrices.
As we mentioned, sparse PCA is NP-hard, and so we must use heuristics. These heuristics are discussed next, followed by the data, the experimental design, and finally the results.
Algorithms for Sparse PCA: Let G (ground truth) denote the algorithm which computes the principal components (which may not be sparse) of the full data matrix A; the optimal variance is OPTk .
We consider six heuristics for getting sparse principal components.
Gmax,r: The r largest-magnitude entries in each principal component generated by G.
Gsp,r: r-sparse components using the Spasm toolbox of [17] with A.
Hmax,r: The r largest entries of the principal components for the (ℓ1, ℓ2)-sampled sketch Ã.
Hsp,r: r-sparse components using Spasm with the (ℓ1, ℓ2)-sampled sketch Ã.
Umax,r: The r largest entries of the principal components for the uniformly sampled sketch Ã.
Usp,r: r-sparse components using Spasm with the uniformly sampled sketch Ã.
Output of an algorithm Z is a set of sparse principal components V, and our metric is f(Z) = trace(VᵀAᵀAV), where A is the original centered data. We consider the following statistics.
f(Gmax,r)/f(Gsp,r): relative loss of greedy thresholding versus Spasm, illustrating the value of a good sparse PCA algorithm. Our sketch-based algorithms do not address this loss.
f(Hmax/sp,r)/f(Gmax/sp,r): relative loss of using the (ℓ1, ℓ2)-sketch à instead of the complete data A. A ratio close to 1 is desired.
f(Umax/sp,r)/f(Gmax/sp,r): relative loss of using the uniform sketch à instead of the complete data A. A benchmark to highlight the value of a good sketch.
We also report the computation time for the algorithms. We show results to confirm that sparse PCA algorithms using the (ℓ1, ℓ2)-sketch are nearly comparable to those same algorithms on the complete data, and that the gain in computation time from the sparse sketch is proportional to the sparsity.
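For concreteness, the metric f(·) and the max-truncation heuristics can be written in a few lines. Spasm is a MATLAB toolbox, so the Gsp/Hsp/Usp variants are not reproduced; the sketch below covers only the Gmax/Hmax/Umax-style truncation and is shown for k = 1.

    import numpy as np

    def f_metric(V, A):
        # f(Z) = trace(V' A' A V): variance captured on the true centered data.
        return np.trace(V.T @ (A.T @ (A @ V)))

    def max_r_components(B, k, r):
        """G/H/U-max,r: top-k PCA of B, keep the r largest-magnitude entries
        of each component, and renormalize (orthogonality is only exact for
        k = 1 after truncation)."""
        _, _, Vt = np.linalg.svd(B, full_matrices=False)
        V = Vt[:k].T.copy()
        for j in range(k):
            keep = np.argsort(np.abs(V[:, j]))[-r:]
            mask = np.zeros(V.shape[0], dtype=bool)
            mask[keep] = True
            V[~mask, j] = 0.0
            V[:, j] /= np.linalg.norm(V[:, j])
        return V

    A = np.random.default_rng(4).normal(size=(100, 30))
    A -= A.mean(axis=0)                    # center the data
    V = max_r_components(A, k=1, r=10)     # Gmax,r on the complete data
    print("f(Gmax,r) =", f_metric(V, A))

Passing a sketch Ã in place of A as the second argument of max_r_components (while keeping A in f_metric) gives the Hmax,r and Umax,r variants.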
Data Sets: We show results on image, text, stock, and gene expression data.
• Digit Data (m = 2313, n = 256): We use the handwritten zip-code digit images of [7] (300 pixels/inch in 8-bit gray scale). Each pixel is a feature (normalized to be in [−1, 1]). Each 16 × 16 digit image forms a row of the data matrix A. We focus on three digits: '6' (664 samples), '9' (644 samples), and '1' (1005 samples).
• TechTC Data (m = 139, n = 15170): We use the Technion Repository of Text Categorization Dataset (TechTC, see [6]) from the Open Directory Project (ODP). We removed words (features) with fewer than 5 letters. Each document (row) has unit norm.
• Stock Data (m = 7056, n = 1218): We use S&P100 stock market data with 7056 snapshots of prices for 1218 stocks. The prices of each day form a row of the data matrix, and a principal component represents an 'index' of sorts; each stock is a feature.
• Gene Expression Data (m = 107, n = 22215): We use GSE10072 gene expression data for lung cancer from the NCBI Gene Expression Omnibus database. There are 107 samples (58 lung tumor cases and 49 normal lung controls) forming the rows of the data matrix, with 22,215 probes (features) from the GPL96 platform annotation table.
3.1 Results
We report results primarily for the top principal component (k = 1), which is the case most considered in the literature. When k > 1, our results do not qualitatively change. We note the optimal mixing parameter α*, computed using Algorithm 1 of [8], for the various datasets in Table 1.
Handwritten Digits. We sample approximately 7% of the elements from the centered data using (ℓ1, ℓ2)-sampling, as well as uniform sampling. The performance for small r is shown in Table 1, including the running time τ. For this data, f(Gmax,r)/f(Gsp,r) ≈ 0.23 (r = 10), so it is important to use a good sparse PCA algorithm. We see from Table 1 that the (ℓ1, ℓ2)-sketch significantly outperforms the uniform sketch. A more extensive comparison of recovered variance is given in Figure 2(a). We also observe a speed-up of a factor of about 6 for the (ℓ1, ℓ2)-sketch. We point out that the uniform sketch is reasonable for the digits data because most data elements are close to either +1 or −1, since the pixels are either black or white.
We show a visualization of the principal components in Figure 1. We observe that the sparse components from the (ℓ1, ℓ2)-sketch are almost identical to those from the complete data.
TechTC Data. We sample approximately 5% of the elements from the centered data using our (ℓ1, ℓ2)-sampling, as well as uniform sampling. For this data, f(Gmax,r)/f(Gsp,r) ≈ 0.84 (r = 10). We observe a very significant performance difference between the (ℓ1, ℓ2)-sketch and the uniform sketch. A more extensive comparison of recovered variance is given in Figure 2(b). We also observe
Dataset | α*  | r  | f(Hmax/sp,r)/f(Gmax/sp,r) | τ(G)/τ(H) | f(Umax/sp,r)/f(Gmax/sp,r) | τ(G)/τ(U)
Digit   | .42 | 40 | 0.99/0.90                 | 6.21      | 1.01/0.70                 | 5.33
TechTC  | 1   | 40 | 0.94/0.99                 | 5.70      | 0.41/0.38                 | 5.96
Stock   | .10 | 40 | 1.00/1.00                 | 3.72      | 0.66/0.66                 | 4.76
Gene    | .92 | 40 | 0.82/0.88                 | 3.61      | 0.65/0.15                 | 2.53
Table 1: Comparison of sparse principal components from the (ℓ1, ℓ2)-sketch and uniform sketch.
Figure 1: [Digits] Visualization of top-3 sparse principal components for (a) r = 100%, (b) r = 50%, (c) r = 30%, (d) r = 10%. In each figure, the left panel shows Gsp,r and the right panel shows Hsp,r.
[Figure 2 plots: f(Hsp,r)/f(Gsp,r) and f(Usp,r)/f(Gsp,r) versus the sparsity constraint r (percent), panels (a) Digit, (b) TechTC, (c) Stock, (d) Gene; curves not reproducible in text.]
Figure 2: Performance of sparse PCA for the (ℓ1, ℓ2)-sketch and uniform sketch over an extensive range of the sparsity constraint r. The performance of the uniform sketch is significantly worse, highlighting the importance of a good sketch.
a speed-up of a factor of about 6 for the (ℓ1, ℓ2)-sketch. Unlike the digits data, which is uniformly near ±1, the text data is 'spikey', and now it is important to sample with a bias toward larger elements, which is why the uniform sketch performs very poorly.
As a final comparison, we look at the actual sparse top component with sparsity parameter r = 10. The topic IDs in the TechTC data are 10567 = 'US: Indiana: Evansville' and 11346 = 'US: Florida'. The top-10 features (words) in the full PCA on the complete data are shown in Table 2.
In Table 3 we show which words appear in the top sparse principal component with sparsity r = 10 using various sparse PCA algorithms. We observe that the sparse PCA from the (ℓ1, ℓ2)-sketch with only 5% of the data sampled matches quite closely the same sparse PCA algorithm using the complete data (Gmax/sp,r matches Hmax/sp,r).
Stock Data. We sample about 2% of the non-zero elements from the centered data using the (ℓ1, ℓ2)-sampling, as well as uniform sampling. For this data, f(Gmax,r)/f(Gsp,r) ≈ 0.96 (r = 10). We observe a very significant performance difference between the (ℓ1, ℓ2)-sketch and the uniform sketch. A more extensive comparison of recovered variance is given in Figure 2(c). We also observe a speed-up of a factor of about 4 for the (ℓ1, ℓ2)-sketch. Similar to the TechTC data, this dataset is also 'spikey', so biased sampling toward larger elements significantly outperforms the uniform sketch.
Gene Expression Data. We sample about 9% of the elements from the centered data using the (ℓ1, ℓ2)-sampling, as well as uniform sampling. For this data, f(Gmax,r)/f(Gsp,r) ≈ 0.05 (r = 10),
ID  Top 10 in Gmax,r    ID  Other words
1   evansville          11  service
2   florida             12  small
3   south               13  frame
4   miami               14  tours
5   indiana             15  faver
6   information         16  transaction
7   beach               17  needs
8   lauderdale          18  commercial
9   estate              19  bullet
10  spacer              20  inlets
                        21  producer
Table 2: [TechTC] Top ten words in the top principal component of the complete data (the other words are discovered by some of the sparse PCA algorithms).

Gmax,r: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
Hmax,r: 1, 2, 3, 4, 5, 7, 6, 8, 11, 12
Umax,r: 6, 14, 15, 16, 17, 7, 18, 19, 20, 21
Gsp,r:  1, 2, 3, 4, 5, 6, 7, 8, 9, 13
Hsp,r:  1, 2, 3, 4, 5, 7, 8, 6, 12, 11
Usp,r:  6, 14, 15, 16, 17, 7, 18, 19, 20, 21
Table 3: [TechTC] Relative ordering of the words (w.r.t. Gmax,r) in the top sparse principal component with sparsity parameter r = 10.
which means a good sparse PCA algorithm is imperative. We observe a very significant performance difference between the (ℓ1, ℓ2)-sketch and the uniform sketch. A more extensive comparison of recovered variance is given in Figure 2(d). We also observe a speed-up of a factor of about 4 for the (ℓ1, ℓ2)-sketch. Similar to the TechTC data, this dataset is also 'spikey', and consequently biased sampling toward larger elements significantly outperforms the uniform sketch.
Performance of Other Sketches: We briefly report on other options for sketching A. We consider a suboptimal α (not α* from Algorithm 1 of [8]) in (4) to construct a suboptimal hybrid distribution, and use this in proto-Algorithm 1 to construct a sparse sketch. Figure 3 reveals that a good sketch using α* is important.
[Figure 3 plot: f(Hsp,r) for α* = 0.1 versus α = 1.0 on the Stock data, against the sparsity constraint r (percent); curves not reproducible in text.]
Figure 3: [Stock data] Performance of the sketch using a suboptimal α, illustrating the importance of the optimal mixing parameter α*.
Conclusion: It is possible to use a sparse sketch (incomplete data) to recover nearly as good sparse
principal components as one would have gotten with the complete data. We mention that, while Gmax, which uses the largest weights in the unconstrained PCA, does not perform well with respect to the variance, it does identify good features. A simple enhancement to Gmax is to recalibrate the sparse component after identifying the features: this is an unconstrained PCA problem on just the columns
of the data matrix corresponding to the features. This method of recalibrating can be used to improve
any sparse PCA algorithm.
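A sketch of this recalibration step (our own phrasing of the enhancement just described): restrict A to the selected columns and refit the loadings via the top singular vector there.

    import numpy as np

    def recalibrate(A, support):
        """Given a feature support found by any sparse PCA heuristic, refit
        the loadings by unconstrained PCA on those columns of A."""
        cols = np.asarray(support)
        _, _, Vt = np.linalg.svd(A[:, cols], full_matrices=False)
        v = np.zeros(A.shape[1])
        v[cols] = Vt[0]      # top right singular vector of A restricted to S
        return v

    rng = np.random.default_rng(5)
    A = rng.normal(size=(50, 20)); A -= A.mean(axis=0)
    support = [0, 3, 7, 11]              # e.g., taken from Gmax,r
    v = recalibrate(A, support)
    print(v @ (A.T @ A) @ v)             # recalibrated variance f(v)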
Our algorithms are simple and efficient, and many interesting avenues for further research remain. Can the sampling complexity for the top-k sparse PCA be reduced from O(k²) to O(k)? We suspect that this should be possible by getting a better bound on Σ_{i=1}^k σ_i(AᵀA − ÃᵀÃ); we used the crude bound k‖AᵀA − ÃᵀÃ‖₂. We also presented a general surrogate optimization bound which may be of interest in other applications. In particular, it is pointed out in [13] that though PCA optimizes variance, a more natural way to look at PCA is as the linear projection of the data that minimizes the information loss. [13] gives efficient algorithms to find sparse linear dimension reductions that minimize information loss; the information loss of sparse PCA can be considerably higher than optimal. To minimize information loss, the objective to maximize is f(V) = trace(AᵀAV(AV)⁺A). It would be interesting to see whether one can recover sparse low-information-loss linear projectors from incomplete data.
Acknowledgments: AK and PD are partially supported by NSF IIS-1447283 and IIS-1319280.
References
[1] M. Asteris, D. Papailiopoulos, and A. Dimakis. Non-negative sparse PCA with provable guarantees. In Proc. ICML, 2014.
[2] J. Cadima and I. Jolliffe. Loadings and correlations in the interpretation of principal components. Applied Statistics, 22:203–214, 1995.
[3] T. T. Cai, Z. Ma, and Y. Wu. Sparse PCA: Optimal rates and adaptive estimation. The Annals of Statistics, 41(6):3074–3110, 2013.
[4] Alexandre d'Aspremont, Francis Bach, and Laurent El Ghaoui. Optimal solutions for sparse principal component analysis. Journal of Machine Learning Research, 9:1269–1294, June 2008.
[5] Alexandre d'Aspremont, Laurent El Ghaoui, Michael I. Jordan, and Gert R. G. Lanckriet. A direct formulation for sparse PCA using semidefinite programming. SIAM Review, 49(3):434–448, 2007.
[6] E. Gabrilovich and S. Markovitch. Text categorization with many redundant features: using aggressive feature selection to make SVMs competitive with C4.5. In Proceedings of the International Conference on Machine Learning, 2004.
[7] J. J. Hull. A database for handwritten text recognition research. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(5):550–554, 1994.
[8] A. Kundu, P. Drineas, and M. Magdon-Ismail. Recovering PCA from hybrid-(ℓ1, ℓ2) sparse sampling of data elements. http://arxiv.org/pdf/1503.00547v1.pdf, 2015.
[9] J. Lei and V. Q. Vu. Sparsistency and agnostic inference in sparse PCA. The Annals of Statistics, 43(1):299–322, 2015.
[10] Karim Lounici. Sparse principal component analysis with missing observations. arXiv report: http://arxiv.org/abs/1205.7060, 2012.
[11] Z. Ma. Sparse principal component analysis and iterative thresholding. The Annals of Statistics, 41(2):772–801, 2013.
[12] M. Magdon-Ismail. NP-hardness and inapproximability of sparse PCA. arXiv report: http://arxiv.org/abs/1502.05675, 2015.
[13] M. Magdon-Ismail and C. Boutsidis. arXiv report: http://arxiv.org/abs/1502.06626, 2015.
[14] B. Moghaddam, Y. Weiss, and S. Avidan. Generalized spectral bounds for sparse LDA. In Proc. ICML, 2006.
[15] K. Pearson. On lines and planes of closest fit to systems of points in space. Philosophical Magazine, 2:559–572, 1901.
[16] Haipeng Shen and Jianhua Z. Huang. Sparse principal component analysis via regularized low rank matrix approximation. Journal of Multivariate Analysis, 99:1015–1034, July 2008.
[17] K. Sjöstrand, L. H. Clemmensen, R. Larsen, and B. Ersbøll. SpaSM: A Matlab toolbox for sparse statistical modeling. Journal of Statistical Software (accepted for publication), 2012.
[18] N. Trendafilov, I. T. Jolliffe, and M. Uddin. A modified principal component technique based on the lasso. Journal of Computational and Graphical Statistics, 12:531–547, 2003.
[19] Z. Wang, H. Lu, and H. Liu. Nonconvex statistical optimization: Minimax-optimal sparse PCA in polynomial time. http://arxiv.org/abs/1408.5352?context=cs.LG, 2014.
[20] H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. Journal of Computational & Graphical Statistics, 15(2):265–286, 2006.
5,420 | 5,906 | Recovering Communities in the General Stochastic
Block Model Without Knowing the Parameters
Emmanuel Abbe
Department of Electrical Engineering and PACM
Princeton University
Princeton, NJ 08540
eabbe@princeton.edu
Colin Sandon
Department of Mathematics
Princeton University
Princeton, NJ 08540
sandon@princeton.edu
Abstract
The stochastic block model (SBM) has recently gathered significant attention due
to new threshold phenomena. However, most developments rely on the knowledge of the model parameters, or at least on the number of communities. This
paper introduces efficient algorithms that do not require such knowledge and yet
achieve the optimal information-theoretic tradeoffs identified in Abbe-Sandon '15.
In the constant degree regime, an algorithm is developed that requires only a
lower-bound on the relative sizes of the communities and achieves the optimal
accuracy scaling for large degrees. This lower-bound requirement is removed for
the regime of diverging degrees. For the logarithmic degree regime, this is further enhanced into a fully agnostic algorithm that simultaneously learns the model
parameters, achieves the optimal CH-limit for exact recovery, and runs in quasilinear time. These provide the first algorithms affording efficiency, universality
and information-theoretic optimality for strong and weak consistency in the SBM.
1 Introduction
This paper studies the problem of recovering communities in the general stochastic block model
with linear size communities, for constant and logarithmic degree regimes. In contrast to [1], this
paper does not require knowledge of the parameters. It shows how to learn these from the graph
topology. We next provide some motivations on the problem and further background on the model.
Detecting communities (or clusters) in graphs is a fundamental problem in networks, computer
science and machine learning. This applies to a large variety of complex networks (e.g., social and
biological networks) as well as to data sets engineered as networks via similarity graphs, where one
often attempts to get a first impression on the data by trying to identify groups with similar behavior.
In particular, finding communities allows one to find like-minded people in social networks, to
improve recommendation systems, to segment or classify images, to detect protein complexes, to
find genetically related sub-populations, or discover new tumor subclasses. See [1] for references.
While a large variety of community detection algorithms have been deployed in the past decades, the
understanding of the fundamental limits of community detection has only appeared more recently,
in particular for the SBM [1-7]. The SBM is a canonical model for community detection. We use here the notation SBM(n, p, W) to refer to a random graph ensemble on the vertex-set V = [n], where each vertex v ∈ V is assigned independently a hidden (or planted) label σ_v in [k] under a probability distribution p = (p_1, . . . , p_k) on [k], and each unordered pair of nodes (u, v) ∈ V × V is connected independently with probability W_{σ_u,σ_v}, where W is a symmetric k × k matrix with entries in [0, 1]. Note that G ∼ SBM(n, p, W) denotes a random graph drawn under this model, without the hidden (or planted) clusters (i.e., the labels σ_v) revealed. The goal is to recover these
labels by observing only the graph.
Recently the SBM came back to the center of attention at both the practical level, due to extensions allowing overlapping communities that have proved to fit well real data sets in massive networks [8], and at the theoretical level due to new phase transition phenomena [2-6]. The latter works focus exclusively on the SBM with two symmetric communities, i.e., each community is of the same size and the connectivity in each community is identical. Denoting by p the intra- and q the extra-cluster probabilities, most of the results are concerned with two figures of merit: (i) recovery (also called exact recovery or strong consistency), which investigates the regimes of p and q for which there exists an algorithm that recovers with high probability the two communities completely [7, 9-19], (ii) detection, which investigates the regimes for which there exists an algorithm that recovers with high probability a positively correlated partition [2-4].
The sharp threshold for exact recovery was obtained in [5, 6], showing¹ that for p = a log(n)/n, q = b log(n)/n, a, b > 0, exact recovery is solvable if and only if |√a − √b| ≥ √2, with efficient algorithms achieving the threshold. In addition, [5] introduces an SDP, proved to achieve the threshold in [20, 21], while [22] shows that a spectral algorithm also achieves the threshold. The sharp threshold for detection was obtained in [3, 4], showing that detection is solvable (and so efficiently) if and only if (a − b)² > 2(a + b), when p = a/n, q = b/n, settling a conjecture from [2].
Besides the detection and the recovery properties, one may ask about the partial recovery of the
communities, studied in [1, 19, 23-25]. Of particular interest to this paper is the case of strong recovery (also called weak consistency), where only a vanishing fraction of the nodes is allowed to be misclassified. For two symmetric communities, [6] shows that strong recovery is possible if and only if n(p − q)²/(p + q) diverges, extended in [1] for general SBMs.
In the next section, we discuss the results for the general SBM of interest in this paper and the
problem of learning the model parameters. We conclude this section by providing motivations on
the problem of achieving the threshold with an efficient and universal algorithm.
Threshold phenomena have long been studied in fields such as information theory (e.g., Shannon's capacity) and constraint satisfaction problems (e.g., the SAT threshold). In particular, the quest of
achieving the threshold has generated major algorithmic developments in these fields (e.g., LDPC
codes, polar codes, survey propagation to name a few). Likewise, identifying thresholds in community detection models is key to benchmark and guide the development of clustering algorithms.
However, it is particularly crucial to develop benchmarks that do not depend sensitively on the
knowledge of the model parameters. A natural question is hence whether one can solve the various
recovery problems in the SBM without having access to the parameters. This paper answers this
question in the affirmative for the exact and strong recovery of the communities.
1.1 Prior results on the general SBM with known parameters
Most of the previous works are concerned with the SBM having symmetric communities (mainly
2 or sometimes k), with the exception of [19] which provides the first general achievability results
for the SBM.² Recently, [1] studied fundamental limits for the general model SBM(n, p, W), with
p independent of n. The results are summarized below. Recall first the recovery requirements:
Definition 1. (Recovery requirements.) An algorithm recovers or detects communities in
SBM(n, p, W) with an accuracy of α ∈ [0, 1], if it outputs a labelling of the nodes {σ′(v), v ∈ V}, which agrees with the true labelling σ on a fraction α of the nodes with probability 1 − o_n(1). The agreement is maximized over relabellings of the communities. Strong recovery refers to α = 1 − o_n(1) and exact recovery refers to α = 1.
The problem is solvable information-theoretically if there exists an algorithm that solves it, and
efficiently if the algorithm runs in polynomial-time in n. Note that exact recovery in SBM(n, p, W )
requires the graph not to have vertices of degree 0 in multiple communities with high probability.
Therefore, for exact recovery, we focus on W = ln(n)Q/n where Q is fixed.
I. Partial and strong recovery in the general SBM. The first result of [1] concerns the regime
where the connectivity matrix W scales as Q/n for a positive symmetric matrix Q (i.e., the node average degree is constant).
¹ [6] generalizes this to a, b = ω(1).
² [24] also study variations of the k-symmetric model.
The following notion of SNR is first introduced
SNR = |λ_min|² / λ_max    (1)
where λ_min and λ_max are respectively the smallest and largest eigenvalues of diag(p)Q.³ The algorithm Sphere-comparison is proposed that solves partial recovery with exponential accuracy
and quasi-linear complexity when the SNR diverges.
Theorem 1. [1] Given any k ∈ Z, p ∈ (0, 1)^k with |p| = 1, and symmetric matrix Q with no two rows equal, let λ be the largest eigenvalue of PQ, and λ′ be the eigenvalue of PQ with the smallest nonzero magnitude. If SNR := |λ′|²/λ > 4, λ⁷ < (λ′)⁸, and 4λ³ < (λ′)⁴, for some ε = ε(λ, λ′) and C = C(p, Q) > 0, Sphere-comparison detects communities in graphs drawn from SBM(n, p, Q/n) with accuracy
1 − 4k e^{−Cλ/16k} / (1 − exp(−(Cλ/16k)((λ′)⁴/λ³ − 1))),
provided that the above is larger than 1 − (min_i p_i)/(2 ln(4k)), and runs in O(n^{1+ε}) time. Moreover, ε can be made arbitrarily small with 8 ln(√2 λ/|λ′|)/ln(λ), and C(p, cQ) is independent of c.
Note that for k symmetric clusters, SNR reduces to (a − b)²/(k(a + (k − 1)b)), which is the quantity of interest for detection [2, 26]. Moreover, the SNR must diverge to ensure strong recovery in the symmetric case [1]. The following is an important consequence of the previous theorem, stating that
Sphere-comparison solves strong recovery when the entries of Q are amplified.
Corollary 1. [1] For any k ∈ Z, p ∈ (0, 1)^k with |p| = 1, and symmetric matrix Q with no two rows equal, there exist ε(c) = O(1/ln(c)) such that for all sufficiently large c, Sphere-comparison detects communities in SBM(n, p, cQ/n) with accuracy 1 − e^{−Ω(c)} and complexity O_n(n^{1+ε(c)}).
The above gives the optimal scaling both in accuracy and complexity.
II. Exact recovery in the general SBM. The second result in [1] is for the regime where the connectivity matrix scales as ln(n)Q/n, Q independent of n, where it is shown that exact recovery has
a sharp threshold characterized by the divergence function
D+(f, g) = max_{t∈[0,1]} Σ_{x∈[k]} ( t f(x) + (1 − t) g(x) − f(x)^t g(x)^{1−t} ),
named the CH-divergence in [1]. Specifically, if all pairs of columns in diag(p)Q are at D+ -distance
at least 1 from each other, then exact recovery is solvable in the general SBM. We refer to Section
2.3 in [1] for discussion on the connection with Shannon's channel coding theorem (and CH vs.
KL divergence). An algorithm (Degree-profiling) is also developed in [1] that solves exact
recovery down to the D+ limit in quasi-linear time, showing that exact recovery has no informational-to-computational gap.
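Since D+ is used repeatedly below, a small numerical sketch may help. The following is our illustration, not code from [1]; it evaluates the scalar maximization over t on a grid (the objective is concave in t, so a scalar solver would work equally well):

```python
import numpy as np

def ch_divergence(f, g, num_grid=1001):
    """D+(f, g) = max_{t in [0,1]} sum_x [ t*f(x) + (1-t)*g(x) - f(x)^t * g(x)^(1-t) ]."""
    f = np.asarray(f, dtype=float)
    g = np.asarray(g, dtype=float)
    ts = np.linspace(0.0, 1.0, num_grid)
    vals = [np.sum(t * f + (1 - t) * g - f ** t * g ** (1 - t)) for t in ts]
    return max(vals)

# Two symmetric communities: p = (1/2, 1/2), Q = [[a, b], [b, a]].
a, b = 9.0, 1.0
PQ = 0.5 * np.array([[a, b], [b, a]])  # rows of diag(p)Q
print(ch_divergence(PQ[0], PQ[1]))     # 2.0 here, so exact recovery is solvable
```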
Theorem 2. [1] (i) Exact recovery is solvable in SBM(n, p, ln(n)Q/n) if and only if
min_{i,j∈[k], i≠j} D+((PQ)_i || (PQ)_j) ≥ 1.
(ii) The Degree-profiling algorithm (see [1]) solves exact recovery whenever it is
information-theoretically solvable and runs in o(n^{1+ε}) time for all ε > 0.
Exact and strong recovery are thus solved for the general SBM with linear-size communities, when
the parameters are known. We next remove the latter assumption.
1.2 Estimating the parameters
For the estimation of the parameters, some results are known for two-symmetric communities. In
the logarithmic degree regime, since the SDP is agnostic to the parameters (it is a relaxation of the
min-bisection), the parameters can be estimated by recovering the communities [5, 20, 21]. For the
constant-degree regime, [26] shows that the parameters can be estimated above the threshold by
counting cycles (which is efficiently approximated by counting non-backtracking walks). These are,
however, for 2 communities. We also became aware of a parallel work [27], which considers private
graphon estimation (including SBMs). In particular, for the logarithmic degree regime, [27] obtains
a (non-efficient) procedure to estimate parameters of graphons in an appropriate version of the L2
norm. For the general SBM, learning the model was to date mainly open.
³ The smallest eigenvalue of diag(p)Q is the one with least magnitude.
2 Results
Agnostic algorithms are developed for the constant and diverging node degrees (with p, k independent of n). These afford optimal accuracy and complexity scaling for large node degrees and achieve
the CH-divergence limit for logarithmic node degrees. In particular, the SBM can be learned efficiently for any diverging degrees.
Note that the assumptions on p and k being independent of n could be slightly relaxed, for example
to slowly growing k, but we leave this for future work.
2.1
Partial recovery
Our main result for partial recovery holds in the constant degree regime and requires a lower bound
δ on the least relative size of the communities. This requirement is removed when working with
diverging degrees, as stated in the corollary below.
Theorem 3. Given δ > 0 and for any k ∈ Z, p ∈ (0, 1)^k with Σ_i p_i = 1 and 0 < δ ≤ min p_i, and any symmetric matrix Q with no two rows equal such that every entry in Q^k is positive (in other words, Q such that there is a nonzero probability of a path between vertices in any two communities in a graph drawn from SBM(n, p, Q/n)), there exist ε(c) = O(1/ln(c)) such that for all sufficiently large α, Agnostic-sphere-comparison detects communities in graphs drawn from SBM(n, p, αQ/n) with accuracy at least 1 − e^{−Ω(α)} in O_n(n^{1+ε(α)}) time.
Note that a vertex in community i has degree 0 with probability exponential in c, and there is no
way to differentiate between vertices of degree 0 from different communities. So, an error rate
that decreases exponentially with c is optimal. In [28], we provide a more detailed version of this
theorem, which yields a quantitative statement on the accuracy of the algorithm in terms of the SNR (λ′)²/λ for general SBM(n, p, Q/n).
Corollary 2. If α = ω(1) in Theorem 3, the knowledge requirement on δ can be removed.
2.2 Exact recovery
Recall that from [1], exact recovery is information-theoretically and computationally solvable in
SBM(n, p, ln(n)Q/n) if and only if,
min_{i<j} D+((PQ)_i, (PQ)_j) ≥ 1.    (2)
We next show that this can be achieved without any knowledge on the parameters for
SBM(n, p, ln(n)Q/n).
Theorem 4. The Agnostic-degree-profiling algorithm (see Section 3.2) solves exact recovery in any SBM(n, p, ln(n)Q/n) for which exact recovery is solvable, using no input except the
graph in question, and runs in o(n^{1+ε}) time for all ε > 0. In particular, exact recovery is efficiently
and universally solvable whenever it is information-theoretically solvable.
3 Proof Techniques and Algorithms
3.1 Partial recovery and the Agnostic-sphere-comparison algorithm
3.1.1 Simplified version of the algorithm for the symmetric case
To ease the presentation of the algorithm, we focus first on the symmetric case, i.e., the SBM with
k communities of relative size 1/k, probability of connecting a/n inside communities and b/n across communities. Let d = (a + (k − 1)b)/k be the average degree.
Definition 2. For any vertex v, let N_{r[G]}(v) be the set of all vertices with shortest path in G to v of length r. We often drop the subscript G if the graph in question is the original SBM. We also refer to \vec{N}_r(v) as the vector whose i-th entry is the number of vertices in N_r(v) that are in community i.
For an arbitrary vertex v and reasonably small r, there will typically be about d^r vertices in N_r(v), and about ((a − b)/k)^r more of them will be in v's community than in each other community. Of course, this only holds when r < log n / log d because there are not enough vertices in the graph otherwise.
The obvious way to try to determine whether or not two vertices v and v′ are in the same community is to guess that they are in the same community if |N_r(v) ∩ N_r(v′)| > d^{2r}/n and different communities otherwise. Unfortunately, whether or not a vertex is in N_r(v) is not independent of whether or not it is in N_r(v′), which compromises this plan. Instead, we propose to rely on the following graph-splitting step: Randomly assign every edge in G to some set E with a fixed probability c and then count the number of edges in E that connect N_{r[G\E]} and N_{r′[G\E]}. Formally:
Definition 3. For any v, v′ ∈ G, r, r′ ∈ Z, and subset of G's edges E, let N_{r,r′[E]}(v · v′) be the number of pairs (v_1, v_2) such that v_1 ∈ N_{r[G\E]}(v), v_2 ∈ N_{r′[G\E]}(v′), and (v_1, v_2) ∈ E.
Note that E and G\E are disjoint. However, in SBM(n, p, Q/n), G is sparse enough that even if the two graphs were generated independently, a given pair of vertices would have an edge in both graphs with probability O(1/n²). So, E is approximately independent of G\E.
Thus, given v, r, and denoting by λ_1 = (a + (k − 1)b)/k and λ_2 = (a − b)/k the two eigenvalues of PQ in the symmetric case, the expected number of intra-community neighbors at depth r from v is approximately (1/k)(λ_1^r + (k − 1)λ_2^r), whereas the expected number of extra-community neighbors at depth r from v is approximately (1/k)(λ_1^r − λ_2^r) for each of the other (k − 1) communities. All of these are scaled by 1 − c if we do the computations in G\E. Using now the emulated independence between E and G\E, and assuming v and v′ to be in the same community, the expected number of edges in E connecting N_{r[G\E]}(v) to N_{r′[G\E]}(v′) is approximately given by the inner product u^t (c · PQ) u, where u = (1/k)(λ_1^r + (k − 1)λ_2^r, λ_1^r − λ_2^r, . . . , λ_1^r − λ_2^r) and (PQ) is the matrix with a on the diagonal and b elsewhere. When v and v′ are in different communities, the inner product is between u and a permutation of u. After simplifications, this gives
N_{r,r′[E]}(v · v′) ≈ (c(1 − c)^{r+r′}/n) [ d^{r+r′+1} + ((a − b)/k)^{r+r′+1} (k δ_{σ_v,σ_{v′}} − 1) ]    (3)
where δ_{σ_v,σ_{v′}} is 1 if v and v′ are in the same community and 0 otherwise. In order for N_{r,r′[E]}(v · v′) to depend on the relative communities of v and v′, it must be that c(1 − c)^{r+r′} |(a − b)/k|^{r+r′+1} k is large enough, i.e., more than n, so r + r′ needs to be at least log n / log |(a − b)/k|. A difficulty is that for a specific pair of vertices, the d^{r+r′+1} term will be multiplied by a random factor dependent on the degrees of v, v′, and the nearby vertices. So, in order to stop the variation in the d^{r+r′+1} term from drowning out the ((a − b)/k)^{r+r′+1} (k δ_{σ_v,σ_{v′}} − 1) term, it is necessary to cancel out the dominant term. This brings us to introduce the following sign-invariant statistic:
I_{r,r′[E]}(v · v′) := N_{r+2,r′[E]}(v · v′) · N_{r,r′[E]}(v · v′) − N_{r+1,r′[E]}(v · v′)²
 ≈ (c²(1 − c)^{2r+2r′+2}/n²) ((a − b)/k − d)² d^{r+r′+1} ((a − b)/k)^{r+r′+1} (k δ_{σ_v,σ_{v′}} − 1)
In particular, for r + r′ odd, I_{r,r′[E]}(v · v′) will tend to be positive if v and v′ are in the same community and negative otherwise, irrespective of the specific values of a, b, k. That suggests the following algorithm for partial recovery; it requires knowledge of δ < 1/k in the constant degree regime, but not in the regime where a, b scale with n.
1. Set r = (3/4) log n / log d and put each of the graph's edges in E with probability 1/10.
2. Set k_max = 1/δ and select k_max ln(4 k_max) random vertices, v_1, ..., v_{k_max ln(4 k_max)}.
3. Compute I_{r,r′[E]}(v_i · v_j) for each i and j.
4. If there is a possible assignment of these vertices to communities such that I_{r,r′[E]}(v_i · v_j) > 0 if and only if v_i and v_j are in the same community, then randomly select one vertex from each apparent community, v[1], v[2], ..., v[k′]. Otherwise, fail.
5. For every v′ in the graph, guess that v′ is in the same community as the v[i] that maximizes the value of I_{r,r′[E]}(v[i] · v′).
This algorithm succeeds as long as |a − b|/k > (10/9)^{1/6} ((a + (k − 1)b)/k)^{5/6}, to ensure that the above estimates on N_{r,r′[E]}(v · v′) are reliable. Further, if a, b are scaled by α = ω(1), setting δ = 1/log log α allows removal of the knowledge requirement on δ.
One alternative to our approach could be to count the non-backtracking walks of a given length between v and v′, like in [4, 29], instead of using N_{r,r′[E]}(v · v′). However, proving that the number of non-backtracking walks is close to its expected value is difficult. Proving that N_{r,r′[E]}(v · v′) is within a desired range is substantially easier because for any v_1 and v_2, whether or not there is an edge between v_1 and v_2 directly affects N_r(v) for at most one value of r. Algorithms based on shortest paths have also been studied in [30].
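To make these quantities concrete, here is a small sketch (ours, not the paper's implementation) that computes N_{r[G\E]}(v) by breadth-first search and from it the edge count N_{r,r′[E]}(v · v′) and the sign-invariant statistic:

```python
import random

def split_edges(edges, c=0.1, seed=0):
    """Place each edge in E with probability c; return E (as a set of ordered
    pairs) and the adjacency lists of the remaining graph."""
    rng = random.Random(seed)
    E, adj = set(), {}
    for (u, w) in edges:
        if rng.random() < c:
            E.add((u, w)); E.add((w, u))
        else:
            adj.setdefault(u, []).append(w)
            adj.setdefault(w, []).append(u)
    return E, adj

def depth_set(adj, v, r):
    """N_r(v): vertices at shortest-path distance exactly r from v."""
    dist, frontier = {v: 0}, [v]
    for d in range(r):
        nxt = []
        for u in frontier:
            for w in adj.get(u, []):
                if w not in dist:
                    dist[w] = d + 1
                    nxt.append(w)
        frontier = nxt
    return set(frontier)

def N_cross(E, adj, v, vp, r, rp):
    """N_{r,r'[E]}(v . v'): edges of E between the depth-r set of v and the
    depth-r' set of v' in the kept graph."""
    A, B = depth_set(adj, v, r), depth_set(adj, vp, rp)
    return sum(1 for u in A for w in B if (u, w) in E)

def I_stat(E, adj, v, vp, r, rp):
    """Positive in expectation iff v and v' share a community (for r + r' odd)."""
    n0, n1, n2 = (N_cross(E, adj, v, vp, r + s, rp) for s in (0, 1, 2))
    return n2 * n0 - n1 ** 2
```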
3.1.2 The general case
In the general case, define N_r(v), \vec{N}_r(v) and N_{r,r′[E]}(v · v′) as in the previous section. Now, for any v_1 ∈ N_{r[G\E]}(v) and v_2 ∈ N_{r′[G\E]}(v′), (v_1, v_2) ∈ E with a probability of approximately cQ_{σ_{v_1},σ_{v_2}}/n. As a result,
N_{r,r′[E]}(v · v′) ≈ \vec{N}_{r[G\E]}(v) · (cQ/n) \vec{N}_{r′[G\E]}(v′) ≈ ((1 − c)PQ)^r e_{σ_v} · (cQ/n) ((1 − c)PQ)^{r′} e_{σ_{v′}}
 = c(1 − c)^{r+r′} e_{σ_v} · Q(PQ)^{r+r′} e_{σ_{v′}} / n.
[Figure: v and v′ with their neighborhoods N_{r[G\E]}(v) and N_{r′[G\E]}(v′), joined by edges of E.]
Figure 1: The purple edges represent the edges counted by N_{r,r′[E]}(v · v′).
Let λ_1, ..., λ_h be the distinct eigenvalues of PQ, ordered so that |λ_1| ≥ |λ_2| ≥ ... ≥ |λ_h| ≥ 0. Also define h′ so that h′ = h if λ_h ≠ 0 and h′ = h − 1 if λ_h = 0. If W_i is the eigenspace of PQ corresponding to the eigenvalue λ_i, and P_{W_i} is the projection operator onto W_i, then
N_{r,r′[E]}(v · v′) ≈ c(1 − c)^{r+r′} e_{σ_v} · Q(PQ)^{r+r′} e_{σ_{v′}} / n    (4)
 = (c(1 − c)^{r+r′}/n) Σ_i λ_i^{r+r′+1} P_{W_i}(e_{σ_v}) · P^{−1} P_{W_i}(e_{σ_{v′}})    (5)
where the final equality holds because for all i ≠ j,
λ_i P_{W_i}(e_{σ_v}) · P^{−1} P_{W_j}(e_{σ_{v′}}) = (PQ P_{W_i}(e_{σ_v})) · P^{−1} P_{W_j}(e_{σ_{v′}}) = P_{W_i}(e_{σ_v}) · Q P_{W_j}(e_{σ_{v′}}) = P_{W_i}(e_{σ_v}) · P^{−1} λ_j P_{W_j}(e_{σ_{v′}}),
and since λ_i ≠ λ_j, this implies that P_{W_i}(e_{σ_v}) · P^{−1} P_{W_j}(e_{σ_{v′}}) = 0.
Definition 4. Let ζ_i(v · v′) = P_{W_i}(e_{σ_v}) · P^{−1} P_{W_i}(e_{σ_{v′}}) for all i, v, and v′.
Equation (5) is dominated by the λ_1^{r+r′+1} term, so getting a good estimate of the λ_2^{r+r′+1} through λ_{h′}^{r+r′+1} terms requires cancelling it out somehow. As a start, if λ_1 > λ_2 > λ_3 then
N_{r+2,r′[E]}(v · v′) · N_{r,r′[E]}(v · v′) − N_{r+1,r′[E]}(v · v′)²
 ≈ (c²(1 − c)^{2r+2r′+2}/n²) (λ_1² + λ_2² − 2λ_1λ_2) λ_1^{r+r′+1} λ_2^{r+r′+1} ζ_1(v · v′) ζ_2(v · v′).
Note that the left hand side of this expression is equal to
det [ N_{r,r′[E]}(v · v′)   N_{r+1,r′[E]}(v · v′) ; N_{r+1,r′[E]}(v · v′)   N_{r+2,r′[E]}(v · v′) ].
Definition 5. Let M_{m,r,r′[E]}(v · v′) be the m × m matrix such that (M_{m,r,r′[E]}(v · v′))_{i,j} = N_{r+i+j,r′[E]}(v · v′) for each i and j.
As shown in [28], there exists a constant γ(λ_1, ..., λ_m) such that
det(M_{m,r,r′[E]}(v · v′)) ≈ (c^m (1 − c)^{m(r+r′)} / n^m) γ(λ_1, ..., λ_m) Π_{i=1}^m λ_i^{r+r′+1} ζ_i(v · v′)    (6)
where we assumed that |λ_m| > |λ_{m+1}| above to simplify the discussion (the case |λ_m| = |λ_{m+1}| is similar). This suggests the following plan for estimating the eigenvalues corresponding to a graph. First, pick several vertices at random. Then, use the fact that |N_{r[G\E]}(v)| ≈ ((1 − c)λ_1)^r for any good vertex v to estimate λ_1. Next, take ratios of (6) for m and m − 1 (with r = r′), and look for the smallest m making that ratio small enough (this will use the estimate on λ_1), estimating h′ by this value minus one. Then estimate consecutively all of PQ's eigenvalues for each selected vertex using ratios of (6). Finally, take the median of these estimates.
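As a rough illustration of this plan (ours; it takes approximation (6) at face value and ignores the error terms, relying on the fact that the γ factors cancel in the ratios):

```python
import numpy as np

def M_matrix(N, m, r):
    """M_{m,r,r'[E]}(v.v')_{i,j} = N[r+i+j], where N maps a depth to the count
    N_{depth,r'[E]}(v.v')."""
    return np.array([[N[r + i + j] for j in range(1, m + 1)]
                     for i in range(1, m + 1)], dtype=float)

def estimate_lambda(N, m, r, c):
    """By (6), det M_{m,r+1}/det M_{m,r} ~ (1-c)^m * prod_{i<=m} lambda_i, so
    dividing the m-level ratio by the (m-1)-level ratio isolates (1-c)*lambda_m."""
    ratio_m = np.linalg.det(M_matrix(N, m, r + 1)) / np.linalg.det(M_matrix(N, m, r))
    if m == 1:
        return ratio_m / (1 - c)
    ratio_prev = (np.linalg.det(M_matrix(N, m - 1, r + 1))
                  / np.linalg.det(M_matrix(N, m - 1, r)))
    return ratio_m / ratio_prev / (1 - c)
```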
In general, whether |λ_m| > |λ_{m+1}| or |λ_m| = |λ_{m+1}|,
( det(M_{m,r+1,r′[E]}(v · v′)) − (1 − c)^m λ_{m+1} Π_{i=1}^{m−1} λ_i det(M_{m,r,r′[E]}(v · v′)) ) / ( det(M_{m−1,r+1,r′[E]}(v · v′)) − (1 − c)^{m−1} λ_m Π_{i=1}^{m−2} λ_i det(M_{m−1,r,r′[E]}(v · v′)) )
 ≈ (c/n) (γ(λ_1, ..., λ_m)/γ(λ_1, ..., λ_{m−1})) (λ_{m−1}(λ_m − λ_{m+1})/(λ_m(λ_{m−1} − λ_m))) ((1 − c)λ_m)^{r+r′+2} ζ_m(v · v′).
This fact can be used to approximate ζ_i(v · v′) for arbitrary v, v′, and i. Of course, this requires r and r′ to be large enough that (c(1 − c)^{r+r′}/n) λ_i^{r+r′+1} ζ_i(v · v′) is large relative to the error terms for all i ≤ h′. This requires at least |(1 − c)λ_i|^{r+r′+1} = ω(n) for all i ≤ h′. Moreover, for any v and v′,
0 ≤ P_{W_i}(e_{σ_v} − e_{σ_{v′}}) · P^{−1} P_{W_i}(e_{σ_v} − e_{σ_{v′}}) = ζ_i(v · v) − 2ζ_i(v · v′) + ζ_i(v′ · v′)
with equality for all i if and only if σ_v = σ_{v′}, so sufficiently good approximations of ζ_i(v · v), ζ_i(v · v′) and ζ_i(v′ · v′) can be used to determine which pairs of vertices are in the same community.
One could generate a reasonable classification based solely on this method of comparing vertices (with an appropriate choice of the parameters, as later detailed). However, that would require computing N_{r,r′[E]}(v · v) for every vertex in the graph with fairly large r + r′, which would be slow. Instead, we use the fact that for any vertices v, v′, and v″ with σ_v = σ_{v′} ≠ σ_{v″},
ζ_i(v′ · v′) − 2ζ_i(v · v′) + ζ_i(v · v) = 0 ≤ ζ_i(v″ · v″) − 2ζ_i(v · v″) + ζ_i(v · v)
for all i, and the inequality is strict for at least one i. So, subtracting ζ_i(v · v) from both sides,
ζ_i(v′ · v′) − 2ζ_i(v · v′) ≤ ζ_i(v″ · v″) − 2ζ_i(v · v″)
for all i, and the inequality is still strict for at least one i. So, given a representative vertex in each community, we can determine which of them a given vertex, v, is in the same community as without needing to know the value of ζ_i(v · v).
This runs fairly quickly if r is large and r′ is small because the algorithm only requires focusing on |N_{r′}(v′)| vertices. This leads to the following plan for partial recovery. First, randomly select
a set of vertices that is large enough to contain at least one vertex from each community with high
probability. Next, compare all of the selected vertices in an attempt to determine which of them are
in the same communities. Then, pick one in each community. Call these anchor nodes. After that,
use the algorithm referred to above to determine which community each of the remaining vertices
is in. As long as there actually was at least one vertex from each community in the initial set and
none of the approximations were particularly bad, this should give a reasonable classification. The
risk that this randomly gives a bad classification due to a bad set of initial vertices can be mitigated
by repeating the previous classification procedure several times as discussed in [28]. This completes
the Agnostic-sphere-comparison algorithm. We refer to [28] for the details.
3.2 Exact recovery and the Agnostic-degree-profiling algorithm
The exact recovery part is similar to [1] and uses the fact that once a good enough clustering has been
obtained from Agnostic-sphere-comparison, the classification can be finished by making
local improvements based on the nodes' neighborhoods. Similar techniques have been used in [5,
11, 19, 31, 32]. However, we establish here a sharp characterization of the local procedure error.
The key result is that, when testing between two multivariate Poisson distributions of means log(n)λ_1 and log(n)λ_2 respectively, where λ_1, λ_2 ∈ Z_+^k, the probability of error (of maximum a posteriori decoding) is
n^{−D+(λ_1,λ_2)+o(1)}.    (7)
This is proved in [1]. In the case of unknown parameters, the algorithmic approach is largely unchanged, adding a step where the best known classification is used to estimate p and Q prior to any
local improvement step. The analysis of the algorithm requires however some careful handling.
First, it is necessary to prove that given a labelling of the graph's vertices with an error rate of x, one can compute approximations of p and Q that are within O(x + log(n)/√n) of their true values with probability 1 − o(1). Secondly, one needs to modify the above hypothesis testing estimates to control the error probability. In attempting to determine vertices' communities based on estimates of p and Q that are off by at most δ, say p′ and Q′, one must show that a classification of its neighbors that has an error rate of x classifies the vertices with an error rate only e^{O(δ log n)} times higher than it would be if the parameters really were p′ and Q′ and the vertices' neighbors were all classified correctly. Thirdly, one needs to show that since D+((PQ)_i, (PQ)_j) is differentiable with respect to any element of PQ, the error rate if the parameters really were p′ and Q′ is at worst e^{O(δ log n)} as high as the error rate with the actual parameters. Combining these yields the conclusion that any errors in the estimates of the SBM's parameters do not disrupt vertex classification any worse than the errors in the preliminary classifications already did.
The Agnostic-degree-profiling algorithm. The inputs are (G, γ), where G is a graph, and γ ∈ [0, 1] (see [28] for how to set γ specifically). The algorithm outputs each node's label.
(1) Define the graph g′ on the vertex set [n] by selecting each edge in g independently with probability γ, and define the graph g″ that contains the edges in g that are not in g′.
(2) Run Agnostic-sphere-comparison on g′ with δ = 1/log log(n) to obtain the classification σ′ ∈ [k]^n.
(3) Determine the size of each alleged community, and the edge density between each pair of alleged communities.
(4) For each node v ∈ [n], determine the most likely community label of node v based on its degree profile \vec{N}_1(v) computed from the preliminary classification σ′, and call it σ″.
(5) Use σ″ to get new estimates of p and Q.
(6) For each node v ∈ [n], determine the most likely community label of node v based on its degree profile \vec{N}_1(v) computed from σ″. Output this labelling.
In steps (4) and (6), the most likely label is the one that maximizes the probability that the degree profile comes from a multivariate distribution of mean ln(n)(PQ)_i for i ∈ [k]. Note that this algorithm does not require a lower bound on min p_i because setting δ to a slowly decreasing function of n results in δ being within an acceptable range for all sufficiently large n.
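Steps (4) and (6) amount to a maximum a posteriori test among k product-Poisson distributions. A minimal sketch of that test (ours; it drops the factorial terms, which do not depend on the candidate label, and assumes all entries of the estimated PQ are positive):

```python
import numpy as np

def classify_degree_profile(profile, PQ_hat, n):
    """Return the label i maximizing the Poisson log-likelihood of the profile,
    where profile[j] ~ Poisson(ln(n) * PQ_hat[i][j]) independently over j."""
    means = np.log(n) * np.asarray(PQ_hat, dtype=float)   # one row of means per label
    d = np.asarray(profile, dtype=float)
    scores = (d * np.log(means) - means).sum(axis=1)      # constants in d dropped
    return int(np.argmax(scores))
```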
4 Data implementation and open problems
We tested a simplified version of our algorithm on real data (see [28]), for the blog network of
Adamic and Glance '05. We obtained an error rate of about 60/1222 (best trial was 57, worst 67),
achieving the state-of-the-art (as described in [32]). The results in this paper should extend quite
directly to a slowly growing number of communities (e.g., up to logarithmic). It would be interesting
to extend the current approach to smaller sized or more communities, watching the complexity
scaling, as well as to corrected-degrees, labeled-edges, or overlapping communities (though the
approach in this paper already applies to linear-sized overlaps).
References
[1] E. Abbe and C. Sandon. Community detection in general stochastic block models: fundamental limits
and efficient recovery algorithms. arXiv:1503.00609. To appear in FOCS 2015. March 2015.
[2] A. Decelle, F. Krzakala, C. Moore, and L. Zdeborová. Asymptotic analysis of the stochastic block model
for modular networks and its algorithmic applications. Phys. Rev. E, 84:066106, December 2011.
[3] L. Massoulié. Community detection thresholds and the weak Ramanujan property. In STOC 2014: 46th
Annual Symposium on the Theory of Computing, pages 1-10, New York, United States, June 2014.
[4] E. Mossel, J. Neeman, and A. Sly. A proof of the block model threshold conjecture. Available online at
arXiv:1311.4115 [math.PR], January 2014.
[5] E. Abbe, A. S. Bandeira, and G. Hall. Exact recovery in the stochastic block model. To appear in IEEE
Transactions on Information Theory. Available at ArXiv:1405.3267, May 2014.
[6] E. Mossel, J. Neeman, and A. Sly. Consistency thresholds for binary symmetric block models.
arXiv:1407.1591. To appear in STOC 2015. July 2014.
[7] Y. Chen and J. Xu. Statistical-computational tradeoffs in planted problems and submatrix localization with a
growing number of clusters and submatrices. arXiv:1402.1267, February 2014.
[8] P. K. Gopalan and D. M. Blei. Efficient discovery of overlapping communities in massive networks.
Proceedings of the National Academy of Sciences, 2013.
[9] P. W. Holland, K. Laskey, and S. Leinhardt. Stochastic blockmodels: First steps. Social Networks,
5(2):109-137, 1983.
[10] T.N. Bui, S. Chaudhuri, F.T. Leighton, and M. Sipser. Graph bisection algorithms with good average case
behavior. Combinatorica, 7(2):171-191, 1987.
[11] M.E. Dyer and A.M. Frieze. The solution of some random NP-hard problems in polynomial expected
time. Journal of Algorithms, 10(4):451-489, 1989.
[12] Mark Jerrum and Gregory B. Sorkin. The metropolis algorithm for graph bisection. Discrete Applied
Mathematics, 82(13):155-175, 1998.
[13] A. Condon and R. M. Karp. Algorithms for graph partitioning on the planted partition model. Lecture
Notes in Computer Science, 1671:221-232, 1999.
[14] T. A. B. Snijders and K. Nowicki. Estimation and Prediction for Stochastic Blockmodels for Graphs with
Latent Block Structure. Journal of Classification, 14(1):75-100, January 1997.
[15] F. McSherry. Spectral partitioning of random graphs. In Foundations of Computer Science, 2001. Proceedings. 42nd IEEE Symposium on, pages 529-537, 2001.
[16] P. J. Bickel and A. Chen. A nonparametric view of network models and Newman-Girvan and other modularities. Proceedings of the National Academy of Sciences, 2009.
[17] K. Rohe, S. Chatterjee, and B. Yu. Spectral clustering and the high-dimensional stochastic blockmodel.
The Annals of Statistics, 39(4):1878-1915, 08 2011.
[18] D. S. Choi, P. J. Wolfe, and E. M. Airoldi. Stochastic blockmodels with a growing number of classes.
Biometrika, pages 1-12, 2012.
[19] V. Vu. A simple svd algorithm for finding hidden partitions. Available online at arXiv:1404.3918, 2014.
[20] B. Hajek, Y. Wu, and J. Xu. Achieving exact cluster recovery threshold via semidefinite programming.
arXiv:1412.6156, November 2014.
[21] A. S. Bandeira. Random laplacian matrices and convex relaxations. arXiv:1504.03987, 2015.
[22] S. Yun and A. Proutiere. Accurate community detection in the stochastic block model via spectral algorithms. arXiv:1412.7335, December 2014.
[23] E. Mossel, J. Neeman, and A. Sly. Belief propagation, robust reconstruction, and optimal recovery of
block models. arXiv:1309.1380, 2013.
[24] O. Guédon and R. Vershynin. Community detection in sparse networks via Grothendieck's inequality.
ArXiv:1411.4686, November 2014.
[25] P. Chin, A. Rao, and V. Vu. Stochastic block model and community detection in the sparse graphs: A
spectral algorithm with optimal rate of recovery. arXiv:1501.05021, January 2015.
[26] E. Mossel, J. Neeman, and A. Sly. Stochastic block models and reconstruction. Available online at
arXiv:1202.1499 [math.PR], 2012.
[27] C. Borgs, J. Chayes, and A. Smith. Private graphon estimation for sparse graphs. In preparation, 2015.
[28] E. Abbe and C. Sandon. Recovering communities in the general stochastic block model without knowing
the parameters. arXiv:1506.03729, June 2015.
[29] C. Bordenave, M. Lelarge, and L. Massoulié. Non-backtracking spectrum of random graphs: community
detection and non-regular ramanujan graphs. Available at arXiv:1501.06087, 2015.
[30] S. Bhattacharyya and P. J. Bickel. Community Detection in Networks using Graph Distance. ArXiv
e-prints, January 2014.
[31] N. Alon and N. Kahale. A spectral technique for coloring random 3-colorable graphs. In SIAM Journal
on Computing, pages 346-355, 1994.
[32] C. Gao, Z. Ma, A. Y. Zhang, and H. H. Zhou. Achieving optimal misclassification proportion in stochastic
block model. arXiv:1505.03772, 2015.
5,421 | 5,907 | Maximum Likelihood Learning With Arbitrary
Treewidth via Fast-Mixing Parameter Sets
Justin Domke
NICTA, Australian National University
justin.domke@nicta.com.au
Abstract
Inference is typically intractable in high-treewidth undirected graphical models,
making maximum likelihood learning a challenge. One way to overcome this is to
restrict parameters to a tractable set, most typically the set of tree-structured parameters. This paper explores an alternative notion of a tractable set, namely a set
of 'fast-mixing parameters' where Markov chain Monte Carlo (MCMC) inference
can be guaranteed to quickly converge to the stationary distribution. While it is
common in practice to approximate the likelihood gradient using samples obtained
from MCMC, such procedures lack theoretical guarantees. This paper proves that
for any exponential family with bounded sufficient statistics (not just graphical models), when parameters are constrained to a fast-mixing set, gradient descent with gradients approximated by sampling will approximate the maximum likelihood solution inside the set with high probability. When unregularized, to find a solution ε-accurate in log-likelihood requires a total amount of effort cubic in 1/ε, disregarding logarithmic factors. When ridge-regularized, strong convexity allows a solution ε-accurate in parameter distance with effort quadratic in 1/ε. Both of these provide a fully-polynomial time randomized approximation scheme.
1 Introduction
In undirected graphical models, maximum likelihood learning is intractable in general. For example, Jerrum and Sinclair [1993] show that evaluation of the partition function (which can easily be
computed from the likelihood) for an Ising model is #P-complete, and that even the existence of a
fully-polynomial time randomized approximation scheme (FPRAS) for the partition function would
imply that RP = NP.
If the model is well-specified (meaning that the target distribution falls in the assumed family) then
there exist several methods that can efficiently recover correct parameters, among them the pseudolikelihood [3], score matching [16, 22], composite likelihoods [20, 30], Mizrahi et al.'s [2014] method based on parallel learning in local clusters of nodes and Abbeel et al.'s [2006] method based
on matching local probabilities. While often useful, these methods have some drawbacks. First,
these methods typically have inferior sample complexity to the likelihood. Second, these all assume
a well-specified model. If the target distribution is not in the assumed class, the maximum-likelihood
solution will converge to the M-projection (minimum of the KL-divergence), but these estimators
do not have similar guarantees. Third, even when these methods succeed, they typically yield a
distribution in which inference is still intractable, and so it may be infeasible to actually make use
of the learned distribution.
Given these issues, a natural approach is to restrict the graphical model parameters to a tractable set
Θ, in which learning and inference can be performed efficiently. The gradient of the likelihood is
determined by the marginal distributions, whose difficulty is typically determined by the treewidth of
the graph. Thus, probably the most natural tractable family is the set of tree-structured distributions,
where Θ = {θ : ∃ tree T, ∀(i, j) ∉ T, θ_ij = 0}. The Chow-Liu algorithm [1968] provides an efficient method for finding the maximum likelihood parameter vector θ in this set, by computing
the mutual information of all empirical pairwise marginals, and finding the maximum spanning tree.
Similarly, Heinemann and Globerson [2014] give a method to efficiently learn high-girth models
where correlation decay limits the error of approximate inference, though this will not converge to
the M-projection when the model is mis-specified.
This paper considers a fundamentally different notion of tractability, namely a guarantee that Markov
chain Monte Carlo (MCMC) sampling will quickly converge to the stationary distribution. Our
fundamental result is that if Θ is such a set, and one can project onto Θ, then there exists a FPRAS for the maximum likelihood solution inside Θ. While inspired by graphical models, this result works
entirely in the exponential family framework, and applies generally to any exponential family with
bounded sufficient statistics.
The existence of a FPRAS is established by analyzing a common existing strategy for maximum
likelihood learning of exponential families, namely gradient descent where MCMC is used to generate samples and approximate the gradient. It is natural to conjecture that, if the Markov chain is
fast mixing, is run long enough, and enough gradient descent iterations are used, this will converge
to nearly the optimum of the likelihood inside Θ, with high probability. This paper shows that this is
indeed the case. A separate analysis is used for the ridge-regularized case (using strong convexity)
and the unregularized case (which is merely convex).
2 Setup
Though notation is introduced when first used, the most important symbols are given here for reference.
• θ - parameter vector to be learned
• M_θ - Markov chain operator corresponding to θ
• θ_k - estimated parameter vector at k-th gradient descent iteration
• q_k = M_{θ_{k−1}}^v r - approximate distribution sampled from at iteration k. (v iterations of the Markov chain corresponding to θ_{k−1} from arbitrary starting distribution r.)
• Θ - constraint set for θ
• f - negative log-likelihood on training data
• L - Lipschitz constant for the gradient of f
• θ* = arg min_{θ∈Θ} f(θ) - minimizer of likelihood inside of Θ
• K - total number of gradient descent steps
• M - total number of samples drawn via MCMC
• N - length of vector x
• v - number of Markov chain transitions applied for each sample
• C, α - parameters determining the mixing rate of the Markov chain (Equation 3)
• R_a - sufficient statistics norm bound
• ε_f - desired optimization accuracy for f
• ε_θ - desired optimization accuracy for θ
• δ - permitted probability of failure to achieve a given approximation accuracy
This paper is concerned with an exponential family of the form
p_θ(x) = exp(θ · t(x) − A(θ)),
where t(x) is a vector of sufficient statistics, and the log-partition function A(θ) ensures normalization. An undirected model can be seen as an exponential family where t consists of indicator
functions for each possible configuration of each clique [32]. While such graphical models motivate
this work, the results are most naturally stated in terms of an exponential family and apply more
generally.
• Initialize θ_0 = 0.
• For k = 1, 2, ..., K:
  – Draw samples. For i = 1, ..., M, sample x_i^{k−1} ∼ q_{k−1} := M_{θ_{k−1}}^v r.
  – Estimate the gradient as
    f′(θ_{k−1}) + e_k ≈ (1/M) Σ_{i=1}^M t(x_i^{k−1}) − t̄ + λθ_{k−1}.
  – Update the parameter vector as
    θ_k ← Π_Θ[ θ_{k−1} − (1/L)(f′(θ_{k−1}) + e_k) ].
• Output θ_K or (1/K) Σ_{k=1}^K θ_k.
Figure 1: Left: Algorithm 1, approximate gradient descent with gradients approximated via MCMC, analyzed in this paper. Right: A cartoon of the desired performance, stochastically finding a solution near θ*, the minimum of the regularized negative log-likelihood f(θ) in the set Θ.
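A minimal Python sketch of this loop (ours, not the paper's code; markov_step, suff_stats, init_state and project stand in for the model-specific sampler, sufficient statistics, starting distribution r, and projection onto Θ):

```python
import numpy as np

def approx_ml(t_bar, theta0, markov_step, suff_stats, init_state, project,
              K=100, M=50, v=200, L=1.0, lam=0.01, seed=0):
    """Projected gradient descent with MCMC-estimated gradients (Figure 1, left).
    t_bar and theta0 are numpy arrays; returns theta_K and the averaged iterate."""
    rng = np.random.default_rng(seed)
    theta, iterates = theta0.copy(), []
    for _ in range(K):
        stats = np.zeros_like(theta)
        for _ in range(M):
            x = init_state(rng)               # draw from the starting distribution r
            for _ in range(v):                # v transitions of the chain M_theta
                x = markov_step(x, theta, rng)
            stats += suff_stats(x) / M
        grad = stats - t_bar + lam * theta    # estimate of f'(theta), with error e_k
        theta = project(theta - grad / L)     # projected gradient step
        iterates.append(theta.copy())
    return theta, np.mean(iterates, axis=0)
```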
We are interested in performing maximum-likelihood learning, i.e. minimizing, for a dataset z_1, ..., z_D,
f(θ) = −(1/D) Σ_{i=1}^D log p_θ(z_i) + (λ/2)‖θ‖₂² = A(θ) − θ · t̄ + (λ/2)‖θ‖₂²,    (1)
where we define t̄ = (1/D) Σ_{i=1}^D t(z_i). It is easy to see that the gradient of f takes the form
f′(θ) = E_{p_θ}[t(X)] − t̄ + λθ.
If one would like to optimize f using a gradient-based method, computing the expectation of t(X)
with respect to p_θ can present a computational challenge. With discrete graphical models, the expected value of t is determined by the marginal distributions of each factor in the graph. Typically, the computational difficulty of computing these marginal distributions is determined by the treewidth of the graph: if the graph is a tree (or close to a tree), the marginals can be computed by the junction-tree algorithm [18]. One option, with high treewidth, is to approximate the marginals with a variational method. This can be seen as exactly optimizing a 'surrogate likelihood' approximation of Eq. 1 [31].
Another common approach is to use Markov chain Monte Carlo (MCMC) to compute a sample {x_i}_{i=1}^M from a distribution close to p_θ, and then approximate E_{p_θ}[t(X)] by (1/M) Σ_{i=1}^M t(x_i).
This strategy is widely used, varying in the model type, the sampling algorithm, how samples are
initialized, the details of optimization, and so on [10, 25, 27, 24, 7, 33, 11, 2, 29, 5]. Recently,
Steinhardt and Liang [28] proposed learning in terms of the stationary distribution obtained from a
chain with a nonzero restart probability, which is fast-mixing by design.
While popular, such strategies generally lack theoretical guarantees. If one were able to exactly
sample from p_θ, this could be understood simply as stochastic gradient descent. But, with MCMC, one can only sample from a distribution approximating p_θ, meaning the gradient estimate is not only noisy, but also biased. In general, one can ask how the step size, number of iterations, number of samples, and number of Markov chain transitions should be set to achieve a given convergence level.
The gradient descent strategy analyzed in this paper, in which one updates a parameter vector θ_k using approximate gradients, is outlined and shown as a cartoon in Figure 1. Here, and in the rest of the paper, we use p_k as a shorthand for p_{θ_k}, and we let e_k denote the difference between the estimated gradient and the true gradient f′(θ_{k−1}). The projection operator is defined by Π_Θ[φ] = arg min_{θ∈Θ} ‖φ − θ‖₂.
We assume that the parameter vector θ is constrained to a set Θ such that MCMC is guaranteed to mix at a certain rate (Section 3.1). With convexity, this assumption can bound the mean and variance
of the errors at each iteration, leading to a bound on the sum of errors. With strong convexity, the
error of the gradient at each iteration is bounded with high probability. Then, using results due to
[26] for projected gradient descent with errors in the gradient, we show a schedule for the number of
iterations K, the number of samples M , and the number of Markov transitions v such that with high
probability,
f( (1/K) Σ_{k=1}^K θ_k ) − f(θ*) ≤ ε_f    or    ‖θ_K − θ*‖₂ ≤ ε_θ,
for the convex or strongly convex cases, respectively, where θ* ∈ arg min_{θ∈Θ} f(θ). The total number of Markov transitions applied through the entire algorithm, KMv, grows as (1/ε_f)³ log(1/ε_f) for the convex case, (1/ε_θ²) log(1/ε_θ²) for the strongly convex case, and polynomially in all other parameters of the problem.
3 Background
3.1 Mixing times and Fast-Mixing Parameter Sets
This section discusses some background on mixing times for MCMC. Typically, mixing times are
defined in terms of the total-variation distance $\|p - q\|_{TV} = \max_{A} |p(A) - q(A)|$, where the maximum ranges over
events in the sample space. For discrete distributions, this can be shown to be equivalent to
$\|p - q\|_{TV} = \frac{1}{2}\sum_x |p(x) - q(x)|$.
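For concreteness, the discrete form of the total-variation distance is a one-liner; a minimal sketch:

```python
import numpy as np

def tv_distance(p, q):
    # p, q: probability vectors over the same discrete sample space
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()
```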
We assume that a sampling algorithm is known, a single iteration of which can be thought of as an
operator $\mathcal{M}_\theta$ that transforms some starting distribution into another. The stationary distribution is
$p_\theta$, i.e. $\lim_{v\to\infty} \mathcal{M}_\theta^v q = p_\theta$ for all $q$. Informally, a Markov chain will be fast mixing if the total
variation distance between the starting distribution and the stationary distribution decays rapidly in
the length of the chain. This paper assumes that a convex set $\Theta$ and constants $C$ and $\beta$ are known
such that for all $\theta \in \Theta$ and all distributions $q$,
$$\|\mathcal{M}_\theta^v q - p_\theta\|_{TV} \le C\beta^v. \qquad (2)$$
This means that the distance between an arbitrary starting distribution $q$ and the stationary distribution $p_\theta$ decays geometrically in terms of the number of Markov iterations $v$. This assumption is
justified by the Convergence Theorem [19, Theorem 4.9], which states that if $\mathcal{M}$ is irreducible and
aperiodic with stationary distribution $p$, then there exist constants $\beta \in (0, 1)$ and $C > 0$ such that
$$d(v) := \sup_q \|\mathcal{M}^v q - p\|_{TV} \le C\beta^v. \qquad (3)$$
Many results on mixing times in the literature, however, are stated in a less direct form. Given a
constant $\epsilon$, the mixing time is defined by $\tau(\epsilon) = \min\{v : d(v) \le \epsilon\}$. It often happens that bounds
on mixing times are stated as something like $\tau(\epsilon) \le a + b \ln\frac{1}{\epsilon}$ for some constants $a$ and $b$. It
follows from this that $\|\mathcal{M}^v q - p\|_{TV} \le C\beta^v$ with $C = \exp(a/b)$ and $\beta = \exp(-1/b)$.
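The conversion between the two forms of mixing-time bound is mechanical; a small sketch:

```python
import numpy as np

def decay_constants(a, b):
    """Convert a mixing-time bound tau(eps) <= a + b*ln(1/eps) into
    geometric-decay constants (C, beta) with d(v) <= C * beta**v."""
    return np.exp(a / b), np.exp(-1.0 / b)
```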
A simple example of a fast-mixing exponential family is the Ising model, defined for $x \in \{-1, +1\}^N$ as
$$p(x|\theta) = \exp\Big(\sum_{(i,j)\in\text{Pairs}} \theta_{ij} x_i x_j + \sum_i \theta_i x_i - A(\theta)\Big).$$
A simple result for this model is that, if the maximum degree of any node is $\Delta$ and $|\theta_{ij}| \le c$ for
all $(i, j)$, then for univariate Gibbs sampling with random updates, $\tau(\epsilon) \le \big\lceil \frac{N \log(N/\epsilon)}{1 - \Delta \tanh(c)} \big\rceil$ [19]. The
algorithm discussed in this paper needs the ability to project some parameter vector $\theta$ onto $\Theta$ to find
$\arg\min_{\phi\in\Theta} \|\theta - \phi\|_2$. Projecting a set of arbitrary parameters onto this set of fast-mixing parameters
is trivial: simply set $\theta_{ij} = c$ for $\theta_{ij} > c$ and $\theta_{ij} = -c$ for $\theta_{ij} < -c$.
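Since this projection is just entrywise clipping, it is a one-liner; a minimal sketch, assuming the pairwise parameters are stored in an array:

```python
import numpy as np

def project_ising(theta_pairs, c=0.2):
    # Entrywise Euclidean projection onto the box {theta : |theta_ij| <= c},
    # the trivial fast-mixing set described above.
    return np.clip(theta_pairs, -c, c)
```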
For more dense graphs, it is known [12, 9] that, for a matrix norm $\|\cdot\|$ that is the spectral norm $\|\cdot\|_2$,
or an induced 1- or infinity-norm,
$$\tau(\epsilon) \le \frac{N \log(N/\epsilon)}{1 - \|R(\theta)\|}, \qquad (4)$$
where $R_{ij}(\theta) = |\theta_{ij}|$. Domke and Liu [8] show how to perform this projection for the Ising
model when $\|\cdot\|$ is the spectral norm $\|\cdot\|_2$ with a convex optimization utilizing the singular value
decomposition in each iteration.
Loosely speaking, the above result shows that univariate Gibbs sampling on the Ising model is fast-mixing, as long as the interaction strengths are not too strong. Conversely, Jerrum and Sinclair
[17] exhibited an alternative Markov chain for the Ising model that is rapidly mixing for arbitrary
interaction strengths, provided the model is ferromagnetic, i.e. that all interaction strengths are
positive with $\theta_{ij} \ge 0$, and that the field is unidirectional. This Markov chain is based on sampling
in a different "subgraphs world" state space. Nevertheless, it can be used to estimate derivatives of
the Ising model log-partition function with respect to parameters, which allows estimation of the
gradient of the log-likelihood. Huber [15] provided a simulation reduction to obtain an Ising
model sample from a subgraphs-world sample.
More generally, Liu and Domke [21] consider a pairwise Markov random field, defined as
$$p(x|\theta) = \exp\Big(\sum_{i,j} \theta_{ij}(x_i, x_j) + \sum_i \theta_i(x_i) - A(\theta)\Big),$$
and show that, if one defines $R_{ij}(\theta) = \max_{a,b,c} \frac{1}{2}|\theta_{ij}(a, b) - \theta_{ij}(a, c)|$, then again Equation 4 holds.
An algorithm for projecting onto the set $\Theta = \{\theta : \|R(\theta)\| \le c\}$ exists.
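The dependency matrix $R(\theta)$ itself is straightforward to compute; a small sketch, where the nested-list representation of the pairwise potential tables is our own illustrative choice:

```python
import numpy as np

def dependency_matrix(theta_pair):
    """theta_pair[i][j] is the 2-D table theta_ij(a, b), or None if (i, j)
    is not an edge. Returns R with
    R[i, j] = max_{a,b,c} 0.5 * |theta_ij(a, b) - theta_ij(a, c)|."""
    n = len(theta_pair)
    R = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if theta_pair[i][j] is not None:
                t = np.asarray(theta_pair[i][j])
                # broadcast over (a, b, c) and take the largest half-difference
                R[i, j] = 0.5 * np.max(np.abs(t[:, :, None] - t[:, None, :]))
    return R
```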
There are many other mixing-time bounds for different algorithms, and different types of models
[19]. The most common algorithms are univariate Gibbs sampling (often called Glauber dynamics
in the mixing time literature) and Swendsen-Wang sampling. The Ising model and Potts models are
the most common distributions studied, either with a grid or fully-connected graph structure. Often,
the motivation for studying these systems is to understand physical systems, or to mathematically
characterize phase transitions in mixing time that occur as interaction strengths vary. As such,
many existing bounds assume uniform interaction strengths. For all these reasons, these bounds
typically require some adaptation for a learning setting.
4 Main Results
4.1 Lipschitz Gradient
For lack of space, detailed proofs are postponed to the appendix. However, informal proof sketches
are provided to give some intuition for results that have longer proofs. Our first main result is that
the regularized log-likelihood has a Lipschitz gradient.
Theorem 1. The regularized log-likelihood gradient is $L$-Lipschitz with $L = 4R_2^2 + \lambda$, i.e.
$$\|f'(\theta) - f'(\phi)\|_2 \le (4R_2^2 + \lambda)\|\theta - \phi\|_2.$$
Proof sketch. It is easy, by the triangle inequality, that $\|f'(\theta) - f'(\phi)\|_2 \le \|\frac{dA}{d\theta} - \frac{dA}{d\phi}\|_2 + \lambda\|\theta - \phi\|_2$.
Next, using the assumption that $\|t(x)\|_2 \le R_2$, one can bound that $\|\frac{dA}{d\theta} - \frac{dA}{d\phi}\|_2 \le 2R_2\|p_\theta - p_\phi\|_{TV}$.
Finally, some effort can bound that $\|p_\theta - p_\phi\|_{TV} \le 2R_2\|\theta - \phi\|_2$.
4.2 Convex Convergence
Now, our first major result is a guarantee on the convergence that is true both in the regularized case
where $\lambda > 0$ and the unregularized case where $\lambda = 0$.
Theorem 2. With probability at least $1 - \delta$, as long as $M \ge 3K/\log(\frac{1}{\delta})$, Algorithm 1 will satisfy
$$f\Big(\frac{1}{K}\sum_{k=1}^{K}\theta_k\Big) - f(\theta^*) \le \frac{8R_2^2}{KL}\Big(\frac{L\|\theta_0 - \theta^*\|_2}{4R_2} + \frac{K}{\sqrt{M}} + \log\frac{1}{\delta} + KC\beta^v\Big)^2.$$
Proof sketch. First, note that $f$ is convex, since the Hessian of $f$ is the covariance of $t(X)$ when
$\lambda = 0$, and $\lambda > 0$ only adds a quadratic. Now, define the quantity $d_k = \frac{1}{M}\sum_{m=1}^{M} t(X_m^k) -
\mathbb{E}_{q_k}[t(X)]$ to be the difference between the estimated expected value of $t(X)$ under $q_k$ and the
true value. An elementary argument can bound the expected value of $\|d_k\|$, while the Efron–Stein
inequality can bound its variance. Using both of these bounds in Bernstein's inequality can then
show that, with probability $1 - \delta$, $\sum_{k=1}^{K} \|d_k\| \le 2R_2(K/\sqrt{M} + \log\frac{1}{\delta})$. Finally, we can observe
that $\sum_{k=1}^{K} \|e_k\| \le \sum_{k=1}^{K} \|d_k\| + \sum_{k=1}^{K} \|\mathbb{E}_{q_k}[t(X)] - \mathbb{E}_{p_{\theta_k}}[t(X)]\|_2$. By the assumption on mixing
speed, the last term is bounded by $2KR_2C\beta^v$. And so, with probability $1 - \delta$, $\sum_{k=1}^{K} \|e_k\| \le
2R_2(K/\sqrt{M} + \log\frac{1}{\delta}) + 2KR_2C\beta^v$. Finally, a result due to Schmidt et al. [26] on the convergence
of gradient descent with errors in estimated gradients gives the result.
Intuitively, this result has the right character. If $M$ grows on the order of $K^2$ and $v$ grows on the
order of $\log K/(-\log \beta)$, then all terms inside the quadratic will be held constant, and so if we set
$K$ of the order $1/\epsilon$, the sub-optimality will be on the order of $\epsilon$ with a total computational effort roughly
on the order of $(1/\epsilon)^3 \log(1/\epsilon)$. The following results pursue this more carefully. Firstly, one can
observe that a minimum amount of work must be performed.
Theorem 3. For $a, b, c, \epsilon > 0$, if $K, M, v > 0$ are set so that $\frac{1}{K}\big(a + b\frac{K}{\sqrt{M}} + Kc\beta^v\big)^2 \le \epsilon$, then
$$KMv \ge \frac{a^4 b^2 \log\frac{ac}{\epsilon}}{\epsilon^3(-\log\beta)}.$$
Proof sketch. Since it must be true that $a/\sqrt{K} + b\sqrt{K/M} + \sqrt{K}c\beta^v \le \sqrt{\epsilon}$, each of these three terms must also
be at most $\sqrt{\epsilon}$, giving lower bounds on $K$, $M$, and $v$. Multiplying these gives the result.
Next, an explicit schedule for $K$, $M$, and $v$ is possible, in terms of a convex set of parameters
$\kappa_1, \kappa_2, \kappa_3$. Comparing this to the lower bound above shows that this is not too far from optimal.
Theorem 4. Suppose that $a, b, c, \epsilon > 0$. If $\kappa_1 + \kappa_2 + \kappa_3 = 1$, $\kappa_1, \kappa_2, \kappa_3 > 0$, then setting
$$K = \frac{a^2}{\kappa_1^2 \epsilon}, \qquad M = \Big(\frac{ab}{\kappa_1\kappa_2\epsilon}\Big)^2, \qquad v = \frac{\log\frac{ac}{\kappa_1\kappa_3\epsilon}}{-\log\beta}$$
is sufficient to guarantee that $\frac{1}{K}\big(a + b\frac{K}{\sqrt{M}} + Kc\beta^v\big)^2 \le \epsilon$, with a total work of
$$KMv = \frac{a^4 b^2 \log\frac{ac}{\kappa_1\kappa_3\epsilon}}{\kappa_1^4 \kappa_2^2 \epsilon^3(-\log\beta)}.$$
Proof. Simply verify that the $\epsilon$ bound holds, and multiply the terms together.
For example, setting $\kappa_1 = 0.66$, $\kappa_2 = 0.33$ and $\kappa_3 = 0.01$ gives that $KMv \le 48.4\,\frac{a^4 b^2 (\log\frac{ac}{\epsilon} + 5.03)}{\epsilon^3(-\log\beta)}$.
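The schedule of Theorem 4 (as reconstructed above) is easy to instantiate numerically; a minimal sketch:

```python
import numpy as np

def convex_schedule(a, b, c, eps, beta, k1=0.66, k2=0.33, k3=0.01):
    """Schedule of Theorem 4, as reconstructed above; returns (K, M, v).
    The default kappas are the example values quoted in the text."""
    K = int(np.ceil(a**2 / (k1**2 * eps)))
    M = int(np.ceil((a * b / (k1 * k2 * eps))**2))
    v = int(np.ceil(np.log(a * c / (k1 * k3 * eps)) / (-np.log(beta))))
    return K, M, v
```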
Finally, we can give an explicit schedule for $K$, $M$, and $v$, and bound the total amount of work that
needs to be performed.
Theorem 5. If $D \ge \max\big(\|\theta_0 - \theta^*\|_2, \frac{4R_2}{L}\log\frac{1}{\delta}\big)$, then for all $\epsilon_f$ there is a setting of $K, M, v$ such
that $f\big(\frac{1}{K}\sum_{k=1}^{K}\theta_k\big) - f(\theta^*) \le \epsilon_f$ with probability $1 - \delta$ and
$$KMv \le \frac{32 L R_2^2 D^4}{\kappa_1^4 \kappa_2^2 \epsilon_f^3 (1-\beta)} \log\frac{4DR_2C}{\kappa_1\kappa_3\epsilon_f}.$$
Proof sketch. This follows from setting $K$, $M$, and $v$ as in Theorem 4 with $a = L\|\theta_0 -
\theta^*\|_2/(4R_2) + \log\frac{1}{\delta}$, $b = 1$, $c = C$, and $\epsilon = \epsilon_f L/(8R_2^2)$.
4.3 Strongly Convex Convergence
This section gives the main result for convergence that is true only in the regularized case where
$\lambda > 0$. Again, the main difficulty in this proof is showing that the sum of the errors of the estimated
gradients at each iteration is small. This is done by using a concentration inequality to show that the
error of each estimated gradient is small, and then applying a union bound to show that the sum is
small. The main result is as follows.
Theorem 6. When the regularization constant obeys $\lambda > 0$, with probability at least $1 - \delta$ Algorithm
1 will satisfy
$$\|\theta_K - \theta^*\|_2 \le \Big(1 - \frac{\lambda}{L}\Big)^K \|\theta_0 - \theta^*\|_2 + \frac{R_2}{\lambda}\bigg(\sqrt{\frac{2}{M}}\Big(1 + \sqrt{2\log\frac{K}{\delta}}\Big) + 2C\beta^v\bigg).$$
Proof sketch. When $\lambda = 0$, $f$ is convex (as in Theorem 2), and so is strongly convex when
$\lambda > 0$. The basic proof technique here is to decompose the error in a particular step as $\|e_{k+1}\|_2 \le
\|\frac{1}{M}\sum_{i=1}^{M} t(x_i^k) - \mathbb{E}_{q_k}[t(X)]\|_2 + \|\mathbb{E}_{q_k}[t(X)] - \mathbb{E}_{p_{\theta_k}}[t(X)]\|_2$. A multidimensional variant of Hoeffding's inequality can bound the first term, with probability $1 - \delta'$, by $R_2(1 + \sqrt{2\log\frac{1}{\delta'}})/\sqrt{M}$,
while our assumption on mixing speed can bound the second term by $2R_2C\beta^v$. Applying this to
all iterations using $\delta' = \delta/K$ gives that all errors are simultaneously bounded as before. This can
then be used in another result due to Schmidt et al. [26] on the convergence of gradient descent with
errors in estimated gradients in the strongly convex case.
A similar proof strategy could be used for the convex case where, rather than directly bounding the
sum of the norm of errors of all steps using the Efron-Stein inequality and Bernstein?s bound, one
could simply bound the error of each step using a multidimensional Hoeffding-type inequality, and
then apply this with probability ?/K to each step. This yields a slightly weaker result than that
shown in Theorem 2. The reason for applying a uniform bound on the errors in gradients here is
that Schmidt et al.'s bound [26] on the convergence of proximal gradient descent on strongly convex
functions depends not just on the sum of the norms of gradient errors, but a non-uniform weighted
variant of these.
Again, we consider how to set parameters to guarantee that $\theta_K$ is not too far from $\theta^*$ with a minimum
amount of work. Firstly, we show a lower bound.
Theorem 7. Suppose $a, b, c > 0$. Then for any $K, M, v$ such that $\gamma^K a + \frac{b}{\sqrt{M}}\sqrt{\log(K/\delta)} + c\beta^v \le \epsilon$,
it must be the case that
$$KMv \ge \frac{b^2 \log\frac{a}{\epsilon} \log\frac{c}{\epsilon}}{\epsilon^2(-\log\gamma)(-\log\beta)} \log\bigg(\frac{\log\frac{a}{\epsilon}}{\delta(-\log\gamma)}\bigg).$$
Proof sketch. This is established by noticing that $\gamma^K a$, $\frac{b}{\sqrt{M}}\sqrt{\log(K/\delta)}$, and $c\beta^v$ must each be less
than $\epsilon$, giving lower bounds on $K$, $M$, and $v$.
Next, we can give an explicit schedule that is not too far off from this lower bound.
Theorem 8. Suppose that $a, b, c, \epsilon > 0$. If $\kappa_1 + \kappa_2 + \kappa_3 = 1$, $\kappa_i > 0$, then setting
$$K = \frac{\log\frac{a}{\kappa_1\epsilon}}{-\log\gamma}, \qquad M = \Big(\frac{b}{\kappa_2\epsilon}\Big)^2\Big(1 + \sqrt{2\log(K/\delta)}\Big)^2, \qquad v = \frac{\log\frac{c}{\kappa_3\epsilon}}{-\log\beta}$$
is sufficient to guarantee that $\gamma^K a + \frac{b}{\sqrt{M}}\big(1 + \sqrt{2\log(K/\delta)}\big) + c\beta^v \le \epsilon$, with a total work of at most
$$KMv \le \frac{b^2 \log\frac{a}{\kappa_1\epsilon} \log\frac{c}{\kappa_3\epsilon}}{\kappa_2^2 \epsilon^2(-\log\gamma)(-\log\beta)} \bigg(1 + \sqrt{2\log\Big(\frac{\log\frac{a}{\kappa_1\epsilon}}{\delta(-\log\gamma)}\Big)}\bigg)^2.$$
For example, if you choose $\kappa_2 = 1/\sqrt{2}$ and $\kappa_1 = \kappa_3 = (1 - 1/\sqrt{2})/2 \approx 0.1464$, then this varies
from the lower bound in Theorem 7 by a factor of two, and a multiplicative factor of $1/\kappa_3 \approx 6.84$
inside the logarithmic terms.
Corollary 9. If we choose $K \ge \frac{L}{\lambda}\log\big(\frac{\|\theta_0 - \theta^*\|_2}{\kappa_1\epsilon_\theta}\big)$, $M \ge \frac{2L^2R_2^2}{\lambda^2\kappa_2^2\epsilon_\theta^2}\big(1 + \sqrt{2\log(K/\delta)}\big)^2$, and $v \ge
\frac{1}{1-\beta}\log\big(\frac{2LR_2C}{\kappa_3\lambda\epsilon_\theta}\big)$, then $\|\theta_K - \theta^*\|_2 \le \epsilon_\theta$ with probability at least $1 - \delta$, and the total
amount of work is bounded by
$$KMv \le \frac{2L^3R_2^2}{\lambda^3\kappa_2^2\epsilon_\theta^2(1-\beta)}\Big(1 + \sqrt{2\log\frac{K}{\delta}}\Big)^2 \log\Big(\frac{\|\theta_0 - \theta^*\|_2}{\kappa_1\epsilon_\theta}\Big)\log\Big(\frac{2LR_2C}{\kappa_3\lambda\epsilon_\theta}\Big).$$
5 Discussion
An important detail in the previous results is that the convex analysis gives convergence in terms of
the regularized log-likelihood, while the strongly-convex analysis gives convergence in terms of the
parameter distance. If we drop logarithmic factors, the amount of work necessary for $\epsilon_f$-optimality
in the log-likelihood using the convex algorithm is of the order $1/\epsilon_f^3$, while the amount of work
necessary for $\epsilon_\theta$-optimality using the strongly convex analysis is of the order $1/\epsilon_\theta^2$. Though these
quantities are not directly comparable, the standard bounds on sub-optimality for $\lambda$-strongly convex
functions with $L$-Lipschitz gradients are that $\lambda\epsilon_\theta^2/2 \le \epsilon_f \le L\epsilon_\theta^2/2$. Thus, roughly speaking, when
regularized, the strongly-convex analysis shows that $\epsilon_f$-optimality in the log-likelihood can be
achieved with an amount of work only linear in $1/\epsilon_f$.
[Figure 2: three panels plotted against the iteration $k$; left: $f(\theta_k) - f(\theta^*)$ on a log scale; center: $\|\theta_k - \theta^*\|_2$ on a log scale; right: the estimated parameters over iterations, with the optimal parameters shown at far right.]
Figure 2: Ising Model Example. Left: The difference of the current test log-likelihood from the
optimal log-likelihood on 5 random runs. Center: The distance of the current estimated parameters
from the optimal parameters on 5 random runs. Right: The current estimated parameters on one run,
as compared to the optimal parameters (far right).
6 Example
While this paper claims no significant practical contribution, it is useful to visualize an example.
Take an Ising model $p(x) \propto \exp\big(\sum_{(i,j)\in\text{Pairs}} \theta_{ij} x_i x_j\big)$ for $x_i \in \{-1, 1\}$ on a $4 \times 4$ grid with 5
random vectors as training data. The sufficient statistics are $t(x) = \{x_i x_j \,|\, (i, j) \in \text{Pairs}\}$, and with
24 pairs, $\|t(x)\|_2 \le R_2 = \sqrt{24}$. For a fast-mixing set, constrain $|\theta_{ij}| \le .2$ for all pairs. Since
the maximum degree is 4, $\tau(\epsilon) \le \big\lceil \frac{N \log(N/\epsilon)}{1 - 4\tanh(.2)} \big\rceil$. Fix $\lambda = 1$, $\epsilon_\theta = 2$ and $\delta = 0.1$. Though the
theory above suggests the Lipschitz constant $L = 4R_2^2 + \lambda = 97$, a lower value of $L = 10$ is used,
which converged faster in practice (with exact or approximate gradients). Now, one can derive that
$\|\theta_0 - \theta^*\|_2 \le D = \sqrt{24 \cdot (2 \cdot .2)^2}$, $C = \log(16)$ and $\beta = \exp(-(1 - 4\tanh .2)/16)$. Applying
Corollary 9 with $\kappa_1 = .01$, $\kappa_2 = .9$ and $\kappa_3 = .1$ gives $K = 46$, $M = 1533$ and $v = 561$. Fig. 2
shows the results. In practice, the algorithm finds a solution tighter than the specified $\epsilon_\theta$, indicating
a degree of conservatism in the theoretical bound.
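The constants of this example are easy to reproduce numerically; a minimal sketch, using the Corollary 9 formulas as reconstructed above (the computed values approximately match the $K = 46$ and $v = 561$ quoted in the text, up to rounding):

```python
import numpy as np

# Constants of the Section 6 example: 4x4 Ising grid, |theta_ij| <= 0.2
n_pairs = 24
R2 = np.sqrt(n_pairs)                     # ||t(x)||_2 <= sqrt(24)
lam, eps_theta, delta, L = 1.0, 2.0, 0.1, 10.0
beta = np.exp(-(1 - 4 * np.tanh(0.2)) / 16)
C = np.log(16)
D = np.sqrt(n_pairs * (2 * 0.2)**2)
k1, k3 = 0.01, 0.1
K = np.ceil((L / lam) * np.log(D / (k1 * eps_theta)))                      # -> 46
v = np.ceil(np.log(2 * L * R2 * C / (k3 * lam * eps_theta)) / (1 - beta))  # -> ~552-561
print(int(K), int(v))
```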
7 Conclusions
This section discusses some weaknesses of the above analysis, and possible directions for future
work. Analyzing complexity in terms of the total sampling effort ignores the complexity of projection itself. Since projection only needs to be done K times, this time will often be very small in
comparison to sampling time. (This is certainly true in the above example.) However, this might not
be the case if the projection algorithm scales super-linearly in the size of the model.
Another issue to consider is how the samples are initialized. As far as the proof of correctness
goes, the initial distribution r is arbitrary. In the above example, a simple uniform distribution was
used. However, one might use the empirical distribution of the training data, which is equivalent to
contrastive divergence [5]. It is reasonable to think that this will tend to reduce the mixing time when
$p_\theta$ is close to the model generating the data. However, the number of Markov chain transitions
v prescribed above is larger than typically used with contrastive divergence, and Algorithm 1 does
not reduce the step size over time. While it is common to regularize to encourage fast mixing
with contrastive divergence [14, Section 10], this is typically done with simple heuristic penalties.
Further, contrastive divergence is often used with hidden variables. Still, this provides a bound for
how closely a variant of contrastive divergence could approximate the maximum likelihood solution.
The above analysis does not encompass the common strategy for maximum likelihood learning
where one maintains a "pool" of samples between iterations, and initializes one Markov chain at
each iteration from each element of the pool. The idea is that if the samples at the previous iteration
were close to $p_{k-1}$ and $p_{k-1}$ is close to $p_k$, then this provides an initialization close to the current
solution. However, the proof technique used here is based on the assumption that the samples $x_i^k$ at
each iteration are independent, and so cannot be applied to this strategy.
Acknowledgements
Thanks to Ivona Bezáková, Aaron Defazio, Nishant Mehta, Aditya Menon, Cheng Soon Ong and
Christfried Webers. NICTA is funded by the Australian Government through the Dept. of Communications and the Australian Research Council through the ICT Centre of Excellence Program.
References
[1] Abbeel, P., Koller, D., and Ng, A. Learning factor graphs in polynomial time and sample complexity. Journal of Machine Learning Research, 7:1743–1788, 2006.
[2] Asuncion, A., Liu, Q., Ihler, A., and Smyth, P. Learning with blocks: composite likelihood and contrastive divergence. In AISTATS, 2010.
[3] Besag, J. Statistical analysis of non-lattice data. Journal of the Royal Statistical Society, Series D (The Statistician), 24(3):179–195, 1975.
[4] Boucheron, S., Lugosi, G., and Massart, P. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013.
[5] Carreira-Perpiñán, M. Á. and Hinton, G. On contrastive divergence learning. In AISTATS, 2005.
[6] Chow, C. I. and Liu, C. N. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14:462–467, 1968.
[7] Descombes, X., Morris, R., Zerubia, J., and Berthod, M. Estimation of Markov random field prior parameters using Markov chain Monte Carlo maximum likelihood. IEEE Transactions on Image Processing, 8(7):954–963, 1999.
[8] Domke, J. and Liu, X. Projecting Ising model parameters for fast mixing. In NIPS, 2013.
[9] Dyer, M. E., Goldberg, L. A., and Jerrum, M. Matrix norms and rapid mixing for spin systems. Annals of Applied Probability, 19:71–107, 2009.
[10] Geyer, C. Markov chain Monte Carlo maximum likelihood. In Symposium on the Interface, 1991.
[11] Gu, M. G. and Zhu, H.-T. Maximum likelihood estimation for spatial models by Markov chain Monte Carlo stochastic approximation. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 63(2):339–355, 2001.
[12] Hayes, T. A simple condition implying rapid mixing of single-site dynamics on spin systems. In FOCS, 2006.
[13] Heinemann, U. and Globerson, A. Inferning with high girth graphical models. In ICML, 2014.
[14] Hinton, G. A practical guide to training restricted Boltzmann machines. Technical report, University of Toronto, 2010.
[15] Huber, M. Simulation reductions for the Ising model. Journal of Statistical Theory and Practice, 5(3):413–424, 2012.
[16] Hyvärinen, A. Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6:695–709, 2005.
[17] Jerrum, M. and Sinclair, A. Polynomial-time approximation algorithms for the Ising model. SIAM Journal on Computing, 22:1087–1116, 1993.
[18] Koller, D. and Friedman, N. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[19] Levin, D. A., Peres, Y., and Wilmer, E. L. Markov Chains and Mixing Times. American Mathematical Society, 2006.
[20] Lindsay, B. Composite likelihood methods. Contemporary Mathematics, 80(1):221–239, 1988.
[21] Liu, X. and Domke, J. Projecting Markov random field parameters for fast mixing. In NIPS, 2014.
[22] Marlin, B. and de Freitas, N. Asymptotic efficiency of deterministic estimators for discrete energy-based models: Ratio matching and pseudolikelihood. In UAI, 2011.
[23] Mizrahi, Y., Denil, M., and de Freitas, N. Linear and parallel learning of Markov random fields. In ICML, 2014.
[24] Papandreou, G. and Yuille, A. L. Perturb-and-MAP random fields: Using discrete optimization to learn and sample from energy models. In ICCV, 2011.
[25] Salakhutdinov, R. Learning in Markov random fields using tempered transitions. In NIPS, 2009.
[26] Schmidt, M., Roux, N. L., and Bach, F. Convergence rates of inexact proximal-gradient methods for convex optimization. In NIPS, 2011.
[27] Schmidt, U., Gao, Q., and Roth, S. A generative perspective on MRFs in low-level vision. In CVPR, 2010.
[28] Steinhardt, J. and Liang, P. Learning fast-mixing models for structured prediction. In ICML, 2015.
[29] Tieleman, T. Training restricted Boltzmann machines using approximations to the likelihood gradient. In ICML, 2008.
[30] Varin, C., Reid, N., and Firth, D. An overview of composite likelihood methods. Statistica Sinica, 21:5–24, 2011.
[31] Wainwright, M. Estimating the "wrong" graphical model: Benefits in the computation-limited setting. Journal of Machine Learning Research, 7:1829–1859, 2006.
[32] Wainwright, M. and Jordan, M. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
[33] Zhu, S. C., Wu, Y., and Mumford, D. Filters, random fields and maximum entropy (FRAME): Towards a unified theory for texture modeling. International Journal of Computer Vision, 27(2):107–126, 1998.
| 5907 |@word polynomial:4 norm:8 mehta:1 km:9 hyv:1 simulation:2 decomposition:1 covariance:1 contrastive:7 reduction:2 initial:1 liu:7 configuration:1 score:2 series:2 existing:2 freitas:2 current:4 com:1 comparing:1 must:5 partition:4 drop:1 update:3 stationary:7 implying:1 generative:1 geyer:1 provides:3 node:2 toronto:1 firstly:2 mathematical:1 direct:1 symposium:1 focs:1 consists:1 shorthand:1 kov:1 inside:6 excellence:1 pairwise:2 huber:2 ra:1 expected:3 indeed:1 rapid:2 roughly:2 inspired:1 salakhutdinov:1 project:2 provided:3 bounded:6 notation:1 estimating:1 pursue:1 maxa:2 unified:1 finding:3 marlin:1 guarantee:8 multidimensional:2 descombes:1 exactly:2 wrong:1 reid:1 positive:1 before:1 understood:1 local:2 limit:1 mach:1 analyzing:2 oxford:1 lugosi:1 might:2 au:1 studied:1 initialization:1 conversely:1 suggests:1 appl:1 limited:1 range:1 obeys:1 practical:2 globerson:2 practice:4 union:1 block:1 procedure:1 empirical:2 thought:1 composite:4 matching:4 projection:7 onto:4 close:6 cannot:1 operator:3 kr2:2 applying:4 optimize:1 equivalent:2 deterministic:1 map:1 center:1 fpras:3 roth:1 go:1 starting:4 convex:23 roux:1 subgraphs:2 estimator:2 utilizing:1 regularize:1 notion:2 variation:2 target:2 suppose:3 lindsay:1 rinen:1 exact:1 smyth:1 goldberg:1 element:1 trend:1 approximated:2 ising:13 ep:4 rij:2 wang:1 ensures:1 ferromagnetic:1 connected:1 contemporary:1 intuition:1 convexity:4 complexity:4 ong:1 dynamic:2 motivate:1 yuille:1 efficiency:1 triangle:1 gu:1 easily:1 fast:14 monte:6 varin:1 whose:1 heuristic:1 widely:1 larger:1 cvpr:1 ability:1 statistic:5 jerrum:4 think:1 noisy:1 itself:1 eqk:4 interaction:5 adaptation:1 rapidly:2 mixing:31 achieve:2 convergence:12 cluster:1 optimum:1 generating:1 derive:1 ac:4 ij:14 eq:1 strong:4 treewidth:5 australian:3 direction:1 drawback:1 correct:1 aperiodic:1 filter:1 stochastic:2 closely:1 hoi:1 require:1 government:1 abbeel:2 fix:1 decompose:1 tighter:1 elementary:1 mathematically:1 hold:2 swendsen:1 exp:7 visualize:1 claim:1 major:1 vary:1 a2:1 estimation:4 tanh:3 council:1 mizrahi:2 correctness:1 weighted:1 mit:1 super:1 rather:1 denil:1 varying:1 corollary:2 potts:1 likelihood:37 besag:1 inference:6 mrfs:1 typically:10 entire:1 chow:2 hidden:1 kc:4 koller:2 interested:1 issue:2 among:1 arg:4 constrained:2 spatial:1 initialize:1 mutual:1 marginal:3 field:8 ng:1 sampling:11 cartoon:2 icml:4 nearly:1 future:1 np:1 report:1 fundamentally:1 irreducible:1 simultaneously:1 national:1 divergence:8 phase:1 statistician:1 ab:1 friedman:1 multiply:1 evaluation:1 certainly:1 weakness:1 analyzed:2 held:1 chain:20 accurate:2 encourage:1 necessary:2 tree:8 loosely:1 initialized:2 desired:3 theoretical:3 modeling:1 papandreou:1 lattice:1 tractability:1 uniform:4 levin:1 too:4 characterize:1 varies:1 proximal:2 thanks:1 explores:1 randomized:2 fundamental:1 siam:1 international:1 probabilistic:1 off:1 pool:2 together:1 quickly:2 again:3 choose:2 hoeffding:1 sinclair:3 stochastically:1 ek:6 derivative:1 leading:1 american:1 nonasymptotic:1 de:2 b2:3 satisfy:2 mv:6 depends:1 performed:3 multiplicative:1 sup:1 recover:1 option:1 parallel:2 maintains:1 unidirectional:1 lr2:1 asuncion:1 inferning:1 contribution:1 spin:2 accuracy:3 qk:3 variance:2 efficiently:3 yield:2 carlo:6 multiplying:1 converged:1 inexact:1 failure:1 energy:2 naturally:1 proof:13 mi:1 ihler:1 sampled:1 dataset:1 popular:1 ask:1 efron:2 schedule:4 carefully:1 actually:1 permitted:1 methodology:1 done:3 though:4 strongly:10 just:2 correlation:1 sketch:6 lack:3 defines:1 menon:1 
grows:3 verify:1 true:6 normalized:1 regularization:1 boucheron:1 nonzero:1 glauber:1 inferior:1 ridge:2 complete:1 interface:1 meaning:2 variational:2 weber:1 image:1 recently:1 common:7 physical:1 overview:1 discussed:1 marginals:3 significant:1 gibbs:3 outlined:1 grid:2 similarly:1 mathematics:1 centre:1 funded:1 l3:1 longer:1 add:1 something:1 perspective:1 optimizing:1 certain:1 inequality:8 postponed:1 tempered:1 seen:2 minimum:4 converge:5 encompass:1 mix:1 technical:1 faster:1 bach:1 long:3 a1:1 prediction:1 variant:3 basic:1 vision:2 expectation:1 iteration:20 normalization:1 achieved:1 justified:1 background:2 singular:1 biased:1 rest:1 exhibited:1 probably:1 massart:1 induced:1 tend:1 undirected:3 jordan:1 near:1 bernstein:2 enough:2 concerned:1 easy:2 xj:4 independence:1 zi:2 restrict:2 reduce:2 idea:1 defazio:1 effort:5 penalty:1 speaking:2 hessian:1 useful:2 generally:4 detailed:1 informally:1 amount:8 transforms:1 stein:2 morris:1 generate:1 exist:1 estimated:9 r22:5 zd:1 discrete:5 nevertheless:1 drawn:1 graph:7 merely:1 geometrically:1 sum:5 run:4 noticing:1 you:1 family:11 reasonable:1 wu:1 draw:1 appendix:1 comparable:1 entirely:1 bound:30 guaranteed:2 cheng:1 quadratic:3 strength:5 occur:1 constraint:1 infinity:1 constrain:1 speed:2 argument:1 min:5 optimality:5 prescribed:1 performing:1 xki:1 conjecture:1 structured:3 slightly:1 character:1 making:1 happens:1 projecting:4 intuitively:1 restricted:2 iccv:1 unregularized:3 ln:1 equation:2 discus:2 dyer:1 tractable:4 informal:1 studying:1 junction:1 apply:2 observe:2 spectral:2 alternative:2 schmidt:5 rp:1 existence:2 assumes:1 graphical:11 a4:2 giving:2 perturb:1 prof:1 approximating:2 society:3 initializes:1 quantity:2 mumford:1 strategy:7 concentration:2 dependence:1 surrogate:1 gradient:41 distance:6 separate:1 restart:1 considers:1 trivial:1 spanning:1 nicta:3 reason:2 length:2 ratio:1 minimizing:1 liang:2 setup:1 sinica:1 xik:1 negative:2 stated:3 design:1 boltzmann:2 perform:1 markov:26 descent:12 peres:1 hinton:2 communication:1 frame:1 arbitrary:6 introduced:1 namely:3 pair:5 specified:4 kl:2 z1:1 c3:1 learned:2 nishant:1 established:2 nip:4 justin:2 able:1 xm:1 challenge:2 program:1 max:1 royal:2 wainwright:2 natural:3 difficulty:3 regularized:9 indicator:1 zhu:2 scheme:2 firth:1 imply:1 prior:1 literature:2 l2:1 acknowledgement:1 ict:1 probab:1 determining:1 asymptotic:1 fully:3 degree:3 sufficient:7 dr2:1 principle:1 last:1 soon:1 wilmer:1 infeasible:1 guide:1 pseudolikelihood:2 understand:1 weaker:1 fall:1 benefit:1 overcome:1 transition:7 world:2 ignores:1 projected:1 bm:3 far:5 polynomially:1 transaction:2 approximate:11 clique:1 hayes:1 uai:1 assumed:2 xi:11 robin:1 learn:3 conservatism:1 da:4 aistats:2 pk:4 dense:1 main:5 linearly:1 statistica:1 motivation:1 bounding:1 fig:1 site:1 cubic:1 sub:2 explicit:3 exponential:9 third:1 theorem:14 showing:1 symbol:1 r2:14 disregarding:1 decay:3 dk:4 intractable:3 exists:3 texture:1 entropy:1 logarithmic:3 girth:2 simply:4 univariate:3 gao:1 steinhardt:2 aditya:1 applies:1 minimizer:1 tieleman:1 succeed:1 ann:1 towards:1 lipschitz:6 heinemann:2 determined:4 carreira:1 domke:6 total:12 called:1 indicating:1 aaron:1 limv:1 dept:1 mcmc:10 |
5,422 | 5,908 | Testing Closeness With Unequal Sized Samples
Gregory Valiant*
Department of Computer Science
Stanford University
California, CA 94305
valiant@stanford.edu
Bhaswar B. Bhattacharya
Department of Statistics
Stanford University
California, CA 94305
bhaswar@stanford.edu
Abstract
We consider the problem of testing whether two unequal-sized samples were
drawn from identical distributions, versus distributions that differ significantly.
Specifically, given a target error parameter $\epsilon > 0$, $m_1$ independent draws from
an unknown distribution $p$ with discrete support, and $m_2$ draws from an unknown
distribution $q$ of discrete support, we describe a test for distinguishing the case that
$p = q$ from the case that $\|p - q\|_1 \ge \epsilon$. If $p$ and $q$ are supported on at most $n$ elements, then our
test is successful with high probability provided $m_1 \ge n^{2/3}/\epsilon^{4/3}$
and $m_2 = \Omega\big(\max\{\frac{n}{\sqrt{m_1}\epsilon^2}, \frac{\sqrt{n}}{\epsilon^2}\}\big)$. We show that this tradeoff is information theoretically optimal throughout this range in the dependencies on all parameters,
$n$, $m_1$, and $\epsilon$, to constant factors for worst-case distributions. As a consequence,
we obtain an algorithm for estimating the mixing time of a Markov chain on $n$
states up to a $\log n$ factor that uses $\tilde{O}(n^{3/2}\tau_{mix})$ queries to a "next node" oracle. The core of our testing algorithm is a relatively simple statistic that seems to
perform well in practice, both on synthetic and on natural language data. We believe that this statistic might prove to be a useful primitive within larger machine
learning and natural language processing systems.
1 Introduction
One of the most basic problems in statistical hypothesis testing is the question of distinguishing
whether two unknown distributions are very similar, or significantly different. Classical tests, like
the Chi-squared test or the Kolmogorov-Smirnov statistic, are optimal in the asymptotic regime,
for fixed distributions as the sample sizes tend towards infinity. Nevertheless, in many modern
settings (such as the analysis of customer data, web logs, natural language processing, and genomics),
despite the quantity of available data, the support sizes and complexity of the underlying distributions are far larger than the datasets, as evidenced by the fact that many phenomena are observed
only a single time in the datasets, and the empirical distributions of the samples are poor representations of the true underlying distributions.¹ In such settings, we must understand these statistical
tasks not only in the asymptotic regime (in which the amount of available data goes to infinity), but
in the "undersampled" regime in which the dataset is significantly smaller than the size or complexity of the distribution in question. Surprisingly, despite an intense history of study by the statistics,
information theory, and computer science communities, aspects of basic hypothesis testing and estimation questions, especially in the undersampled regime, remain unresolved, and require both new
algorithms, and new analysis techniques.
*Supported in part by NSF CAREER Award CCF-1351108.
¹To give some specific examples, two recent independent studies [19, 26] each considered the genetic sequences of over 14,000 individuals, and found that rare variants are extremely abundant, with over 80% of
mutations observed just once in the sample. A separate recent paper [16] found that the discrepancy in rare mutation abundance cited in different demographic modeling studies can largely be explained by discrepancies in
the sample sizes of the respective studies, as opposed to differences in the actual distributions of rare mutations
across demographics, highlighting the importance of improved statistical tests in this "undersampled" regime.
In this work, we examine the basic hypothesis testing question of deciding whether two unknown
distributions over discrete supports are identical (or extremely similar), versus have total variation
distance at least $\epsilon$, for some specified parameter $\epsilon > 0$. We consider (and largely resolve) this
question in the extremely practically relevant setting of unequal sample sizes. Informally, taking
$\epsilon$ to be a small constant, we show that provided $p$ and $q$ are supported on at most $n$ elements, for
any $\gamma \in [0, 1/3]$, the hypothesis test can be successfully performed (with high probability over the
random samples) given samples of size $m_1 = \Theta(n^{2/3+\gamma})$ from $p$, and $m_2 = \Theta(n^{2/3-\gamma/2})$ from
$q$, where $n$ is the size of the supports of the distributions $p$ and $q$. Furthermore, for every $\gamma$ in
this range, this tradeoff between $m_1$ and $m_2$ is necessary, up to constant factors. Thus, our results
smoothly interpolate between the known bounds of $\Theta(n^{2/3})$ on the sample size necessary in the
setting where one is given two equal-sized samples [6, 9], and the bound of $\Theta(\sqrt{n})$ on the sample
size in the setting in which the sample is drawn from one distribution and the other distribution is
known to the algorithm [22, 29]. Throughout most of the regime of parameters, when $m_1 \ll m_2^2$,
our algorithm is a natural extension of the algorithm proposed in [9], and is similar to the algorithm
proposed in [3] except with the addition of a normalization term that seems crucial to obtaining our
information theoretic optimality. In the extreme regime when $m_1 \approx n$ and $m_2 \approx \sqrt{n}$, our algorithm
introduces an additional statistic which (we believe) is new. Our algorithm is relatively simple, and
practically viable. In Section 4 we illustrate the efficacy of our approach on both synthetic data, and
on the real-world problem of deducing whether two words are synonyms, based on a small sample
of the bi-grams in which they occur.
We also note that, as pointed out in several related works [3, 12, 6], this hypothesis testing question
has applications to other problems, such as estimating or testing the mixing time of Markov chains,
and our results yield improved algorithms in these settings.
1.1 Related Work
The general question of how to estimate or test properties of distributions using fewer samples
than would be necessary to actually learn the distribution has been studied extensively since the
late 1990s. Most of the work has focussed on "symmetric" properties (properties whose value is
invariant to relabeling domain elements) such as entropy, support size, and distance metrics between
distributions (such as $\ell_1$ distance). This has included both algorithmic work (e.g. [4, 5, 7, 8, 10, 13,
20, 21, 27, 28, 29]), and results on developing techniques and tools for establishing lower bounds
(e.g. [23, 30, 27]). See the recent survey by Rubinfeld for a more thorough summary of the
developments in this area [24].
The specific problem of "closeness testing" or "identity testing", that is, deciding whether two distributions, $p$ and $q$, are similar, versus have significant distance, has two main variants: the one-unknown-distribution setting in which $q$ is known and a sample is drawn from $p$, and the two-unknown-distributions setting in which both $p$ and $q$ are unknown and samples are drawn from
both. We briefly summarize the previous results for these two settings.
In the one-unknown-distribution setting (which can be thought of as the limiting setting in the case
that we have an arbitrarily large sample drawn from distribution $q$, and a relatively modest sized
sample from $p$), initial work of Goldreich and Ron [12] considered the problem of testing whether
$p$ is the uniform distribution over $[n]$, versus has distance at least $\epsilon$. The tight bounds of $\Theta(\sqrt{n}/\epsilon^2)$
were later shown by Paninski [22], essentially leveraging the birthday paradox and the intuition
that, among distributions supported on $n$ elements, the uniform distribution maximizes the number
of domain elements that will be observed once. Batu et al. [8] showed that, up to polylogarithmic
factors of $n$, and polynomial factors of $\epsilon$, this dependence was optimal for worst-case distributions
over $[n]$. Recently, an "instance-optimal" algorithm and matching lower bound was shown: for any
distribution $q$, up to constant factors, $\max\{\frac{1}{\epsilon}, \frac{\|q^{-\max}_{-\Theta(\epsilon)}\|_{2/3}}{\epsilon^2}\}$ samples from $p$ are both necessary
and sufficient to test $p = q$ versus $\|p - q\| \ge \epsilon$, where $\|q^{-\max}_{-\Theta(\epsilon)}\|_{2/3} \le \|q\|_{2/3}$ is the 2/3-rd norm
of the vector of probabilities of distribution $q$ after the maximum element has been removed, and
the smallest elements up to $\Theta(\epsilon)$ total mass have been removed. (This immediately implies the tight
bound that if $q$ is any distribution supported on $[n]$, $O(\sqrt{n}/\epsilon^2)$ samples are sufficient to test its
identity.)
The two-unknown-distribution setting was introduced to this community by Batu et al. [6]. The
optimal sample complexity of this problem was recently determined by Chan et al. [9]: they showed
that $m = \Theta(n^{2/3}/\epsilon^{4/3})$ samples are necessary and sufficient. In a slightly different vein, Acharya et
al. [1, 2] recently considered the question of closeness testing with two unknown distributions from
the standpoint of competitive analysis. They proposed an algorithm that performs the desired task
using $O(s^{3/2}\,\mathrm{polylog}\,s)$ samples, and established a lower bound of $\Omega(s^{7/6})$, where $s$ represents the
number of samples required to determine whether a set of samples were drawn from $p$ versus $q$, in
the setting where $p$ and $q$ are explicitly known.
A natural generalization of this hypothesis testing problem, which interpolates between the two-unknown-distribution setting and the one-unknown-distribution setting, is to consider unequal sized
samples from the two distributions. More formally, given $m_1$ samples from the distribution $p$, the
asymmetric closeness testing problem is to determine how many samples, $m_2$, are required from the
distribution $q$ such that the hypothesis $p = q$ versus $\|p - q\|_1 > \epsilon$ can be distinguished with large
constant probability (say 2/3). Note that the results of Chan et al. [9] imply that it is sufficient to
consider $m_1 = \Omega(n^{2/3}/\epsilon^{4/3})$. This problem was studied recently by Acharya et al. [3]: they gave
an algorithm that, given $m_1$ samples from the distribution $p$, uses $m_2 = O\big(\max\{\frac{n \log n}{\epsilon^3\sqrt{m_1}}, \frac{\sqrt{n \log n}}{\epsilon^2}\}\big)$
samples from $q$, to distinguish the two distributions with high probability. They also proved a lower
bound of $m_2 = \Omega\big(\max\{\frac{\sqrt{n}}{\epsilon^2}, \frac{n^2}{\epsilon^4 m_1^2}\}\big)$. There is a polynomial gap in these upper and lower bounds
in the dependence on $n$, $m_1$ and $\epsilon$.
As a corollary to our main hypothesis testing result, we obtain an improved algorithm for testing
the mixing time of a Markov chain. The idea of testing mixing properties of a Markov chain goes
back to the work of Goldreich and Ron [12], which conjectured an algorithm for testing expansion
of bounded-degree graphs. Their test is based on picking a random node and testing whether random walks from this node reach a distribution that is close to the uniform distribution on the nodes
of the graph. They conjectured that their algorithm had $O(\sqrt{n})$ query complexity. Later, Czumaj
and Sohler [11], Kale and Seshadhri [15], and Nachmias and Shapira [18] have independently concluded that the algorithm of Goldreich and Ron is provably a test for the expansion property of graphs.
Rapid mixing of a chain can also be tested using eigenvalue computations. Mixing is related to the
separation between the two largest eigenvalues [25, 17], and eigenvalues of a dense $n \times n$ matrix
can be approximated in $O(n^3)$ time and $O(n^2)$ space. However, for a sparse $n \times n$ symmetric
matrix with $m$ nonzero entries, the same task can be achieved in $O(n(m + \log n))$ operations and
$O(n + m)$ space. Batu et al. [6] used their $\ell_1$ distance test on the $t$-step distributions to test mixing
properties of Markov chains. Given a finite Markov chain with state space $[n]$ and transition matrix
$P = ((P(x, y)))$, they essentially show that one can estimate the mixing time $\tau_{mix}$ up to a factor
of $\log n$ using $\tilde{O}(n^{5/3}\tau_{mix})$ queries to a next node oracle, which takes a state $x \in [n]$ and outputs a
state $y \in [n]$ drawn from the distribution $P(x, \cdot)$. Such an oracle can often be simulated significantly
more easily than actually computing the transition matrix $P(x, y)$.
We conclude this related work section with a comment on "robust" hypothesis testing and distance
estimation. A natural hope would be to simply estimate $\|p - q\|$ to within some additive $\epsilon$, which is
a strictly more difficult task than distinguishing $p = q$ from $\|p - q\| \ge \epsilon$. The results of Valiant and
Valiant [27, 28, 29] show that this problem is significantly more difficult than hypothesis testing:
the distance can be estimated to additive error $\epsilon$ for distributions supported on $\le n$ elements using
samples of size $O(n/\log n)$ (in both the setting where either one, or both distributions are unknown).
Moreover, $\Omega(n/\log n)$ samples are information theoretically necessary, even if $q$ is the uniform
distribution over $[n]$, and one wants to distinguish the case that $\|p - q\|_1 \le \frac{1}{10}$ from the case that
$\|p - q\|_1 \ge \frac{9}{10}$. Recall that the non-robust test of distinguishing $p = q$ versus $\|p - q\| > 9/10$
requires a sample of size only $O(\sqrt{n})$. The exact worst-case sample complexity of distinguishing
whether $\|p - q\|_1 \le \frac{1}{n^c}$ versus $\|p - q\|_1 \ge \epsilon$ is not well understood, though in the case of constant
$\epsilon$, up to logarithmic factors, the required sample size seems to scale linearly in the exponent between
$n^{2/3}$ and $n$ as $c$ goes from 1/3 to 0.
1.2 Our results
Our main result resolves the minimax sample complexity of the closeness testing problem in the
unequal sample setting, to constant factors, in terms of $n$, the support sizes of the distributions in
question:
Theorem 1. Given $m_1 \ge n^{2/3}/\epsilon^{4/3}$ and $\epsilon > n^{-1/12}$, and sample access to distributions $p$ and $q$
over $[n]$, there is an $O(m_1)$ time algorithm which takes $m_1$ independent draws from $p$ and $m_2 =
O\big(\max\{\frac{n}{\sqrt{m_1}\epsilon^2}, \frac{\sqrt{n}}{\epsilon^2}\}\big)$ independent draws from $q$, and with probability at least 2/3 distinguishes
whether
$$\|p - q\|_1 \le O\Big(\frac{1}{m_2}\Big) \quad \text{versus} \quad \|p - q\|_1 \ge \epsilon. \qquad (1)$$
Moreover, given $m_1$ samples from $p$, $\Omega\big(\max\{\frac{n}{\sqrt{m_1}\epsilon^2}, \frac{\sqrt{n}}{\epsilon^2}\}\big)$ samples from $q$ are information-theoretically necessary to distinguish $p = q$ from $\|p - q\|_1 \ge \epsilon$ with any constant probability
bounded below by 1/2.
The lower bound in the above theorem is proved using the machinery developed in Valiant [30],
and "interpolates" between the $\Omega(\sqrt{n}/\epsilon^2)$ lower bound in the one-unknown-distribution setting of
testing uniformity [22] and the $\Omega(n^{2/3}/\epsilon^{4/3})$ lower bound in the setting of equal sample sizes from
two unknown distributions [9]. The algorithm establishing the upper bound involves a re-weighted
version of a statistic proposed in [9], and is similar to the algorithm proposed in [3] modulo the
addition of a normalizing term, which seems crucial to obtaining our tight results. In the extreme
regime when $m_1 \approx n$ and $m_2 \approx \sqrt{n}/\epsilon^2$, we incorporate an additional statistic that has not appeared
before in the literature.
As an application of Theorem 1 in the extreme regime when $m_1 \approx n$, we obtain an improved
algorithm for estimating the mixing time of a Markov chain:
Corollary 1. Consider a finite Markov chain with state space $[n]$ and a next node oracle; there is
an algorithm that estimates the mixing time, $\tau_{mix}$, up to a multiplicative factor of $\log n$, that uses
$\tilde{O}(n^{3/2}\tau_{mix})$ time and queries to the next node oracle.
Concurrently to our work, Hsu et al. [14] considered the question of estimating the mixing time
based on a single sample path (as opposed to our model of a sampling oracle). In contrast to our
approach via hypothesis testing, they considered the natural spectral approach, and showed that the
mixing time can be approximated, up to logarithmic factors, given a path of length $\tilde{O}(\tau_{mix}^3/\pi_{min})$,
where $\pi_{min}$ is the minimum probability of a state under the stationary distribution. Hence, if the
stationary distribution is uniform over $n$ states, this becomes $\tilde{O}(n\tau_{mix}^3)$. It remains an intriguing
open question whether one can simultaneously achieve both the linear dependence on $\tau_{mix}$ of our
results and the linear dependence on $1/\pi_{min}$ or the size of the state space, $n$, as in their results.
1.3 Outline
We begin by stating our testing algorithm, and describe the intuition behind the algorithm. The
formal proof of the performance guarantees of the algorithm requires rather involved bounds on the
moments of various parameters, and are provided in the supplementary material. We also defer
the entirety of the matching information theoretic lower bounds to the supplementary material, as
the techniques may not appeal to as wide an audience as the algorithmic portion of our work. The
application of our testing results to the problem of testing or estimating the mixing time of a Markov
chain is discussed in Section 3. Finally, Section 4 contains some empirical results, suggesting that
the statistic at the core of our testing algorithm performs very well in practice. This section contains
both results on synthetic data, as well as an illustration of how to apply these ideas to the problem
of estimating the semantic similarity of two words based on samples of the n-grams that contain the
words in a corpus of text.
2 Algorithms for $\ell_1$ Testing
In this section we describe our algorithm for $\ell_1$ testing with unequal samples. This gives the upper
bound in Theorem 1 on the sample sizes necessary to distinguish $p = q$ from $\|p - q\|_1 \ge \epsilon$. For
clarity and ease of exposition, in this section we consider $\epsilon$ to be some absolute constant, and suppress
the dependency on $\epsilon$. The slightly more involved algorithm that also obtains the optimal dependency
on the parameter $\epsilon$ is given in the supplementary material.
We begin by presenting the algorithm, and then discuss the intuition for the various steps.
Algorithm 1 The Closeness Testing Algorithm
Suppose $\epsilon = \Theta(1)$ and $m_1 = O(n^{1-\gamma})$ for some $\gamma \ge 0$. Let $S_1, S_2$ denote two independent sets of
$m_1$ samples drawn from $p$ and let $T_1, T_2$ denote two independent sets of $m_2$ samples drawn from $q$.
We wish to test $p = q$ versus $\|p - q\|_1 > \epsilon$.
• Let $b = C_0 \frac{\log n}{m_2}$, for an absolute constant $C_0$, and define the set
$B = \{i \in [n] : \frac{X_i^{S_1}}{m_1} > b\} \cup \{i \in [n] : \frac{Y_i^{T_1}}{m_2} > b\}$, where $X_i^{S_1}$ denotes the number of
occurrences of $i$ in $S_1$, and $Y_i^{T_1}$ denotes the number of occurrences of $i$ in $T_1$.
• Let $X_i$ denote the number of occurrences of element $i$ in $S_2$, and $Y_i$ denote the number of
occurrences of element $i$ in $T_2$:
1. Check if
$$\sum_{i \in B} \Big|\frac{X_i}{m_1} - \frac{Y_i}{m_2}\Big| \le \epsilon/6. \qquad (2)$$
2. Check if
$$Z := \sum_{i \in [n] \setminus B} \frac{(m_2 X_i - m_1 Y_i)^2 - (m_2^2 X_i + m_1^2 Y_i)}{X_i + Y_i} \le C_\gamma m_1^{3/2} m_2, \qquad (3)$$
for an appropriately chosen constant $C_\gamma$ (depending on $\gamma$).
3. If $\gamma \ge 1/9$:
• If (2) and (3) hold, then ACCEPT. Otherwise, REJECT.
4. Otherwise, if $\gamma < 1/9$:
• Check if
$$R := \sum_{i \in [n] \setminus B} \frac{\mathbf{1}\{Y_i = 2\}}{X_i + 1} \le C_1 \frac{m_2^2}{m_1}, \qquad (4)$$
where $C_1$ is an appropriately chosen absolute constant.
• REJECT if there exists $i \in [n]$ such that $Y_i \ge 3$ and $X_i \le C_2 \frac{m_1}{m_2 n^{1/3}}$, where $C_2$ is an
appropriately chosen absolute constant.
• If (2), (3), and (4) hold, then ACCEPT. Otherwise, REJECT.
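The following Python sketch implements the tests above as reconstructed; the constants ($C_0$, $C_\gamma$, $C_1$, $C_2$) default to illustrative placeholders rather than values with proven guarantees, and we assume $|S_1| = |S_2| = m_1$ and $|T_1| = |T_2| = m_2$:

```python
import numpy as np

def closeness_test(S1, S2, T1, T2, n, eps, gamma=0.0,
                   C0=1.0, Cg=1.0, C1=1.0, C2=1.0):
    """Sketch of Algorithm 1. S1, S2: arrays of m1 draws from p;
    T1, T2: arrays of m2 draws from q (values in {0, ..., n-1}).
    Returns True for ACCEPT (p = q), False for REJECT."""
    m1, m2 = len(S2), len(T2)
    b = C0 * np.log(n) / m2
    heavy = (np.bincount(S1, minlength=n) / m1 > b) | \
            (np.bincount(T1, minlength=n) / m2 > b)
    X = np.bincount(S2, minlength=n).astype(float)
    Y = np.bincount(T2, minlength=n).astype(float)
    # Check (2): empirical l1 distance restricted to the heavy elements
    if np.abs(X[heavy] / m1 - Y[heavy] / m2).sum() > eps / 6:
        return False
    # Check (3): the Z statistic over the light elements
    light = ~heavy
    num = (m2 * X - m1 * Y) ** 2 - (m2 ** 2 * X + m1 ** 2 * Y)
    Z = (num / np.maximum(X + Y, 1.0))[light].sum()  # X=Y=0 terms are 0
    if Z > Cg * m1 ** 1.5 * m2:
        return False
    if gamma < 1 / 9:
        # Check (4) and the Y_i >= 3 rejection rule
        R = ((Y == 2) / (X + 1.0))[light].sum()
        if R > C1 * m2 ** 2 / m1:
            return False
        if np.any(light & (Y >= 3) & (X <= C2 * m1 / (m2 * n ** (1 / 3)))):
            return False
    return True
```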
The intuition behind the above algorithm is as follows: with high probability, all elements in the
set $B$ satisfy either $p_i > b/2$, or $q_i > b/2$ (or both). Given that these elements are "heavy", their
contribution to the $\ell_1$ distance will be accurately captured by the $\ell_1$ distance of their empirical
frequencies (where these empirical frequencies are based on the second set of samples, $S_2, T_2$).
For the elements that are not in set $B$ (the "light" elements), their empirical frequencies will,
in general, not accurately reflect their true probabilities, and hence the distance between the empirical distributions of the "light" elements will be misleading. The $Z$ statistic of Equation 3 is
designed specifically for this regime. If the denominator of this statistic were omitted, then this
would give an estimator for the squared $\ell_2$ distance between the distributions (scaled by a factor of
$m_1^2 m_2^2$). To see this, note that if $p_i$ and $q_i$ are small, then $\mathrm{Binomial}(m_1, p_i) \approx \mathrm{Poisson}(m_1 p_i)$
and $\mathrm{Binomial}(m_2, q_i) \approx \mathrm{Poisson}(m_2 q_i)$; furthermore,
a simple calculation yields that if $X_i \sim \mathrm{Poisson}(m_1 p_i)$ and $Y_i \sim \mathrm{Poisson}(m_2 q_i)$, then $\mathbb{E}\big[(m_2 X_i - m_1 Y_i)^2 - (m_2^2 X_i + m_1^2 Y_i)\big] = m_1^2 m_2^2 (p_i - q_i)^2$. The normalization by $X_i + Y_i$ "linearizes" the $Z$ statistic, essentially turning the
squared $\ell_2$ distance into an estimate of the $\ell_1$ distance between light elements of the two distributions. Similar results can possibly be obtained using other linear functions of $X_i$ and $Y_i$ in the
denominator, though we note that the "obvious" normalizing factor of $X_i + \frac{m_1}{m_2} Y_i$ does not seem to
work theoretically, and seems to have extremely poor performance in practice.
For the extreme case (corresponding to $\gamma < 1/9$) where $m_1 \approx n$ and $m_2 \approx \sqrt{n}/\epsilon^2$, the statistic
$Z$ might have a prohibitively large variance; this is essentially due to the "birthday paradox", which
might cause a constant number of rare elements (having probability $O(1/n)$) to occur twice in a
sample of size $m_2 \approx \sqrt{n}/\epsilon^2$. Each such element will contribute $\Theta(m_1^2) \approx n^2$ to the $Z$ statistic,
and hence the variance can be $\approx n^4$. The statistic $R$ of Equation (4) is tailored to deal with these
cases, and captures the intuition that we are more tolerant of indices $i$ for which $Y_i = 2$ if the
corresponding $X_i$ is larger. It is worth noting that one can also define a natural analog of the $R$
statistic corresponding to the indices $i$ for which $Y_i = 3$, etc., using which the robustness parameter
of the test can be improved. The final check, ensuring that in this regime with $m_1 \gg m_2$ there are
no elements for which $Y_i \ge 3$ but $X_i$ is small, rules out the remaining sets of distributions, $p, q$, for
which the variance of the $Z$ statistic is intolerably large.
Finally, we should emphasize that the crude step of using two independent batches of samples
(the first to obtain the partition of the domain into "heavy" and "light" elements, and the second to
actually compute the statistics) is for ease of analysis. As our empirical results of Section 4 suggest,
for practical applications one may want to use only the $Z$-statistic of (3), and one certainly should
not "waste" half the samples to perform the "heavy"/"light" partition.
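The Poisson identity above is easy to verify numerically; a minimal sketch, with arbitrary illustrative values of $m_1$, $m_2$, $p_i$, $q_i$:

```python
import numpy as np

rng = np.random.default_rng(0)
m1, m2, p, q = 500, 100, 0.003, 0.007
X = rng.poisson(m1 * p, size=200_000)
Y = rng.poisson(m2 * q, size=200_000)
stat = (m2 * X - m1 * Y) ** 2 - (m2 ** 2 * X + m1 ** 2 * Y)
# Monte Carlo mean should roughly match m1^2 * m2^2 * (p - q)^2
print(stat.mean(), (m1 * m2 * (p - q)) ** 2)
```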
3 Estimating Mixing Times in Markov Chains
The basic hypothesis testing question of distinguishing identical distributions from those with significant $\ell_1$ distance can be employed for several other practically relevant tasks. One example is the
problem of estimating the mixing time of Markov chains.
Consider a finite Markov chain with state space $[n]$, transition matrix $P = ((P(x, y)))$, and stationary distribution $\pi$. The $t$-step distribution starting at the point $x \in [n]$, $P_x^t(\cdot)$, is the probability
distribution on $[n]$ obtained by running the chain for $t$ steps starting from $x$.
Definition 1. The $\epsilon$-mixing time of a Markov chain with transition matrix $P = ((P(x, y)))$ is defined
as $t_{mix}(\epsilon) := \inf\big\{t \in \mathbb{N} : \sup_{x \in [n]} \frac{1}{2}\sum_{y \in [n]} |P_x^t(y) - \pi(y)| \le \epsilon\big\}$.
Definition 2. The average $t$-step distribution of a Markov chain $P$ with $n$ states is the distribution
$\overline{P}^t = \frac{1}{n}\sum_{x \in [n]} P_x^t$, that is, the distribution obtained by choosing $x$ uniformly from $[n]$ and walking
$t$ steps from the state $x$.
The connection between closeness testing and testing whether a Markov chain is close to mixing
was first observed by Batu et al. [6], who proposed testing the $\ell_1$ difference between the distributions
$P_x^{t_0}$ and $\overline{P}^{t_0}$, for every $x \in [n]$. The algorithm leveraged their equal sample-size hypothesis testing
results, drawing $\tilde{O}(n^{2/3}\log n)$ samples from both the distributions $P_x^{t_0}$ and $\overline{P}^{t_0}$. This yields an
overall running time of $\tilde{O}(n^{5/3}t_0)$.
Here, we note that our unequal sample-size hypothesis testing algorithm can yield an improved
runtime. Since the distribution $\overline{P}^{t_0}$ is independent of the starting state $x$, it suffices to take $\tilde{O}(n)$
samples from $\overline{P}^{t_0}$ once and $\tilde{O}(\sqrt{n})$ samples from $P_x^{t_0}$, for every $x \in [n]$. This results in a query and
runtime complexity of $\tilde{O}(n^{3/2}t_0)$. We sketch this algorithm below.
Algorithm 2 Testing for Mixing Times in Markov Chains
Given $t_0 \in \mathbb{N}$ and a finite Markov chain with state space $[n]$ and transition matrix $P = ((P(x, y)))$,
we wish to test
$$H_0 : t_{mix}\Big(O\Big(\frac{1}{\sqrt{n}}\Big)\Big) \le t_0, \quad \text{versus} \quad H_1 : t_{mix}(1/4) > t_0. \qquad (5)$$
1. Draw $O(\log n)$ samples $S_1, \ldots, S_{O(\log n)}$, each of size $\mathrm{Pois}(C_1 n)$, from the average $t_0$-step
distribution.
2. For each state $x \in [n]$ we will distinguish whether $\|P_x^{t_0} - \overline{P}^{t_0}\|_1 \le O(\frac{1}{\sqrt{n}})$, versus
$\|P_x^{t_0} - \overline{P}^{t_0}\|_1 > 1/4$, with probability of error $1/n$. We do this by running $O(\log n)$
runs of Algorithm 1, with the $i$-th run using $S_i$ and a fresh set of $\mathrm{Pois}(O(\sqrt{n}))$ samples
from $P_x^{t_0}$.
3. If all $n$ of the $\ell_1$ closeness testing problems are accepted, then we ACCEPT $H_0$.
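The following Python sketch wires the pieces together; it assumes a `next_node(x, rng)` oracle simulating one transition and reuses the `closeness_test` sketch given after Algorithm 1 (the repetition counts and `eps` threshold are illustrative, not tuned to the stated guarantees):

```python
import numpy as np

def test_mixing(next_node, n, t0, closeness_test, C1=1.0, seed=0):
    """Sketch of Algorithm 2: accept if every state's t0-step distribution
    is close to the average t0-step distribution."""
    rng = np.random.default_rng(seed)
    def walk(x, t):
        for _ in range(t):
            x = next_node(x, rng)
        return x
    reps = max(1, int(np.ceil(np.log(n))))
    # O(log n) batches of ~Pois(C1*n) samples from the average t0-step distribution
    batches = [np.array([walk(rng.integers(n), t0)
                         for _ in range(rng.poisson(C1 * n))])
               for _ in range(reps)]
    for x in range(n):
        for S in batches:
            m2 = max(2, rng.poisson(np.sqrt(n)))
            T = np.array([walk(x, t0) for _ in range(m2)])
            # split each sample in half to play the roles of (S1, S2) and (T1, T2)
            if not closeness_test(S[::2], S[1::2], T[::2], T[1::2], n, eps=0.25):
                return False  # reject H0: some state is far from mixed
    return True
```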
The above testing algorithm can be leveraged to estimate the mixing time of a Markov chain, via the
basic observation that if $t_{mix}(1/4) \le t_0$, then for any $\epsilon$, $t_{mix}(\epsilon) \le \frac{\log\epsilon}{\log 1/2}\,t_0$, and thus $t_{mix}(1/\sqrt{n}) \le
2\log n \cdot t_{mix}(1/4)$. Because $t_{mix}(1/4)$ and $t_{mix}(O(1/\sqrt{n}))$ differ by at most a factor of $\log n$,
by applying Algorithm 2 for a geometrically increasing sequence of $t_0$'s, and repeating each test
$O(\log t_0 + \log n)$ times, one obtains Corollary 1, restated below:
Corollary 1 For a finite Markov chain with state space $[n]$ and a next node oracle, there is an
algorithm that estimates the mixing time, $\tau_{mix}$, up to a multiplicative factor of $\log n$, that uses
$\tilde{O}(n^{3/2}\tau_{mix})$ time and queries to the next node oracle.
4 Empirical Results
Both our formal algorithms and the corresponding theorems involve some unwieldy constant factors
(that can likely be reduced significantly). Nevertheless, in this section we provide some evidence
that the statistic at the core of our algorithm can be fruitfully used in practice, even for surprisingly
small sample sizes.
4.1 Testing similarity of words
An extremely important primitive in natural language processing is the ability to estimate the semantic similarity of two words. Here, we show that the $Z$ statistic,
$$Z = \sum_i \frac{(m_2 X_i - m_1 Y_i)^2 - (m_2^2 X_i + m_1^2 Y_i)}{m_1^{3/2} m_2 (X_i + Y_i)},$$
which is the core of our testing algorithm, can accurately distinguish whether two words are very similar based on surprisingly small samples of the contexts in
which they occur. Specifically, for each pair of words, $a, b$ that we consider, we select $m_1$ random
occurrences of $a$ and $m_2$ random occurrences of word $b$ from the Google books corpus, using the
Google Books Ngram Dataset.² We then compare the sample of words that follow $a$ with the sample
of words that follow $b$. Henceforth, we refer to these as samples of the set of bi-grams involving
each word.
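Computing this statistic from two samples of contexts takes only a few lines; a minimal sketch, where the Counter-based interface is our own illustration (this version skips the heavy/light partition, in line with the practical advice given at the end of Section 2):

```python
import numpy as np
from collections import Counter

def z_statistic(sample_a, sample_b):
    # sample_a, sample_b: lists of context words following word a / word b
    m1, m2 = len(sample_a), len(sample_b)
    ca, cb = Counter(sample_a), Counter(sample_b)
    z = 0.0
    for w in set(ca) | set(cb):
        X, Y = ca[w], cb[w]  # X + Y >= 1 for every w in the union
        z += ((m2 * X - m1 * Y) ** 2 - (m2 ** 2 * X + m1 ** 2 * Y)) / (X + Y)
    return z / (m1 ** 1.5 * m2)
```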
Figure 1(a) illustrates the $Z$ statistic for various pairs of words that range from rather similar words
like "smart" and "intelligent", to essentially identical word pairs such as "grey" and "gray" (whose
usage differs mainly as a result of historical variation in the preference for one spelling over the
other); the sample size of bi-grams containing the first word is fixed at $m_1 = 1{,}000$, and the sample
size corresponding to the second word varies from $m_2 = 50$ through $m_2 = 1{,}000$. To provide a
frame of reference, we also compute the value of the statistic for independent samples corresponding
to the same word (i.e. two different samples of words that follow "wolf"); these are depicted in red.
For comparison, we also plot the total variation distance between the empirical distributions of
the pair of samples, which does not clearly differentiate between pairs of identical words, versus
different words, particularly for the smaller sample sizes.
One subtle point is that the issue with using the empirical distance between the distributions goes
beyond simply not having a consistent reference point. For example, let $X$ denote a large sample
of size $m_1$ from distribution $p$, $X'$ denote a small sample of size $m_2$ from $p$, and $Y$ denote a
small sample of size $m_2$ from a different distribution $q$. It is tempting to hope that the empirical
distance between $X$ and $X'$ will be smaller than the empirical distance between $X$ and $Y$. As
Figure 1(b) illustrates, this is not always the case, even for natural distributions: for the specific
example illustrated in the figure, over much of the range of $m_2$, the empirical distance between $X$
and $X'$ is indistinguishable from that of $X$ and $Y$, though the $Z$ statistic easily discerns that these
distributions are very different.
This point is further emphasized in Figure 2, which depicts this phenomena in the synthetic setting
where p = Unif[n] is the uniform distribution over n elements, and q is the distribution whose
elements have probabilities (1 ? ?)/n, for ? = 1/2. The second and fourth plots represent the
probability that the distance between two empirical distributions of samples from p is smaller than
the distance between the empirical distributions of the samples from p and q; the first and third
plots represent the analogous probability involving the Z statistic. The first two plots correspond to
n = 1,000 and the last two correspond to n = 50,000. In all plots, we consider a pair of samples of respective sizes m1 and m2, as m1 and m2 range between √n and n.
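A small simulation in the spirit of Figure 2 can be written directly from this description; the sketch below reuses z_statistic from the earlier snippet. The exact perturbation pattern (raising half of the probabilities by ε/n and lowering the other half) is our assumption; any q at the stated distance from p would do.

```python
import numpy as np

rng = np.random.default_rng(0)

def comparison_trial(n, m1, m2, eps=0.5):
    """One trial: is Z(sample(p), sample(q)) larger than Z(sample(p), sample(p))?
    Assumes n is even so that the perturbed q sums to 1."""
    p = np.full(n, 1.0 / n)
    q = np.full(n, (1.0 - eps) / n)
    q[: n // 2] = (1.0 + eps) / n            # assumed +/- eps/n perturbation
    X = np.bincount(rng.choice(n, size=m1, p=p), minlength=n)   # shared p-sample
    Xp = np.bincount(rng.choice(n, size=m2, p=p), minlength=n)  # fresh p-sample
    Y = np.bincount(rng.choice(n, size=m2, p=q), minlength=n)   # q-sample
    return z_statistic(X, Y, m1, m2) > z_statistic(X, Xp, m1, m2)

n, m1, m2 = 1000, 200, 100
print(np.mean([comparison_trial(n, m1, m2) for _ in range(100)]))
```

Averaging the indicator over trials estimates the probabilities shown in the first and third panels of Figure 2.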
2 The Google Books Ngram Dataset is freely available here: http://storage.googleapis.com/books/ngrams/books/datasetsv2.html
2$"$('%$,3)4.,5..-)6'$%+)78)97%:+)
2$"$('%$,3)4.,5..-)6'$%+)78)97%:+)
2$"$('%$,3)4.,5..-)6'$%+)78)97%:+)
2$"$('%$,3)4.,5..-)6'$%+)78)97%:+)
/0%)!)+,'1+1&)
740&++7$&40#
3!(%0#
####5%&6#
3!(%0#
/0%)!)+,'1+1&
/0%)!)+,'1+1&
)
)
/0%)!)+,'1+1&
)
!"#$%$&'()*$+,'-&.)
##$%&'#
(+!*30#
4&(%+'#
./0#
.1*2#
##$%&'# '%() ##$%&'#
#
'%()#
'%()#
)*+,#
,*-#
$%&'#
$%&'#
$%&'#
$%&'#
$%&'#
$%('#
?
?
?
?
?
?102
?
?
?
?
?
!"#2
m
!"#$%$&'()*$+,'-&.
!"#$%$&'()*$+,'-&.
) )
!"#$%$&'()*$+,'-&.
)
$%&'#
$%&'#
'%()#
'%()#
$%&'#
$%&'#
$%&'#
$%&'#
!"#
?
?
?
?103
?
?
?
?
?
?
?102
?
?
?
?
!
?
?
?
?
?
?103
?
m"#2
(a)
$%&'#
$%&'#
$%&'#
$%&'#
?
?
?
?
?
?
!
m"#2
?
?
?
?
?103
?
!"#
!"#
?
?
?
?
?
?102
?
?
?
?
?
!"#
!m
"# 2
$%&'#
'%()#
?
?
?
?
?103
?
?
?
?
?
?
?102
?
?
?
?
(b)
Figure 1: (a) Two measures of the similarity between words, based on samples of the bi-grams containing each word. Each line represents a pair of words, and is obtained by taking a sample of m1 = 1,000 bi-grams containing the first word, and m2 = 50, ..., 1,000 bi-grams containing the second word, where m2 is depicted along the x-axis in logarithmic scale. In both plots, the red lines represent pairs of identical words (e.g. "wolf/wolf", "almost/almost", ...). The blue lines represent pairs of similar words (e.g. "wolf/fox", "almost/nearly", ...), and the black line represents the pair "grey/gray", whose distribution of bi-grams differs because of historical variations in preference for each spelling. Solid lines indicate the average over 200 trials for each word pair and choice of m2, with error bars of one standard deviation depicted. The left plot depicts our statistic, which clearly distinguishes identical words, and demonstrates some intuitive sense of semantic distance. The right plot depicts the total variation distance between the empirical distributions, which does not successfully distinguish the identical words, given the range of sample sizes considered. The plot would not be significantly different if other distance metrics between the empirical distributions, such as f-divergence, were used in place of total variation distance. Finally, note the extremely uniform magnitudes of the error bars in the left plot, as m2 increases, which is an added benefit of the X_i + Y_i normalization term in the Z statistic. (b) Illustration of how the empirical distance can be misleading: here, the empirical distance between the distributions of samples of bi-grams for "wolf/wolf" is indistinguishable from that for the pair "wolf/fox*" over much of the range of m2; nevertheless, our statistic clearly discerns that these are significantly different distributions. Here, "fox*" denotes the distribution of bi-grams whose first word is "fox", restricted to only the most common 100 bi-grams.
[Figure 2 appears here: four heatmaps over m1 and m2, each axis ranging from √n to n. The first and third panels plot Pr[Z(p_{m1}, q_{m2}) > Z(p_{m1}, p_{m2})]; the second and fourth plot Pr[||p_{m1} − q_{m2}|| > ||p_{m1} − p_{m2}||], for n = 1,000 (first two) and n = 50,000 (last two). The color scale runs from 0.5 to 1.]
Figure 2: The first and third plots depict the probability that the Z statistic applied to samples of sizes m1, m2 drawn from p = Unif[n] is smaller than the Z statistic applied to a sample of size m1 drawn from p and m2 drawn from q, where q is a perturbed version of p in which all elements have probability (1 ± 1/2)/n. The second and fourth plots depict the probability that the empirical distance between a pair of samples (of respective sizes m1, m2) drawn from p is less than the empirical distance between a sample of size m1 drawn from p and m2 drawn from q. The first two plots correspond to n = 1,000 and the last two correspond to n = 50,000. In all plots, m1 and m2 range between √n and n on a logarithmic scale. In all plots the colors depict the average probability based
on 100 trials.
5,423 | 5,909 | Learning Causal Graphs with Small Interventions
Karthikeyan Shanmugam1 , Murat Kocaoglu2 , Alexandros G. Dimakis3 , Sriram Vishwanath4
Department of Electrical and Computer Engineering
The University of Texas at Austin, USA
1
karthiksh@utexas.edu,2 mkocaoglu@utexas.edu,
3
dimakis@austin.utexas.edu,4 sriram@ece.utexas.edu
Abstract
We consider the problem of learning causal networks with interventions, when
each intervention is limited in size under Pearl?s Structural Equation Model with
independent errors (SEM-IE). The objective is to minimize the number of experiments to discover the causal directions of all the edges in a causal graph. Previous
work has focused on the use of separating systems for complete graphs for this
task. We prove that any deterministic adaptive algorithm needs to be a separating system in order to learn complete graphs in the worst case. In addition, we
present a novel separating system construction, whose size is close to optimal and
is arguably simpler than previous work in combinatorics. We also develop a novel
information theoretic lower bound on the number of interventions that applies in
full generality, including for randomized adaptive learning algorithms.
For general chordal graphs, we derive worst case lower bounds on the number
of interventions. Building on observations about induced trees, we give a new
deterministic adaptive algorithm to learn directions on any chordal skeleton completely. In the worst case, our achievable scheme is an α-approximation algorithm where α is the independence number of the graph. We also show that there exist graph classes for which the sufficient number of experiments is close to the lower bound. In the other extreme, there are graph classes for which the required number of experiments is multiplicatively α away from our lower bound.
In simulations, our algorithm almost always performs very close to the lower
bound, while the approach based on separating systems for complete graphs is
significantly worse for random chordal graphs.
1 Introduction
Causality is a fundamental concept in sciences and philosophy. The mathematical formulation of
a theory of causality in a probabilistic sense has received significant attention recently (e.g. [1-5]).
A formulation advocated by Pearl considers the structural equation models: In this framework,
X is a cause of Y , if Y can be written as f (X, E), for some deterministic function f and some
latent random variable E. Given two causally related variables X and Y , it is not possible to infer
whether X causes Y or Y causes X from random samples, unless certain assumptions are made
on the distribution of E and/or on f [6, 7]. For more than two random variables, directed acyclic
graphs (DAGs) are the most common tool used for representing causal relations. For a given DAG
D = (V, E), the directed edge (X, Y) ∈ E shows that X is a cause of Y.
If we make no assumptions on the data generating process, the standard way of inferring the causal
directions is by performing experiments, the so-called interventions. An intervention requires modifying the process that generates the random variables: The experimenter has to enforce values on
the random variables. This process is different than conditioning as explained in detail in [1].
1
The natural problem to consider is therefore minimizing the number of interventions required to
learn a causal DAG. Hauser et al. [2] developed an efficient algorithm that minimizes this number
in the worst case. The algorithm is based on optimal coloring of chordal graphs and requires at most ⌈log χ⌉ interventions to learn any causal graph, where χ is the chromatic number of the chordal skeleton.
However, one important open problem appears when one also considers the size of the used interventions: Each intervention is an experiment where the scientist must force a set of variables to take
random values. Unfortunately, the interventions obtained in [2] can involve up to n/2 variables. The
simultaneous enforcing of many variables can be quite challenging in many applications: for example in biology, some variables may not be enforceable at all or may require complicated genomic
interventions for each parameter.
In this paper, we consider the problem of learning a causal graph when intervention sizes are
bounded by some parameter k. The first work we are aware of for this problem is by Eberhardt et al. [3], which provides an achievable scheme. Furthermore, [8] shows that the set of interventions to fully identify a causal DAG must satisfy a specific set of combinatorial conditions called a
separating system1 , when the intervention size is not constrained or is 1. In [4], with the assumption
that the same holds true for any intervention size, Hyttinen et al. draw connections between causality
and known separating system constructions. One open problem is: If the learning algorithm is adaptive after each intervention, is a separating system still needed or can one do better? It was believed
that adaptivity does not help in the worst case [8] and that one still needs a separating system.
Our Contributions: We obtain several novel results for learning causal graphs with interventions
bounded by size k. The problem can be separated for the special case where the underlying undirected graph (the skeleton) is the complete graph and the more general case where the underlying
undirected graph is chordal.
1. For complete graph skeletons, we show that any adaptive deterministic algorithm needs a (n, k)
separating system. This implies that lower bounds for separating systems also hold for adaptive
algorithms and resolves the previously mentioned open problem.
2. We present a novel combinatorial construction of a separating system that is close to the previous
lower bound. This simple construction may be of more general interest in combinatorics.
3. Recently [5] showed that randomized adaptive algorithms need only log log n interventions with high probability for the unbounded case. We extend this result and show that O((n/k) log log k) interventions of size bounded by k suffice with high probability.
4. We present a more general information theoretic lower bound of n/(2k) to capture the performance of such randomized algorithms.
5. We extend the lower bound for adaptive algorithms for general chordal graphs. We show that
over all orientations, the number of experiments from a (χ(G), k) separating system is needed, where χ(G) is the chromatic number of the skeleton graph.
6. We show two extremal classes of graphs. For one of them, the interventions through a (χ, k) separating system are sufficient. For the other class, we need α(⌈χ/(2k)⌉ − 1) ≈ n/(2k) experiments in the worst case.
7. We exploit the structural properties of chordal graphs to design a new deterministic adaptive algorithm that uses the idea of separating systems together with adaptability to Meek rules. We
simulate our new algorithm and empirically observe that it performs quite close to the (χ, k) separating system. Our algorithm requires far fewer interventions compared to (n, k) separating
systems.
2 Background and Terminology
2.1 Essential graphs
A causal DAG D = (V, E) is a directed acyclic graph where V = {x1 , x2 . . . xn } is a set of random
variables and (x, y) 2 E is a directed edge if and only if x is a direct cause of y. We adopt Pearl?s
structural equation model with independent errors (SEM-IE) in this work (see [1] for more details).
1 A separating system is a 0-1 matrix with n distinct columns and each row has at most k ones.
Variables in S ⊆ V cause x_i, if x_i = f({x_j}_{j∈S}, e_y) where e_y is a random variable independent of all other variables.
The causal relations of D imply a set of conditional independence (CI) relations between the variables. A conditional independence relation is of the following form: Given Z, the set X and the set
Y are conditionally independent for some disjoint subsets of variables X, Y, Z. Due to this, causal
DAGs are also called causal Bayesian networks. A set V of variables is Bayesian with respect to a
DAG D if the joint probability distribution of V can be factorized as a product of marginals of every
variable conditioned on its parents.
All the CI relations that are learned statistically through observations can also be inferred from the
Bayesian network using a graphical criterion called the d-separation [9] assuming that the distribution is faithful to the graph 2 . Two causal DAGs are said to be Markov equivalent if they encode the
same set of CIs. Two causal DAGs are Markov equivalent if and only if they have the same skeleton3
and the same immoralities4 . The class of causal DAGs that encode the same set of CIs is called the
Markov equivalence class. We denote the Markov equivalence class of a DAG D by [D]. The graph
union5 of all DAGs in [D] is called the essential graph of D. It is denoted E(D). E(D) is always a
chain graph with chordal6 chain components 7 [11].
The d-separation criterion can be used to identify the skeleton and all the immoralities of the underlying causal DAG [9]. Additional edges can be identified using the fact that the underlying DAG
is acyclic and there are no more immoralities. Meek derived 3 local rules (Meek rules), introduced
in [12], to be recursively applied to identify every such additional edge (see Theorem 3 of [13]). The
repeated application of Meek rules on this partially directed graph with identified immoralities until
they can no longer be used yields the essential graph.
2.2 Interventions and Active Learning
Given a set of variables V = {x_1, ..., x_n}, an intervention on a set S ⊆ V of the variables is an experiment where the performer forces each variable s ∈ S to take the value of another independent (from other variables) variable u, i.e., s = u.
is formalized by the do operator by Pearl [1]. An intervention modifies the causal DAG D as
follows: The post intervention DAG D{S} is obtained by removing the connections of nodes in S to
their parents. The size of an intervention S is the number of intervened variables, i.e., |S|. Let S^c denote the complement of the set S.
CI-based learning algorithms can be applied to D{S} to identify the set of removed edges, i.e.
parents of S [9], and the remaining adjacent edges in the original skeleton are declared to be the
children. Hence,
(R0) The orientations of the edges of the cut between S and S^c in the original DAG D can be
inferred.
Then, 4 local Meek rules (introduced in [12]) are repeatedly applied to the original DAG D with
the new directions learnt from the cut to learn more till no more directed edges can be identified.
Further application of CI-based algorithms on D will reveal no more information. The Meek rules
are given below:
(R1) (a − b) is oriented as (a → b) if ∃c s.t. (c → a) and (c, b) ∉ E.
(R2) (a − b) is oriented as (a → b) if ∃c s.t. (a → c) and (c → b).
(R3) (a − b) is oriented as (a → b) if ∃c, d s.t. (a − c), (a − d), (c → b), (d → b) and (c, d) ∉ E.
2 Given a Bayesian network, any CI relation implied by d-separation holds true. All the CIs implied by the
distribution can be found using d-separation if the distribution is faithful. Faithfulness is a widely accepted
assumption, since it is known that only a measure zero set of distributions are not faithful [10].
3 Skeleton of a DAG is the undirected graph obtained when directed edges are converted to undirected edges.
4 An induced subgraph on X, Y, Z is an immorality if X and Y are disconnected, X → Z and Z ← Y.
5 Graph union of two DAGs D1 = (V, E1) and D2 = (V, E2) with the same skeleton is a partially directed graph D = (V, E), where (v_a, v_b) ∈ E is undirected if the edges (v_a, v_b) in E1 and E2 have different directions, and directed as v_a → v_b if the edges (v_a, v_b) in E1 and E2 are both directed as v_a → v_b.
6 An undirected graph is chordal if it has no induced cycle of length greater than 3.
7 This means that E(D) can be decomposed as a sequence of undirected chordal graphs G1, G2, ..., Gm (chain components) such that there is a directed edge from a vertex in Gi to a vertex in Gj only if i < j.
(R4) (a − c) is oriented as (a → c) if ∃b, d s.t. (b → c), (a − d), (a − b), (d → b) and (c, d) ∉ E.
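To make the recursive application of these rules concrete, here is a small Python sketch that repeatedly applies R1-R4 to a partially directed graph until nothing more can be oriented. It is our own illustration of the rules as stated above, not the authors' code; rule R0 is assumed to have been applied already, i.e., the cut edges of the latest intervention are already in `directed`.

```python
def meek_closure(directed, undirected):
    """directed: set of ordered pairs (a, b) meaning a -> b.
    undirected: set of frozensets {a, b}. Both sets are modified in place."""
    nodes = {x for e in undirected for x in e} | {x for e in directed for x in e}

    def adj(x, y):  # adjacency in the partially directed graph
        return (x, y) in directed or (y, x) in directed or frozenset((x, y)) in undirected

    def orient_one():
        for e in list(undirected):
            a, b = tuple(e)
            for u, v in ((a, b), (b, a)):   # try both orientations of the edge
                others = [c for c in nodes if c not in (u, v)]
                r1 = any((c, u) in directed and not adj(c, v) for c in others)
                r2 = any((u, c) in directed and (c, v) in directed for c in others)
                r3 = any(frozenset((u, c)) in undirected and frozenset((u, d)) in undirected
                         and (c, v) in directed and (d, v) in directed and not adj(c, d)
                         for c in others for d in others if c != d)
                r4 = any(frozenset((u, w)) in undirected and frozenset((u, d)) in undirected
                         and (w, v) in directed and (d, w) in directed and not adj(v, d)
                         for w in others for d in others if w != d)
                if r1 or r2 or r3 or r4:
                    undirected.discard(e)
                    directed.add((u, v))
                    return True
        return False

    while orient_one():
        pass
    return directed, undirected
```

The brute-force scans are quadratic per edge, which is fine for illustration; a production implementation would index neighborhoods.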
The concepts of essential graphs and Markov equivalence classes are extended in [14] to incorporate
the role of interventions: Let I = {I1 , I2 , ..., Im }, be a set of interventions and let the above process
be followed after each intervention. Interventional Markov equivalence class (I equivalence) of
a DAG is the set of DAGs that represent the same set of probability distributions obtained when
the above process is applied after every intervention in I. It is denoted by [D]I . Similar to the
observational case, I essential graph of a DAG D is the graph union of all DAGs in the same I
equivalence class; it is denoted by EI (D). We have the following sequence:
D →(a) CI learning → Meek rules → E(D) → I1 →(b) learn by R0 → Meek rules → E_{I1}(D) → I2 → ... → E_{I1,I2}(D) → ...    (1)
Therefore, after a set of interventions I has been performed, the essential graph EI (D) is a graph
with some oriented edges that captures all the causal relations we have discovered so far, using I.
Before any interventions happened, E(D) captures the initially known causal directions. It is known
that EI (D) is a chain graph with chordal chain components. Therefore when all the directed edges
are removed, the graph becomes a set of disjoint chordal graphs.
2.3 Problem Definition
We are interested in the following question:
Problem 1. Given that all interventions in I are of size at most k < n/2 variables, i.e., for each intervention I, |I| ≤ k, ∀I ∈ I, minimize the number of interventions |I| such that the partially directed graph with all directions learned so far satisfies E_I(D) = D.
The question is the design of an algorithm that computes the small set of interventions I given E(D).
Note, of course, that the unknown directions of the edges D are not available to the algorithm. One
can view the design of I as an active learning process to find D from the essential graph E(D). E(D)
is a chain graph with undirected chordal components and it is known that interventions on one chain component do not affect the discovery process of directed edges in the other components [15]. So
we will assume that E(D) is undirected and a chordal graph to start with. Our notion of algorithm
does not consider the time complexity (of statistical algorithms involved) of steps a and b in (1).
Given m interventions, we only consider efficiently computing Im+1 using (possibly) the graph
E{I1 ,...Im } . We consider the following three classes of algorithms:
1. Non-adaptive algorithm: The choice of I is fixed prior to the discovery process.
2. Adaptive algorithm: At every step m, the choice of Im+1 is a deterministic function of
E{I1 ,...Im } (D).
3. Randomized adaptive algorithm: At every step m, the choice of Im+1 is a random function
of E{I1 ,...Im } (D).
The problem is different for complete graphs versus more general chordal graphs since rule R1
becomes applicable when the graph is not complete. Thus we give a separate treatment for each
case. First, we provide algorithms for all three cases for learning the directions of complete graphs
E(D) = Kn (undirected complete graph) on n vertices. Then, we generalize to chordal graph
skeletons and provide a novel adaptive algorithm with upper and lower bounds on its performance.
The missing proofs of the results that follow can be found in the Appendix.
3 Complete Graphs
In this section, we consider the case where the skeleton we start with, i.e. E(D), is an undirected
complete graph (denoted Kn ). It is known that at any stage in (1) starting from E(D), rules R1,
R3 and R4 do not apply. Further, the underlying DAG D is a directed clique. The directed clique is characterized by an ordering σ on [1 : n] such that, in the subgraph induced by σ(i), σ(i + 1), ..., σ(n), σ(i) has no incoming edges. Let D be denoted by K⃗_n(σ) for some ordering σ. Let [1 : n] denote the set {1, 2, ..., n}. We need the following results on a separating system for our first
result regarding adaptive and non-adaptive algorithms for a complete graph.
4
3.1 Separating System
Definition 1. [16, 17] An (n, k)-separating system on an n element set [1 : n] is a set of subsets
S = {S_1, S_2, ..., S_m} such that |S_i| ≤ k and for every pair i, j there is a subset S ∈ S such that either i ∈ S, j ∉ S or j ∈ S, i ∉ S. If a pair i, j satisfies the above condition with respect to S, then S is said to separate the pair i, j. Here, we consider the case when k < n/2.
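Definition 1 is easy to check mechanically: a family separates [1 : n] iff no two elements have the same membership pattern. A small Python checker (our own, for illustration):

```python
def is_separating_system(sets, n, k):
    """True iff every set has size <= k and every pair i != j in range(n)
    is separated by some set; equivalently, the membership signatures of
    all n elements are pairwise distinct."""
    if any(len(s) > k for s in sets):
        return False
    signature = [frozenset(t for t, s in enumerate(sets) if e in s)
                 for e in range(n)]
    return len(set(signature)) == n
```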
In [16], Katona gave an (n, k)-separating system together with a lower bound on |S|. In [17],
Wegener gave a simpler argument for the lower bound and also provided a tighter upper bound than
the one in [16]. In this work, we give a different construction below where the separating system size is at most ⌈log_⌈n/k⌉ n⌉ larger than the construction of Wegener. However, our construction has
a simpler description.
Lemma 1. There is a labeling procedure that produces distinct length-ℓ labels for all elements in [1 : n] using letters from the integer alphabet {0, 1, ..., a} where ℓ = ⌈log_a n⌉. Further, in every digit (or position), any integer letter is used at most ⌈n/a⌉ times.
Once we have a set of n string labels as in Lemma 1, our separating system construction is straightforward.
Theorem 1. Consider an alphabet A = [0 : ⌈n/k⌉] of size ⌈n/k⌉ + 1 where k < n/2. Label every element of an n-element set using a distinct string of letters from A of length ℓ = ⌈log_⌈n/k⌉ n⌉ using the procedure in Lemma 1 with a = ⌈n/k⌉. For every 1 ≤ i ≤ ℓ and 1 ≤ j ≤ ⌈n/k⌉, choose the subset S_{i,j} of vertices whose string's i-th letter is j. The set of all such subsets S = {S_{i,j}} is a k-separating system on n elements and |S| ≤ ⌈n/k⌉ ⌈log_⌈n/k⌉ n⌉.
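A direct implementation of this construction is short. The sketch below is ours, not the paper's exact procedure: it uses plain base-(a+1) counting for the labels, and where Lemma 1's balanced labeling would cap every set's size at k directly, we instead split any oversized set after the fact (splitting a set never destroys separation: a set containing exactly one element of a pair still leaves that element alone in one of its pieces).

```python
import math

def separating_system(n, k):
    """An (n, k)-separating family in the spirit of Theorem 1: label each
    element e in [0, n) by its base-(a+1) digits with a = ceil(n/k), take
    one set per (position, nonzero letter), and split oversized sets."""
    a = math.ceil(n / k)
    ell = max(1, math.ceil(math.log(n, a + 1)))  # enough digits for distinct labels
    family = []
    for d in range(ell):
        for j in range(1, a + 1):                # letter 0 forms no set
            s = [e for e in range(n) if (e // (a + 1) ** d) % (a + 1) == j]
            family.extend(s[i:i + k] for i in range(0, len(s), k))
    return [s for s in family if s]
```

Two labels that differ in some digit have a nonzero letter at that digit in at least one of them, so every pair is separated. Sanity-checking with the checker sketched after Definition 1, e.g. is_separating_system(separating_system(20, 4), 20, 4), returns True.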
3.2 Adaptive algorithms: Equivalence to a Separating System
Consider any non-adaptive algorithm that designs a set of interventions I, each of size at most k, to discover K⃗_n(σ). I has to be a separating system in the worst case over all σ. This is already
known. Now, we prove the necessity of a separating system for deterministic adaptive algorithms in
the worst case.
Theorem 2. Let there be an adaptive deterministic algorithm A that designs the set of interventions I such that the final graph learnt E_I(D) = K⃗_n(σ) for any ground truth ordering σ, starting from the initial skeleton E(D) = K_n. Then, there exists a σ such that A designs an I which is a separating
system.
The theorem above is independent of the individual intervention sizes. Therefore, we have the
following theorem, which is a direct corollary of Theorem 2:
Theorem 3. In the worst case over σ, any adaptive or non-adaptive deterministic algorithm on the DAG K⃗_n(σ) has to be such that (n/k) log_⌈n/k⌉ n ≤ |I|. There is a feasible I with |I| ≤ (⌈n/k⌉ − 1)⌈log_⌈n/k⌉ n⌉.
Proof. By Theorem 2, we need a separating system in the worst case and the lower and upper bounds
are from [16, 17].
3.3 Randomized Adaptive Algorithms
In this section, we show that the total number of variable accesses to fully identify the complete causal DAG is Ω(n).
Theorem 4. To fully identify a complete causal DAG K⃗_n(σ) on n variables using size-k interventions, n/(2k) interventions are necessary. Also, the total number of variables accessed is at least n/2.
The lower bound in Theorem 4 is information theoretic. We now give a randomized algorithm that
requires O((n/k) log log k) experiments in expectation. We provide a straightforward generalization
of [5], where the authors gave a randomized algorithm for unbounded intervention size.
Theorem 5. Let E(D) be K_n and the experiment size k = n^r for some 0 < r < 1. Then there exists a randomized adaptive algorithm which designs an I such that E_I(D) = D with probability polynomial in n, and |I| = O((n/k) log log k) in expectation.
5
4 General Chordal Graphs
In this section, we turn to interventions on a general DAG G. After the initial stages in (1), E(G)
is a chain graph with chordal chain components. There are no further immoralities throughout the
graph. In this work, we focus on one of the chordal chain components. Thus the DAG D we work
on is assumed to be a directed graph with no immoralities and whose skeleton E(D) is chordal. We
are interested in recovering D from E(D) using interventions of size at most k following (1).
4.1 Bounds for Chordal skeletons
We provide a lower bound for both adaptive and non-adaptive deterministic schemes for a chordal
skeleton E(D). Let χ(E(D)) be the coloring number of the given chordal graph. Since chordal graphs are perfect, it is the same as the clique number.
Theorem 6. Given a chordal E(D), in the worst case over all DAGs D (which have skeleton E(D) and no immoralities), if every intervention is of size at most k, then |I| ≥ (χ(E(D))/k) log_⌈χ(E(D))/k⌉ χ(E(D)) for any adaptive and non-adaptive algorithm with E_I(D) = D.
Upper bound: Clearly, the separating system based algorithm of Section 3 can be applied to the vertices in the chordal skeleton E(D) and it is possible to find all the directions. Thus, |I| ≤ (n/k)⌈log_⌈n/k⌉ n⌉ ≤ (α(E(D)) χ(E(D))/k)⌈log_⌈n/k⌉ n⌉. This with the lower bound implies an α approximation algorithm (since log_⌈n/k⌉ n ≈ log_⌈χ(E(D))/k⌉ χ(E(D)), under a mild assumption χ(E(D)) ≥ n^ε).
Remark: The separating system on n nodes gives an α approximation. However, the new algorithm in Section 4.3 exploits chordality and performs much better empirically. It is possible to show that our heuristic also has an α approximation guarantee, but we skip that.
4.2 Two extreme counter examples
We provide two classes of chordal skeletons G: One for which the number of interventions close
to the lower bound is sufficient and the other for which the number of interventions needed is very
close to the upper bound.
Theorem 7. There exist chordal skeletons such that for any algorithm with intervention size constraint k, the number of interventions |I| required is at least α(⌈χ/(2k)⌉ − 1), where α and χ are the independence number and chromatic number respectively. There exist chordal graph classes such that |I| = ⌈χ/k⌉⌈log_⌈χ/k⌉ χ⌉ is sufficient.
4.3 An Improved Algorithm using Meek Rules
In this section, we design an adaptive deterministic algorithm that anticipates Meek rule R1 usage
along with the idea of a separating system. We evaluate this experimentally on random chordal
graphs. First, we make a few observations on learning connected directed trees T from the skeleton
E(T ) (undirected trees are chordal) that do not have immoralities using Meek rule R1 where every
intervention is of size k = 1. Because the tree has no cycle, Meek rules R2-R4 do not apply.
Lemma 2. Every node in a directed tree with no immoralities has at most one incoming edge. There
is a root node with no incoming edges and intervening on that node alone identifies the whole tree
using repeated application of rule R1.
Lemma 3. If every intervention in I is of size at most 1, learning all directions on a directed tree T with no immoralities can be done adaptively with at most |I| ≤ O(log_2 n), where n is the number of vertices in the tree. The algorithm runs in time poly(n).
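One way to realize the O(log_2 n) bound, assuming Lemma 2: after intervening on a node v, rule R0 plus R1 orient everything except the one subtree of v that contains the (unknown) root, so choosing v so that every component left by its removal has at most half the vertices halves the unresolved part each round. The centroid finder below is our sketch of that selection step, not the paper's code.

```python
def centroid(adj, nodes):
    """Return a vertex of the subtree induced on `nodes` whose removal
    leaves components of size at most floor(len(nodes)/2).
    adj: dict mapping each vertex to an iterable of its neighbours."""
    nodes = set(nodes)
    best, best_worst = None, float("inf")
    for v in nodes:
        worst, seen = 0, {v}
        for u in adj[v]:
            if u not in nodes or u in seen:
                continue
            stack, size = [u], 0          # size of u's component once v is deleted
            seen.add(u)
            while stack:
                x = stack.pop()
                size += 1
                for y in adj[x]:
                    if y in nodes and y not in seen:
                        seen.add(y)
                        stack.append(y)
            worst = max(worst, size)
        if worst < best_worst:
            best, best_worst = v, worst
    return best
```

The brute-force scan is O(n²); a single DFS with subtree sizes would do it in O(n), but this version keeps the halving argument visible.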
Lemma 4. Given any chordal graph and a valid coloring, the graph induced by any two color
classes is a forest.
In the next section, we combine the above single intervention adaptive algorithm on directed trees
which uses Meek rules, with that of the non-adaptive separating system approach.
6
4.3.1 Description of the algorithm
The key motivation behind the algorithm is that the subgraph induced by a pair of color classes is a forest (Lemma 4). Choosing the right node to intervene on leaves only a small subtree unlearnt, as in the proof of Lemma
3. In subsequent steps, suitable nodes in the remaining subtrees could be chosen until all edges are
learnt. We give a brief description of the algorithm below.
Let G denote the initial undirected chordal skeleton E(D) and let χ be its coloring number. Consider a (χ, k) separating system S = {S_i}. To intervene on the actual graph, an intervention set I_i corresponding to S_i is chosen. We would like to intervene on a node of color c ∈ S_i.
Consider a node v of color c. Now, we attach a score P(v, c) as follows. For any color c′ ∉ S_i, consider the induced forest F(c, c′) on the color classes c and c′ in G. Consider the tree T(v, c, c′) containing node v in F. Let d(v) be the degree of v in T. Let T_1, T_2, ..., T_{d(v)} be the resulting disjoint trees after node v is removed from T. If v is intervened on, according to the proof of Lemma 3: a) All edge directions in all trees T_j except one of them would be learnt when applying Meek Rules and rule R0. b) All the directions from v to all its neighbors would be found.
The score is taken to be the total number of edge directions guaranteed to be learnt in the worst case. Therefore, the score P(v) is:
P(v) = Σ_{c′ : |{c, c′} ∩ S_i| = 1} ( |T(v, c, c′)| − max_{1 ≤ j ≤ d(v)} |T_j| ).
The node with the
highest score among the color class c is used for the intervention I_i. After intervening on I_i, all the
edges whose directions are known through Meek Rules (by repeated application till nothing more
can be learnt) and R0 are deleted from G. Once S is processed, we recolor the sparser graph G. We
find a new S with the new chromatic number on G and the above procedure is repeated. The exact
hybrid algorithm is described in Algorithm 1.
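For reference, the per-node score can be computed with a few depth-first searches per color pair. The sketch below operates on an explicit adjacency dict and a color map and follows the definition above; the names and data layout are ours, and it assumes the coloring is proper and that v's color belongs to S_i (as in the algorithm).

```python
def node_score(adj, color, v, S_i):
    """P(v, c) for node v of color c = color[v], given the colors S_i of
    the current intervention round: sum over colors c' outside S_i of
    |T(v, c, c')| minus the largest piece left when v is deleted."""
    def component(start, allowed, banned):
        seen, stack = set(), [start]
        while stack:
            x = stack.pop()
            if x in seen or x in banned or color[x] not in allowed:
                continue
            seen.add(x)
            stack.extend(adj[x])
        return seen

    c, total = color[v], 0
    for cp in set(color.values()) - set(S_i):
        allowed = {c, cp}
        tree = component(v, allowed, banned=set())        # T(v, c, c')
        pieces = [len(component(u, allowed, banned={v}))  # the trees T_j
                  for u in adj[v] if color[u] in allowed]
        total += len(tree) - max(pieces, default=0)
    return total
```

Because the coloring is proper, the induced graph on {c, c′} is a forest (Lemma 4), so each neighbor of v starts a distinct branch and the pieces are disjoint.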
Theorem 8. Given an undirected chordal skeleton G of an underlying directed graph with no immoralities, Algorithm 1 ends in finite time and it returns the correct underlying directed graph. The
algorithm has runtime complexity polynomial in n.
Algorithm 1 Hybrid Algorithm using Meek rules with separating system
1: Input: Chordal graph skeleton G = (V, E) with no immoralities.
2: Initialize G⃗(V, E_d = ∅) with n nodes and no directed edges. Initialize time t = 1.
3: while E ≠ ∅ do
4:    Color the chordal graph G with χ colors.    ▷ Standard algorithms exist to do it in linear time
5:    Initialize color set C = {1, 2, ..., χ}. Form a (χ, min(k, ⌈χ/2⌉)) separating system S such that |S| ≤ k, ∀S ∈ S.
6:    for i = 1 until |S| do
7:        Initialize intervention I_t = ∅.
8:        for c ∈ S_i and every node v in color class c do
9:            Consider F(c, c′), T(v, c, c′) and {T_j}_{1}^{d(v)} (as per definitions in Sec. 4.3.1).
10:           Compute: P(v, c) = Σ_{c′ ∈ C \ S_i} |T(v, c, c′)| − max_{1 ≤ j ≤ d(v)} |T_j|.
11:       end for
12:       if k ≤ χ/2 then
13:           I_t = I_t ∪ ⋃_{c ∈ S_i} {argmax_{v : P(v,c) ≠ 0} P(v, c)}.
14:       else
15:           I_t = I_t ∪_{c ∈ S_i} {first ⌈2k/χ⌉ nodes v with largest nonzero P(v, c)}.
16:       end if
17:       t = t + 1
18:       Apply R0 and Meek rules using E_d and E after intervention I_t. Add newly learnt directed edges to E_d and delete them from E.
19:   end for
20:   Remove all nodes which have degree 0 in G.
21: end while
22: return G⃗.
5 Simulations
[Figure 1 appears here: two panels, (a) n = 1000, k = 10 and (b) n = 2000, k = 10, plotting Number of Experiments against Chromatic Number χ, with curves for: Information Theoretic LB, Max. Clique Sep. Sys. Entropic LB, Max. Clique Sep. Sys. Achievable LB, Our Construction Clique Sep. Sys. LB, Our Heuristic Algorithm, Naive (n,k) Sep. Sys. based Algorithm, and Separating System UB.]
Figure 1: n: no. of vertices, k: Intervention size bound. The number of experiments is compared between our heuristic and the naive algorithm based on the (n, k) separating system on random chordal
graphs. The red markers represent the sizes of the (χ, k) separating system. Green circle markers and the cyan square markers for the same χ value correspond to the number of experiments required by our heuristic and the algorithm based on an (n, k) separating system (Theorem 1), respectively, on the same set of chordal graphs. Note that, when n = 1000 and n = 2000, the naive algorithm requires on average about 130 and 260 (close to n/k) experiments respectively, while our algorithm requires at most ≈ 40 (orderwise close to χ/k = 10) when χ = 100.
We simulate our new heuristic, namely Algorithm 1, on randomly generated chordal graphs and
compare it with a naive algorithm that follows the intervention sets given by our (n, k) separating
system as in Theorem 1. Both algorithms apply R0 and Meek rules after each intervention according
to (1). We plot the following lower bounds: a) Information Theoretic LB of n/(2k); b) Max. Clique Sep. Sys. Entropic LB, which is the chromatic number based lower bound of Theorem 6. Moreover, we use two known (χ, k) separating system constructions for the maximum clique size as "references": The best known (χ, k) separating system is shown by the label Max. Clique Sep. Sys. Achievable
LB and our new simpler separating system construction (Theorem 1) is shown by Our Construction
Clique Sep. Sys. LB. As an upper bound, we use the size of the best known (n, k) separating system
(without any Meek rules) and is denoted Separating System UB.
Random generation of chordal graphs: Start with a random ordering σ on the vertices. Consider every vertex starting from σ(n). For each vertex i, (j, i) ∈ E with probability inversely proportional to σ⁻¹(i) for every j ∈ S_i where S_i = {v : σ⁻¹(v) < σ⁻¹(i)}. The proportionality constant is changed to adjust sparsity of the graph. After all such j are considered, make S_i ∩ ne(i) a clique by adding edges respecting the ordering σ, where ne(i) is the neighborhood of i. The resultant graph is a DAG and the corresponding skeleton is chordal. Also, σ is a perfect elimination ordering.
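This generator is easy to reproduce. The sketch below follows the recipe with an assumed proportionality constant `c` (the paper only says the constant is tuned for sparsity): processing vertices from σ(n) down and cliquing each vertex's earlier neighborhood keeps σ a perfect elimination ordering, so the skeleton stays chordal.

```python
import random

def random_chordal(n, c=2.0, seed=0):
    """Random chordal skeleton per the recipe above; `c` is an assumed
    sparsity knob. Returns an adjacency dict vertex -> set of neighbours."""
    rng = random.Random(seed)
    sigma = list(range(n))
    rng.shuffle(sigma)                        # sigma[t] = vertex at position t
    pos = {v: t for t, v in enumerate(sigma)}
    adj = {v: set() for v in range(n)}
    for t in range(n - 1, 0, -1):             # process sigma(n) down to sigma(2)
        i = sigma[t]
        for j in sigma[:t]:                   # the earlier vertices S_i
            if rng.random() < min(1.0, c / (t + 1)):
                adj[i].add(j); adj[j].add(i)
        nbrs = [j for j in adj[i] if pos[j] < t]
        for x in range(len(nbrs)):            # clique the earlier neighborhood
            for y in range(x + 1, len(nbrs)):
                adj[nbrs[x]].add(nbrs[y]); adj[nbrs[y]].add(nbrs[x])
    return adj
```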
Results: We are interested in comparing our algorithm and the naive one which depends on the (n, k) separating system to the size of the (χ, k) separating system. The size of the (χ, k) separating system is roughly Õ(χ/k). Consider values around χ = 100 on the x-axis for the plots with n = 1000, k = 10 and n = 2000, k = 10. Note that our algorithm performs very close to the size of the (χ, k) separating system, i.e. Õ(χ/k). In fact, it is always < 40 in both cases while the average performance of the naive algorithm goes from 130 (close to n/k = 100) to 260 (close to n/k = 200).
The result points to this: For random chordal graphs, the structured tree search allows us to learn the
edges in a number of experiments quite close to the lower bound based only on the maximum clique
size and not n. The plots for (n, k) = (500, 10) and (n, k) = (2000, 20) are given in Appendix.
Acknowledgments
Authors acknowledge the support from grants: NSF CCF 1344179, 1344364, 1407278, 1422549
and an ARO YIP award (W911NF-14-1-0258). We also thank Frederick Eberhardt for helpful discussions.
8
References
[1] J. Pearl, Causality: Models, Reasoning and Inference.
Cambridge University Press, 2009.
[2] A. Hauser and P. Bühlmann, "Two optimal strategies for active learning of causal models from interventional data," International Journal of Approximate Reasoning, vol. 55, no. 4, pp. 926-939, 2014.
[3] F. Eberhardt, C. Glymour, and R. Scheines, "On the number of experiments sufficient and in the worst case necessary to identify all causal relations among n variables," in Proceedings of the 21st Conference on Uncertainty in Artificial Intelligence (UAI), pp. 178-184.
[4] A. Hyttinen, F. Eberhardt, and P. Hoyer, "Experiment selection for causal discovery," Journal of Machine Learning Research, vol. 14, pp. 3041-3071, 2013.
[5] H. Hu, Z. Li, and A. Vetta, "Randomized experimental design for causal graph discovery," in Proceedings of NIPS 2014, Montreal, CA, December 2014.
[6] S. Shimizu, P. O. Hoyer, A. Hyvarinen, and A. J. Kerminen, "A linear non-gaussian acyclic model for causal discovery," Journal of Machine Learning Research, vol. 7, pp. 2003-2030, 2006.
[7] P. O. Hoyer, D. Janzing, J. Mooij, J. Peters, and B. Schölkopf, "Nonlinear causal discovery with additive noise models," in Proceedings of NIPS 2008, 2008.
[8] F. Eberhardt, Causation and Intervention (Ph.D. Thesis), 2007.
[9] P. Spirtes, C. Glymour, and R. Scheines, Causation, Prediction, and Search. A Bradford Book, 2001.
[10] C. Meek, "Strong completeness and faithfulness in bayesian networks," in Proceedings of the eleventh international conference on uncertainty in artificial intelligence, 1995.
[11] S. A. Andersson, D. Madigan, and M. D. Perlman, "A characterization of markov equivalence classes for acyclic digraphs," The Annals of Statistics, vol. 25, no. 2, pp. 505-541, 1997.
[12] T. Verma and J. Pearl, "An algorithm for deciding if a set of observed independencies has a causal explanation," in Proceedings of the Eighth international conference on uncertainty in artificial intelligence, 1992.
[13] C. Meek, "Causal inference and causal explanation with background knowledge," in Proceedings of the eleventh international conference on uncertainty in artificial intelligence, 1995.
[14] A. Hauser and P. Bühlmann, "Characterization and greedy learning of interventional markov equivalence classes of directed acyclic graphs," Journal of Machine Learning Research, vol. 13, no. 1, pp. 2409-2464, 2012.
[15] A. Hauser and P. Bühlmann, "Two optimal strategies for active learning of causal networks from interventional data," in Proceedings of Sixth European Workshop on Probabilistic Graphical Models, 2012.
[16] G. Katona, "On separating systems of a finite set," Journal of Combinatorial Theory, vol. 1(2), pp. 174-194, 1966.
[17] I. Wegener, "On separating systems whose elements are sets of at most k elements," Discrete Mathematics, vol. 28(2), pp. 219-222, 1979.
[18] R. J. Lipton and R. E. Tarjan, "A separator theorem for planar graphs," SIAM Journal on Applied Mathematics, vol. 36, no. 2, pp. 177-189, 1979.
5,424 | 591 | Silicon Auditory Processors as Computer Peripherals
John Lazzaro, John Wawrzynek
CS Division
UC Berkeley
Evans Hall
Berkeley, CA 94720
lazzaro@cs.berkeley.edu, johnw@cs.berkeley.edu
M. Mahowald*, Massimo Sivilotti†, Dave Gillespie‡
California Institute of Technology
Pasadena, CA 91125
Abstract
Several research groups are implementing analog integrated circuit models of biological auditory processing. The outputs of these circuit models have taken several forms, including video format for monitor display, simple scanned output for oscilloscope display and parallel analog outputs suitable for data-acquisition systems. In this paper, we describe an alternative output method for silicon auditory models, suitable for direct interface to digital computers.
* Present address: M. Mahowald, MRC Anatomical Neuropharmacology Unit, Mansfield Rd, Oxford OX1 3TH, England. mam@vax.oxford.ac.uk
† Present address: Mass Sivilotti, Tanner Research, 180 North Vinedo Avenue, Pasadena, CA 91107. mass@tanner.com
‡ Present address: Dave Gillespie, Synaptics, 2698 Orchard Parkway, San Jose CA, 95131. daveg@synaptics.com
1. INTRODUCTION
Several researchers have implemented computational models of biological auditory
processing, with the goal of incorporating these models into a speech recognition
system (for a recent review, see (Jankowski, 1992)). These projects have shown the
promise of the biological approach, sometimes showing clear performance advantages over traditional methods.
The application of these computational models is limited by their large computation and communication requirements. Analog VLSI implementations of these
neural models may relieve this computational burden; several VLSI research groups
have efforts in this area, and working integrated circuit models of many popular
representations presently exist. A review of these models is presented in (Lazzaro,
1991). In this paper, we present an interface method (Mahowald, 1992; Sivilotti,
1991) that addresses the communications issues between analog VLSI auditory implementations and digital processors.
2. COMMUNICATIONS IN NEURAL SYSTEMS
Biological neurons communicate long distances using a pulse representation. Communications engineers have developed several schemes for communicating on a wire
using pulses as atomic units. In these schemes, maximally using the communications bandwidth of a wire implies the mean rate of pulses on the wire is a significant
fraction of the maximum pulse rate allowed on the wire.
Using this criterion, neural systems use wires very inefficiently. In most parts of the
brain, most of the wires are essentially inactive most of the time. If neural systems
are not organized to fully utilize the available bandwidth of each wire, what does
neural communication optimize? Evidence suggests that energy conservation is an
important issue for neural systems. A simple strategy for energy conservation is
the reduction of the total number of pulses in the representation. Many possible
coding strategies satisfy this energy requirement.
The strategies observed in neural systems share another common property. Neural
systems often implement a class of computations in a manner that produces an
energy-efficient output encoding as an additional byproduct. The energy-efficient
coding is not performed simply for communication and immediately reversed upon
receipt, but is an integral part of the new representation. In this way, energy-efficient
neural coding is intrinsically different from engineering data compression
techniques.
Temporal adaptation, lateral inhibition, and spike correlations are examples of neural processing methods that perform interesting computations while producing an
energy-efficient output code. These representational principles are the foundation
of the neural computation and communication method we advocate in this paper.
In this method, the output units of a chip are spiking neuron circuits that use
energy-efficient coding methods. To communicate this code off a chip, we use a
distinctly non-biological approach.
3. THE EVENT-ADDRESS PROTOCOL
The unique characteristics of energy-efficient codes define the remaining off-chip
communications problem. In the spiking neuron protocol, the height and width of
the spike carries no information; the neuron imparts new information only at the
moment a spike begins. This moment occurs asynchronously; there is no global clock
synchronizing the output units. One way of completely specifying the information
in the output units is an event list, a tabulation of the precise time each output
unit begins a new spike. We can use this specification as a basis for an off-chip
communications system, that sends an event-list message off-chip at the moment an
output neuron begins a new spike. An event-list message includes the identification
of the output unit, and the time of firing. A performance analysis of this protocol
can be found in (Lazzaro et al., 1993).
Note that an explicit timestamp for each entry in the event list is not necessary, if
communication latency between the sending chip and the receiver is a constant. In
this case, the sender simply communicates, upon onset of a spike from an output, the
identity of the output unit; the receiver can append a locally generated timestamp
to complete the event. If simplified in this manner, we refer to the event-list protocol
as the event-address protocol.
We have designed a working system that computes a model of auditory nerve response, in real time, using analog VLSI processing. This system takes as input
an analog sound source, and uses the event-list representation to communicate the
model output to the host computer.
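The receiver side of the event-address protocol is easy to state in software terms. Below is a minimal sketch (ours, not from the paper) assuming a hypothetical bus-reading API and a row-major unit encoding, neither of which is specified by the paper; it only illustrates how a host turns bare unit addresses into a timestamped event list.

```python
import time

def decode_event_addresses(bus_words, cols=5):
    """Append a locally generated timestamp to each event address.

    Each bus word identifies only which output unit spiked; because the
    chip-to-host latency is assumed constant, the host's own clock can
    supply the event time. The (row, col) unpacking is a hypothetical
    row-major encoding of the 30-by-5 output array.
    """
    events = []
    for word in bus_words:
        row, col = divmod(word, cols)
        events.append((time.monotonic(), row, col))
    return events

# Example: three spikes arriving as raw addresses.
print(decode_event_addresses([0, 37, 149]))
```

In the actual system described below, the timestamp is attached in hardware rather than by the host, which keeps the 20 μs time resolution independent of host scheduling.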
[Figure 1. System block diagram, showing chip architecture (analog processing and spiking output array, arbiters, parallel-bus encoding), board architecture (timer, bus interface card), and the host computer (Sun IPC). Sound input enters at the chip.]
4. SYSTEMS IMPLEMENTATION
Figure 1 is a block diagram of this system. A single VLSI chip computes the auditory
model response; an array of spiking neuron circuits is the final representation of the
model. This chip also implements the event-address protocol, using asynchronous
arbitration circuits. The chip produces a parallel binary encoding of the model
output, as an asynchronous stream of event addresses. These on-chip operations
are shown inside the dashed rectangle in Figure 1, labelled Chip Architecture.
Additional digital processing completes the custom hardware in the system. This
hardware transforms the event-address protocol into an event-list protocol, by
adding a time marker for each event (16-bit time markers with 20 μs resolution).
In addition, the hardware implements the bus interface to the host computer, in
conjunction with a commercial interface board. The commercial interface board
supports 10 MBytes/second asynchronous data transfers between our custom hardware and the host computer, and includes 8 KBytes of data buffers. Our display
software produces a real-time graphical display of the auditory model response,
using the X window system.
5. VLSI CIRCUIT DETAILS
Figure 2 shows a block diagram of the chip. The analog input signal connects
to circuits that perform analog processing, which are fully described and referenced
in (Lazzaro et al., 1993). The output of this analog processing is represented by
150 spiking neurons, arranged in a 30 by 5 array. These are the output units of
the chip; the event-address protocol communicates the activity of these units off
chip. At the onset of a spike from an output unit, the array position of the spiking
unit, encoded as a binary number, appears on the output bus. The asynchronous
output bus is shown in Figure 2 as the data signals marked Encoded X Output
(column position) and Encoded Y Output (row position), and the acknowledge
and request control signals Ae and Re.
We implemented the event-address protocol as an asynchronous arbitration protocol
in two dimensions. In this scheme, an output unit can access two request lines,
one associated with its row and one associated with its column. Using a wire-OR
signalling protocol, any output unit on a particular row or column may assert the
request line. Each request line is paired with an acknowledge line, driven by the
arbitration circuitry outside the array. Row and column wires for acknowledge and
request are explicitly shown in Figure 2, as the lines that form a grid inside the
output unit array.
At the onset of a spike, an output unit asserts its row request line, and waits for
a reply on its row acknowledge line. An asynchronous arbitration system, marked
in Figure 2 as Y Arbitration Tree, assures only one output row is acknowledged.
After row acknowledgement, the output unit asserts its column request line, and
waits for a reply on its column acknowledge line. The arbitration system is shown
in detail in Figure 2: four two-input arbiter circuits, shown as rectangles marked
with the letter A, are connected as a tree to arbitrate among the 5 column inputs.
[Figure 2. Block diagram of the chip: the output unit array with its row and column request/acknowledge grid, the X and Y arbitration trees (two-input arbiters marked A), control logic, sound input, and the Encoded X Output, Encoded Y Output, Ae, and Re signals. See text for details.]
[Figure 3. Diagrams of communication circuits in the chip. (a) Two-input arbiter circuit. (b) Control logic to interface arbitration logic and output unit array. (c) Output unit circuit.]
Upon the arrival of both row and column acknowledgements, the output unit releases both row and column request lines. Static latches, shown in Figure 2 as the
rectangles marked Control Logic, retain the state of the row and column request
lines.
Binary encoders transform the row and column acknowledge lines into the output
data bus. Another encoder senses the acknowledgement of any column, and
asserts the bus control output Re. When the external device has secured the data,
it responds by asserting the Ae signal. The Ae signal clears the static latches in
the Control Logic blocks and resets Re. When Ae is reset, the data transfer is
complete, and the chip is ready for the next communication event.
Figure 3 shows the details of the communications circuits of Figure 2. Figure 3(a)
shows the two-input arbiter circuit used to create the binary arbitration trees in
Figure 2. This digital circuit takes as input two request signals, R1 and R2, and
produces the associated acknowledge signals A1 and A2. The acknowledgement of
a request precludes the acknowledgement of a second request. The circuit asserts
an acknowledge signal until its associated request is released.
Ro is an auxiliary output signal indicating either R1 or R2 has been asserted; Ao is
an auxiliary input signal that enables the A1 and A2 outputs. The auxiliary signals
allow the two-input arbiter to function as an element in arbitration trees, as shown
in Figure 2; the Ro and Ao signals of one level of arbitration connect to the Rk and
Ak signals at the next level of arbitration. In two-input operation, the Ro and Ao
signals are connected together, as shown in the root arbiter in Figure 2.
Figure 3(b) shows the circuit implementation of the Control Logic blocks in Figure 2; this circuit is repeated for each row and column connection. This circuit
interfaces the output bus control input Ae with the arbitration circuitry. If output
communication is not in progress, Ae is at ground, and its complement is at Vdd.
The PFET transistor marked as Load acts as a static pullup to the array request
line (R); output units pull this line low to assert a request. The NOR gate inverts the
array request line, and routes it to the arbitration tree. When a pending request
is acknowledged by the tree acknowledge line, the two NFET transistors act to
latch the array request line. The assertion of Ae releases the array request line
and disables the arbitration tree request input; these actions reset all state in the
communications system. When Ae is released, the system is ready to communicate
a new event.
Figure 3(c) shows the circuit implementation of a unit in the output array. In this
implementation, each output unit is a two-stage low-power axon circuit (Lazzaro,
1992). The first axonal stage receives the cochlear input; this axon stage is not
shown in Figure 3(c). The first stage couples into the second stage, shown in Figure
3(c), via the S and F wires.
To understand the operation of this circuit, we consider the transmission of a single
spike. Initially, we assume the request lines Rx and Ry are held high by the static
pullup PFET transistors shown in Figure 3(b); in addition, we assume the acknowledge lines Ax and Ay are at ground, and the noninverting buffer input voltage is at
ground.
When the first axonal stage fires, the S signal changes from ground potential to
Vdd. At this point the buffer input voltage begins to increase, at a rate determined
by the analog control voltage P. When the switching threshold of the buffer is
reached, the buffer output voltage F swings to Vdd; capacitive feedback ensures a
reliable switching transition. At this point, the output unit pulls the request line
Ry low, and the communications sequence begins.
The Y arbitration logic replies to the Ry request by asserting the Ay line. When
both F and Ay are asserted, the output unit pulls the request line Rx low. The X
arbitration logic replies to the Rx request by asserting the Ax line. The assertion of
both Ax and Ay resets the buffer input voltage to ground. As a result, the F line
swings to ground potential, the output unit releases the Rx and Ry lines, and the
first axon stage is enabled. At this point, the latch circuit of Figure 3(b) maintains
the state of the Rx and Ry lines, until it is cleared by the off-chip acknowledge
signal.
Acknowledgements
Research and prototyping of the event-address interface took place in Carver Mead's
laboratory at Caltech; we are grateful for his insights, encouragement, and support. The Caltech-based research was funded by the ONR, HP, and the System Development Foundation. Research and prototyping of the auditory-nerve demonstration
chip and system took place at UC Berkeley, and was funded by the NSF (PYI award
MIPS-895-8568), AT&T, and the ONR (URI-N00014-92-J-1672).
References
Jankowski, C. R. (1992). "A Comparison of Auditory Models for Automatic Speech
Recognition," S.B. Thesis, MIT Dept. of Electrical Engineering and Computer Science.
Lazzaro, J. P. (1991). "Biologically-based auditory signal processing in analog
VLSI," IEEE Asilomar Conference on Signals, Systems, and Computers.
Lazzaro, J. P. (1992). "Low-power silicon spiking neurons and axons," IEEE International Symposium on Circuits and Systems, San Diego, CA, pp. 2220-2224.
Lazzaro, J., Wawrzynek, J., Mahowald, M., Sivilotti, M., and Gillespie, D. (1993).
"Silicon auditory processors as computer peripherals," IEEE Transactions on
Neural Networks, May (in press).
Mahowald, M. (1992). Ph.D. Thesis, Computation and Neural Systems, California
Institute of Technology.
Sivilotti, M. (1991). "Wiring considerations in analog VLSI systems, with applications to field-programmable networks," Computer Science Technical Report (Ph.D.
Thesis), California Institute of Technology.
Regret-Based Pruning in Extensive-Form Games
Tuomas Sandholm
Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15217
sandholm@cs.cmu.edu
Noam Brown
Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15217
noamb@cmu.edu
Abstract
Counterfactual Regret Minimization (CFR) is a leading algorithm for finding a
Nash equilibrium in large zero-sum imperfect-information games. CFR is an iterative algorithm that repeatedly traverses the game tree, updating regrets at each
information set. We introduce an improvement to CFR that prunes any path of play
in the tree, and its descendants, that has negative regret. It revisits that sequence
at the earliest subsequent CFR iteration where the regret could have become positive, had that path been explored on every iteration. The new algorithm maintains
CFR's convergence guarantees while making iterations significantly faster, even
if previously known pruning techniques are used in the comparison. This improvement carries over to CFR+, a recent variant of CFR. Experiments show an order
of magnitude speed improvement, and the relative speed improvement increases
with the size of the game.
1 Introduction
Extensive-form imperfect-information games are a general model for strategic interaction. The last
ten years have witnessed a leap of several orders of magnitude in the size of two-player zero-sum
extensive-form imperfect-information games that can be solved to (near-)equilibrium [11][2][6].
This is the game class that this paper focuses on. For small games, a linear program (LP) can
find a solution (that is, a Nash equilibrium) to the game in polynomial time, even in the presence
of imperfect information. However, today's leading LP solvers only scale to games with around
10^8 nodes in the game tree [4]. Instead, iterative algorithms are used to approximate solutions for
larger games. There are a variety of such iterative algorithms that are guaranteed to converge to a
solution [5, 3, 10]. Among these, Counterfactual Regret Minimization (CFR) [16] has emerged as
the most popular, and CFR+ as the state-of-the-art variant thereof [13, 14].
CFR begins by exploring the entire game tree (though sampling variants exist as well [9]) and
calculating regret for every hypothetical situation in which the player could be. A key improvement
that makes CFR practical in large games is pruning. At a high level, pruning allows the algorithm to
avoid traversing the entire game tree while still maintaining the same convergence guarantees. The
classic version of pruning, which we will refer to as partial pruning, allows the algorithm to skip
updates for a player in a sequence if the other player's current strategy does not reach the sequence
with positive probability. This dramatically reduces the cost of each iteration. The magnitude of this
reduction varies considerably depending on the game, but can easily be higher than 90% [9], which
improves the convergence speed of the algorithm by a factor of 10. Moreover, the benefit of partial
pruning empirically seems to be more significant as the size of the game increases.
While partial pruning leads to a large gain in speed, we observe that there is still room for much
larger speed improvement. Partial pruning only skips updates for a player if an opponent's action
in the path leading to that point has zero probability. This can fail to prune paths that are actually
prunable. Consider a game where the first player to act (Player 1) has hundreds of actions to choose
from, and where, over several iterations, the reward received from many of them is extremely poor.
Intuitively, we should be able to spend less time updating the strategy for Player 1 following these
poor actions, and more time on the actions that proved worthwhile so far. However, here, partial
pruning will continue to update Player 1's strategy following each action in every iteration.
In this paper we introduce a better version of pruning, regret-based pruning (RBP), in which CFR
can avoid traversing a path in the game tree if either player takes actions leading to that path with
zero probability. This pruning needs to be temporary, because the probabilities may change later in
the CFR iterations, so the reach probability may turn positive later on. The number of CFR iterations
during which a sequence can be skipped depends on how poorly the sequence has performed in
previous CFR iterations. More specifically, the number of iterations that an action can be pruned is
proportional to how negative the regret is for that action. We will detail these topics in this paper.
RBP can lead to a dramatic improvement depending on the game. As a rough example, consider
a game in which each player has very negative regret for actions leading to 90% of nodes. Partial
pruning, which skips updates for a player when the opponent does not reach the node, would traverse
10% of the game tree per iteration. In contrast, regret-based pruning, which skips updates when
either player does not reach the node, would traverse only 0.1 ? 0.1 = 1% of the game tree. In
general, RBP roughly squares the performance gain of partial pruning.
We test RBP with CFR and CFR+. Experiments show that it leads to more than an order of magnitude speed improvement over partial pruning. The benefit increases with the size of the game.
2 Background
In this section we present the notation used in the rest of the paper. In an imperfect-information
extensive-form game there is a finite set of players, P. H is the set of all possible histories (nodes)
in the game tree, represented as a sequence of actions, and includes the empty history. A(h) is
the set of actions available in a history and P(h) ∈ P ∪ {c} is the player who acts at that history, where c
denotes chance. Chance plays an action a ∈ A(h) with a fixed probability σ_c(h, a) that is known
to all players. The history h′ reached after an action is taken in h is a child of h, represented by
h · a = h′, while h is the parent of h′. More generally, h′ is an ancestor of h (and h is a descendant
of h′), represented by h′ ⊏ h, if there exists a sequence of actions from h′ to h. Z ⊆ H are the
terminal histories, for which no actions are available. For each player i ∈ P, there is a payoff
function u_i : Z → ℝ. If P = {1, 2} and u_1 = −u_2, the game is two-player zero-sum. We define
Δ_i = max_{z∈Z} u_i(z) − min_{z∈Z} u_i(z) and Δ = max_i Δ_i.

Imperfect information is represented by information sets for each player i ∈ P by a partition I_i of
{h ∈ H : P(h) = i}. For any information set I ∈ I_i, all histories h, h′ ∈ I are indistinguishable
to player i, so A(h) = A(h′). I(h) is the information set I where h ∈ I. P(I) is the player
i such that I ∈ I_i. A(I) is the set of actions such that for all h ∈ I, A(I) = A(h). |A_i| =
max_{I∈I_i} |A(I)| and |A| = max_i |A_i|. We define U(I) to be the maximum payoff reachable from a
history in I, and L(I) to be the minimum. That is, U(I) = max_{z∈Z, h∈I: h⊑z} u_{P(I)}(z) and
L(I) = min_{z∈Z, h∈I: h⊑z} u_{P(I)}(z). We define Δ(I) = U(I) − L(I) to be the range of payoffs reachable
from a history in I. We similarly define U(I, a), L(I, a), and Δ(I, a) as the maximum, minimum,
and range of payoffs (respectively) reachable from a history in I after taking action a. We define
D(I, a) to be the set of information sets reachable by player P(I) after taking action a. Formally,
I′ ∈ D(I, a) if for some history h ∈ I and h′ ∈ I′, h · a ⊑ h′ and P(I) = P(I′).
A strategy σ_i(I) is a probability vector over A(I) for player i in information set I. The probability
of a particular action a is denoted by σ_i(I, a). Since all histories in an information set belonging to
player i are indistinguishable, the strategies in each of them must be identical. That is, for all h ∈ I,
σ_i(h) = σ_i(I) and σ_i(h, a) = σ_i(I, a). We define Σ_i to be the set of available strategies σ_i for
player i in the game. A strategy profile σ is a tuple of strategies, one for each player.
u_i(σ_i, σ_{−i}) is the expected payoff for player i if all players play according to the strategy profile
⟨σ_i, σ_{−i}⟩. If a series of strategies is played over T iterations, then σ̄_i^T = (Σ_{t≤T} σ_i^t)/T.

π^σ(h) = Π_{h′·a ⊑ h} σ_{P(h′)}(h′, a) is the joint probability of reaching h if all players play according to
σ. π_i^σ(h) is the contribution of player i to this probability (that is, the probability of reaching h if all
players other than i, and chance, always chose actions leading to h). π_{−i}^σ(h) is the contribution of
all players other than i, and chance. π^σ(h, h′) is the probability of reaching h′ given that h has been
reached, and 0 if h ⋢ h′. In a perfect-recall game, ∀h, h′ ∈ I ∈ I_i, π_i(h) = π_i(h′). In this paper
we focus on perfect-recall games. Therefore, for i = P(I) we define π_i(I) = π_i(h) for h ∈ I. We
define the average strategy σ̄_i^T(I) for an information set I to be

$$\bar{\sigma}_i^T(I) = \frac{\sum_{t \le T} \pi_i^{\sigma^t}(I)\, \sigma_i^t(I)}{\sum_{t \le T} \pi_i^{\sigma^t}(I)} \qquad (1)$$
2.1 Nash Equilibrium
A best response to σ_{−i} is a strategy σ*_i such that u_i(σ*_i, σ_{−i}) = max_{σ′_i∈Σ_i} u_i(σ′_i, σ_{−i}). A Nash
equilibrium is a strategy profile where every player plays a best response. Formally, it is a strategy
profile σ* such that ∀i, u_i(σ*_i, σ*_{−i}) = max_{σ′_i∈Σ_i} u_i(σ′_i, σ*_{−i}). We define a Nash equilibrium strategy
for player i as a strategy σ_i that is part of any Nash equilibrium. In two-player zero-sum games, if σ_i
and σ_{−i} are both Nash equilibrium strategies, then ⟨σ_i, σ_{−i}⟩ is a Nash equilibrium. An ε-equilibrium
is a strategy profile σ* such that ∀i, u_i(σ*_i, σ*_{−i}) + ε ≥ max_{σ′_i∈Σ_i} u_i(σ′_i, σ*_{−i}).
2.2 Counterfactual Regret Minimization
Counterfactual Regret Minimization (CFR) is a popular regret-minimization algorithm for
extensive-form games [16]. Our analysis of CFR makes frequent use of counterfactual value. Informally, this
is the expected utility of an information set given that player i tries to reach it. For player i at
information set I given a strategy profile σ, this is defined as

$$v_i^{\sigma}(I) = \sum_{h \in I} \pi_{-i}^{\sigma}(h) \sum_{z \in Z} \pi^{\sigma}(h, z)\, u_i(z) \qquad (2)$$

The counterfactual value of an action a is

$$v_i^{\sigma}(I, a) = \sum_{h \in I} \pi_{-i}^{\sigma}(h) \sum_{z \in Z} \pi^{\sigma}(h \cdot a, z)\, u_i(z) \qquad (3)$$
Let σ^t be the strategy profile used on iteration t. The instantaneous regret on iteration t for action a
in information set I is

$$r^t(I, a) = v_{P(I)}^{\sigma^t}(I, a) - v_{P(I)}^{\sigma^t}(I) \qquad (4)$$

and the regret for action a in I on iteration T is

$$R^T(I, a) = \sum_{t \le T} r^t(I, a) \qquad (5)$$

Additionally, R_+^T(I, a) = max{R^T(I, a), 0} and R^T(I) = max_a {R_+^T(I, a)}. Regret for player i
in the entire game is

$$R_i^T = \max_{\sigma_i' \in \Sigma_i} \sum_{t \le T} \left( u_i(\sigma_i', \sigma_{-i}^t) - u_i(\sigma_i^t, \sigma_{-i}^t) \right) \qquad (6)$$
In CFR, a player in an information set picks an action among the actions with positive regret in
proportion to his positive regret on that action. Formally, on each iteration T + 1, player i selects
actions a ∈ A(I) according to probabilities

$$\sigma_i^{T+1}(I, a) = \begin{cases} \frac{R_+^T(I, a)}{\sum_{a' \in A(I)} R_+^T(I, a')} & \text{if } \sum_{a' \in A(I)} R_+^T(I, a') > 0 \\ \frac{1}{|A(I)|} & \text{otherwise} \end{cases} \qquad (7)$$

If a player plays according to CFR in every iteration, then on iteration T, R^T(I) ≤ Δ_i √|A(I)| √T.
Moreover,

$$R_i^T \le \sum_{I \in \mathcal{I}_i} R^T(I) \le |\mathcal{I}_i|\, \Delta_i \sqrt{|A_i|}\, \sqrt{T} \qquad (8)$$
So, as T → ∞, R_i^T / T → 0. In two-player zero-sum games, if both players' average regret satisfies R_i^T / T ≤ ε,
their average strategies ⟨σ̄_1^T, σ̄_2^T⟩ form a 2ε-equilibrium [15]. Thus, CFR constitutes an anytime
algorithm for finding an ε-Nash equilibrium in zero-sum games.
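A minimal sketch of equations (4), (5), and (7) at a single information set may help; this is our own illustration, not the paper's code, and a full CFR implementation would additionally recurse over the tree to produce the counterfactual values passed in here.

```python
def regret_matching(cumulative_regret):
    """Equation (7): play actions in proportion to positive regret,
    falling back to uniform when no action has positive regret."""
    positive = {a: max(r, 0.0) for a, r in cumulative_regret.items()}
    total = sum(positive.values())
    if total > 0:
        return {a: r / total for a, r in positive.items()}
    n = len(cumulative_regret)
    return {a: 1.0 / n for a in cumulative_regret}

def cfr_update(cumulative_regret, cf_value_of_action):
    """Equations (4)-(5): accumulate instantaneous regret
    r^t(I,a) = v(I,a) - v(I) under the current strategy sigma^t."""
    sigma = regret_matching(cumulative_regret)
    v_I = sum(sigma[a] * cf_value_of_action[a] for a in sigma)
    for a in cumulative_regret:
        cumulative_regret[a] += cf_value_of_action[a] - v_I
    return sigma

# One iteration at an information set with three actions.
R = {"fold": 0.0, "call": 0.0, "raise": 0.0}
print(cfr_update(R, {"fold": -1.0, "call": 0.5, "raise": 2.0}), R)
```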
3 Applying Best Response to Zero-Reach Sequences
In Section 2 it was explained that if both players' average regret approaches zero, then their average
strategies approach a Nash equilibrium. CFR provides one way to compute strategies that have
bounded regret, but it is not the only way. CFR-BR [7] is a variant of CFR in which one player
plays CFR and the other player plays a best response to the opponent's strategy in every iteration.
Calculating a best response to a fixed strategy is computationally cheap (in games of perfect recall),
costing only a single traversal of the game tree. By playing a best response in every iteration, the
best-responder is guaranteed to have at most zero regret. Moreover, the CFR player's regret is still
bounded according to (8). However, in practice the CFR player's regret in CFR-BR tends to be
higher than when both players play vanilla CFR (since the opponent is clairvoyantly maximizing the
CFR player's regret). For this reason, empirical results show that CFR-BR converges slower than
CFR, even though the best-responder's regret is always at most zero.
We now discuss a modification of CFR that will motivate the main contribution of this paper, which,
in turn, is described in Section 4. The idea is that by applying a best response only in certain
situations (and CFR in others), we can lower regret for one player without increasing it for the
opponent. Without loss of generality, we discuss how to reduce regret for Player 1. Specifically,
consider an information set I ∈ I_1 and action a where σ_1^t(I, a) = 0, and any history h ∈ I. Then
for any ancestor history h′ such that h′ ⊏ h · a, we know π_1^{σ^t}(h′, h · a) = 0. Likewise, for any
descendant history h′ such that h · a ⊑ h′, we know π_1^{σ^t}(h′) = 0. Thus, from (4) we see that Player
1's strategy on iteration t in any information set following action a has no effect on Player 2's regret
for that iteration. Moreover, it also has no effect on Player 1's regret for any information set except
R(I, a) and information sets that follow action a. Therefore, by playing a best response only in
information sets following action a (and playing vanilla CFR elsewhere), Player 1 guarantees zero
regret for himself in all information sets following action a, without the practical cost of increasing
his regret in information sets before I or of increasing Player 2's regret. This may increase regret
for action a itself, but if we only do this when R(I, a) ≤ −Δ(I), we can guarantee R(I, a) ≤ 0
even after the iteration. Similarly, Player 2 can simultaneously play a best response in information
sets following an action a′ where σ_2^t(I′, a′) = 0 for I′ ∈ I_2. This approach leads to lower regret for
both players.
(In situations where both players' sequences of reaching an information set have zero probability
(π_1(h) = π_2(h) = 0) the strategies chosen have no impact on the regret or average strategy for
either player, so there is no need to compute what strategies should be played from then on.)
Our experiments showed that this technique leads to a dramatic improvement over CFR in terms
of the number of iterations needed, though the theoretical convergence bound remains the same.
However, each iteration touches more nodes, because negative-regret actions more quickly become
positive and are not skipped with partial pruning, and thus takes longer. It depends on the game
whether CFR or this technique is faster overall; see experiments in Appendix A. Regret-based pruning, introduced in the next section, outperforms both of these approaches significantly.
4 Regret-Based Pruning (RBP)
In this section we present the main contribution of this paper, a technique for soundly pruning, on a
temporary basis, negative-regret actions from the tree traversal in order to speed it up significantly.
In Section 3 we proposed a variant of CFR where a player plays a best response in information sets
that the player reaches with zero probability. In this section, we show that these information sets and
their descendants need not be traversed in every iteration. Rather, the frequency with which they must be
traversed is proportional to how negative regret is for the action leading to them. This less-frequent
traversal does not hurt the regret bound (8). Consider an information set I ∈ I_1 and action a where
R^t(I, a) = −1000 and regret for at least one other action in I is positive, and assume Δ(I) = 1.
From (7), we see that σ_1^{t+1}(I, a) = 0. As described in Section 3, the strategy played by Player 1
on iteration t + 1 in any information set following action a has no effect on Player 2. Moreover, it
has no immediate effect on what Player 1 will do in the next iteration (other than in information sets
following action a), because we know regret for action a will still be at most −999 on iteration t + 2
(since Δ(I) = 1) and will continue to not be played. So rather than traverse the game tree following
action a, we could "procrastinate" in deciding what Player 1 did on iterations t + 1, t + 2, ..., t + 1000
in that branch until after iteration t + 1000 (at which point regret for that action may be positive).
That is, we could (in principle) store Player 2's strategy for each iteration between t + 1 and t + 1000,
and on iteration t + 1000 calculate a best response to each of them and announce that Player 1 played
those best responses following action a on iterations t + 1 to t + 1000 (and update the regrets to
match this). Obviously this itself would not be an improvement, but performance would be identical
to the algorithm described in Section 3.
However, rather than have Player 1 calculate and play a best response for each iteration between
t + 1 and t + 1000 separately, we could simply calculate a best response against the average strategy
that Player 2 played in those iterations. This can be accomplished in a single traversal of the game
tree. We can then announce that Player 1 played this best response on each iteration between t + 1
and t + 1000. This provides benefits similar to the algorithm described in Section 3, but allows us
to do the work of 1000 iterations in a single traversal! We coin this regret-based pruning (RBP).
We now present a theorem that guarantees that when R(I, a) ≤ 0, we can prune D(I, a) through
regret-based pruning for ⌊|R(I, a)| / (U(I, a) − L(I))⌋ iterations.
Theorem 1. Consider a two-player zero-sum game. Let a ∈ A(I) be an action such that on
iteration T_0, R^{T_0}(I, a) ≤ 0. Let I′ be an information set for any player such that I′ ∉ D(I, a) and
let a′ ∈ A(I′). Let m = ⌊|R(I, a)| / (U(I, a) − L(I))⌋. If σ(I, a) = 0 when R(I, a) ≤ 0, then regardless of what
is played in D(I, a) during {T_0, ..., T_0 + m}, R_+^T(I′, a′) is identical for T ≤ T_0 + m.
Proof. Since v_i^σ(I) ≥ L(I) and v_i^σ(I, a) ≤ U(I, a), from (4) we get r^t(I, a) ≤ U(I, a) − L(I).
Thus, for iterations T_0 ≤ T ≤ T_0 + m, R^T(I, a) ≤ 0. Clearly the theorem is true for T < T_0.
We prove the theorem continues to hold inductively for T ≤ T_0 + m. Assume the theorem holds
for iteration T and consider iteration T + 1. Suppose I′ ∈ I_{P(I)} and either I′ ≠ I or a′ ≠ a.
Then for any h′ ∈ I′, there is no ancestor of h′ in an information set in D(I, a). Thus, π_{−i}^{σ^{T+1}}(h′)
does not depend on the strategy in D(I, a). Moreover, for any z ∈ Z, if h′ ⊏ h ⊏ z for some
h ∈ I* ∈ D(I, a), then π^{σ^{T+1}}(h′, z) = 0 because σ^{T+1}(I, a) = 0. Since I′ ≠ I or a′ ≠ a, it
similarly holds that π^{σ^{T+1}}(h′ · a′, z) = 0. Then from (4), r^{T+1}(I′, a′) does not depend on the strategy
in D(I, a).
Now suppose I′ ∈ I_i for i ≠ P(I). Consider some h′ ∈ I′ and some h ∈ I. First suppose that
h · a ⊑ h′. Since π_{−i}^{σ^{T+1}}(h · a) = 0, we have π_{−i}^{σ^{T+1}}(h′) = 0, and h′ contributes nothing to the regret of
I′. Now suppose h′ ⊏ h. Then for any z ∈ Z, if h′ ⊏ h ⊏ z then π^{σ^{T+1}}(h′, z) = 0 and does not
depend on the strategy in D(I, a). Finally, suppose h′ ⋢ h and h · a ⋢ h′. Then for any z ∈ Z
such that h′ ⊏ z, we know h ⋢ z, and therefore π^{σ^{T+1}}(h′, z) does not depend on the strategy
in D(I, a).
Now suppose I′ = I and a′ = a. We proved R^T(I, a) ≤ 0 for T_0 ≤ T ≤ T_0 + m, so R_+^T(I, a) = 0.
Thus, for all T ≤ T_0 + m, R_+^T(I′, a′) is identical regardless of what is played in D(I, a).
We can improve this approach significantly by not requiring knowledge beforehand of exactly how
many iterations can be skipped. Rather, we will decide, in light of what happens during the intervening CFR iterations, when an action needs to be revisited. From (4) we know that
r^T(I, a) ≤ π_{−i}^{σ^T}(I) U(I, a) − v_{P(I)}^{σ^T}(I). Moreover, v_{P(I)}^{σ^T}(I) does not depend on D(I, a). Thus, we can prune D(I, a) from iteration T_0 until
iteration T_1 so long as

$$\sum_{t=1}^{T_0} v_{P(I)}^{\sigma^t}(I, a) + \sum_{t=T_0+1}^{T_1} \pi_{-i}^{\sigma^t}(I)\, U(I, a) \le \sum_{t=1}^{T_1} v_{P(I)}^{\sigma^t}(I) \qquad (9)$$

In the worst case, this allows us to skip only ⌊|R(I, a)| / (U(I, a) − L(I))⌋ iterations. However, in practice it
performs significantly better, though we cannot know on iteration T_0 how many iterations it will
skip because it depends on what is played in T_0 ≤ t ≤ T_1. Our exploratory experiments showed
that in practice performance also improves by replacing U(I, a) with a more accurate upper bound
on reward in (9). CFR will still converge if D(I, a) is pruned for too many iterations; however, that
hurts convergence speed. In the experiments included in this paper, we conservatively use U(I, a)
as the upper bound.
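The bookkeeping behind condition (9) is small. The sketch below is our own (field names are hypothetical, not the paper's): it charges the pruned action an optimistic value of π_{−i}^{σ^t}(I) · U(I, a) on each skipped iteration and signals when D(I, a) must be revisited.

```python
class PrunedBranch:
    """Tracks the left-hand side of condition (9) while D(I, a) is skipped."""

    def __init__(self, value_sum_at_T0, upper_bound):
        # Sum of v^{sigma^t}(I, a) over t = 1..T_0, and U(I, a)
        # (or any sound, tighter upper bound on payoffs after a).
        self.optimistic_sum = value_sum_at_T0
        self.upper_bound = upper_bound

    def still_prunable(self, opp_reach_prob, value_sum_of_I):
        """Call once per skipped CFR iteration. `value_sum_of_I` is the
        running sum of v^{sigma^t}(I) over t = 1..T_1. Returns False once
        even the optimistic regret for action a could be positive."""
        self.optimistic_sum += opp_reach_prob * self.upper_bound
        return self.optimistic_sum <= value_sum_of_I
```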
4.1 Best Response Calculation for Regret-Based Pruning
In this section we discuss how one can efficiently compute the best responses as called for in regret-based pruning. The advantage of Theorem 1 is that we can wait until after pruning has finished, that
is, until we revisit an action, to decide what strategies were played in D(I, a) during the intervening
iterations. We can then calculate a single best response to the average strategy that the opponent
played, and say that that best response was played in D(I, a) in each of the intervening iterations.
This results in zero regret over those iterations for information sets in D(I, a). We now describe
how this best response can be calculated efficiently.
Typically, when playing CFR one stores Σ_{t=1}^{T} π_i^{σ^t}(I) σ_i^t(I) for each information set I. This allows
one to immediately calculate the average strategy defined in (1) in any particular iteration. If we
start pruning on iteration T_0 and revisit on iteration T_1, we wish to calculate a best response to
σ̄_i^{T_1−T_0}, where

$$\bar{\sigma}_i^{T_1-T_0}(I) = \frac{\sum_{t=T_0}^{T_1} \pi_i^{\sigma^t}(I)\, \sigma_i^t(I)}{\sum_{t=T_0}^{T_1} \pi_i^{\sigma^t}(I)}$$

An easy approach would be to store the opponent's
cumulative strategy before pruning begins and subtract it from the current cumulative strategy when
pruning ends. In fact, we only need to store the opponent's strategy in information sets that follow
action a. However, this could potentially use O(|H|) memory because the same information set I
belonging to Player 2 may be reached from multiple information sets belonging to Player 1. In
contrast, CFR only requires O(|I||A|) memory, and we want to maintain this desirable property.
We accomplish that as follows.
To calculate a best response against σ̄_2^T, we traverse the game tree and calculate the counterfactual
value, defined in (3), for every action in every information set belonging to Player 1 that does
not lead to any further Player 1 information sets. Specifically, we calculate v_1^{σ̄^{T_0−1}}(I, a) for every
action a in I such that D(I, a) = ∅. Since we calculate this only for actions where D(I, a) = ∅,
v_1^{σ̄^{T_0−1}}(I, a) does not depend on σ̄_1. Then, starting from the bottom information sets, we set the
best-response strategy σ_1^{BR}(I) to always play the action with the highest counterfactual value (ties
can be broken arbitrarily), and pass this value up as the payoff for reaching I, repeating the process
up the tree. In order to calculate a best response to σ̄_2^{T_1−T_0}, we first store, before pruning begins,
the counterfactual values for Player 1 against Player 2's average strategy for every action a in each
information set I where D(I, a) = ∅. When we revisit the action on iteration T_1, we calculate a best
response to σ̄_2^{T_1}, except that we set the counterfactual value for every action a in information set I
where D(I, a) = ∅ to be T_1 v_1^{σ̄^{T_1}}(I, a) − (T_0 − 1) v_1^{σ̄^{T_0−1}}(I, a). The latter term was stored, and the
former term can be calculated from the current average strategy profile. As before, we set σ_1^{BR}(I)
to always play whichever action has the highest counterfactual value, and pass this term up.
A slight complication arises when we are pruning an action a in information set I and wish to start
pruning an earlier action a′ from information set I′ such that I ∈ D(I′, a′). In this case, it is
necessary to explore action a in order to calculate the best response in D(I′, a′). However, if such
traversals happen frequently, then this would defeat the purpose of pruning action a. One way to
address this is to only prune an action a′ when the number of iterations guaranteed (or estimated)
to be skipped exceeds some threshold. This ensures that the overhead is worthwhile, and that we
are not frequently traversing an action a farther down the tree that is already being pruned. Another
option is to add some upper bound on how long we will prune an action. If the lower bound for
how long we will prune a exceeds the upper bound for how long we will prune a′, then we need not
traverse a in the best response calculation for a′ because a will still be pruned when we are finished
with pruning a′. In our experiments, we use the former approach. Experiments to determine a good
parameter for this are presented in Appendix B.
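The two ingredients above, the counterfactual-value adjustment for the pruned interval and the bottom-up best response, fit in a few lines. This is our own sketch with hypothetical names, not the paper's code.

```python
def interval_cf_value(v_avg_T1, v_avg_T0_minus_1, T0, T1):
    """Counterfactual value of a leaf action (D(I, a) empty) against the
    opponent's average strategy over iterations T0..T1, recovered from
    the value stored at T0-1 and the current value at T1."""
    return T1 * v_avg_T1 - (T0 - 1) * v_avg_T0_minus_1

def best_response_at_bottom(cf_values):
    """Pick the highest-valued action at each bottom information set and
    return the value to pass up the tree (ties broken arbitrarily by max)."""
    choice = {I: max(vals, key=vals.get) for I, vals in cf_values.items()}
    passed_up = {I: vals[choice[I]] for I, vals in cf_values.items()}
    return choice, passed_up
```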
4.2 Regret-Based Pruning with CFR+
CFR+ [13] is a variant of CFR where the regret is never allowed to go below 0. Formally, R^T(I, a) =
max{R^{T−1}(I, a) + r^T(I, a), 0} for T ≥ 1 and R^T(I, a) = 0 for T = 0. Although this change
appears small, and does not improve the bound on regret, it leads to faster empirical convergence.
CFR+ was a key advancement that allowed Limit Texas Hold'em poker to be essentially solved [1].
At first glance, it would seem that CFR+ and RBP are incompatible. RBP allows actions to be
traversed with decreasing frequency as regret decreases below zero. However, CFR+ sets a floor
for regret at zero. Nevertheless, it is possible to combine the two, as we now show. We modify
the definition of regret in CFR+ so that it can drop below zero, but immediately returns to being
positive as soon as regret begins increasing. Formally, we modify the definition of regret in CFR+
for T > 0 to be as follows: R^T(I, a) = r^T(I, a) if r^T(I, a) > 0 and R^{T−1}(I, a) ≤ 0, and
R^T(I, a) = R^{T−1}(I, a) + r^T(I, a) otherwise. This leads to identical behavior in CFR+, and also
allows regret to drop below zero so actions can be pruned.
When using RBP with CFR+, regret does not strictly follow the rules for CFR+. CFR+ calls for an
action to be played with positive probability whenever instantaneous regret for it is positive in the
previous iteration. Since RBP only checks the regret for an action after potentially several iterations
have been skipped, there may be a delay between the iteration when an action would return to play
in CFR+ and the iteration when it returns to play in RBP. This does not pose a theoretical problem:
CFR's convergence rate still applies.
However, this difference is noticeable when combined with linear averaging. Linear averaging
weighs each iteration σ^t in the average strategy by t. It does not affect regret or influence the selection of strategies on an iteration. That is, with linear averaging the new definition for average strategy becomes

$$\bar{\sigma}_i^T(I) = \frac{\sum_{t \le T} t\, \pi_i^{\sigma^t}(I)\, \sigma_i^t(I)}{\sum_{t \le T} t\, \pi_i^{\sigma^t}(I)}$$

Linear averaging still maintains the asymptotic convergence
rate of constant averaging (where each iteration is weighed equally) in CFR+ [14]. Empirically it
causes CFR+ to converge to a Nash equilibrium much faster. However, in vanilla CFR it results in
worse performance and there is no proof guaranteeing convergence. Since RBP with CFR+ results
in behavior that does not strictly conform to CFR+, linear averaging results in somewhat noisier
convergence. This can be mitigated by reporting the strategy profile found so far that is closest to a
Nash equilibrium rather than the current average strategy profile, and we do this in the experiments.
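The modified regret update is a single branch; the following is our own sketch of the rule stated above, not the paper's code.

```python
def cfr_plus_rbp_regret(cumulative_regret, instantaneous_regret):
    """Regret may drift below zero (so RBP can prune the action), but it
    snaps back to the instantaneous regret the moment regret increases
    from a non-positive total, reproducing CFR+ behavior otherwise."""
    if instantaneous_regret > 0 and cumulative_regret <= 0:
        return instantaneous_regret
    return cumulative_regret + instantaneous_regret
```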
5 Experiments
We tested regret-based pruning in both CFR and CFR+ against partial pruning, as well as against
CFR with no pruning. Our implementation traverses the game tree once each iteration.¹ We tested
our algorithm on standard Leduc Hold'em [12] and a scaled-up variant of it featuring more actions.
Leduc Hold'em is a popular benchmark problem for imperfect-information game solving due to its
size (large enough to be highly nontrivial but small enough to be solvable) and strategic complexity.
In Leduc Hold'em, there is a deck consisting of six cards: two each of Jack, Queen, and King. There
are two rounds. In the first round, each player places an ante of 1 chip in the pot and receives a single
private card. A round of betting then takes place with a two-bet maximum, with Player 1 going first.
A public shared card is then dealt face up and another round of betting takes place. Again, Player 1
goes first, and there is a two-bet maximum. If one of the players has a pair with the public card, that
player wins. Otherwise, the player with the higher card wins. In standard Leduc Hold'em, the bet
size in the first round is 2 chips, and 4 chips in the second round. In our scaled-up variant, which we
call Leduc-5, there are 5 bet sizes to choose from: in the first round a player may bet 0.5, 1, 2, 4, or
8 chips, while in the second round a player may bet 1, 2, 4, 8, or 16 chips.
We measure the quality of a strategy profile by its exploitability, which is the summed distance
of both players from a Nash equilibrium strategy. Formally, the exploitability of a strategy profile σ
is max_{σ*_1∈Σ_1} u_1(σ*_1, σ_2) + max_{σ*_2∈Σ_2} u_2(σ_1, σ*_2). We measure exploitability against the number of
nodes touched over all CFR traversals. As shown in Figure 1, RBP leads to a substantial improvement over vanilla CFR with partial pruning in Leduc Hold'em, increasing the speed of convergence
by more than a factor of 8. This is partially due to the game tree being traversed twice as fast, and
partially due to the use of a best response in sequences that are pruned (the benefit of which was
described in Section 3). The improvement when added on top of CFR+ is smaller, increasing the
speed of convergence by about a factor of 2. This matches the reduction in game tree traversal size.
The benefit from RBP is more substantial in the larger benchmark game, Leduc-5. RBP increases
convergence speed of CFR by a factor of 12, and reduces the per-iteration game tree traversal cost by
about a factor of 7. In CFR+, RBP improves the rate of convergence by about an order of magnitude.
RBP also decreases the number of nodes touched per iteration in CFR+ by about a factor of 40.
¹ Canonical CFR+ traverses the game tree twice each iteration, updating the regrets for each player in separate traversals [13]. This difference does not, however, affect the error measure (y-axis) in the experiments.
(a) Leduc Hold'em
(b) Leduc-5 Hold'em
Figure 1: Top: Exploitability. Bottom: Nodes touched per iteration.
The results imply that larger games benefit more from RBP than smaller games. This is not universally true, since it is possible to have a large game where every action is part of the Nash equilibrium.
Nevertheless, there are many games with very large action spaces where the vast majority of those
actions are suboptimal, but players do not know beforehand which are suboptimal. In such games,
RBP would improve convergence tremendously.
6 Conclusions and Future Research
In this paper we introduced a new method of pruning that allows CFR to avoid traversing negative-regret actions in every iteration. Our regret-based pruning (RBP) temporarily ceases their traversal
in a sound way without compromising the overall convergence rate. Experiments show an order of
magnitude speed improvement over partial pruning, and suggest that the benefit of RBP increases
with game size. Thus RBP is particularly useful in large games where many actions are suboptimal,
but where it is not known beforehand which actions those are.
In future research, it would be worth examining whether similar forms of pruning can be applied
to other equilibrium-finding algorithms as well. RBP, as presented in this paper, is for CFR using
regret matching to determine what strategies to use on each iteration based on the regrets. RBP
does not directly apply to other strategy selection techniques that could be used within CFR such as
exponential weights, because the latter always puts positive probability on actions. Also, it would be
interesting to see whether RBP-like pruning could be applied to first-order methods for equilibriumfinding [5, 3, 10, 8]. The results in this paper suggest that for any equilibrium-finding algorithm to
be efficient in large games, effective pruning is essential.
6.1 Acknowledgement
This material is based on work supported by the National Science Foundation under grants IIS-1320620 and IIS-1546752, as well as XSEDE computing resources provided by the Pittsburgh Supercomputing Center.
References
[1] Michael Bowling, Neil Burch, Michael Johanson, and Oskari Tammelin. Heads-up limit hold'em poker is solved. Science, 347(6218):145-149, 2015.
[2] Noam Brown, Sam Ganzfried, and Tuomas Sandholm. Hierarchical abstraction, distributed equilibrium computation, and post-processing, with application to a champion no-limit Texas Hold'em agent. In Proceedings of the 2015 International Conference on Autonomous Agents and Multi-Agent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 2015.
[3] Andrew Gilpin, Javier Peña, and Tuomas Sandholm. First-order algorithm with O(ln(1/ε)) convergence for ε-equilibrium in two-person zero-sum games. Mathematical Programming, 133(1-2):279-298, 2012. Conference version appeared in AAAI-08.
[4] Andrew Gilpin and Tuomas Sandholm. Lossless abstraction of imperfect information games. Journal of the ACM, 54(5), 2007. Early version "Finding equilibria in large sequential games of imperfect information" appeared in the Proceedings of the ACM Conference on Electronic Commerce (EC), pages 160-169, 2006.
[5] Samid Hoda, Andrew Gilpin, Javier Peña, and Tuomas Sandholm. Smoothing techniques for computing Nash equilibria of sequential games. Mathematics of Operations Research, 35(2):494-512, 2010. Conference version appeared in WINE-07.
[6] Eric Griffin Jackson. A time and space efficient algorithm for approximately solving large imperfect information games. In AAAI Workshop on Computer Poker and Imperfect Information, 2014.
[7] Michael Johanson, Nolan Bard, Neil Burch, and Michael Bowling. Finding optimal abstract strategies in extensive-form games. In AAAI Conference on Artificial Intelligence (AAAI), 2012.
[8] Christian Kroer, Kevin Waugh, Fatma Kılınç-Karzan, and Tuomas Sandholm. Faster first-order methods for extensive-form game solving. In Proceedings of the ACM Conference on Economics and Computation (EC), 2015.
[9] Marc Lanctot, Kevin Waugh, Martin Zinkevich, and Michael Bowling. Monte Carlo sampling for regret minimization in extensive games. In Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS), pages 1078-1086, 2009.
[10] François Pays. An interior point approach to large games of incomplete information. In AAAI Computer Poker Workshop, 2014.
[11] Tuomas Sandholm. The state of solving large incomplete-information games, and application to poker. AI Magazine, pages 13-32, Winter 2010. Special issue on Algorithmic Game Theory.
[12] Finnegan Southey, Michael Bowling, Bryce Larson, Carmelo Piccione, Neil Burch, Darse Billings, and Chris Rayner. Bayes' bluff: Opponent modelling in poker. In Proceedings of the 21st Annual Conference on Uncertainty in Artificial Intelligence (UAI), pages 550-558, July 2005.
[13] Oskari Tammelin. Solving large imperfect information games using CFR+. arXiv preprint arXiv:1407.5042, 2014.
[14] Oskari Tammelin, Neil Burch, Michael Johanson, and Michael Bowling. Solving heads-up limit Texas Hold'em. In IJCAI, 2015.
[15] Kevin Waugh, David Schnizlein, Michael Bowling, and Duane Szafron. Abstraction pathologies in extensive games. In International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2009.
[16] Martin Zinkevich, Michael Bowling, Michael Johanson, and Carmelo Piccione. Regret minimization in games with incomplete information. In Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS), 2007.
5,426 | 5,911 | Nonparametric von Mises Estimators for Entropies,
Divergences and Mutual Informations
Akshay Krishnamurthy
Microsoft Research, NY
akshaykr@cs.cmu.edu
Kirthevasan Kandasamy
Carnegie Mellon University
kandasamy@cs.cmu.edu
Barnabás Póczos, Larry Wasserman
Carnegie Mellon University
bapoczos@cs.cmu.edu, larry@stat.cmu.edu
James M. Robins
Harvard University
robins@hsph.harvard.edu
Abstract
We propose and analyse estimators for statistical functionals of one or more distributions under nonparametric assumptions. Our estimators are derived from the
von Mises expansion and are based on the theory of influence functions, which appear in the semiparametric statistics literature. We show that estimators based either on data-splitting or a leave-one-out technique enjoy fast rates of convergence
and other favorable theoretical properties. We apply this framework to derive estimators for several popular information theoretic quantities, and via empirical
evaluation, show the advantage of this approach over existing estimators.
1
Introduction
Entropies, divergences, and mutual informations are classical information-theoretic quantities that
play fundamental roles in statistics, machine learning, and across the mathematical sciences. In
addition to their use as analytical tools, they arise in a variety of applications including hypothesis
testing, parameter estimation, feature selection, and optimal experimental design. In many of these
applications, it is important to estimate these functionals from data so that they can be used in downstream algorithmic or scientific tasks. In this paper, we develop a recipe for estimating statistical
functionals of one or more nonparametric distributions based on the notion of influence functions.
Entropy estimators are used in applications ranging from independent components analysis [15],
intrinsic dimension estimation [4] and several signal processing applications [9]. Divergence estimators are useful in statistical tasks such as two-sample testing. Recently they have also gained
popularity as they are used to measure (dis)similarity between objects that are modeled as distributions, in what is known as the "machine learning on distributions" framework [5, 28]. Mutual information estimators have been used in learning tree-structured Markov random fields [19], feature
selection [25], clustering [18] and neuron classification [31]. In the parametric setting, conditional
divergence and conditional mutual information estimators are used for conditional two-sample testing or as building blocks for structure learning in graphical models. Nonparametric estimators for
these quantities could potentially allow us to generalise several of these algorithms to the nonparametric domain. Our approach gives sample-efficient estimators for all these quantities (and many
others), which often outperform the existing estimators both theoretically and empirically.
Our approach to estimating these functionals is based on post-hoc correction of a preliminary estimator using the Von Mises Expansion [7, 36]. This idea has been used before in the semiparametric
statistics literature [3, 30]. However, most studies are restricted to functionals of one distribution
and have focused on a "data-split" approach which splits the samples for density estimation and
functional estimation. While the data-split (DS) estimator is known to achieve the parametric convergence rate for sufficiently smooth densities [3, 14], in practical settings, as we show in our simulations, splitting the data results in poor empirical performance.
In this paper we introduce the method of influence function based nonparametric estimators to the
machine learning community and expand on this technique in several novel and important ways.
The main contributions of this paper are:
1. We propose a "leave-one-out" (LOO) technique to estimate functionals of a single distribution.
We prove that it has the same convergence rates as the DS estimator. However, the LOO estimator
has better empirical performance in our simulations since it makes efficient use of the data.
2. We extend both DS and LOO methods to functionals of multiple distributions and analyse their
convergence. Under sufficient smoothness both estimators achieve the parametric rate and the
DS estimator has a limiting normal distribution.
3. We prove a lower bound for estimating functionals of multiple distributions. We use this to
establish minimax optimality of the DS and LOO estimators under sufficient smoothness.
4. We use the approach to construct and implement estimators for various entropy, divergence, mutual information quantities and their conditional versions. A subset of these
functionals are listed in Table 1 in the Appendix. Our software is publicly available at
github.com/kirthevasank/if-estimators.
5. We compare our estimators against several other approaches in simulation. Despite the generality
of our approach, our estimators are competitive with and in many cases superior to existing
specialised approaches for specific functionals. We also demonstrate how our estimators can be
used in machine learning applications via an image clustering task.
Our focus on information theoretic quantities is due to their relevance in machine learning applications, rather than a limitation of our approach. Indeed our techniques apply to any smooth functional.
History: We provide a brief history of the post-hoc correction technique and influence functions.
We defer a detailed discussion of other approaches to estimating functionals to Section 5. To our
knowledge, the first paper using a post-hoc correction estimator was that of Bickel and Ritov [2].
The line of work following this paper analysed integral functionals of a single one-dimensional
density of the form ∫ ν(p) [2, 3, 11, 14]. A recent paper by Krishnamurthy et al. [12] also extends
this line to functionals of multiple densities, but only considers polynomial functionals of the form
∫ p^α q^β for densities p and q. All approaches above use data splitting. Our work contributes to
this line of research in two ways: we extend the technique to a more general class of functionals and
study the empirically superior LOO estimator.
A fundamental quantity in the design of our estimators is the influence function, which appears both
in robust and semiparametric statistics. Indeed, our work is inspired by that of Robins et al. [30]
and Emery et al. [6] who propose a (data-split) influence-function based estimator for functionals of
a single distribution. Their analysis for nonparametric problems relies on ideas from semiparametric
statistics: they define influence functions for parametric models and then analyse estimators by
looking at all parametric submodels through the true parameter.
2
Preliminaries
Let X be a compact metric space equipped with a measure μ, e.g. the Lebesgue measure. Let
F and G be measures over X that are absolutely continuous w.r.t μ. Let f, g ∈ L2(X) be the
Radon-Nikodym derivatives with respect to μ. We focus on estimating functionals of the form:

    T(F) = T(f) = φ(∫ ν(f) dμ)    or    T(F, G) = T(f, g) = φ(∫ ν(f, g) dμ),    (1)

where φ, ν are real-valued Lipschitz functions that are twice differentiable. Our framework permits
more general functionals (e.g. functionals based on the conditional densities), but we will focus on
this form for ease of exposition. To facilitate presentation of the main definitions, it is easiest to
work with functionals of one distribution T(F). Define M to be the set of all measures that are
absolutely continuous w.r.t μ, whose Radon-Nikodym derivatives belong to L2(X).
Central to our development is the Von Mises expansion (VME), which is the distributional analog
of the Taylor expansion. For this we introduce the Gâteaux derivative which imposes a notion of
differentiability in topological spaces. We then introduce the influence function.
Definition 1. Let P, H ∈ M and U : M → R be any functional. The map U′ : M → R
where U′(H; P) = ∂U(P + tH)/∂t |_{t=0} is called the Gâteaux derivative at P if the derivative exists and
is linear and continuous in H. U is Gâteaux differentiable at P if the Gâteaux derivative exists at P.
Definition 2. Let U be Gâteaux differentiable at P. A function ψ(·; P) : X → R which satisfies
U′(Q − P; P) = ∫ ψ(x; P) dQ(x) is the influence function of U w.r.t the distribution P.
By the Riesz representation theorem, the influence function exists uniquely since the domain of U is
a bijection of L2(X) and consequently a Hilbert space. The classical work of Fernholz [7] defines
the influence function in terms of the Gâteaux derivative by,

    ψ(x; P) = U′(δ_x − P; P) = ∂U((1 − t)P + tδ_x)/∂t |_{t=0},    (2)

where δ_x is the Dirac delta function at x. While our functionals are defined only on non-atomic
distributions, we can still use (2) to compute the influence function. The function computed this
way can be shown to satisfy Definition 2.
Based on the above, the first order VME is,

    U(Q) = U(P) + U′(Q − P; P) + R2(P, Q) = U(P) + ∫ ψ(x; P) dQ(x) + R2(P, Q),    (3)

where R2 is the second order remainder. Gâteaux differentiability alone will not be sufficient for
our purposes. In what follows, we will assign Q ← F and P ← F̂, where F, F̂ are the true and
estimated distributions. We would like to bound the remainder in terms of a distance between F and
F̂. For functionals T of the form (1), we restrict the domain to be only measures with continuous
densities. Then, we can control R2 using the L2 metric of the densities. This essentially means that
our functionals satisfy a stronger form of differentiability called Fréchet differentiability [7, 36] in
the L2 metric. Consequently, we can write all derivatives in terms of the densities, and the VME
reduces to a functional Taylor expansion on the densities (Lemmas 9, 10 in Appendix A):

    T(q) = T(p) + φ′(∫ ν(p)) ∫ (q − p) ν′(p) + R2(p, q)
         = T(p) + ∫ ψ(x; p) q(x) dμ(x) + O(‖p − q‖₂²).    (4)
These ideas generalise to functionals of multiple distributions and to settings where the functional
involves quantities other than the density. A functional T (P, Q) of two distributions has two
G?ateaux derivatives, Ti0 (?; P, Q) for i = 1, 2 formed by perturbing the ith argument with the other
fixed. The influence functions ?1 , ?2 satisfy, ?P1 , P2 ? M,
Z
?T (P1 + t(Q1 ? P1 ), P2 )
0
T1 (Q1 ? P1 ; P1 , P2 ) =
= ?1 (u; P1 , P2 )dQ1 (u),
(5)
?t
t=0
Z
?T (P1 , P2 + t(Q2 ? P2 ))
T20 (Q2 ? P2 ; P1 , P2 ) =
= ?2 (u; P1 , P2 )dQ2 (u).
?t
t=0
The VME can be written as,
Z
T (q1 , q2 ) = T (p1 , p2 ) +
Z
?1 (x; p1 , p2 )q1 (x)dx +
+ O(kp1 ? q1 k22 ) + O(kp2 ? q2 k22 ).
3
?2 (x; p1 , p2 )q2 (x)dx
(6)
3
Estimating Functionals
First consider estimating a functional of a single distribution, T(f) = φ(∫ ν(f) dμ), from samples
X_1^n ~ f. We wish to find an estimator T̂ with low expected mean squared error (MSE) E[(T̂ − T)²].
Using the VME (4), Emery et al. [6] and Robins et al. [30] suggest a natural estimator. If we use
half of the data X_1^{n/2} to construct an estimate f̂^(1) of the density f, then by (4):

    T(f) − T(f̂^(1)) = ∫ ψ(x; f̂^(1)) f(x) dμ + O(‖f − f̂^(1)‖₂²).

As the influence function does not depend on (the unknown) F, the first term on the right hand side
is simply an expectation of ψ(X; f̂^(1)) w.r.t F. We can use the second half of the data X_{n/2+1}^n to
estimate this expectation with its sample mean. This leads to the following preliminary estimator:

    T̂_DS^(1) = T(f̂^(1)) + (2/n) Σ_{i=n/2+1}^n ψ(X_i; f̂^(1)).    (7)
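To see the machinery on a familiar example, consider the Shannon entropy T(f) = −∫ f log f, i.e. φ(t) = t and ν(u) = −u log u. The following worked derivation is our own addition (the paper's own worked example lives in its Appendix F): applying (2), and then substituting into (7), the plug-in terms cancel and the corrected estimator reduces to a held-out log-likelihood.

    % Influence function of the Shannon entropy T(f) = -\int f \log f:
    \psi(x; f) = \frac{\partial}{\partial t} T\big((1-t)f + t\delta_x\big)\Big|_{t=0}
               = \nu'(f(x)) - \int \nu'(f)\, f \, d\mu
               = -\log f(x) - T(f),
    % so the preliminary estimator (7) simplifies:
    \hat{T}_{DS}^{(1)} = T(\hat{f}^{(1)}) + \frac{2}{n}\sum_{i=n/2+1}^{n}
        \Big(-\log \hat{f}^{(1)}(X_i) - T(\hat{f}^{(1)})\Big)
      = -\frac{2}{n}\sum_{i=n/2+1}^{n} \log \hat{f}^{(1)}(X_i).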
We can similarly construct an estimator T̂_DS^(2) by using X_{n/2+1}^n for density estimation and X_1^{n/2} for
averaging. Our final estimator is obtained via T̂_DS = (T̂_DS^(1) + T̂_DS^(2))/2. In what follows, we shall
refer to this estimator as the Data-Split (DS) estimator. The DS estimator for functionals of one
distribution has appeared before in the statistics literature [2, 3, 30].
The rate of convergence of this estimator is determined by the O(‖f − f̂^(1)‖₂²) error in the VME
and the n⁻¹ rate for estimating an expectation. Lower bounds from the literature [3, 14] confirm
minimax optimality of the DS estimator when f is sufficiently smooth. The data splitting trick is a
common approach [3, 12, 14] as the analysis is straightforward. While in theory DS estimators enjoy
good rates of convergence, data splitting is unsatisfying from a practical standpoint since using only
half the data each for estimation and averaging invariably decreases the accuracy.
To make more effective use of the sample, we propose a Leave-One-Out (LOO) version of the above
estimator,

    T̂_LOO = (1/n) Σ_{i=1}^n ( T(f̂_{−i}) + ψ(X_i; f̂_{−i}) ),    (8)

where f̂_{−i} is a density estimate using all the samples X_1^n except for X_i. We prove that the LOO
estimator achieves the same rate of convergence as the DS estimator but empirically performs much
better. Our analysis is specialised to the case where f̂_{−i} is a kernel density estimate (Section 4).
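In code, the one-distribution DS and LOO estimators for the Shannon entropy can be sketched as below. This is a minimal illustration of ours, not the authors' released package: it uses scipy's Gaussian KDE with its default bandwidth, whereas the paper's experiments use Legendre-polynomial kernels with cross-validated bandwidths.

    import numpy as np
    from scipy.stats import gaussian_kde

    def entropy_ds(samples):
        """Data-split von Mises estimator of T(f) = -int f log f.
        With psi(x; f) = -log f(x) - T(f), estimator (7) reduces to the
        average held-out negative log-likelihood; both splits are averaged."""
        n = len(samples)
        halves = (samples[: n // 2], samples[n // 2:])
        ests = []
        for fit, avg in (halves, halves[::-1]):
            f_hat = gaussian_kde(fit)          # density estimate on one half
            ests.append(np.mean(-np.log(f_hat(avg))))
        return float(np.mean(ests))

    def entropy_loo(samples):
        """Leave-one-out version (8): f_hat_{-i} omits the i-th sample."""
        n = len(samples)
        vals = np.empty(n)
        for i in range(n):
            f_hat = gaussian_kde(np.delete(samples, i))
            vals[i] = -np.log(f_hat(samples[i:i + 1])[0])
        return float(vals.mean())

    # Example: both should approach 0.5 * log(2 * pi * e) for N(0, 1) data.
    x = np.random.randn(500)
    print(entropy_ds(x), entropy_loo(x))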
We can extend this method to estimate functionals of two distributions. Say we have n i.i.d samples
X_1^n from f and m samples Y_1^m from g. Akin to the one distribution case, we propose the following
DS and LOO versions:

    T̂_DS^(1) = T(f̂^(1), ĝ^(1)) + (2/n) Σ_{i=n/2+1}^n ψ_f(X_i; f̂^(1), ĝ^(1))
               + (2/m) Σ_{j=m/2+1}^m ψ_g(Y_j; f̂^(1), ĝ^(1)),    (9)

    T̂_LOO = (1/max(n,m)) Σ_{i=1}^{max(n,m)} ( T(f̂_{−i}, ĝ_{−i}) + ψ_f(X_i; f̂_{−i}, ĝ_{−i}) + ψ_g(Y_i; f̂_{−i}, ĝ_{−i}) ).    (10)

Here, ĝ^(1), ĝ_{−i} are defined similarly to f̂^(1), f̂_{−i}. For the DS estimator, we swap the samples to
compute T̂_DS^(2) and average. For the LOO estimator, if n > m we cycle through the points Y_1^m until
we have summed over all X_1^n, or vice versa. T̂_LOO is asymmetric when n ≠ m. A seemingly natural
alternative would be to sum over all nm pairings of X_i's and Y_j's. However, this is computationally
more expensive. Moreover, a straightforward modification of our proof in Appendix D.2 shows that
both approaches converge at the same rate if n and m are of the same order.
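The two-distribution recipe is just as mechanical. For the KL divergence T(f, g) = ∫ f log(f/g), a short calculation from (2) (our addition, not spelled out in the text) gives ψ_f(x; f, g) = log(f(x)/g(x)) − T(f, g) and ψ_g(y; f, g) = 1 − f(y)/g(y); substituted into (9), the T(f̂, ĝ) term cancels and one split direction becomes the sketch below (again assuming default-bandwidth Gaussian KDEs purely for illustration).

    import numpy as np
    from scipy.stats import gaussian_kde

    def kl_ds_one_direction(xs, ys):
        """One split direction of the DS estimator (9) for KL(f || g),
        from xs ~ f and ys ~ g; the full estimator swaps halves and averages."""
        x1, x2 = xs[: len(xs) // 2], xs[len(xs) // 2:]
        y1, y2 = ys[: len(ys) // 2], ys[len(ys) // 2:]
        f_hat, g_hat = gaussian_kde(x1), gaussian_kde(y1)
        term_f = np.mean(np.log(f_hat(x2) / g_hat(x2)))  # T(f_hat, g_hat) + mean psi_f
        term_g = 1.0 - np.mean(f_hat(y2) / g_hat(y2))    # mean psi_g
        return term_f + term_g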
Examples: We demonstrate the generality of our framework by presenting estimators for several
entropies, divergences, mutual informations and their conditional versions in Table 1 (Appendix H).
For many functionals in the table, these are the first computationally efficient estimators proposed.
We hope this table will serve as a good reference for practitioners. For several functionals (e.g.
conditional and unconditional Rényi-α divergence, conditional Tsallis-α mutual information) the
estimators are not listed only because the expressions are too long to fit into the table. Our software
implements a total of 17 functionals which include all the estimators in the table. In Appendix F we
illustrate how to apply our framework to derive an estimator for any functional via an example.
As will be discussed in Section 5, when compared to other alternatives, our technique has several
favourable properties: the computational complexity of our method is O(n²) when compared to
O(n³) of other methods; for several functionals we do not require numeric integration; unlike most
other methods [28, 32], we do not require any tuning of hyperparameters.
4
Analysis
Some smoothness assumptions on the densities are warranted to make estimation tractable. We use
the Hölder class, which is now standard in the nonparametrics literature.
Definition 3. Let X ⊂ R^d be a compact space. For any r = (r1, …, rd), ri ∈ N, define |r| = Σ_i ri
and D^r = ∂^{|r|} / (∂x_1^{r1} … ∂x_d^{rd}). The Hölder class Σ(s, L) is the set of functions on L2(X) satisfying,

    |D^r f(x) − D^r f(y)| ≤ L ‖x − y‖^{s−|r|},    for all r s.t. |r| ≤ ⌊s⌋ and for all x, y ∈ X.

Moreover, define the Bounded Hölder Class Σ(s, L, B′, B) to be {f ∈ Σ(s, L) : B′ < f < B}.
Note that large s implies higher smoothness. Given n samples X_1^n from a d-dimensional density
f, the kernel density estimator (KDE) with bandwidth h is f̂(t) = (1/(nh^d)) Σ_{i=1}^n K((t − X_i)/h). Here
K : R^d → R is a smoothing kernel [35]. When f ∈ Σ(s, L), by selecting h ∈ Θ(n^{−1/(2s+d)}) the KDE
achieves the minimax rate of O_P(n^{−2s/(2s+d)}) in mean squared error. Further, if f is in the bounded
Hölder class Σ(s, L, B′, B) one can truncate the KDE from below at B′ and from above at B and
achieve the same convergence rate [3]. In our analysis, the density estimators f̂^(1), f̂_{−i}, ĝ^(1), ĝ_{−i} are
formed by either a KDE or a truncated KDE, and we will make use of these results.
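To make the rate statements concrete, the toy helper below (our own illustration; s is unknown in practice, so this is mainly useful for sanity checks in simulations where the smoothness is chosen by design) returns the rate-optimal bandwidth together with the KDE error exponent above and the functional-estimation exponent from Theorems 5-7 below:

    def kde_rates(n, s, d):
        """Rate-optimal KDE bandwidth and MSE rates for f in Sigma(s, L).
        The functional estimators achieve MSE O(n^{-4s/(2s+d)}) when
        s < d/2 and the parametric O(1/n) when s >= d/2."""
        h = n ** (-1.0 / (2 * s + d))
        kde_mse = n ** (-2.0 * s / (2 * s + d))
        functional_mse = n ** (-min(4.0 * s / (2 * s + d), 1.0))
        return h, kde_mse, functional_mse

    # e.g. d = 4, s = 2 sits exactly at the s = d/2 boundary:
    print(kde_rates(1000, 2, 4))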
We will also need the following regularity condition on the influence function. This is satisfied for
smooth functionals including those in Table 1. We demonstrate this in our example in Appendix F.
Assumption 4. For a functional T(f) of one distribution, the influence function ψ satisfies,

    E[(ψ(X; f′) − ψ(X; f))²] ∈ O(‖f − f′‖₂)    as ‖f − f′‖₂ → 0.

For a functional T(f, g) of two distributions, the influence functions ψ_f, ψ_g satisfy,

    E_f[(ψ_f(X; f′, g′) − ψ_f(X; f, g))²] ∈ O(‖f − f′‖₂ + ‖g − g′‖₂)    as ‖f − f′‖₂, ‖g − g′‖₂ → 0,
    E_g[(ψ_g(Y; f′, g′) − ψ_g(Y; f, g))²] ∈ O(‖f − f′‖₂ + ‖g − g′‖₂)    as ‖f − f′‖₂, ‖g − g′‖₂ → 0.
Under the above assumptions, Emery et al. [6], Robins et al. [30] show that the DS estimator on a
single distribution achieves MSE E[(T̂_DS − T(f))²] ∈ O(n^{−4s/(2s+d)} + n⁻¹) and further is asymptotically
normal when s > d/2. Their analysis in the semiparametric setting contains the nonparametric
setting as a special case. In Appendix B we review these results with a simpler self-contained
analysis that directly uses the VME and has more interpretable assumptions. An attractive property
of our proof is that it is agnostic to the density estimator used provided it achieves the correct rates.
For the LOO estimator (Equation (8)), we establish the following result.
Theorem 5 (Convergence of LOO Estimator for T(f)). Let f ∈ Σ(s, L, B, B′) and ψ satisfy
Assumption 4. Then, E[(T̂_LOO − T(f))²] is O(n^{−4s/(2s+d)}) when s < d/2 and O(n⁻¹) when s ≥ d/2.
The key technical challenge in analysing the LOO estimator (when compared to the DS estimator)
is in bounding the variance as there are several correlated terms in the summation. The bounded
difference inequality is a popular trick used in such settings, but this requires a supremum on the influence functions which leads to significantly worse rates. Instead we use the Efron-Stein inequality
which provides an integrated version of bounded differences that can recover the correct rate when
coupled with Assumption 4. Our proof is contingent on the use of the KDE as the density estimator.
While our empirical studies indicate that T̂_LOO's limiting distribution is normal (Fig 2(c)), the proof
seems challenging due to the correlation between terms in the summation. We conjecture that T̂_LOO
is indeed asymptotically normal but for now leave it to future work.
We reiterate that while the convergence rates are the same for both DS and LOO estimators, the data
splitting degrades empirical performance of T̂_DS as we show in our simulations.
Now we turn our attention to functionals of two distributions. When analysing asymptotics we will
assume that as n, m → ∞, n/(n + m) → ζ ∈ (0, 1). Denote N = n + m. For the DS estimator (9)
we generalise our analysis for one distribution to establish the theorem below.
Theorem 6 (Convergence/Asymptotic Normality of DS Estimator for T(f, g)). Let f, g ∈
Σ(s, L, B, B′) and ψ_f, ψ_g satisfy Assumption 4. Then, E[(T̂_DS − T(f, g))²] is O(n^{−4s/(2s+d)} + m^{−4s/(2s+d)})
when s < d/2 and O(n⁻¹ + m⁻¹) when s ≥ d/2. Further, when s > d/2 and when ψ_f, ψ_g ≠ 0,
T̂_DS is asymptotically normal,

    √N (T̂_DS − T(f, g)) →_D N( 0, (1/ζ) V_f[ψ_f(X; f, g)] + (1/(1 − ζ)) V_g[ψ_g(Y; f, g)] ).    (11)

The convergence rate is analogous to the one distribution case with the estimator achieving the
parametric rate under similar smoothness conditions. The asymptotic normality result allows us to
construct asymptotic confidence intervals for the functional. Even though the asymptotic variance
of the influence function is not known, by Slutzky's theorem any consistent estimate of the variance
gives a valid asymptotic confidence interval. In fact, we can use an influence function based estimator for the asymptotic variance, since it is also a differentiable functional of the densities. We
demonstrate this in our example in Appendix F.
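As an example of such an interval, the sketch below (our own; the paper's Appendix F instead derives an influence-function estimator of the variance) plugs the empirical variances of the influence-function evaluations, which the estimator computes anyway, into the limit (11):

    import numpy as np
    from scipy.stats import norm

    def asymptotic_ci(t_hat, psi_f_vals, psi_g_vals, alpha=0.05):
        """Confidence interval from (11): sqrt(N)(T_hat - T) is asymptotically
        N(0, V_f[psi_f]/zeta + V_g[psi_g]/(1 - zeta)); any consistent variance
        estimate is valid by Slutzky's theorem."""
        n, m = len(psi_f_vals), len(psi_g_vals)
        N, zeta = n + m, n / (n + m)
        var = np.var(psi_f_vals) / zeta + np.var(psi_g_vals) / (1.0 - zeta)
        half_width = norm.ppf(1.0 - alpha / 2.0) * np.sqrt(var / N)
        return t_hat - half_width, t_hat + half_width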
The condition ψ_f, ψ_g ≠ 0 is somewhat technical. When both ψ_f and ψ_g are zero, the first order
term vanishes and the estimator converges very fast (at rate 1/n²). However, the asymptotic behavior of the estimator is unclear. While this degeneracy occurs only on a meagre set, it does arise for
important choices, such as the null hypothesis f = g in two-sample testing problems.
Finally, for the LOO estimator (10) on two distributions we have the following result. Convergence
is analogous to the one distribution setting and the parametric rate is achieved when s > d/2.
Theorem 7 (Convergence of LOO Estimator for T(f, g)). Let f, g ∈ Σ(s, L, B, B′) and ψ_f, ψ_g
satisfy Assumption 4. Then, E[(T̂_LOO − T(f, g))²] is O(n^{−4s/(2s+d)} + m^{−4s/(2s+d)}) when s < d/2 and
O(n⁻¹ + m⁻¹) when s ≥ d/2.
For many functionals, a Hölderian assumption (Σ(s, L)) alone is sufficient to guarantee the rates in
Theorems 5, 6 and 7. However, for some functionals (such as the α-divergences) we require f̂, ĝ, f, g
to be bounded above and below. Existing results [3, 12] demonstrate that estimating such quantities
is difficult without this assumption.
Now we turn our attention to the question of statistical difficulty. Via lower bounds given by Birgé
and Massart [3] and Laurent [14] we know that the DS and LOO estimators are minimax optimal
when s > d/2 for functionals of one distribution. In the following theorem, we present a lower
bound for estimating functionals of two distributions.
Theorem 8 (Lower Bound for T(f, g)). Let f, g ∈ Σ(s, L) and T̂ be any estimator for T(f, g).
Define τ = min{8s/(4s + d), 1}. Then there exists a strictly positive constant c such that,

    lim inf_{n→∞} inf_{T̂} sup_{f,g ∈ Σ(s,L)} E[(T̂ − T(f, g))²] ≥ c (n^{−τ} + m^{−τ}).
Our proof, given in Appendix E, is based on LeCam's method [35] and generalises the analysis of
Birgé and Massart [3] for functionals of one distribution. This establishes minimax optimality of the
DS/LOO estimators for functionals of two distributions when s ≥ d/2. However, when s < d/2
there is a gap between our upper and lower bounds. It is natural to ask if it is possible to improve
on our rates in this regime. A series of works [3, 11, 14] shows that, for integral functionals of one
distribution, one can achieve the n⁻¹ rate when s > d/4 by estimating the second order term in the
functional Taylor expansion. This second order correction was also done for polynomial functionals
of two distributions with similar statistical gains [12]. While we believe this is possible here, these
estimators are conceptually complicated and computationally expensive, requiring O(n³ + m³)
running time compared to the O(n² + m²) running time for our estimator. The first order estimator
has a favorable balance between statistical and computational efficiency. Further, not much is known
about the limiting distribution of second order estimators.
[Figure 1: six log-log panels plotting the error |T̂ − T| against the number of samples n, for
Shannon entropy (1D and 2D), KL divergence, Hellinger divergence, Rényi-0.75 divergence and
Tsallis-0.75 divergence, comparing the Plug-in, DS, LOO, kNN, KDP, Voronoi and Vasicek-KDE
estimators.]
Figure 1: Comparison of DS/LOO estimators against alternatives on different functionals. The y-axis is the
error |T̂ − T(f, g)| and the x-axis is the number of samples. All curves were produced by averaging over 50
experiments. Discretisation in hyperparameter selection may explain some of the unsmooth curves.
5
Comparison with Other Approaches
Estimation of statistical functionals under nonparametric assumptions has received considerable attention over the last few decades. A large body of work has focused on estimating the Shannon
entropy; Beirlant et al. [1] give a nice review of results and techniques. More recent work in the
single-distribution setting includes estimation of Rényi and Tsallis entropies [17, 24]. There are also
several papers extending some of these techniques to divergence estimation [10, 12, 26, 27, 37].
Many of the existing methods can be categorised as plug-in methods: they are based on estimating
the densities either via a KDE or using k-Nearest Neighbors (k-NN) and evaluating the functional
on these estimates. Plug-in methods are conceptually simple but unfortunately suffer several drawbacks. First, they typically have a worse convergence rate than our approach, achieving the parametric
rate only when s ≥ d as opposed to s ≥ d/2 [19, 32]. Secondly, using either the KDE or k-NN,
obtaining the best rates for plug-in methods requires undersmoothing the density estimate and we
are not aware of principled approaches for selecting this smoothing parameter. In contrast, the
bandwidth used in our estimators is the optimal bandwidth for density estimation so we can select
it using a number of approaches, e.g. cross validation. This is convenient from a practitioner's perspective as the bandwidth can be selected automatically, a convenience that other estimators do not
enjoy. Thirdly, plug-in methods based on the KDE always require computationally burdensome
numeric integration. In our approach, numeric integration can be avoided for many functionals of
interest (see Table 1).
Another line of work focuses more specifically on estimating f-divergences. Nguyen et al. [22]
estimate f-divergences by solving a convex program and analyse the method when the likelihood
ratio of the densities belongs to an RKHS. Comparing the theoretical results is not straightforward
as it is not clear how to port the RKHS assumption to our setting. Further, the size of the convex
program increases with the sample size, which is problematic for large samples. Moon and Hero [21]
use a weighted ensemble estimator for f-divergences. They establish asymptotic normality and the
parametric convergence rate only when s ≥ d, which is a stronger smoothness assumption than is
required by our technique. Both these works only consider f-divergences, whereas our method has
wider applicability and includes f-divergences as a special case.
6
Experiments
We compare the estimators derived using our methods on a series of synthetic examples. We compare against the methods in [8, 20, 23, 26–29, 33].

[Figure 2: panel (a) is a log-log plot of the error |T̂ − T| against the number of samples n for the
DS and LOO estimators on conditional Tsallis-0.75 divergence estimation; panels (b) and (c) are QQ
plots of the standardised DS and LOO estimates against the quantiles of N(0, 1).]
Figure 2: Fig (a): Comparison of the LOO vs DS estimator on estimating the conditional Tsallis divergence
in 4 dimensions. Note that the plug-in estimator is intractable due to numerical integration. There are no other
known estimators for the conditional Tsallis divergence. Figs (b), (c): QQ plots obtained using 4000 samples
for Hellinger divergence estimation in 4 dimensions using the DS and LOO estimators respectively.
Software for the estimators was obtained either
directly from the papers or from Szabó [34]. For the DS/LOO estimators, we estimate the density
via a KDE with the smoothing kernels constructed using Legendre polynomials [35]. In both cases
and for the plug-in estimator we choose the bandwidth by performing 5-fold cross validation. The
integration for the plug-in estimator is approximated numerically.
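A minimal version of such a bandwidth-selection loop is sketched below. It is our own illustration and makes two simplifying assumptions: held-out log-likelihood is used as the CV score (the paper does not spell out its objective), and scipy's bw_method scalar is a multiplicative bandwidth factor rather than an absolute bandwidth.

    import numpy as np
    from scipy.stats import gaussian_kde

    def cv_bandwidth(samples, candidates, k=5, seed=0):
        """Pick a KDE bandwidth factor by k-fold cross-validated log-likelihood."""
        rng = np.random.default_rng(seed)
        folds = np.array_split(rng.permutation(samples), k)
        scores = []
        for bw in candidates:
            ll = 0.0
            for i in range(k):
                train = np.concatenate([f for j, f in enumerate(folds) if j != i])
                ll += np.sum(np.log(gaussian_kde(train, bw_method=bw)(folds[i])))
            scores.append(ll)
        return candidates[int(np.argmax(scores))]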
We test the estimators on a series of synthetic datasets in 1–4 dimensions. The specifics of the
densities used in the examples and the methods compared against are given in Appendix G. The results are
shown in Figures 1 and 2. We make the following observations. In most cases the LOO estimator
performs best. The DS estimator approaches the LOO estimator when there are many samples but
is generally inferior to the LOO estimator with few samples. This, as we have explained before, is
because data splitting does not make efficient use of the data. The k-NN estimator for divergences
[28] requires choosing a k. For this estimator, we used the default setting for k given in the software.
As performance is sensitive to the choice of k, it performs well in some cases but poorly in other
cases. We reiterate that the hyper-parameter of our estimator (bandwidth of the kernel) can be
selected automatically using cross validation.
Next, we test the DS and LOO estimators for asymptotic normality on a 4-dimensional Hellinger
divergence estimation problem. We use 4000 samples for estimation. We repeat this experiment 200
times and compare the empirical asymptotic distribution (i.e. the √4000 (T̂ − T(f, g))/Ŝ values,
where Ŝ is the estimated asymptotic variance) to a N(0, 1) distribution on a QQ plot. The results in
Figure 2 suggest that both estimators are asymptotically normal.
Image clustering: We demonstrate the use of our nonparametric divergence estimators in an image
clustering task on the ETH-80 dataset [16]. Using our Hellinger divergence estimator we achieved an
accuracy of 92.47% whereas a naive spectral clustering approach achieved only 70.18%. When we
used a k-NN estimator for the Hellinger divergence [28] we achieved 90.04%, which attests to the
superiority of our method. Since this is not the main focus of this work we defer this to Appendix G.
7
Conclusion
We generalise existing results in Von Mises estimation by proposing an empirically superior LOO
technique for estimating functionals and extending the framework to functionals of two distributions.
We also prove a lower bound for the latter setting. We demonstrate the practical utility of our
technique via comparisons against other alternatives and an image clustering application. An open
problem arising out of our work is to derive the limiting distribution of the LOO estimator.
Acknowledgements
This work is supported in part by NSF Big Data grant IIS-1247658 and DOE grant DE-SC0011114.
References
[1] Jan Beirlant, Edward J. Dudewicz, László Györfi, and Edward C. van der Meulen. Nonparametric entropy estimation: An overview. International Journal of Mathematical and Statistical Sciences, 1997.
[2] Peter J. Bickel and Ya'acov Ritov. Estimating integrated squared density derivatives: sharp best order of convergence estimates. Sankhyā: The Indian Journal of Statistics, 1988.
[3] Lucien Birgé and Pascal Massart. Estimation of integral functionals of a density. Ann. of Stat., 1995.
[4] Kevin M. Carter, Raviv Raich, and Alfred O. Hero. On local intrinsic dimension estimation and its applications. IEEE Transactions on Signal Processing, 2010.
[5] Inderjit S. Dhillon, Subramanyam Mallela, and Rahul Kumar. A Divisive Information Theoretic Feature Clustering Algorithm for Text Classification. J. Mach. Learn. Res., 2003.
[6] M. Emery, A. Nemirovski, and D. Voiculescu. Lectures on Prob. Theory and Stat. Springer, 1998.
[7] Luisa Fernholz. Von Mises calculus for statistical functionals. Lecture Notes in Statistics. Springer, 1983.
[8] Mohammed Nawaz Goria, Nikolai N. Leonenko, Victor V. Mergel, and Pier Luigi Novi Inverardi. A new class of random vector entropy estimators and its applications. Nonparametric Statistics, 2005.
[9] Hero, Bing Ma, O. J. J. Michel, and J. Gorman. Applications of entropic spanning graphs. IEEE Signal Processing Magazine, 19, 2002.
[10] David Källberg and Oleg Seleznjev. Estimation of entropy-type integral functionals. arXiv, 2012.
[11] Gérard Kerkyacharian and Dominique Picard. Estimating nonquadratic functionals of a density using Haar wavelets. Ann. of Stat., 1996.
[12] Akshay Krishnamurthy, Kirthevasan Kandasamy, Barnabás Póczos, and Larry Wasserman. Nonparametric Estimation of Rényi Divergence and Friends. In ICML, 2014.
[13] Akshay Krishnamurthy, Kirthevasan Kandasamy, Barnabás Póczos, and Larry Wasserman. On Estimating L₂² Divergence. In Artificial Intelligence and Statistics, 2015.
[14] Béatrice Laurent. Efficient estimation of integral functionals of a density. Ann. of Stat., 1996.
[15] Erik Learned-Miller and John Fisher. ICA using spacings estimates of entropy. J. Mach. Learn. Res., 2003.
[16] Bastian Leibe and Bernt Schiele. Analyzing Appearance and Contour Based Methods for Object Categorization. In CVPR, 2003.
[17] Nikolai Leonenko and Oleg Seleznjev. Statistical inference for the epsilon-entropy and the quadratic Rényi entropy. Journal of Multivariate Analysis, 2010.
[18] Jeremy Lewi, Robert Butera, and Liam Paninski. Real-time adaptive information-theoretic optimization of neurophysiology experiments. In NIPS, 2006.
[19] Han Liu, Larry Wasserman, and John D. Lafferty. Exponential concentration for mutual information estimation with application to forests. In NIPS, 2012.
[20] Erik G. Miller. A new class of Entropy Estimators for Multi-dimensional Densities. In ICASSP, 2003.
[21] Kevin Moon and Alfred Hero. Multivariate f-divergence Estimation With Confidence. In NIPS, 2014.
[22] XuanLong Nguyen, Martin J. Wainwright, and Michael I. Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 2010.
[23] Havva Alizadeh Noughabi and Reza Alizadeh Noughabi. On the Entropy Estimators. Journal of Statistical Computation and Simulation, 2013.
[24] Dávid Pál, Barnabás Póczos, and Csaba Szepesvári. Estimation of Rényi Entropy and Mutual Information Based on Generalized Nearest-Neighbor Graphs. In NIPS, 2010.
[25] Hanchuan Peng, Fulmi Long, and Chris Ding. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. IEEE PAMI, 2005.
[26] Fernando Pérez-Cruz. KL divergence estimation of continuous distributions. In IEEE ISIT, 2008.
[27] Barnabás Póczos and Jeff Schneider. On the estimation of alpha-divergences. In AISTATS, 2011.
[28] Barnabás Póczos, Liang Xiong, and Jeff G. Schneider. Nonparametric Divergence Estimation with Applications to Machine Learning on Distributions. In UAI, 2011.
[29] David Ramírez, Javier Vía, Ignacio Santamaría, and Pedro Crespo. Entropy and Kullback-Leibler Divergence Estimation based on Szegő's Theorem. In EUSIPCO, 2009.
[30] James Robins, Lingling Li, Eric Tchetgen, and Aad W. van der Vaart. Quadratic semiparametric Von Mises Calculus. Metrika, 2009.
[31] Elad Schneidman, William Bialek, and Michael J. Berry II. An Information Theoretic Approach to the Functional Classification of Neurons. In NIPS, 2002.
[32] Shashank Singh and Barnabás Póczos. Exponential Concentration of a Density Functional Estimator. In NIPS, 2014.
[33] Dan Stowell and Mark D. Plumbley. Fast Multidimensional Entropy Estimation by k-d Partitioning. IEEE Signal Process. Lett., 2009.
[34] Zoltán Szabó. Information Theoretical Estimators Toolbox. J. Mach. Learn. Res., 2014.
[35] Alexandre B. Tsybakov. Introduction to Nonparametric Estimation. Springer, 2008.
[36] Aad W. van der Vaart. Asymptotic Statistics. Cambridge University Press, 1998.
[37] Qing Wang, Sanjeev R. Kulkarni, and Sergio Verdú. Divergence estimation for multidimensional densities via k-nearest-neighbor distances. IEEE Transactions on Information Theory, 2009.
5,427 | 5,912 | Bounding errors of Expectation-Propagation
Simon Barthelmé
CNRS, Gipsa-lab
simon.barthelme@gipsa-lab.fr
Guillaume Dehaene
University of Geneva
guillaume.dehaene@gmail.com
Abstract
Expectation Propagation is a very popular algorithm for variational inference, but
comes with few theoretical guarantees. In this article, we prove that the approximation errors made by EP can be bounded. Our bounds have an asymptotic interpretation in the number n of datapoints, which allows us to study EP's convergence with respect to the true posterior. In particular, we show that EP converges
at a rate of O(n⁻²) for the mean, up to an order of magnitude faster than the traditional Gaussian approximation at the mode. We also give similar asymptotic expansions for moments of order 2 to 4, as well as excess Kullback-Leibler cost (defined as the additional KL cost incurred by using EP rather than the ideal Gaussian
approximation). All these expansions highlight the superior convergence properties of EP. Our approach for deriving those results is likely applicable to many
similar approximate inference methods. In addition, we introduce bounds on the
moments of log-concave distributions that may be of independent interest.
Introduction
Expectation Propagation (EP, [1]) is an efficient approximate inference algorithm that is known to give
good approximations, to the point of being almost exact in certain applications [2, 3]. It is surprising
that, while the method is empirically very successful, there are few theoretical guarantees on its
behavior. Indeed, most work on EP has focused on efficiently implementing the method in various
settings. Theoretical work on EP mostly represents new justifications of the method which, while
they offer intuitive insight, do not give mathematical proofs that the method behaves as expected.
One recent breakthrough is due to Dehaene and Barthelmé [4] who prove that, in the large data limit, the EP iteration behaves like a Newton search and its approximation is asymptotically exact.
However, it remains unclear how good we can expect the approximation to be when we have only
finite data. In this article, we offer a characterization of the quality of the EP approximation in terms
of the worst-case distance between the true and approximate mean and variance.
When approximating a probability distribution p(x) that is, for some reason, close to being Gaussian,
a natural approximation to use is the Gaussian with mean equal to the mode (or argmax) of p(x) and
with variance the inverse log-Hessian at the mode. We call it the Canonical Gaussian Approximation
(CGA), and its use is usually justified by appealing to the Bernstein-von Mises theorem, which
shows that, in the limit of a large number of independent observations, posterior distributions tend
towards their CGA. This powerful justification, and the ease with which the CGA is computed
(finding the mode can be done using Newton methods) makes it a good reference point for any
method like EP which aims to offer a better Gaussian approximation at a higher computational cost.
In section 1, we introduce the CGA and the EP approximation. In section 2, we give our theoretical
results bounding the quality of EP approximations.
1
Background
In this section, we present the CGA and give a short introduction to the EP algorithm. In-depth
descriptions of EP can be found in Minka [5], Seeger [6], Bishop [7], Raymond et al. [8].
1.1
The Canonical Gaussian Approximation
What we call here the CGA is perhaps the most common approximate inference method in the
machine learning cookbook. It is often called the "Laplace approximation", but this is a misnomer:
the Laplace approximation refers to approximating the integral ∫p from the integral of the CGA.
The reason the CGA is so often used is its compelling simplicity: given a target distribution p(x) =
exp(−φ(x)), we find the mode x* and compute the second derivatives of φ at x*:

    x* = argmin φ(x)
    β* = φ″(x*)

to form a Gaussian approximation q(x) = N(x | x*, 1/β*) ≈ p(x). The CGA is effectively just
a second-order Taylor expansion, and its use is justified by the Bernstein-von Mises theorem [9],
which essentially says that the CGA becomes exact in the large-data (large-n) asymptotic limit.
Roughly, if p_n(x) ∝ ∏_{i=1}^n p(y_i | x) p_0(x), where y_1 … y_n represent independent datapoints, then
lim_{n→∞} p_n(x) = N(x | x*_n, 1/β*_n) in total variation.
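Computing the CGA therefore amounts to a Newton search for the mode plus one curvature evaluation. The one-dimensional sketch below is our own illustration; it assumes φ is smooth and unimodal so that the bare Newton iteration converges (a robust implementation would add a line search or damping).

    def cga(grad_phi, hess_phi, x0, n_iter=100, tol=1e-10):
        """Canonical Gaussian Approximation of p(x) = exp(-phi(x)) in 1D.
        Newton search for x* = argmin phi, then variance 1 / phi''(x*)."""
        x = x0
        for _ in range(n_iter):
            step = grad_phi(x) / hess_phi(x)
            x -= step
            if abs(step) < tol:
                break
        return x, 1.0 / hess_phi(x)  # mean (the mode) and variance of the CGA

    # Example: phi(x) = cosh(x) has its mode at 0 with phi''(0) = 1.
    import math
    print(cga(math.sinh, math.cosh, x0=1.0))  # approximately (0.0, 1.0)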
1.2
CGA vs Gaussian EP
Gaussian EP, as its name indicates, provides an alternative way of computing a Gaussian approximation to a target distribution. There is broad overlap between the problems where EP can be applied
and the problems where the CGA can be used, with EP coming at a higher cost. Our contribution is
to show formally that the higher computational cost for EP may well be worth bearing, as EP approximations can outperform CGAs by an order of magnitude. To be specific, we focus on the moment
estimates (mean and covariance) computed by EP and CGA, and derive bounds on their distance to
the true mean and variance of the target distribution. Our bounds have an asymptotic interpretation,
and under that interpretation we show for example that the mean returned by EP is within an order
of O(n⁻²) of the true mean, where n is the number of datapoints. For the CGA, which uses the
mode as an estimate of the mean, we exhibit a O(n⁻¹) upper bound, and we compute the error term
responsible for this O(n⁻¹) behavior. This enables us to show that, in the situations in which this
error is indeed O(n⁻¹), EP is better than the CGA.
1.3
The EP algorithm
We consider the task of approximating a probability distribution over a random variable X: p(x), which we call the target distribution. X can be high-dimensional, but for simplicity, we focus on the one-dimensional case. One important hypothesis that makes EP feasible is that p(x) factorizes into n simple factor terms:
$$p(x) = \prod_i f_i(x)$$
EP proposes to approximate each $f_i(x)$ (usually referred to as sites) by a Gaussian function $q_i(x)$ (referred to as the site-approximations). It is convenient to use the parametrization of Gaussians in terms of natural parameters:
$$q_i(x \mid r_i, \beta_i) \propto \exp\left(r_i x - \beta_i \frac{x^2}{2}\right)$$
which makes some of the further computations easier to understand. Note that EP could also be
used with other exponential approximating families. These Gaussian approximations are computed
iteratively. Starting from a current approximation $q_i^t(x \mid r_i^t, \beta_i^t)$, we select a site for update with
index i. We then:
- Compute the cavity distribution $q_{-i}^t(x) \propto \prod_{j \neq i} q_j^t(x)$. This is very easy in natural parameters:
$$q_{-i}(x) \propto \exp\left(\Big(\sum_{j \neq i} r_j^t\Big) x - \Big(\sum_{j \neq i} \beta_j^t\Big) \frac{x^2}{2}\right)$$
- Compute the hybrid distribution $h_i^t(x) \propto q_{-i}^t(x) f_i(x)$ and its mean and variance
- Compute the Gaussian which minimizes the Kullback-Leibler divergence to the hybrid, ie the Gaussian with same mean and variance:
$$\mathbb{P}(h_i^t) = \arg\min_q KL\left(h_i^t \mid q\right)$$
- Finally, update the approximation of $f_i$:
$$q_i^{t+1} = \frac{\mathbb{P}(h_i^t)}{q_{-i}^t}$$
where the division is simply computed as a subtraction between natural parameters
We iterate these operations until a fixed point is reached, at which point we return a Gaussian approximation of $p(x) \approx \prod_i q_i(x)$.
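The loop above is short enough to spell out in full. The following is a minimal one-dimensional Gaussian EP sketch (ours, not the authors' code): hybrid moments are computed by brute-force quadrature on a grid, and `sites` is a hypothetical list of positive functions $f_i$ accepting numpy arrays.

```python
import numpy as np

def ep_1d(sites, grid, n_sweeps=20):
    """Gaussian EP on p(x) = prod_i f_i(x); grid is a dense 1-D grid."""
    n = len(sites)
    r, beta = np.zeros(n), np.full(n, 1e-3)     # site approximations q_i
    for _ in range(n_sweeps):
        for i in range(n):
            r_cav = r.sum() - r[i]               # cavity q_{-i}: drop site i
            b_cav = beta.sum() - beta[i]
            # hybrid h_i(x) ~ q_{-i}(x) f_i(x), normalized on the grid
            h = np.exp(r_cav * grid - 0.5 * b_cav * grid ** 2) * sites[i](grid)
            h /= h.sum()
            mu = (h * grid).sum()                # hybrid mean
            v = (h * (grid - mu) ** 2).sum()     # hybrid variance
            # moment-match, then divide out the cavity: a subtraction
            # in natural parameters
            r[i], beta[i] = mu / v - r_cav, 1.0 / v - b_cav
    return r.sum() / beta.sum(), 1.0 / beta.sum()  # (mu_EP, v_EP)
```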
1.4 The "EP-approximation"
In this work, we will characterize the quality of an EP approximation of p(x). We define this to be any fixed point of the iteration presented in section 1.3, which could all be returned by the algorithm. It is known that EP will have at least one fixed-point [1], but it is unknown under which conditions the fixed-point is unique. We conjecture that, when all sites are log-concave (one of our hypotheses to control the behavior of EP), it is in fact unique, but we can't offer a proof yet. If p(x) isn't log-concave, it is straightforward to construct examples in which EP has multiple fixed-points. These open questions won't matter for our result because we will show that all fixed-points of EP (should there be more than one) produce a good approximation of p(x).
Fixed points of EP have a very interesting characterization. If we note $q_i^*$ the site-approximations at a given fixed-point, $h_i^*$ the corresponding hybrid distributions, and $q^*$ the global approximation of p(x), then the mean and variance of all the hybrids and $q^*$ is the same¹. As we will show in section 2.2, this leads to a very tight bound on the possible positions of these fixed-points.
1.5 Notation
We will use repeatedly the following notation. $p(x) = \prod_i f_i(x)$ is the target distribution we want to approximate. The sites $f_i(x)$ are each approximated by a Gaussian site-approximation $q_i(x)$, yielding an approximation to $p(x) \approx q(x) = \prod_i q_i(x)$. The hybrids $h_i(x)$ interpolate between q(x) and p(x) by replacing one site approximation $q_i(x)$ with the true site $f_i(x)$.
Our results make heavy use of the log-functions of the sites and the target distribution. We note $\phi_i(x) = -\log(f_i(x))$ and $\phi_p(x) = -\log(p(x)) = \sum_i \phi_i(x)$. We will introduce in section 2 hypotheses on these functions. Parameter $\beta_m$ controls their minimum curvature and parameters $K_d$ control the maximum dth derivative.
We will always consider fixed-points of EP, where the mean and variance under all hybrids and q(x) are identical. We will note these common values $\mu_{EP}$ and $v_{EP}$. We will also refer to the third and fourth centered moments of the hybrids, denoted by $m_3^i$, $m_4^i$, and to the fourth moment of q(x), which is simply $3v_{EP}^2$. We will show how all these moments are related to the true moments of the target distribution, which we will note $\mu$, $v$ for the mean and variance, and $m_3^p$, $m_4^p$ for the third and fourth moments. We also investigate the quality of the CGA: $\mu - x^*$ and $v - [\phi_p''(x^*)]^{-1}$, where $x^*$ is the mode of p(x).
¹ For non-Gaussian approximations, the expected values of all sufficient statistics of the exponential family are equal.
2 Results
In this section, we will give tight bounds on the quality of the EP approximation (ie: of fixed-points
of the EP iteration). Our results lean on the properties of log-concave distributions [10]. In section
2.1, we introduce new bounds on the moments of log-concave distributions. The bounds show that
those distributions are in a certain sense close to being Gaussian. We then apply these results to
study fixed points of EP, where they enable us to compute bounds on the distance between the mean
and variance of the true distribution p(x) and of the approximation given by EP, which we do in
section 2.2.
Our bounds require us to assume that all sites $f_i(x)$ are $\beta_m$-strongly log-concave with slowly-changing log-function. That is, if we note $\phi_i(x) = -\log(f_i(x))$:
$$\forall i\ \forall x \quad \phi_i''(x) \geq \beta_m > 0 \qquad (1)$$
$$\forall i\ \forall d \in [3, 4, 5, 6] \quad \left|\phi_i^{(d)}(x)\right| \leq K_d \qquad (2)$$
The target distribution p(x) then inherits those properties from the sites. Noting $\phi_p(x) = -\log(p(x)) = \sum_i \phi_i(x)$, then $\phi_p$ is $n\beta_m$-strongly log-concave and its higher derivatives are bounded:
$$\forall x, \quad \phi_p''(x) \geq n\beta_m \qquad (3)$$
$$\forall d \in [3, 4, 5, 6] \quad \left|\phi_p^{(d)}(x)\right| \leq nK_d \qquad (4)$$
A natural concern here is whether or not our conditions on the sites are of practical interest. Indeed,
strongly-log-concave likelihoods are rare. We picked these strong regularity conditions because they
make the proofs relatively tractable (although still technical and long). The proof technique carries
over to more complicated, but more realistic, cases. One such interesting generalization consists
of the case in which p(x) and all hybrids at the fixed-point are log-concave with slowly changing
log-functions (with possibly differing constants). In such a case, while the math becomes more
unwieldy, similar bounds as ours can be found, greatly extending the scope of our results. The
results we present here should thus be understood as a stepping stone and not as the final word on
the quality of the EP approximation: we have focused on providing a rigorous but extensible proof.
2.1 Log-concave distributions are strongly constrained
Log-concave distributions have many interesting properties. They are of course unimodal, and the
family is closed under both marginalization and multiplication. For our purposes however, the most
important property is a result due to Brascamp and Lieb [11], which bounds their even moments. We
give here an extension in the case of log-concave distributions with slowly changing log-functions
(as quantified by eq. (2)). Our results show that these are close to being Gaussian.
The Brascamp-Lieb inequality states that, if $LC(x) \propto \exp(-\phi(x))$ is $\beta_m$-strongly log-concave (ie: $\phi''(x) \geq \beta_m$), then centered even moments of LC are bounded by the corresponding moments of a Gaussian with variance $\beta_m^{-1}$. If we note these moments $m_{2k}$ and $\mu_{LC} = E_{LC}(x)$ the mean of LC:
$$m_{2k} = E_{LC}\left((x - \mu_{LC})^{2k}\right) \leq (2k-1)!!\, \beta_m^{-k} \qquad (5)$$
where $(2k-1)!!$ is the double factorial: the product of all odd terms from 1 to $2k-1$: $3!! = 3$, $5!! = 15$, $7!! = 105$, etc. This result can be understood as stating that a log-concave distribution must have a small variance, but doesn't generally need to be close to a Gaussian.
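As a quick numerical sanity check (ours, not from the paper), the bound (5) is attained with equality by the Gaussian with variance $\beta_m^{-1}$; below, its empirical sixth moment (k = 3) is compared against $(2k-1)!!\,\beta_m^{-k}$.

```python
import numpy as np

beta_m, k = 2.0, 3
x = np.random.default_rng(0).normal(0.0, beta_m ** -0.5, size=10 ** 7)
double_fact = np.prod(np.arange(2 * k - 1, 0, -2))  # (2k-1)!! = 5*3*1 = 15
print(np.mean(x ** (2 * k)), double_fact * beta_m ** -k)  # both ~ 15/8
```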
With our hypothesis of slowly changing log-functions, we were able to improve on this result. Our improved results include a bound on odd moments, as well as first order expansions of even moments (eqs. (6)-(9)).
Our extension to the Brascamp-Lieb inequality is as follows. If $\phi$ is slowly changing in the sense that some of its higher derivatives are bounded, as per eq. (2), then we can give a bound on $\phi'(\mu_{LC})$ (showing that $\mu_{LC}$ is close to the mode $x^*$ of LC, see eqs. (10) to (13)) and on $m_3$ (showing that LC is mostly symmetric):
$$\left|\phi'(\mu_{LC})\right| \leq \frac{K_3}{2\beta_m} \qquad (6)$$
$$|m_3| \leq \frac{2K_3}{\beta_m^3} \qquad (7)$$
and we can compute the first order expansions of $m_2$ and $m_4$, and bound the errors in terms of $\beta_m$ and the K's:
$$\left|m_2 - \left[\phi''(\mu_{LC})\right]^{-1}\right| \leq \frac{K_3^2}{\beta_m^4} + \frac{K_4}{2\beta_m^3} \qquad (8)$$
$$\left|\phi''(\mu_{LC})\, m_4 - 3m_2\right| \leq \frac{5}{2}\frac{K_4}{\beta_m^3} + \frac{19}{2}\frac{K_3^2}{\beta_m^4} \qquad (9)$$
With eq. (8) and (9), we see that $m_2 \approx \left[\phi''(\mu_{LC})\right]^{-1}$ and $m_4 \approx 3\left[\phi''(\mu_{LC})\right]^{-2}$ and, in that sense, that LC(x) is close to the Gaussian with mean $\mu_{LC}$ and inverse-variance $\phi''(\mu_{LC})$.
These expansions could be extended to further orders and similar formulas can be found for the other moments of LC(x): for example, any odd moment can be bounded by $|m_{2k+1}| \leq C_k K_3 \beta_m^{-(k+1)}$ (with $C_k$ some constant) and any even moment can be found to have first-order expansion $m_{2k} \approx (2k-1)!! \left[\phi''(\mu_{LC})\right]^{-k}$. The proof, as well as more detailed results, can be found in the Supplement.
Note how our result relates to the Bernstein-von Mises theorem, which says that, in the limit of a large amount of observations, a posterior p(x) tends towards its CGA. If we consider the posterior obtained from n likelihood functions that are all log-concave and slowly changing, our results show the slightly different result that the moments of that posterior are close to those of a Gaussian with mean $\mu_{LC}$ (instead of $x^*$) and inverse-variance $\phi''(\mu_{LC})$ (instead of $\phi''(x^*)$). This point is critical. While the CGA still ends up capturing the limit behavior of p, as $\mu_{LC} \to x^*$ in the large-data limit (see eq. (13) below), an approximation that would return the Gaussian approximation at $\mu_{LC}$ would be better. This is essentially what EP does, and this is how it improves on the CGA.
2.2 Computing bounds on EP approximations
In this section, we consider a given EP fixed-point $q_i^*(x \mid r_i^*, \beta_i^*)$ and the corresponding approximation of p(x): $q^*(x \mid r = \sum_i r_i^*,\ \beta = \sum_i \beta_i^*)$. We will show that the expected value and variance of $q^*$ (resp. $\mu_{EP}$ and $v_{EP}$) are close to the true mean and variance of p (resp. $\mu$ and $v$), and also investigate the quality of the CGA ($\mu - x^*$, $v - [\phi_p''(x^*)]^{-1}$).
Under our assumptions on the sites (eq. (1) and (2)), we are able to derive bounds on the quality of the EP approximation. The proof is quite involved and long, and we will only present it in the Supplement. In the main text, we give a partial version: we detail the first step of the demonstration, which consists of computing a rough bound on the distance between the true mean $\mu$, the EP approximation $\mu_{EP}$ and the mode $x^*$, and give an outline of the rest of the proof.
Let's show that $\mu$, $\mu_{EP}$ and $x^*$ are all close to one another. We start from eq. (6) applied to p(x):
$$\left|\phi_p'(\mu)\right| \leq \frac{K_3}{2\beta_m} \qquad (10)$$
which tells us that $\phi_p'(\mu) \approx 0$. $\mu$ must thus be close to $x^*$. Indeed:
$$\left|\phi_p'(\mu)\right| = \left|\phi_p'(\mu) - \phi_p'(x^*)\right| = \left|\phi_p''(\xi)\right| |\mu - x^*| \quad \text{for some } \xi \in [\mu, x^*] \qquad (11)$$
$$\geq n\beta_m |\mu - x^*| \qquad (12)$$
Combining eq. (10) and (12), we finally have:
$$|\mu - x^*| \leq n^{-1} \frac{K_3}{2\beta_m^2} \qquad (13)$$
Let's now show that $\mu_{EP}$ is also close to $x^*$. We proceed similarly, starting from eq. (6) but applied to all hybrids $h_i(x)$ (where $r_{-i} = \sum_{j\neq i} r_j^*$ and $\beta_{-i} = \sum_{j\neq i} \beta_j^*$ are the cavity parameters):
$$\forall i \quad \left|\phi_i'(\mu_{EP}) + \beta_{-i}\mu_{EP} - r_{-i}\right| \leq n^{-1}\frac{K_3}{2\beta_m} \qquad (14)$$
which is not really equivalent to eq. (10) yet. Recall that $q(x \mid r, \beta)$ has mean $\mu_{EP}$: we thus have $r = \beta\mu_{EP}$. Which gives:
$$\sum_i \beta_{-i}\mu_{EP} = \left((n-1)\beta\right)\mu_{EP} = (n-1)r = \sum_i r_{-i} \qquad (15)$$
If we sum all terms in eq. (14), the $\beta_{-i}\mu_{EP}$ and $r_{-i}$ thus cancel, leaving us with:
$$\left|\phi_p'(\mu_{EP})\right| \leq \frac{K_3}{2\beta_m} \qquad (16)$$
which is equivalent to eq. (10) but for $\mu_{EP}$ instead of $\mu$. This shows that $\mu_{EP}$ is, like $\mu$, close to $x^*$:
$$|\mu_{EP} - x^*| \leq n^{-1} \frac{K_3}{2\beta_m^2} \qquad (17)$$
At this point, we can show that, since they are both close to $x^*$ (eq. (13) and (17)), $\mu = \mu_{EP} + O(n^{-1})$, which constitutes the first step of our computation of bounds on the quality of EP.
After computing this, the next step is evaluating the quality of the approximation of the variance, via computing $v^{-1} - v_{EP}^{-1}$ for EP and $v^{-1} - \phi_p''(x^*)$ for the CGA, from eq. (8). In both cases, we find:
$$v^{-1} = v_{EP}^{-1} + O(1) \qquad (18)$$
$$v^{-1} = \phi_p''(x^*) + O(1) \qquad (19)$$
Since $v^{-1}$ is of order n, because of eq. (5) (Brascamp-Lieb upper bound on variance), this is a decent approximation: the relative error is of order $n^{-1}$.
We can find similarly that both EP and CGA do a good job of finding a good approximation of the fourth moment of p: $m_4$. For EP this means that the fourth moment of each hybrid and of q are a close match:
$$\forall i \quad m_4 \approx m_4^i \approx 3v_{EP}^2 \qquad (20)$$
$$m_4 \approx 3\left[\phi_p''(x^*)\right]^{-2} \qquad (21)$$
In contrast, the third moment of the hybrids doesn't match at all the third moment of p, but their sum does!
$$m_3 \approx \sum_i m_3^i \qquad (22)$$
Finally, we come back to the approximation of $\mu$ by $\mu_{EP}$. These obey two very similar relationships:
$$\phi_p'(\mu) + \phi_p^{(3)}(\mu)\,\frac{v}{2} = O(n^{-1}) \qquad (23)$$
$$\phi_p'(\mu_{EP}) + \phi_p^{(3)}(\mu_{EP})\,\frac{v_{EP}}{2} = O(n^{-1}) \qquad (24)$$
Since $v = v_{EP} + O(n^{-2})$ (a slight rephrasing of eq. (18)), we finally have:
$$\mu = \mu_{EP} + O(n^{-2}) \qquad (25)$$
We summarize the results in the following theorem:
Theorem 1. (Characterizing fixed-points of EP) Under the assumptions given by eq. (1) and (2) (log-concave sites with slowly changing log), we can bound the quality of the EP approximation and the CGA:
$$|\mu - x^*| \leq n^{-1}\frac{K_3}{2\beta_m^2}$$
$$|\mu - \mu_{EP}| \leq B_1(n) = O(n^{-2})$$
$$\left|v^{-1} - \phi_p''(x^*)\right| \leq \frac{2K_3^2}{\beta_m^2} + \frac{K_4}{2\beta_m}$$
$$\left|v^{-1} - v_{EP}^{-1}\right| \leq B_2(n) = O(1)$$
We give the full expression for the bounds $B_1$ and $B_2$ in the Supplement.
Note that the order of magnitude of the bound on $|\mu - x^*|$ is the best possible, because it is attained for certain distributions. For example, consider a Gamma distribution with natural parameters $(n\alpha, n\beta)$, whose mean $\frac{\alpha}{\beta}$ is approximated at order $n^{-1}$ by its mode $\frac{\alpha}{\beta} - \frac{1}{n\beta}$. More generally, from eq. (23), we can compute the first order of the error:
$$\mu - x^* \approx -\frac{1}{2}\frac{\phi_p^{(3)}(\mu)}{\left[\phi_p''(\mu)\right]^2} \approx -\frac{\phi_p^{(3)}(\mu)}{\phi_p''(\mu)}\frac{v}{2} \qquad (26)$$
which is the term causing the order $n^{-1}$ error. Whenever this term is significant, it is thus safe to conclude that EP improves on the CGA.
Also note that, since $v^{-1}$ is of order n, the relative error for the $v^{-1}$ approximation is of order $n^{-1}$ for both methods. Despite having a convergence rate of the same order, the EP approximation is demonstrably better than the CGA, as we show next. Let us first see why the approximation for $v^{-1}$ is only of order 1 for both methods. The following relationship holds:
$$v^{-1} = \phi_p''(\mu) + \phi_p^{(3)}(\mu)\frac{m_3^p}{2v} + \phi_p^{(4)}(\mu)\frac{m_4^p}{3!\,v} + O(n^{-1}) \qquad (27)$$
In this relationship, $\phi_p''(\mu)$ is an order n term while the rest are order 1. If we now compare this to the CGA approximation of $v^{-1}$, we find that it fails at multiple levels. First, it completely ignores the two order 1 terms, and then, because it takes the value of $\phi_p''$ at $x^*$ which is at a distance of $O(n^{-1})$ from $\mu$, it adds another order 1 error term (since $\phi_p^{(3)} = O(n)$). The CGA is thus adding quite a bit of error, even if each component is of order 1.
Meanwhile, $v_{EP}$ obeys a relationship similar to eq. (27):
$$v_{EP}^{-1} = \phi_p''(\mu_{EP}) + \sum_i \phi_i^{(3)}(\mu_{EP})\frac{m_3^i}{2v_{EP}} + \phi_p^{(4)}(\mu_{EP})\frac{3v_{EP}^2}{3!\,v_{EP}} + O(n^{-1}) \qquad (28)$$
We can see where the EP approximation produces errors. The $\phi_p''$ term is well approximated: since $|\mu - \mu_{EP}| = O(n^{-2})$, we have $\phi_p''(\mu) = \phi_p''(\mu_{EP}) + O(n^{-1})$. The term involving $m_4$ is also well approximated, and we can see that the only term that fails is the $m_3$ term. The order 1 error is thus entirely coming from this term, which shows that EP performance suffers more from the skewness of the target distribution than from its kurtosis.
Finally, note that, with our result, we can get some intuitions about the quality of the EP approximation using other metrics. For example, if the most interesting metric is the KL divergence KL(p, q), the excess KL divergence from using the EP approximation q instead of the true minimizer $q_{KL}$ (which has the same mean $\mu$ and variance $v$ as p) is given by:
$$\Delta KL = \int p \log\left(\frac{q_{KL}}{q}\right) = \int p(x)\left(-\frac{(x-\mu)^2}{2v} + \frac{(x-\mu_{EP})^2}{2v_{EP}}\right) - \frac{1}{2}\log\left(\frac{v}{v_{EP}}\right) \qquad (29)$$
$$= \frac{1}{2}\left(\frac{v}{v_{EP}} - 1 - \log\frac{v}{v_{EP}}\right) + \frac{(\mu - \mu_{EP})^2}{2v_{EP}} \qquad (30)$$
$$\approx \frac{1}{4}\left(\frac{v - v_{EP}}{v_{EP}}\right)^2 + \frac{(\mu - \mu_{EP})^2}{2v_{EP}} \qquad (31)$$
which we recognize as $KL(q_{KL}, q)$. A similar formula gives the excess KL divergence from using the CGA instead of $q_{KL}$. For both methods, the variance term is of order $n^{-2}$ (though it should be smaller for EP), but the mean term is of order $n^{-3}$ for EP while it is of order $n^{-1}$ for the CGA. Once again, EP is found to be the better approximation.
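The closed form (30) is easy to verify numerically. The sketch below (ours, with illustrative values for $\mu$, $v$, $\mu_{EP}$, $v_{EP}$) compares it against a direct discretization of $\int q_{KL} \log(q_{KL}/q)$.

```python
import numpy as np

mu, v, mu_ep, v_ep = 0.3, 1.2, 0.31, 1.25
kl_closed = (0.5 * (v / v_ep - 1 - np.log(v / v_ep))
             + (mu - mu_ep) ** 2 / (2 * v_ep))          # eq. (30)

x = np.linspace(-10.0, 10.0, 200001)
q_kl = np.exp(-(x - mu) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)
q = np.exp(-(x - mu_ep) ** 2 / (2 * v_ep)) / np.sqrt(2 * np.pi * v_ep)
kl_numeric = (q_kl * np.log(q_kl / q)).sum() * (x[1] - x[0])
print(kl_closed, kl_numeric)   # the two values agree to high precision
```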
Finally, note that our bounds are quite pessimistic: the true value might be a much better fit than we
have predicted here.
A first cause is the bounding of the derivatives of log(p) (eqs. (3),(4)): while those bounds are
correct, they might prove to be very pessimistic. For example, if the contributions from the sites to
the higher-derivatives cancel each other out, a much lower bound than nKd might apply. Similarly,
there might be another lower bound on the curvature much higher than n?m .
Another cause is the bounding of the variance from the curvature. While applying Brascamp-Lieb
requires the distribution to have high log-curvature everywhere, a distribution with high-curvature
close to the mode and low-curvature in the tails still has very low variance: in such a case, the
Brascamp-Lieb bound is very pessimistic.
In order to improve on our bounds, we will thus need to use tighter bounds on the log-derivatives of
the hybrids and of the target distribution, but we will also need an extension of the Brascamp-Lieb
result that can deal with those cases where a distribution is strongly log-concave around its mode
but, in the tails, the log-curvature is much lower.
3 Conclusion
EP has been used for quite some time now without any concrete theoretical guarantees on its performance. In this work, we provide explicit performance bounds and show that EP is superior to the CGA, in the sense of giving provably better approximations of the mean and variance. There are now theoretical arguments for substituting EP for the CGA in a number of practical problems where the gain in precision is worth the increased computational cost. This work tackled the first steps in proving that EP offers an appropriate approximation. Continuing in its tracks will most likely lead to more general and less pessimistic bounds, but it remains an open question how to quantify the quality of the approximation using other distance measures. For example, it would be highly useful for machine learning if one could show bounds on prediction error when using EP. We believe that our approach should extend to more general performance measures and plan to investigate this further in the future.
References
[1] Thomas P. Minka. Expectation Propagation for approximate Bayesian inference. In UAI '01: Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence, pages 362-369, San Francisco, CA, USA, 2001. Morgan Kaufmann Publishers Inc. ISBN 1-55860-800-1. URL http://portal.acm.org/citation.cfm?id=720257.
[2] Malte Kuss and Carl E. Rasmussen. Assessing Approximate Inference for Binary Gaussian Process Classification. J. Mach. Learn. Res., 6:1679-1704, December 2005. ISSN 1532-4435. URL http://portal.acm.org/citation.cfm?id=1194901.
[3] Hannes Nickisch and Carl E. Rasmussen. Approximations for Binary Gaussian Process Classification. Journal of Machine Learning Research, 9:2035-2078, October 2008. URL http://www.jmlr.org/papers/volume9/nickisch08a/nickisch08a.pdf.
[4] Guillaume Dehaene and Simon Barthelmé. Expectation propagation in the large-data limit. Technical report, March 2015. URL http://arxiv.org/abs/1503.08060.
[5] T. Minka. Divergence Measures and Message Passing. Technical report, 2005. URL http://research.microsoft.com/en-us/um/people/minka/papers/message-passing/minka-divergence.pdf.
[6] M. Seeger. Expectation Propagation for Exponential Families. Technical report, 2005. URL http://people.mmci.uni-saarland.de/~mseeger/papers/epexpfam.pdf.
[7] Christopher M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, 1st ed. 2006, corr. 2nd printing 2011 edition, October 2007. ISBN 0387310738.
[8] Jack Raymond, Andre Manoel, and Manfred Opper. Expectation propagation, September 2014. URL http://arxiv.org/abs/1409.6179.
[9] Anirban DasGupta. Asymptotic Theory of Statistics and Probability (Springer Texts in Statistics). Springer, 1 edition, March 2008. ISBN 0387759700.
[10] Adrien Saumard and Jon A. Wellner. Log-concavity and strong log-concavity: A review. Statist. Surv., 8:45-114, 2014. doi: 10.1214/14-SS107. URL http://dx.doi.org/10.1214/14-SS107.
[11] Herm J. Brascamp and Elliott H. Lieb. Best constants in Young's inequality, its converse, and its generalization to more than three functions. Advances in Mathematics, 20(2):151-173, May 1976. ISSN 00018708. doi: 10.1016/0001-8708(76)90184-5.
5,428 | 5,913 | Local Smoothness in Variance Reduced Optimization
Daniel Vainsencher, Han Liu
Dept. of Operations Research & Financial Engineering
Princeton University
Princeton, NJ 08544
{daniel.vainsencher,han.liu}@princeton.edu
Tong Zhang
Dept. of Statistics
Rutgers University
Piscataway, NJ, 08854
tzhang@stat.rutgers.edu
Abstract
We propose a family of non-uniform sampling strategies to provably speed up a class of stochastic optimization algorithms with linear convergence including Stochastic Variance Reduced Gradient (SVRG) and Stochastic Dual Coordinate Ascent (SDCA). For a large family of penalized empirical risk minimization problems, our methods exploit data dependent local smoothness of the loss functions near the optimum, while maintaining convergence guarantees. Our bounds are the first to quantify the advantage gained from local smoothness, which for some problems is significant. Empirically, we provide thorough numerical results to back up our theory. Additionally we present algorithms exploiting local smoothness in more aggressive ways, which perform even better in practice.
1 Introduction
We consider minimization of functions of form
$$P(w) = n^{-1}\sum_{i=1}^n \phi_i\left(x_i^\top w\right) + R(w)$$
where the convex $\phi_i$ corresponds to a loss of w on some data $x_i$, R is a convex regularizer and P is $\lambda$ strongly convex, so that $P(w') \geq P(w) + \langle w' - w, \nabla P(w)\rangle + \frac{\lambda}{2}\|w' - w\|^2$. In addition, we assume each $\phi_i$ is smooth in general and near flat in some region; examples include SVM, regression with the absolute error or $\epsilon$-insensitive loss, smooth approximations of those, and also logistic regression.
Stochastic optimization algorithms consider one loss $\phi_i$ at a time, chosen at random according to a distribution $p^t$ which may change over time. Recent algorithms combine $\phi_i$ with information about previously seen losses to accelerate the process, achieving linear convergence rate, including Stochastic Variance Reduced Gradient (SVRG) [2], Stochastic Averaged Gradient (SAG) [4], and Stochastic Dual Coordinate Ascent (SDCA) [6]. The expected number of iterations required by these algorithms is of form $O\left((n + L/\lambda)\log \epsilon^{-1}\right)$ where L is a Lipschitz constant of all loss gradients $\nabla\phi_i$, measuring their smoothness. Difficult problems, having a condition number $L/\lambda$ much larger than n, are called ill conditioned, and have motivated the development of accelerated algorithms [5, 8, 3]. Some of these algorithms have been adapted to allow importance sampling where $p^t$ is non uniform; the effect on convergence bounds is to replace the uniform bound L described above by $L_{avg}$, the average over $L_i$, loss specific Lipschitz bounds.
In practice, for an important class of problems, a large proportion of $\phi_i$ need to be sampled only very few times, and others indefinitely. As an example we take an instance of smooth SVM, with $\lambda = n^{-1}$ and $L \approx 30$, solved via standard SDCA. In Figure 1 we observe the decay of an upper
bound on the updates possible for different samples, where choosing a sample that is white produces
no update. The large majority of the figure is white, indicating wasted effort. For 95% of losses,
the algorithm captured all relevant information after just 3 visits. Since the non white zone is nearly
constant over time, detecting and focusing on the few important losses should be possible. This
represents both a success of SDCA and a significant room for improvement, as focusing just half the
effort on the active losses would increase effectiveness by a factor of 10.
Similar phenomena occur under the SVRG and SAG algorithms as well. But is the phenomenon
specific to a single problem, or general? For what problems can we expect the set of useful losses to
be small and near constant?
Figure 1: SDCA on smoothed SVM. Dual residuals upper bound the SDCA update size; white
indicates zero hence wasted effort. The dual residuals quickly become sparse; the support is stable.
Allowing $p^t$ to change over time, the phenomenon described indeed can be exploited; Figure 2 shows significant speedups obtained by our variants of SVRG and SDCA. Comparisons on other datasets are given in Section 4. The mechanism by which speed up is obtained is specific to each algorithm, but the underlying phenomenon we exploit is the same: many problems are much smoother locally than globally. First consider a single smoothed hinge loss $\phi_i$, as used in smoothed SVM with smoothing parameter $\gamma$. The non-smoothness of the hinge loss is spread in $\phi_i$ over an interval of length $\gamma$, as illustrated in Figure 3 and given by
$$\phi_i(a) = \begin{cases} 0 & a > 1 \\ 1 - a - \gamma/2 & a < 1 - \gamma \\ (a-1)^2/(2\gamma) & \text{otherwise.} \end{cases}$$
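A direct transcription (ours) of this loss and its derivative follows; note that the derivative is $\gamma^{-1}$-Lipschitz overall yet exactly constant outside $(1 - \gamma, 1)$, which is precisely the local flatness exploited below.

```python
def smoothed_hinge(a, gamma):
    """Smoothed hinge loss phi_i(a) with smoothing parameter gamma > 0."""
    if a > 1.0:
        return 0.0
    if a < 1.0 - gamma:
        return 1.0 - a - gamma / 2.0
    return (a - 1.0) ** 2 / (2.0 * gamma)

def smoothed_hinge_grad(a, gamma):
    if a > 1.0:
        return 0.0             # flat region: derivative identically 0
    if a < 1.0 - gamma:
        return -1.0            # flat region: derivative identically -1
    return (a - 1.0) / gamma   # quadratic region, curvature 1/gamma
```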
[Figure 2, left panel: SVRG solving smoothed hinge loss SVM on MNIST 0/1 (the loss gradient is 3.33e+01 Lipschitz smooth, 6.77e-05 strong convexity). Duality gap/suboptimality (log scale, 10^-1 down to 10^-13) against effective passes over data, comparing Uniform sampling ([2]), Global smoothness sampling ([7]), Local SVRG (Alg. 1), and Empirical Affinity SVRG (Alg. 4).]
The Lipschitz constant of $\frac{d}{da}\phi_i(a)$ is $\gamma^{-1}$, hence it enters into the global estimate of condition number $L_{avg}$ as $L_i = \|x_i\|/\gamma$; hence approximating the hinge loss more precisely, with a smaller $\gamma$, makes the problems strictly more ill conditioned. But outside that interval of length $\gamma$, $\phi_i$ can be locally approximated as affine, having a constant gradient; into a correct expression of local conditioning, say on interval B in the figure, it should contribute nothing. So smaller $\gamma$ can sometimes make the problem (locally) better conditioned. A set I of losses having constant gradients over a subset of the hypothesis space can be summarized for purposes of optimization by a single affine
[Figure 2, right panel: SDCA solving smoothed hinge loss SVM on MNIST 0/1 (the loss gradient is 3.33e+01 Lipschitz smooth, 6.77e-05 strong convexity). Duality gap/suboptimality (log scale, 10^-1 down to 10^-13) against effective passes over data, comparing Uniform sampling ([6]), Global smoothness sampling ([10]), Affine-SDCA (Alg. 2), and Empirical Δα SDCA (Alg. 3).]
Figure 2: On the left we see variants of SVRG with $\eta = 1/(8L)$, on the right variants of SDCA.
Figure 3: A loss $\phi_i$ that is near flat (Hessian vanishes, near constant gradient) on a "ball" $B \subset \mathbb{R}$. B with radius $2r\|x_i\|$ is induced by the (Euclidean) ball of hypotheses $B(w^t, r)$, which we prove includes $w^*$. Then the loss $\phi_i$ does not contribute to curvature in the region of interest, and an affine model of the sum of such $\phi_i$ on B can replace sampling from them. We find r in algorithms by combining strong convexity with quantities such as duality gap or gradient norm.
function, so sampling from I should not be necessary. It so happens that SAG, SVRG and SDCA
naturally do such modeling, hence need only light modifications to realize significant gains. We
provide the details for SVRG in Section 2 (the SAG case is similar) and for SDCA in Section 3.
Other losses, while nowhere affine, are locally smooth: the logistic regression loss has gradients with local Lipschitz constants that decay exponentially with distance from a hyperplane dependent on $x_i$. For such losses we cannot forgo sampling any $\phi_i$ permanently, but we can still obtain bounds benefitting from local smoothness for an SVRG variant.
Next we define formally the relevant geometric properties of the optimization problem and relate
them to provable convergence improvements over existing generic bounds; we give detailed bounds
in the sequel. Throughout B (c, r) is a Euclidean ball of radius r around c.
Definition 1. We shall denote $L_{i,r} = \max_{w \in B(w^*, r)} \left\|\nabla^2 \phi_i\left(x_i^\top w\right)\right\|_2$, which is also the uniform Lipschitz coefficient of $\nabla\phi_i$ that holds at distance at most r from $w^*$.
Remark 2. Algorithms will use similar quantities not dependent on knowing $w^*$, such as $\hat{L}_{i,r}$ around a known w.
Definition 3. We define the average ball smoothness function $S: \mathbb{R} \to \mathbb{R}$ of a problem by:
$$S(r) = \sum_{i=1}^n L_{i,\infty} \Big/ \sum_{i=1}^n L_{i,r}.$$
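For smoothed hinge losses S(r) has a closed form, since $\phi_i''$ equals $1/\gamma$ on $(1 - \gamma, 1)$ and 0 elsewhere: $L_{i,r}$ is nonzero exactly when the induced margin interval $[x_i^\top w^* - r\|x_i\|,\ x_i^\top w^* + r\|x_i\|]$ touches that region. A sketch (ours, assuming $w^*$ and r are given):

```python
import numpy as np

def ball_smoothness_ratio(X, w_star, r, gamma):
    a = X @ w_star                        # margins a_i = x_i^T w*
    nrm = np.linalg.norm(X, axis=1)
    lo, hi = a - r * nrm, a + r * nrm     # induced interval per loss
    active = (hi > 1.0 - gamma) & (lo < 1.0)
    L_inf = nrm ** 2 / gamma              # L_{i,infinity} for smoothed hinge
    L_loc = np.where(active, L_inf, 0.0)  # L_{i,r}: zero when locally affine
    return L_inf.sum() / max(L_loc.sum(), 1e-12)  # S(r) of Definition 3
```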
In Theorem 5 we see that Algorithm 1 requires fewer stochastic gradient samples to reduce loss suboptimality by a constant factor than SVRG with importance sampling according to global smoothness. Once it has certified that the optimum $w^*$ is within r of the current iterate $w_0$, it uses S(2r) times fewer stochastic gradient steps. The next measure similarly increases when many losses are affine on a ball around the optimum.
Definition 4. We define the ball affinity function $A: \mathbb{R} \to [0, n]$ of a problem by:
$$A(r) = \left(n^{-1}\sum_{i=1}^n 1_{\{L_{i,r} > 0\}}\right)^{-1}.$$
In Theorem 10 we see similarly that Algorithm 2 requires fewer accesses of $\phi_i$ to reduce the duality gap to any $\epsilon > 0$ than SDCA with importance sampling according to global smoothness. Once it has certified that the optimum is within distance r of the current primal iterate $w = w(\alpha^0)$, it accesses A(2r) times fewer $\phi_i$.
In both cases, local smoothness and affinity enable us to focus a constant portion of sampling effort on the fewer losses still challenging near the optimum; when these are few, the ratios (and hence algorithmic advantage) are large. We obtain these provable speedups over already fast algorithms by using that local smoothness which we can certify. For non smooth losses such as SVM and absolute loss regression, we can similarly ignore irrelevant losses, leading to significant practical improvements; the current theory for such losses is insufficient to quantify the speed ups as we do for smooth losses.
We obtain algorithms that are simpler and sometimes much faster by using the more qualitative observation that as iterates tend to an optimum, the set of relevant losses is generally stable and shrinking. Then algorithms can estimate the set of relevant losses directly from quantities observed in performing stochastic iterations, sidestepping the looseness of estimating r. There are two previous works in this general direction. The first work combining non-uniform sampling and empirical estimation of loss smoothness is [4]. They note excellent empirical performance on a variant of SAG, but without theory ensuring convergence. We provide similarly fast (and bound free) variants of SDCA (Section 3.2) and SVRG (Section 2.2). A dynamic importance sampling variant of SDCA was reported in [1] without relation to local smoothness; we discuss the connection in Section 3.
2 Local smoothness and gradient descent algorithms
In this section we describe how SVRG, in contrast to the classical stochastic gradient descent (SGD),
naturally exposes local smoothness in losses. Then we present two variants of SVRG that realize
these gains. We begin by considering a single loss when close to the optimum, and for simplicity assume $R \equiv 0$. Assume a small ball $B = B(w, r)$ around our current estimate w includes the optimum $w^*$, that B is contained in a flat region of $\phi_i$, and that this holds for a large proportion of the n losses.
SGD and its descendent SVRG (with importance sampling) use updates of form $w^{t+1} = w^t - \eta v_{i_t}^t$, where $E_{i\sim p}\, v_i^t = \nabla F(w^t)$ is an unbiased estimator of the full gradient of the loss term $F(w) = n^{-1}\sum_{i=1}^n \phi_i(x_i^\top w)$; for SGD, $v_i^t = \nabla\phi_i(x_i^\top w^t)/(p_i n)$. SVRG uses
$$v_i^t = \left(\nabla\phi_i\left(x_i^\top w^t\right) - \nabla\phi_i\left(x_i^\top \tilde{w}\right)\right)/(p_i n) + \nabla F(\tilde{w})$$
where $\tilde{w}$ is some reference point, with the advantage that $v_i^t$ has variance that vanishes as $w^t, \tilde{w} \to w^*$. We point out in addition that when $w, \tilde{w} \in B$ and $\nabla\phi_i(x_i^\top \cdot)$ is constant on B, the effects of sampling $\phi_i$ cancel out and $v_i^t = \nabla F(\tilde{w})$. In particular, we can set $p_i^t = 0$ with no loss of information. More generally, when $\nabla\phi_i(x_i^\top \cdot)$ is near constant on B (small $L_{i,r}$), the difference between the sampled values of $\nabla\phi_i$ in $v_i^t$ is very small and $p_i^t$ can be similarly small. We formalize this in the next section, where we localize existing theory that applied importance sampling to adapt SVRG statically to losses with varied global smoothness.
2.1 The Local SVRG algorithm
Halving the suboptimality of a solution using SVRG has two parts: computing an exact gradient at a
reference point, and performing many stochastic gradient descent steps. The sampling distribution,
step size and number of iterations in the latter are determined by smoothness of the losses. Algorithm
1, Local-SVRG, replaces the global bounds on gradient change Li with local ones Li,r , made valid
by restricting iterations to a small ball certified to contain the optimum. This allows us to leverage
previous algorithms and analysis, maintaining previous guarantees and improving on them when
S (r) is large.
For this section we assume P = F; as in the initial version of SVRG [2], we may incorporate a smooth regularizer (though in a different way, explained later). This allows us to apply the existing Prox-SVRG algorithm [7] and its theory; instead of using the proximal operator for fixed regularization, we use it to localize (by projections) the stochastic descent to a ball B around the reference point $\tilde{w}$; see Algorithm 1. Then the theory developed around importance sampling and global smoothness applies to sharper local smoothness estimates that hold on B (ignoring $\phi_i$ which are affine on B is a special case). This allows for fewer stochastic iterations and using a larger stepsize, obtaining speedups that are problem dependent but often large in late stages; see Figure 2. This is formalized in the following theorem.
Algorithm 1 Local SVRG is an application of Prox-SVRG with $\tilde{w}$ dependent regularization. This portion reduces suboptimality by a constant factor; apply iteratively to minimize loss.
1. Compute $\tilde{v} = \nabla F(\tilde{w})$
2. Define $r = \frac{2}{\lambda}\|\tilde{v}\|$ and $R(w) = i_{B(\tilde{w},r)}(w) = \begin{cases} 0 & w \in B(\tilde{w}, r) \\ \infty & \text{otherwise} \end{cases}$ (by $\lambda$ strong convexity, $w^* \in B(\tilde{w}, r)$)
3. For each i, compute $\hat{L}_{i,r} = \max_{w \in B(\tilde{w},r)} \left\|\nabla^2\phi_i\left(x_i^\top w\right)\right\|$
4. Define a probability distribution $p_i \propto \hat{L}_{i,r}$, weighted Lipschitz constant $\hat{L}_p = \max_i \hat{L}_{i,r}/(np_i)$, and step size $\eta = \frac{1}{16\hat{L}_p}$.
5. Apply the inner loop of Prox-SVRG:
(a) Set $w^0 = \tilde{w}$
(b) For $t \in \{1, \ldots, m\}$:
 i. Choose $i_t \sim p$
 ii. Compute $v^t = \left(\nabla\phi_{i_t}(w^{t-1}) - \nabla\phi_{i_t}(\tilde{w})\right)/(np_{i_t}) + \tilde{v}$
 iii. $w^t = \mathrm{prox}_{\eta R}\left(w^{t-1} - \eta v^t\right)$
(c) Return $\hat{w} = m^{-1}\sum_{t\in[m]} w^t$
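A condensed sketch (ours, not a verbatim transcription of Algorithm 1) of steps 4-5 follows; the callback grad_i(i, w) is a hypothetical stand-in for $\nabla\phi_i(x_i^\top w)$, and rng is a numpy Generator.

```python
import numpy as np

def project_ball(w, center, r):
    """prox of the indicator of B(center, r): Euclidean projection."""
    d = w - center
    nd = np.linalg.norm(d)
    return w if nd <= r else center + (r / nd) * d

def local_svrg_epoch(grad_i, w_tilde, v_tilde, L_hat, r, m, rng):
    # assumes at least one entry of L_hat is positive
    p = L_hat / L_hat.sum()            # p_i proportional to local L-hat_{i,r}
    eta = 1.0 / (16.0 * L_hat.mean())  # step 1/(16 L-hat_p), as in the proof
    n = len(L_hat)
    w, iterates = w_tilde.copy(), []
    for _ in range(m):
        i = rng.choice(n, p=p)
        v = (grad_i(i, w) - grad_i(i, w_tilde)) / (n * p[i]) + v_tilde
        w = project_ball(w - eta * v, w_tilde, r)
        iterates.append(w)
    return np.mean(iterates, axis=0)   # averaged iterate w-hat
```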
Theorem 5. Let $\tilde{w}$ be an initial solution such that $\nabla F(\tilde{w})$ certifies that $w^* \in B = B(\tilde{w}, r)$. Algorithm 1 finds $\hat{w}$ with
$$E\, F(\hat{w}) - F(w^*) \leq \left(F(\tilde{w}) - F(w^*)\right)/2$$
using $O(d(n + m))$ time, where $m = \frac{128}{\lambda n}\sum_{i=1}^n L_{i,2r} + 3$.
Remark 6. In the difficult case that the problem is ill conditioned even locally, so that $128 n^{-1}\sum_{i=1}^n L_{i,2r} \geq n\lambda$, the term n is negligible and the ratio between complexities of Algorithm 1 and an SVRG using global smoothness approaches S(2r).
Proof. In the initial pass on the data, compute $\nabla F(\tilde{w})$, r and $\hat{L}_{i,r} \leq L_{i,2r}$. We then apply a single round of Algorithm Prox-SVRG of [7], with the regularizer $R(x) = \delta_{B(\tilde{w},r)}$ localizing around the reference point. Then we may apply Theorem 1 of [7] with the local $\hat{L}_{i,r}$ instead of the global $L_i$ required there for general proximal operators. This allows us to use the corresponding larger stepsize $\eta = \frac{1}{16\hat{L}_p} = \left(16\, n^{-1}\sum_{i=1}^n \hat{L}_{i,r}\right)^{-1}$.
Remark 7. The use of projections (hence the restriction to smooth regularization) is necessary because the local smoothness is restricted to B, and venturing outside B with a large step size may
compromise convergence entirely. While excursions outside B are difficult to control in theory, in
practice skipping the projection entirely does not seem to hurt convergence. Informally, stepping far
from B requires moving consistently against $\nabla F$, which is an unlikely event.
Remark 8. The theory requires m stochastic steps per exact gradient to guarantee any improvement
at all, but for ill conditioned problems this is often very pessimistic. In practice, the first O (n)
stochastic steps after an exact gradient provide most of the benefit. In this heuristic scenario, the
computational benefit of Theorem 5 is through the sampling distribution and the larger step size.
Enlarging the step size without accompanying theory often gains a corresponding speed up to a
certain precision but the risk of non convergence materializes frequently.
While [2] incorporated a smooth R by adding it to every loss function, this could reduce the smoothness (increase $\hat{L}_{i,r}$) inherent in the losses, hence reducing the benefits of our approach. We instead propose to add a single loss function defined as nR; that this is not of form $\phi_i(x_i^\top w)$ poses no real difficulty because Local-SVRG depends on losses only through their gradients and smoothness.
The main difficulty with the approach of this section is that in early stages r is large, in part because $\lambda$ is often very small ($\lambda = n^{-\beta}$ for $\beta \in \{0.5, 1\}$ are common choices), leading to loose bounds on $\hat{L}_{i,r}$. In some cases the speed up is only obtained when the precision is already satisfactory; we consider a less conservative scheme in the next section.
2.2 The Empirical Affinity SVRG algorithm
Local-SVRG relies on local smoothness to certify that some $\delta_i^t = \left\|\nabla\phi_i\left(x_i^\top w^t\right) - \nabla\phi_i\left(x_i^\top \tilde{w}\right)\right\|$ are small. In contrast, Empirical Affinity SVRG (Algorithm 4) takes $\delta_i^t > 0$ to be evidence that a loss is active; when $\delta_i^t = 0$ several times, that is evidence of local affinity of the loss, hence it can be sampled less often. This strategy deemphasizes locally affine losses even when r is too large to certify it, thereby focusing work on the relevant losses much earlier. Half of the time we sample proportional to the global bounds $L_i$, which keeps estimates of $\delta_i^t$ current, and also bounds the variance when some $\delta_i^t$ increases from zero to positive. A benefit of using $\delta_i^t$ is that it is observed at every sample of i without additional work. Pseudo code for the slightly long Algorithm 4 is in the supplementary material for space reasons.
3 Stochastic Dual Coordinate Ascent (SDCA)
The SDCA algorithm solves P through the dual problem
$$D(\alpha) = -n^{-1}\sum_{i=1}^n \phi_i^*(-\alpha_i) - R^*\left(n^{-1}\sum_{i=1}^n x_i\alpha_i\right), \qquad w(\alpha) = \nabla R^*\left(n^{-1}\sum_{i=1}^n x_i\alpha_i\right).$$
At each iteration, SDCA chooses i at random according to $p^t$, and updates the $\alpha_i$ corresponding to the loss $\phi_i$ to increase D. This scheme has been used for particular losses before, and was analyzed in [6], obtaining linear rates for general smooth losses, uniform sampling and $\ell_2$ regularization, and recently generalized in [10] to other regularizers and general sampling distributions. In particular, [10] show improved bounds and performance by statically adapting to the global smoothness properties of losses; using a distribution $p_i \propto 1 + L_i(n\lambda)^{-1}$, it suffices to perform $O\left(\left(n + \frac{L_{avg}}{\lambda}\right)\log\left(\left(n + \frac{L_{avg}}{\lambda}\right)\epsilon^{-1}\right)\right)$ iterations to obtain an expected duality gap of at most $\epsilon$. While SDCA is very different from gradient descent methods, it shares the property that when the current state of the algorithm (in the form of $\alpha_i$) already matches the derivative information for $\phi_i$, the update does not require $\phi_i$ and can be skipped. As we've seen in Figure 1, many losses converge $\alpha_i \to \alpha_i^*$ very quickly; we will show that local affinity is a sufficient condition.
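To make the primal-dual pair concrete, the sketch below (ours) instantiates P and D for $R(w) = \frac{\lambda}{2}\|w\|^2$ and smoothed hinge losses, using the standard conjugate $\phi_i^*(-\alpha) = -\alpha + \gamma\alpha^2/2$ for $\alpha \in [0, 1]$; $P(w(\alpha)) - D(\alpha)$ is then the duality gap monitored by the algorithms in this section.

```python
import numpy as np

def primal(X, w, lam, gamma):
    a = X @ w
    phi = np.where(a > 1.0, 0.0,
                   np.where(a < 1.0 - gamma,
                            1.0 - a - gamma / 2.0,
                            (a - 1.0) ** 2 / (2.0 * gamma)))
    return phi.mean() + 0.5 * lam * (w @ w)

def dual(X, alpha, lam, gamma):
    # w(alpha) = (lam * n)^{-1} sum_i alpha_i x_i for R(w) = lam/2 ||w||^2
    n = len(alpha)
    w = X.T @ alpha / (lam * n)
    conj = -alpha + 0.5 * gamma * alpha ** 2  # phi_i*(-alpha_i), alpha in [0,1]
    return -conj.mean() - 0.5 * lam * (w @ w)
```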
3.1 The Affine-SDCA algorithm
The algorithmic approach for exploiting locally affine losses in SDCA is very different from that for gradient descent style algorithms; for some affine losses we certify early that some $\alpha_i$ are in their final form (see Lemma 9) and henceforth ignore them. This applies only to locally affine (not just smooth) losses, but unlike Local-SVRG, does not require modifying the algorithm for explicit localization. We use a reduction to obtain improved rates while reusing the theory of [9] for the remaining points. These results are stated for squared Euclidean regularization, but hold for strongly convex R as in [10].
Lemma 9. Let $w^t = w(\alpha^t) \in B(w^*, r)$, and let $\{g_i\} = \bigcup_{w \in B(w^t, r)} \nabla\phi_i\left(x_i^\top w\right)$; in other words, $\phi_i(x_i^\top \cdot)$ is affine on $B(w^t, r)$, which includes $w^*$. Then we can compute the optimal value $\alpha_i^* = -g_i$.
Proof. As stated in Section 7 of [6], for each i, we have $-\alpha_i^* = \nabla\phi_i(x_i^\top w^*)$. Then if $\nabla\phi_i(x_i^\top \cdot)$ is a constant singleton on $B(w^t, r)$ containing $w^*$, then in particular that constant is $-\alpha_i^*$.
The lemma enables Algorithm 2 to ignore a growing proportion of losses. The overall convergence
this enables is given by the following.
Algorithm 2 Affine-SDCA: adapting to locally affine $\phi_i$, with speedup approximately A(r).
1. $\alpha^0 = 0 \in \mathbb{R}^n$, $I^0 = \emptyset$.
2. For $\tau \in \{1, \ldots\}$:
(a) $\tilde{w}^\tau = w(\alpha^{(\tau-1)m})$; compute $r^\tau = 2\sqrt{\left(P(\tilde{w}^\tau) - D(\alpha^{(\tau-1)m})\right)/\lambda}$
(b) Compute $I^\tau = \left\{i : \left|\bigcup_{w \in B(\tilde{w}^\tau, r^\tau)} \nabla\phi_i\left(x_i^\top w\right)\right| = 1\right\}$
(c) For $i \in I^\tau \setminus I^{\tau-1}$: set $\alpha_i = -\nabla\phi_i\left(x_i^\top \tilde{w}^\tau\right)$
(d) $p_i^\tau \propto \begin{cases} 0 & i \in I^\tau \\ 1 + L_i(n\lambda)^{-1} & \text{otherwise} \end{cases}$, $\qquad s_i = \begin{cases} 0 & i \in I^\tau \\ s/p_i^\tau & \text{otherwise} \end{cases}$
(e) For $t \in [(\tau-1)m + 1, \tau m]$:
 i. Choose $i_t \sim p^\tau$
 ii. Compute $\Delta\alpha_{i_t}^t = s_{i_t}\left(-\nabla\phi_{i_t}\left(x_{i_t}^\top w(\alpha^{t-1})\right) - \alpha_{i_t}^{t-1}\right)$
 iii. $\alpha_j^t = \begin{cases} \alpha_j^{t-1} + \Delta\alpha_j^t & j = i_t \\ \alpha_j^{t-1} & \text{otherwise} \end{cases}$
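For the smoothed hinge loss, the certification test of step 2b reduces to an interval check, sketched below (ours): loss i enters $I^\tau$ exactly when its induced margin interval misses the curved region $(1 - \gamma, 1)$, so its gradient is a single constant on $B(\tilde{w}^\tau, r^\tau)$.

```python
import numpy as np

def certified_affine(X, w_tilde, r, gamma):
    a = X @ w_tilde
    nrm = np.linalg.norm(X, axis=1)
    lo, hi = a - r * nrm, a + r * nrm
    # gradient is {0} above the hinge, {-1} below the smoothed segment
    return (lo >= 1.0) | (hi <= 1.0 - gamma)   # boolean mask of I_tau
```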
Theorem 10. If at epoch $\tau$ Algorithm 2 is at duality gap $\epsilon^\tau$, it will achieve expected duality gap $\epsilon$ in at most
$$\left(n^\tau + \frac{A(2r)\bar{L}^\tau_{avg}}{\lambda}\right)\log\left(\left(n^\tau + \frac{A(2r)\bar{L}^\tau_{avg}}{\lambda}\right)\frac{\epsilon^\tau}{\epsilon}\right)$$
iterations, where $n^\tau = n - |I^\tau|$ and $\bar{L}^\tau_{avg} = (n^\tau)^{-1}\sum_{i\in[n]\setminus I^\tau} L_i$.
Remark 11. Assuming $L_i = L$ for simplicity, and recalling $A(2r) \geq n/n^\tau$, we find the number of iterations is reduced by a factor of at least A(2r), compared to using $p_i \propto 1 + L_i(n\lambda)^{-1}$. In contrast, the cost of the steps 2a to 2d added by Algorithm 2 is at most a factor of $O((m + n)/m)$, which may be driven towards one by the choice of m.
Recent work [1] modified SDCA for dynamic importance sampling dependent on the so called dual residual:
$$\kappa_i = \alpha_i + \nabla\phi_i\left(x_i^\top w(\alpha)\right)$$
(where by $\nabla\phi_i(w)$ we refer to the derivative of $\phi_i$ at w) which is 0 at $\alpha^*$. They exhibit practical improvement in convergence, especially for smooth SVM, and theoretical speed ups when $\kappa$ is sparse (for an impractical version of the algorithm), but [1] does not tell us when this pre-condition holds, nor the magnitude of the expected benefit in terms of properties of the problem (as opposed to algorithm state such as $\kappa$). In the context of locally flat losses such as smooth SVM, we answer these questions through local smoothness: Lemma 9 shows $\kappa_i$ tends to zero for losses that are locally affine on a ball around the optimum, and the practical Algorithm 2 realizes the benefit when this certification comes into play, as quantified in terms of A(r).
3.2 The Empirical Δα SDCA algorithm
Algorithm 2 uses local affinity and a small duality gap to certify the optimality of some $\alpha_i$, avoiding calculating $\Delta\alpha_i$ that are zero or useless; naturally r is small enough only late in the process. Algorithm 3 instead dedicates half of samples in proportion to the magnitude of recent $\Delta\alpha_i$ (the other half chosen uniformly). As Figure 2 illustrates, this approach leads to significant speed up much earlier than the approach based on duality gap certification of local affinity. While it is not clear that we can prove for Algorithm 3 a bound that strictly improves on Algorithm 2, it is worth noting that, except for (probably rare) updates to $i \in I^\tau$, and a factor of 2, the empirical algorithm should quickly detect all locally affine losses, hence obtaining at least the speed up of the certifying algorithm. In addition, it naturally adapts to the expected small updates of locally smooth losses. Note that $\Delta\alpha_i$ is closely related to (and might be replaceable by) $\kappa$, but the current algorithm differs significantly from those in [1] in how these quantities are used to guide sampling.
Algorithm 3 Empirical Δα SDCA
1. $\alpha^0 = 0 \in \mathbb{R}^n$, $A_i^0 = 0$.
2. For $\tau \in \{1, \ldots\}$:
(a) $p^\tau = 0.5\,p^{\tau,1} + 0.5\,p^2$ where $p_i^{\tau,1} \propto A_i^{(\tau-1)m}$ and $p_i^2 = n^{-1}$
(b) For $t \in [(\tau-1)m + 1, \tau m]$:
 i. Choose $i_t \sim p^\tau$
 ii. Compute $\Delta\alpha_{i_t}^t = s_{i_t}\left(-\nabla\phi_{i_t}\left(x_{i_t}^\top w(\alpha^{t-1})\right) - \alpha_{i_t}^{t-1}\right)$
 iii. $A_j^t = \begin{cases} 0.5 A_j^{t-1} + 0.5\left|\Delta\alpha_j^t\right| & j = i_t \\ A_j^{t-1} & \text{otherwise} \end{cases}$
 iv. $\alpha_j^t = \begin{cases} \alpha_j^{t-1} + \Delta\alpha_j^t & j = i_t \\ \alpha_j^{t-1} & \text{otherwise} \end{cases}$
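The sampling side of Algorithm 3 is easy to isolate; the sketch below (ours) implements the half-uniform, half-adaptive mixture of step 2a and the running-magnitude update of step 2(b)iii.

```python
import numpy as np

def mixture_distribution(A):
    """p = 0.5 * p^{tau,1} + 0.5 * p^2, with p^{tau,1} prop. to A_i."""
    n = len(A)
    p1 = A / A.sum() if A.sum() > 0 else np.full(n, 1.0 / n)
    return 0.5 * p1 + 0.5 / n

def update_running_magnitude(A, i, delta_alpha):
    A[i] = 0.5 * A[i] + 0.5 * abs(delta_alpha)   # step 2(b)iii
    return A
```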
4 Empirical evaluation
We applied the same algorithms with almost¹ the same parameters to 4 additional classification datasets to demonstrate the impact of our algorithm variants more widely. The results for SDCA are in Figure 4, those for SVRG in Figure 5 in Section 7 of the supplementary material, for lack of space.
[Figure 4, four panels: SDCA solving smoothed hinge loss SVM on Mushroom (strong convexity 1.23e-04), Dorothea (1.25e-03), w8a (2.01e-05), and ijcnn1 (5.22e-06); in each case the loss gradient is 3.33e+01 Lipschitz smooth. Each panel plots duality gap/suboptimality (log scale, 10^-1 down to 10^-13) against effective passes over data, comparing Uniform sampling ([6]), Global smoothness sampling ([10]), Affine-SDCA (Alg. 2), and Empirical Δα SDCA (Alg. 3).]
¹ On one of the new datasets, SVRG with a ratio of step-size to $L_{avg}$ more aggressive than theory suggests stopped converging; hence we changed all runs to use the permissible 1/8. No other parameters were changed or adapted to the dataset.
Figure 4: SDCA variant results on four additional datasets. The advantages of using local smoothness
are significant on the harder datasets.
References
[1] Dominik Csiba, Zheng Qu, and Peter Richtárik. Stochastic dual coordinate ascent with adaptive probabilities. arXiv preprint arXiv:1502.08053, 2015.
[2] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315-323, 2013.
[3] Qihang Lin, Zhaosong Lu, and Lin Xiao. An accelerated proximal coordinate gradient method and its application to regularized empirical risk minimization. arXiv preprint arXiv:1407.1296, 2014.
[4] Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. arXiv preprint arXiv:1309.2388, 2013.
[5] Shai Shalev-Shwartz and Tong Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. Mathematical Programming, pages 1-41, 2013.
[6] Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss. The Journal of Machine Learning Research, 14(1):567-599, 2013.
[7] Lin Xiao and Tong Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057-2075, 2014.
[8] Yuchen Zhang and Lin Xiao. Stochastic primal-dual coordinate method for regularized empirical risk minimization. arXiv preprint arXiv:1409.3257, 2014.
[9] Peilin Zhao and Tong Zhang. Stochastic optimization with importance sampling. arXiv preprint arXiv:1401.2753, 2014.
[10] Peilin Zhao and Tong Zhang. Stochastic optimization with importance sampling for regularized loss minimization. Proceedings of The 32nd International Conference on Machine Learning, 2015.
5,429 | 5,914 | High Dimensional EM Algorithm:
Statistical Optimization and Asymptotic Normality*
Zhaoran Wang
Princeton University
Quanquan Gu
University of Virginia
Yang Ning
Princeton University
Han Liu
Princeton University
Abstract
We provide a general theory of the expectation-maximization (EM) algorithm for
inferring high dimensional latent variable models. In particular, we make two contributions: (i) For parameter estimation, we propose a novel high dimensional EM
algorithm which naturally incorporates sparsity structure into parameter estimation.
With an appropriate initialization, this algorithm converges at a geometric rate
and attains an estimator with the (near-)optimal statistical rate of convergence. (ii)
Based on the obtained estimator, we propose a new inferential procedure for testing
hypotheses for low dimensional components of high dimensional parameters. For
a broad family of statistical models, our framework establishes the first computationally feasible approach for optimal estimation and asymptotic inference in high
dimensions.
1 Introduction
The expectation-maximization (EM) algorithm [12] is the most popular approach for calculating the
maximum likelihood estimator of latent variable models. Nevertheless, due to the nonconcavity of
the likelihood function of latent variable models, the EM algorithm generally only converges to a
local maximum rather than the global one [30]. On the other hand, existing statistical guarantees
for latent variable models are only established for global optima [3]. Therefore, there exists a gap
between computation and statistics.
Significant progress has been made toward closing the gap between the local maximum attained
by the EM algorithm and the maximum likelihood estimator [2, 18, 25, 30]. In particular, [30] first
establish general sufficient conditions for the convergence of the EM algorithm. [25] further improve
this result by viewing the EM algorithm as a proximal point method applied to the Kullback-Leibler
divergence. See [18] for a detailed survey. More recently, [2] establish the first result that characterizes
explicit statistical and computational rates of convergence for the EM algorithm. They prove that,
given a suitable initialization, the EM algorithm converges at a geometric rate to a local maximum
close to the maximum likelihood estimator. All these results are established in the low dimensional
regime where the dimension d is much smaller than the sample size n.
In high dimensional regimes where the dimension d is much larger than the sample size n, there
exists no theoretical guarantee for the EM algorithm. In fact, when d ≫ n, the maximum likelihood
estimator is in general not well defined, unless the models are carefully regularized by sparsity-type
assumptions. Furthermore, even if a regularized maximum likelihood estimator can be obtained in a
computationally tractable manner, establishing the corresponding statistical properties, especially
asymptotic normality, can still be challenging because of the existence of high dimensional nuisance
parameters. To address such a challenge, we develop a general inferential theory of the EM algorithm
for parameter estimation and uncertainty assessment of high dimensional latent variable models. In
particular, we make two contributions in this paper:
• For high dimensional parameter estimation, we propose a novel high dimensional EM algorithm by
attaching a truncation step to the expectation step (E-step) and maximization step (M-step). Such a
* Research supported by NSF IIS1116730, NSF IIS1332109, NSF IIS1408910, NSF IIS1546482-BIGDATA,
NSF DMS1454377-CAREER, NIH R01GM083084, NIH R01HG06841, NIH R01MH102339, and FDA
HHSF223201000072C.
truncation step effectively enforces the sparsity of the attained estimator and allows us to establish
significantly improved statistical rate of convergence.
• Based upon the estimator attained by the high dimensional EM algorithm, we propose a decorrelated
score statistic for testing hypotheses related to low dimensional components of the high dimensional
parameter.
Under a unified analytic framework, we establish simultaneous statistical and computational guarantees for the proposed high dimensional EM algorithm and the respective uncertainty assessment
procedure. Let β* ∈ ℝᵈ be the true parameter, s* be its sparsity level and {β^(t)}_{t=0}^{T} be the iterative solution sequence of the high dimensional EM algorithm with T being the total number of iterations. In particular, we prove that:
• Given an appropriate initialization β^init with relative error upper bounded by a constant ζ ∈ (0, 1), i.e., ‖β^init − β*‖₂/‖β*‖₂ ≤ ζ, the iterative solution sequence {β^(t)}_{t=0}^{T} satisfies

    ‖β^(t) − β*‖₂ ≤ Δ₁ · κ^{t/2} + Δ₂ · √(s* · log d/n)    (1.1)

with high probability, where the first term is the optimization error and the second term is the statistical error (at the optimal rate). Here κ ∈ (0, 1), and Δ₁, Δ₂ are quantities that possibly depend on ζ, κ and β*. As the optimization error term in (1.1) decreases to zero at a geometric rate with respect to t, the overall estimation error achieves the √(s* · log d/n) statistical rate of convergence (up to an extra factor of log n), which is (near-)minimax-optimal. See Theorem 3.4 for details.
• The proposed decorrelated score statistic is asymptotically normal. Moreover, its limiting variance
is optimal in the sense that it attains the semiparametric information bound for the low dimensional
components of interest in the presence of high dimensional nuisance parameters. See Theorem 4.6
for details.
Our framework allows two implementations of the M-step: the exact maximization versus approximate
maximization. The former one calculates the maximizer exactly, while the latter one conducts an
approximate maximization through a gradient ascent step. Our framework is quite general. We
illustrate its effectiveness by applying it to two high dimensional latent variable models, that is,
Gaussian mixture model and mixture of regression model.
Comparison with Related Work: A closely related work is by [2], which considers the low dimensional regime where d is much smaller than n. Under certain initialization conditions, they prove that the EM algorithm converges at a geometric rate to some local optimum that attains the √(d/n)
statistical rate of convergence. They cover both maximization and gradient ascent implementations of
the M-step, and establish the consequences for the two latent variable models considered in our paper
under low dimensional settings. Our framework adopts their view of treating the EM algorithm as
a perturbed version of gradient methods. However, to handle the challenge of high dimensionality,
the key ingredient of our framework is the truncation step that enforces the sparsity structure along
the solution path. Such a truncation operation poses significant challenges for both computational
and statistical analysis. In detail, for computational analysis we need to carefully characterize the
evolution of each intermediate solution's support and its effects on the evolution of the entire iterative
solution sequence. For statistical analysis, we need to establish a fine-grained characterization of the
entrywise statistical error, which is technically more challenging than just establishing
the ℓ₂-norm error employed by [2]. In high dimensional regimes, we need to establish the √(s* · log d/n) statistical rate of convergence, which is much sharper than their √(d/n) rate when d ≫ n. In addition to point
n. In addition to point
estimation, we further construct hypothesis tests for latent variable models in the high dimensional
regime, which have not been established before.
High dimensionality poses significant challenges for assessing the uncertainty (e.g., testing hypotheses) of the constructed estimators. For example, [15] show that the limiting distribution of the Lasso
estimator is not Gaussian even in the low dimensional regime. A variety of approaches have been
proposed to correct the Lasso estimator to attain asymptotic normality, including the debiasing method
[13], the desparsification methods [26, 32] as well as instrumental variable-based methods [4]. Meanwhile, [16, 17, 24] propose the post-selection procedures for exact inference. In addition, several
authors propose methods based on data splitting [20, 29], stability selection [19] and `2 -confidence
sets [22]. However, these approaches mainly focus on generalized linear models rather than latent
variable models. In addition, their results heavily rely on the fact that the estimator is a global optimum
of a convex program. In comparison, our approach applies to a much broader family of statistical
models with latent structures. For these latent variable models, it is computationally infeasible to
obtain the global maximum of the penalized likelihood due to the nonconcavity of the likelihood
function. Unlike existing approaches, our inferential theory is developed for the estimator attained
by the proposed high dimensional EM algorithm, which is not necessarily a global optimum to any
optimization formulation.
Another line of research for the estimation of latent variable models is the tensor method, which
exploits the structures of third or higher order moments. See [1] and the references therein. However,
existing tensor methods primarily focus on the low dimensional regime where d ? n. In addition,
since the high order sample moments generally have a slow statistical rate of convergence, the
estimators obtained by the tensor methods usually have a suboptimal statistical rate even for d ≤ n. For example, [9] establish the √(d⁶/n) statistical rate of convergence for mixture of regression model, which is suboptimal compared with the √(d/n) minimax lower bound. Similarly, in high dimensional
settings, the statistical rates of convergence attained by tensor methods are significantly slower than
the statistical rate obtained in this paper.
The latent variable models considered in this paper have been well studied. Nevertheless, only a
few works establish theoretical guarantees for the EM algorithm. In particular, for Gaussian mixture
model, [10, 11] establish parameter estimation guarantees for the EM algorithm and its extensions. For
mixture of regression model, [31] establish exact parameter recovery guarantees for the EM algorithm
under a noiseless setting. For high dimensional mixture of regression model, [23] analyze the gradient
EM algorithm for the ℓ₁-penalized log-likelihood. They establish support recovery guarantees for the
attained local optimum but have no parameter estimation guarantees. In comparison with existing
works, this paper establishes a general inferential framework for simultaneous parameter estimation
and uncertainty assessment based on a novel high dimensional EM algorithm. Our analysis provides
the first theoretical guarantee of parameter estimation and asymptotic inference in high dimensional
regimes for the EM algorithm and its applications to a broad family of latent variable models.
Notation: The matrix (p, q)-norm, i.e., ‖·‖_{p,q}, is obtained by taking the ℓ_p-norm of each row and then taking the ℓ_q-norm of the obtained row norms. We use C, C′, . . . to denote generic constants. Their values may vary from line to line. We will introduce more notation in §2.2.
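To make the matrix (p, q)-norm concrete, the following small NumPy sketch (our own illustration, not part of the paper) computes it exactly as defined above: the ℓ_p-norm of each row, followed by the ℓ_q-norm of the resulting vector of row norms.

```python
import numpy as np

def matrix_pq_norm(A, p, q):
    """(p, q)-norm: take the l_p norm of each row, then the l_q norm
    of the vector of row norms."""
    row_norms = np.linalg.norm(A, ord=p, axis=1)
    return np.linalg.norm(row_norms, ord=q)

A = np.array([[3.0, 4.0], [0.0, 5.0]])
print(matrix_pq_norm(A, 2, 1))  # row norms are 5.0 and 5.0, so this prints 10.0
```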
2 Methodology
We first introduce the high dimensional EM algorithm and then the respective inferential procedure. As examples, we consider their applications to Gaussian mixture model and mixture of regression model. For compactness, we defer the details to §A of the appendix. More models are included in the longer version of this paper.
Algorithm 1 High Dimensional EM Algorithm
1: Parameter: Sparsity Parameter ŝ, Maximum Number of Iterations T
2: Initialization: Ŝ^init ← supp(β^init, ŝ), β^(0) ← trunc(β^init, Ŝ^init)    {supp(·, ·) and trunc(·, ·) are defined in (2.2) and (2.3)}
3: For t = 0 to T − 1
4:   E-step: Evaluate Q_n(·; β^(t))
5:   M-step: β^(t+0.5) ← M_n(β^(t))    {M_n(·) is implemented as in Algorithm 2 or 3}
6:   T-step: Ŝ^(t+0.5) ← supp(β^(t+0.5), ŝ), β^(t+1) ← trunc(β^(t+0.5), Ŝ^(t+0.5))
7: End For
8: Output: β̂ ← β^(T)

Algorithm 2 Maximization Implementation of the M-step
1: Input: β^(t), Q_n(·; β^(t))
2: Output: M_n(β^(t)) ← argmax_β Q_n(β; β^(t))

Algorithm 3 Gradient Ascent Implementation of the M-step
1: Input: β^(t), Q_n(·; β^(t));  Parameter: Stepsize η > 0
2: Output: M_n(β^(t)) ← β^(t) + η · ∇Q_n(β^(t); β^(t))

2.1 High Dimensional EM Algorithm
Before we introduce the proposed high dimensional EM Algorithm (Algorithm 1), we briefly review
the classical EM algorithm. Let h_β(y) be the probability density function of Y ∈ 𝒴, where β ∈ ℝᵈ is the model parameter. For latent variable models, we assume that h_β(y) is obtained by marginalizing over an unobserved latent variable Z ∈ 𝒵, i.e., h_β(y) = ∫_𝒵 f_β(y, z) dz. Let k_β(z | y) be the density of Z conditioning on the observed variable Y = y, i.e., k_β(z | y) = f_β(y, z)/h_β(y). We define

    Q_n(β; β′) = (1/n) · Σ_{i=1}^{n} ∫_𝒵 k_{β′}(z | y_i) · log f_β(y_i, z) dz.    (2.1)
See §B of the appendix for a detailed derivation. At the t-th iteration of the classical EM algorithm, we evaluate Q_n(·; β^(t)) at the E-step and then perform max_β Q_n(β; β^(t)) at the M-step. The proposed
high dimensional EM algorithm (Algorithm 1) is built upon the E-step and M-step (lines 4 and 5)
of the classical EM algorithm. In addition to the exact maximization implementation of the M-step
(Algorithm 2), we allow the gradient ascent implementation of the M-step (Algorithm 3), which
performs an approximate maximization via a gradient ascent step. To handle the challenge of high
dimensionality, in line 6 of Algorithm 1 we perform a truncation step (T-step) to enforce the sparsity
structure. In detail, we define
    supp(β, s): the set of indices j corresponding to the top s largest |β_j|'s.    (2.2)

Also, for an index set S ⊆ {1, . . . , d}, we define the trunc(·, ·) function in line 6 as

    [trunc(β, S)]_j = β_j · 1{j ∈ S}.    (2.3)

Note that β^(t+0.5) is the output of the M-step (line 5) at the t-th iteration of the high dimensional EM algorithm. To obtain β^(t+1), the T-step (line 6) preserves the entries of β^(t+0.5) with the top ŝ large magnitudes and sets the rest to zero. Here ŝ is a tuning parameter that controls the sparsity level (line 1). By iteratively performing the E-step, M-step and T-step, the high dimensional EM algorithm attains an ŝ-sparse estimator β̂ = β^(T) (line 8). Here T is the total number of iterations.
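As an illustration of how the T-step wraps around the E/M-steps, here is a minimal NumPy sketch of Algorithm 1's outer loop. The names supp, trunc, high_dim_em, and the generic m_step callback are our own scaffolding, not from the paper; a concrete m_step for a specific latent variable model is sketched after the Gaussian mixture example in §3.1.

```python
import numpy as np

def supp(beta, s):
    """Indices of the top-s entries of beta in absolute value; eq. (2.2)."""
    return np.argsort(np.abs(beta))[-s:]

def trunc(beta, S):
    """Keep the entries indexed by S, zero out the rest; eq. (2.3)."""
    out = np.zeros_like(beta)
    out[S] = beta[S]
    return out

def high_dim_em(beta_init, m_step, s_hat, T):
    """Algorithm 1: the E-step and M-step are folded into the m_step
    callback (beta -> M_n(beta)); the T-step re-sparsifies each iterate."""
    beta = trunc(beta_init, supp(beta_init, s_hat))
    for _ in range(T):
        beta_half = m_step(beta)                          # beta^{(t+0.5)} = M_n(beta^{(t)})
        beta = trunc(beta_half, supp(beta_half, s_hat))   # T-step: keep top s_hat entries
    return beta
```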
2.2 Asymptotic Inference

Notation: Let ∇₁Q(β; β′) be the gradient with respect to β and ∇₂Q(β; β′) be the gradient with respect to β′. If there is no confusion, we simply denote ∇Q(β; β′) = ∇₁Q(β; β′) as in the previous sections. We define the higher order derivatives in the same manner, e.g., ∇²₁,₂Q(β; β′) is calculated by first taking the derivative with respect to β and then with respect to β′. For β = (β₁ᵀ, β₂ᵀ)ᵀ ∈ ℝᵈ with β₁ ∈ ℝ^{d₁}, β₂ ∈ ℝ^{d₂} and d₁ + d₂ = d, we use notations such as v_{β₁} ∈ ℝ^{d₁} and A_{β₁,β₂} ∈ ℝ^{d₁×d₂} to denote the corresponding subvector of v ∈ ℝᵈ and the submatrix of A ∈ ℝ^{d×d}.

We aim to conduct asymptotic inference for low dimensional components of the high dimensional parameter β*. Without loss of generality, we consider a single entry of β*. In particular, we assume β* = (α*, (γ*)ᵀ)ᵀ, where α* ∈ ℝ is the entry of interest, while γ* ∈ ℝ^{d−1} is treated as the nuisance parameter. In the following, we construct a high dimensional score test named the decorrelated score test. It is worth noting that our method and theory can be easily generalized to perform statistical inference for an arbitrary low dimensional subvector of β*.

Decorrelated Score Test: For the score test, we are primarily interested in testing H₀: α* = 0, since this null hypothesis characterizes the uncertainty in variable selection. Our method easily generalizes to H₀: α* = α₀ with α₀ ≠ 0. For notational simplicity, we define the following key quantity

    T_n(β) = ∇²₁,₁Q_n(β; β) + ∇²₁,₂Q_n(β; β) ∈ ℝ^{d×d}.    (2.4)

Let β = (α, γᵀ)ᵀ. We define the decorrelated score function S_n(·, ·) ∈ ℝ as

    S_n(β, λ) = [∇₁Q_n(β; β)]_α − w(β, λ)ᵀ · [∇₁Q_n(β; β)]_γ.    (2.5)

Here w(β, λ) ∈ ℝ^{d−1} is obtained using the following Dantzig selector [8]

    w(β, λ) = argmin_{w ∈ ℝ^{d−1}} ‖w‖₁,  subject to  ‖[T_n(β)]_{γ,α} − [T_n(β)]_{γ,γ} · w‖_∞ ≤ λ,    (2.6)

where λ > 0 is a tuning parameter. Let β̂ = (α̂, γ̂ᵀ)ᵀ, where β̂ is the estimator attained by the high dimensional EM algorithm (Algorithm 1). We define the decorrelated score statistic as

    √n · S_n(β̂₀, λ) · [T_n(β̂₀)]_{α|γ}^{−1/2},    (2.7)

where β̂₀ = (0, γ̂ᵀ)ᵀ, and [T_n(β̂₀)]_{α|γ} = (1, −w(β̂₀, λ)ᵀ) · T_n(β̂₀) · (1, −w(β̂₀, λ)ᵀ)ᵀ.

Here we use β̂₀ instead of β̂ since we are interested in the null hypothesis H₀: α* = 0. We can also replace β̂₀ with β̂ and the theoretical results will remain the same. In §4 we will prove the proposed decorrelated score statistic in (2.7) is asymptotically N(0, 1). Consequently, the decorrelated score test with significance level δ ∈ (0, 1) takes the form

    ψ_S(δ) = 1{ |√n · S_n(β̂₀, λ) · [T_n(β̂₀)]_{α|γ}^{−1/2}| ≥ Φ^{−1}(1 − δ/2) },

where Φ^{−1}(·) is the inverse function of the Gaussian cumulative distribution function. If ψ_S(δ) = 1, we reject the null hypothesis H₀: α* = 0. The intuition of this decorrelated score test is explained in §D of the appendix. The key theoretical observation is Theorem 2.1, which connects ∇₁Q_n(·; ·) in (2.5) and T_n(·) in (2.7) with the score function and Fisher information in the presence of latent structures. Let ℓ_n(β) be the log-likelihood. Its score function is ∇ℓ_n(β) and the Fisher information is I(β*) = −E_{β*}[∇²ℓ_n(β*)]/n, where E_{β*}(·) is the expectation under the model with parameter β*.

Theorem 2.1. For the true parameter β* and any β ∈ ℝᵈ, it holds that

    ∇₁Q_n(β; β) = ∇ℓ_n(β)/n,  and  E_{β*}[T_n(β*)] = −I(β*) = E_{β*}[∇²ℓ_n(β*)]/n.    (2.8)

Proof. See §I.1 of the appendix for a detailed proof.

Based on the decorrelated score test, it is easy to establish the decorrelated Wald test, which allows us to construct confidence intervals. For compactness we defer it to the longer version of this paper.
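For concreteness, the following sketch (our own construction, with hypothetical variable names) solves the Dantzig selector (2.6) as a linear program via the standard split w = w⁺ − w⁻ and then assembles the statistic (2.7). scipy.optimize.linprog is used as a generic LP solver, and the absolute value on the conditional term is our own safeguard against sign conventions, since E_{β*}[T_n(β*)] = −I(β*).

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(T_ga, T_gg, lam):
    """min ||w||_1  s.t.  ||T_ga - T_gg @ w||_inf <= lam; eq. (2.6).
    Split w = wp - wm with wp, wm >= 0 and solve as a linear program."""
    d1 = T_gg.shape[1]
    c = np.ones(2 * d1)                        # objective: sum(wp) + sum(wm)
    A = np.vstack([np.hstack([ T_gg, -T_gg]),  #  T_gg @ w - T_ga <= lam
                   np.hstack([-T_gg,  T_gg])]) # -(T_gg @ w - T_ga) <= lam
    b = np.concatenate([lam + T_ga, lam - T_ga])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (2 * d1))
    return res.x[:d1] - res.x[d1:]

def decorrelated_score_statistic(grad, T_n, w, n):
    """Assemble (2.5) and (2.7): grad is grad_1 Q_n(beta0; beta0) and T_n is
    the matrix in (2.4), both evaluated at beta0 = (0, gamma_hat)."""
    S_n = grad[0] - w @ grad[1:]               # decorrelated score, eq. (2.5)
    v = np.concatenate([[1.0], -w])
    T_cond = v @ T_n @ v                       # [T_n]_{alpha|gamma}
    return np.sqrt(n) * S_n * np.abs(T_cond) ** (-0.5)
```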
3 Theory of Computation and Estimation
Before we present the main results, we introduce three technical conditions, which will significantly ease our presentation. They will be verified for specific latent variable models in §E of the appendix. The first two conditions, proposed by [2], characterize the properties of the population version lower bound function Q(·; ·), i.e., the expectation of Q_n(·; ·) defined in (2.1). We define the respective population version M-step as follows. For the M-step in Algorithm 2, we define

    M(β) = argmax_{β′} Q(β′; β).    (3.1)

For the M-step in Algorithm 3, we define

    M(β) = β + η · ∇₁Q(β; β),    (3.2)

where η > 0 is the stepsize in Algorithm 3. We use ℬ to denote the basin of attraction, i.e., the local region where the high dimensional EM algorithm enjoys desired guarantees.

Condition 3.1. We define two versions of this condition.
• Lipschitz-Gradient-1(γ₁, ℬ). For the true parameter β* and any β ∈ ℬ, we have

    ‖∇₁Q(M(β); β*) − ∇₁Q(M(β); β)‖₂ ≤ γ₁ · ‖β* − β‖₂,    (3.3)

where M(·) is the population version M-step (maximization implementation) defined in (3.1).
• Lipschitz-Gradient-2(γ₂, ℬ). For the true parameter β* and any β ∈ ℬ, we have

    ‖∇₁Q(β; β*) − ∇₁Q(β; β)‖₂ ≤ γ₂ · ‖β* − β‖₂.    (3.4)

Condition 3.1 defines a variant of Lipschitz continuity for ∇₁Q(·; ·). In the sequel, we will use (3.3) and (3.4) in the analysis of the two implementations of the M-step respectively.

Condition 3.2 Concavity-Smoothness(μ, ν, ℬ). For any β₁, β₂ ∈ ℬ, Q(·; β*) is μ-smooth, i.e.,

    Q(β₁; β*) ≥ Q(β₂; β*) + (β₁ − β₂)ᵀ · ∇₁Q(β₂; β*) − μ/2 · ‖β₁ − β₂‖₂²,    (3.5)

and ν-strongly concave, i.e.,

    Q(β₁; β*) ≤ Q(β₂; β*) + (β₁ − β₂)ᵀ · ∇₁Q(β₂; β*) − ν/2 · ‖β₁ − β₂‖₂².    (3.6)

This condition indicates that, when the second variable of Q(·; ·) is fixed to be β*, the function is "sandwiched" between two quadratic functions. The third condition characterizes the statistical error between the sample version and population version M-steps, i.e., M_n(·) defined in Algorithms 2 and 3, and M(·) in (3.1) and (3.2). Recall ‖·‖₀ denotes the total number of nonzero entries in a vector.

Condition 3.3 Statistical-Error(ε, δ, s, n, ℬ). For any fixed β ∈ ℬ with ‖β‖₀ ≤ s, we have that

    ‖M(β) − M_n(β)‖_∞ ≤ ε    (3.7)

holds with probability at least 1 − δ. Here ε > 0 possibly depends on δ, sparsity level s, sample size n, dimension d, as well as the basin of attraction ℬ.

In (3.7) the statistical error ε quantifies the ℓ_∞-norm of the difference between the population version and sample version M-steps. Particularly, we constrain the input of M(·) and M_n(·) to be s-sparse. Such a condition is different from the one used by [2]. In detail, they quantify the statistical error with the ℓ₂-norm and do not constrain the input of M(·) and M_n(·) to be sparse. Consequently, our subsequent statistical analysis is different from theirs. The reason we use the ℓ_∞-norm is that it characterizes the more refined entrywise statistical error, which converges at a fast rate of √(log d/n) (possibly with extra factors depending on specific models). In comparison, the ℓ₂-norm statistical error converges at a slow rate of √(d/n), which does not decrease to zero as n increases with d ≫ n. Furthermore, the fine-grained entrywise statistical error is crucial to our key proof for quantifying the effects of the truncation step (line 6 of Algorithm 1) on the iterative solution sequence.
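A condition like Statistical-Error(ε, δ, s, n, ℬ) can also be probed empirically. The sketch below is our own construction: it approximates the population M-step M(·) by the sample M-step computed on a much larger sample, then estimates the ℓ_∞ gap of (3.7) by Monte Carlo. The callbacks sample_model (drawing samples from the model) and m_step_on (computing M_n(·) on given samples) are hypothetical names standing in for a concrete latent variable model.

```python
import numpy as np

def estimate_stat_error(sample_model, m_step_on, beta, n,
                        n_pop=10**6, trials=50, delta=0.05):
    """Monte Carlo proxy for epsilon in Condition 3.3:
    ||M(beta) - M_n(beta)||_inf, with M(.) approximated on n_pop samples."""
    pop = sample_model(n_pop)
    M_beta = m_step_on(pop, beta)               # proxy for the population M-step
    gaps = []
    for _ in range(trials):
        M_n_beta = m_step_on(sample_model(n), beta)
        gaps.append(np.max(np.abs(M_beta - M_n_beta)))
    # empirical epsilon that holds with probability roughly 1 - delta
    return np.quantile(gaps, 1.0 - delta)
```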
3.1 Main Results

To simplify the technical analysis of the high dimensional EM algorithm, we focus on its resampling version, which is illustrated in Algorithm 4 in §C of the appendix.

Theorem 3.4. We define ℬ = {β : ‖β − β*‖₂ ≤ R}, where R = ζ · ‖β*‖₂ for some ζ ∈ (0, 1). We assume Condition Concavity-Smoothness(μ, ν, ℬ) holds and ‖β^init − β*‖₂ ≤ R/2.
• For the maximization implementation of the M-step (Algorithm 2), we suppose that Condition Lipschitz-Gradient-1(γ₁, ℬ) holds with κ₁ := γ₁/ν ∈ (0, 1) and

    ŝ = C · max{16/(1/κ₁ − 1)², 4 · (1 + ζ)²/(1 − ζ)²} · s*,    (3.8)

    (√ŝ + C′/√(1 − ζ) · √s*) · ε ≤ min{(1 − √κ₁) · R, (1 − ζ)²/[2 · (1 + ζ)] · ‖β*‖₂}.    (3.9)

Here C ≥ 1 and C′ > 0 are constants. Under Condition Statistical-Error(ε, δ/T, ŝ, n/T, ℬ) we have that, for t = 1, . . . , T,

    ‖β^(t) − β*‖₂ ≤ κ₁^{t/2} · R + (√ŝ + C′/√(1 − ζ) · √s*)/(1 − √κ₁) · ε    (3.10)

holds with probability at least 1 − δ, where the first term is the optimization error, the second term is the statistical error, and C′ is the same constant as in (3.9).
• For the gradient ascent implementation of the M-step (Algorithm 3), we suppose that Condition Lipschitz-Gradient-2(γ₂, ℬ) holds with κ₂ := 1 − 2 · (ν − γ₂)/(ν + μ) ∈ (0, 1) and the stepsize in Algorithm 3 is set to η = 2/(ν + μ). Meanwhile, we assume (3.8) and (3.9) hold with κ₁ replaced by κ₂. Under Condition Statistical-Error(ε, δ/T, ŝ, n/T, ℬ) we have that, for t = 1, . . . , T, (3.10) holds with probability at least 1 − δ, in which κ₁ is replaced with κ₂.

Proof. See §G.1 of the appendix for a detailed proof.

The assumption in (3.8) states that the sparsity parameter ŝ is chosen to be sufficiently large and also of the same order as the true sparsity level s*. This assumption ensures that the error incurred by the truncation step can be upper bounded. In addition, as is shown for specific latent variable models in §E of the appendix, the error term ε in Condition Statistical-Error(ε, δ/T, ŝ, n/T, ℬ) decreases as sample size n increases. By the assumption in (3.8), √ŝ + C′/√(1 − ζ) · √s* is of the same order as √s*. Therefore, the assumption in (3.9) suggests the sample size n is sufficiently large such that √s* · ε is sufficiently small. These assumptions guarantee that the entire iterative solution sequence remains within the basin of attraction ℬ in the presence of statistical error.

Theorem 3.4 illustrates that the upper bound of the overall estimation error can be decomposed into two terms. The first term is the upper bound of the optimization error, which decreases to zero at a geometric rate of convergence, because we have κ₁, κ₂ < 1. Meanwhile, the second term is the upper bound of the statistical error, which does not depend on t. Since √ŝ + C′/√(1 − ζ) · √s* is of the same order as √s*, this term is proportional to √s* · ε, where ε is the entrywise statistical error between M(·) and M_n(·). In §E of the appendix we prove that, for each specific latent variable model, ε is roughly of the order √(log d/n). (There may be extra factors attached to ε depending on each specific model.) Therefore, the statistical error term is roughly of the order √(s* · log d/n). Consequently, for a sufficiently large t = T such that the optimization and statistical error terms in (3.10) are of the same order, the final estimator β̂ = β^(T) attains a (near-)optimal √(s* · log d/n) (possibly with extra factors) statistical rate. For compactness, we give the following example and defer the details to §E.

Implications for Gaussian Mixture Model: We assume y₁, . . . , yₙ are the n i.i.d. realizations of Y = Z · β* + V. Here Z is a Rademacher random variable, i.e., P(Z = +1) = P(Z = −1) = 1/2, and V ∼ N(0, σ² · I_d) is independent of Z, where σ is the standard deviation. Suppose that we have ‖β*‖₂/σ ≥ r, where r > 0 is a sufficiently large constant that denotes the minimum signal-to-noise ratio. In §E of the appendix we prove that there exists some constant C > 0 such that Conditions Lipschitz-Gradient-1(γ₁, ℬ) and Concavity-Smoothness(μ, ν, ℬ) hold with

    γ₁ = exp(−C · r²),  μ = ν = 1,  ℬ = {β : ‖β − β*‖₂ ≤ R} with R = ζ · ‖β*‖₂, ζ = 1/4.

For a sufficiently large n, we have that Condition Statistical-Error(ε, δ, s, n, ℬ) holds with

    ε = C · (‖β*‖_∞ + σ) · √([log d + log(2/δ)]/n).

Then the first part of Theorem 3.4 implies ‖β̂ − β*‖₂ ≤ C · √(s* · log d · log n/n) for a sufficiently large T, which is near-optimal with respect to the minimax lower bound √(s* · log d/n).
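For this symmetric two-component Gaussian mixture, the M-step has a well-known closed form (a standard derivation, see, e.g., [2]): M_n(β) = (1/n) Σᵢ tanh(⟨β, yᵢ⟩/σ²) · yᵢ, where tanh(⟨β, y⟩/σ²) = 2·P(Z = +1 | y) − 1. A minimal sketch (ours) that plugs this into the high_dim_em loop sketched after §2.1; the initialization used in the commented usage is a perturbation of β* purely for illustration, standing in for the "appropriate initialization" the theory requires.

```python
import numpy as np

def gmm_m_step(Y, beta, sigma=1.0):
    """Sample M-step M_n(beta) for the symmetric two-component GMM
    Y = Z * beta + V: average of samples weighted by posterior signs."""
    weights = np.tanh(Y @ beta / sigma**2)     # equals 2*P(Z=+1|y) - 1
    return (weights[:, None] * Y).mean(axis=0)

# hypothetical usage with high_dim_em from the Section 2.1 sketch:
# rng = np.random.default_rng(0)
# n, d, s = 500, 200, 5
# beta_star = np.zeros(d); beta_star[:s] = 2.0
# Y = rng.choice([-1, 1], n)[:, None] * beta_star + rng.normal(size=(n, d))
# beta_init = beta_star + 0.2 * rng.normal(size=d)   # warm start for illustration
# beta_hat = high_dim_em(beta_init, lambda b: gmm_m_step(Y, b), s_hat=2*s, T=20)
```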
4 Theory of Inference

To simplify the presentation of the unified framework, we lay out several technical conditions, which will be verified for each model. Let ε_EM, ε_G, ε_T and ε_L be four quantities that scale with s*, d and n. These conditions will be verified for specific latent variable models in §F of the appendix.

Condition 4.1 Parameter-Estimation(ε_EM). We have ‖β̂ − β*‖₁ = O_P(ε_EM).

Condition 4.2 Gradient-Statistical-Error(ε_G). We have ‖∇₁Q_n(β*; β*) − ∇₁Q(β*; β*)‖_∞ = O_P(ε_G).

Condition 4.3 T_n(·)-Concentration(ε_T). We have ‖T_n(β*) − E_{β*}[T_n(β*)]‖_{∞,∞} = O_P(ε_T).

Condition 4.4 T_n(·)-Lipschitz(ε_L). For any β, we have ‖T_n(β) − T_n(β*)‖_{∞,∞} = O_P(ε_L · ‖β − β*‖₁).

In the sequel, we lay out an assumption on several population quantities and the sample size n. Recall that β* = (α*, (γ*)ᵀ)ᵀ, where α* ∈ ℝ is the entry of interest, while γ* ∈ ℝ^{d−1} is the nuisance parameter. By the notations in §2.2, I(β*)_{γ,γ} ∈ ℝ^{(d−1)×(d−1)} and I(β*)_{γ,α} ∈ ℝ^{(d−1)×1} denote the submatrices of the Fisher information matrix I(β*) ∈ ℝ^{d×d}. We define w*, s*_w and S*_w as

    w* = [I(β*)_{γ,γ}]^{−1} · I(β*)_{γ,α} ∈ ℝ^{d−1},  s*_w = ‖w*‖₀,  and  S*_w = supp(w*).    (4.1)

We define λ_max(I(β*)) and λ_min(I(β*)) as the largest and smallest eigenvalues of I(β*), and

    I(β*)_{α|γ} = I(β*)_{α,α} − I(β*)_{α,γ} · [I(β*)_{γ,γ}]^{−1} · I(β*)_{γ,α} ∈ ℝ.    (4.2)

According to (4.1) and (4.2), we can easily verify that

    I(β*)_{α|γ} = (1, −(w*)ᵀ) · I(β*) · (1, −(w*)ᵀ)ᵀ.    (4.3)

The following assumption ensures that λ_min(I(β*)) > 0. Hence, I(β*)_{γ,γ} in (4.1) is invertible. Also, according to (4.3) and the fact that λ_min(I(β*)) > 0, we have I(β*)_{α|γ} > 0.

Assumption 4.5. We impose the following assumptions.
• For positive constants ρ_max and ρ_min, we assume

    λ_max(I(β*)) ≤ ρ_max,  λ_min(I(β*)) ≥ ρ_min,  I(β*)_{α|γ} = O(1),  [I(β*)_{α|γ}]^{−1} = O(1).    (4.4)

• The tuning parameter λ of the Dantzig selector in (2.6) is set to

    λ = C · (ε_T + ε_L · ε_EM) · (1 + ‖w*‖₁),    (4.5)

where C ≥ 1 is a sufficiently large constant. The sample size n is sufficiently large such that

    max{‖w*‖₁, 1} · s*_w · λ = o(1),  ε_EM = o(1),  s*_w · λ · ε_G = o(1/√n),
    λ · ε_EM = o(1/√n),  max{1, ‖w*‖₁} · ε_L · (ε_EM)² = o(1/√n).    (4.6)

The assumption on λ_min(I(β*)) guarantees that the Fisher information matrix is positive definite. The other assumptions in (4.4) guarantee the existence of the asymptotic variance of √n · S_n(β̂₀, λ) in the score statistic defined in (2.7). Similar assumptions are standard in existing asymptotic inference results. For example, for mixture of regression model, [14] impose variants of these assumptions. For specific models, we will show that ε_EM, ε_G, ε_T and λ all decrease with n, while ε_L increases with n at a slow rate. Therefore, the assumptions in (4.6) ensure that the sample size n is sufficiently large. We will make these assumptions more explicit after we specify ε_EM, ε_G, ε_T and ε_L for each model. Note the assumptions in (4.6) imply that s*_w = ‖w*‖₀ needs to be small. For instance, for λ specified in (4.5), max{‖w*‖₁, 1} · s*_w · λ = o(1) in (4.6) implies s*_w · ε_T = o(1). In the following, we will prove that ε_T is of the order √(log d/n). Hence, we require that s*_w = o(√(n/log d)) ≪ d − 1, i.e., w* ∈ ℝ^{d−1} is sparse. Such a sparsity assumption can be understood as follows. According to the definition of w* in (4.1), we have I(β*)_{γ,γ} · w* = I(β*)_{γ,α}. Therefore, such a sparsity assumption suggests I(β*)_{γ,α} lies within the span of a few columns of I(β*)_{γ,γ}. Such a sparsity assumption on w* is necessary, because otherwise it is difficult to accurately estimate w* in high dimensional regimes. In the context of high dimensional generalized linear models, [26, 32] impose similar sparsity assumptions.
4.1 Main Results
Decorrelated Score Test: The next theorem establishes the asymptotic normality of the decorrelated score statistic defined in (2.7).

Theorem 4.6. We consider β* = (α*, (γ*)ᵀ)ᵀ with α* = 0. Under Assumption 4.5 and Conditions 4.1-4.4, we have that for n → ∞,

    √n · S_n(β̂₀, λ) · [T_n(β̂₀)]_{α|γ}^{−1/2} →_D N(0, 1),    (4.7)

where β̂₀ and [T_n(β̂₀)]_{α|γ} ∈ ℝ are defined in (2.7). The limiting variance of the decorrelated score function √n · S_n(β̂₀, λ) is I(β*)_{α|γ}, which is defined in (4.2).

Proof. See §G.2 of the appendix for a detailed proof.

Optimality: [27] prove that for inferring α* in the presence of nuisance parameter γ*, I(β*)_{α|γ} is the semiparametric efficient information, i.e., the minimum limiting variance of the (rescaled) score function. Our proposed decorrelated score function achieves such a semiparametric information lower bound and is therefore in this sense optimal.

In the following, we use Gaussian mixture model to illustrate the effectiveness of Theorem 4.6. We defer the details and the implications for mixture of regression to §F of the appendix.

Implications for Gaussian Mixture Model: Under the same model considered in §3.1, if we assume all quantities except s*_w, s*, d and n are constant, then we have that Conditions 4.1-4.4 hold with

    ε_EM = s* · √(log d · log n/n),  ε_G = √(log d/n),  ε_T = √(log d/n)  and  ε_L = (log d + log n)^{3/2}.

Thus, under Assumption 4.5, (4.7) holds when n → ∞. Also, we can verify that (4.6) in Assumption 4.5 holds if max{(s*_w)², s*} · (s*)² · (log d)⁵ = o(n/(log n)²).
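Given the statistic in (2.7)/(4.7), the level-δ test itself is just a comparison with a standard normal quantile. A minimal sketch (our own, not from the paper):

```python
from scipy.stats import norm

def decorrelated_score_test(statistic, delta=0.05):
    """Reject H0: alpha* = 0 iff |statistic| exceeds the (1 - delta/2)
    standard normal quantile; valid since the statistic is asymptotically N(0, 1)."""
    return abs(statistic) > norm.ppf(1 - delta / 2)
```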
5 Conclusion
We propose a novel high dimensional EM algorithm which naturally incorporates sparsity structure.
Our theory shows that, with a suitable initialization, the proposed algorithm converges at a geometric
rate and achieves an estimator with the (near-)optimal statistical rate of convergence. Beyond point
estimation, we further propose the decorrelated score and Wald statistics for testing hypotheses and
constructing confidence intervals for low dimensional components of high dimensional parameters.
We apply the proposed algorithmic framework to a broad family of high dimensional latent variable
models. For these models, our framework establishes the first computationally feasible approach for
optimal parameter estimation and asymptotic inference under high dimensional settings.
References
[1] Anandkumar, A., Ge, R., Hsu, D., Kakade, S. M. and Telgarsky, M. (2014). Tensor decompositions for learning latent variable models. Journal of Machine Learning Research 15 2773–2832.
[2] Balakrishnan, S., Wainwright, M. J. and Yu, B. (2014). Statistical guarantees for the EM algorithm: From population to sample-based analysis. arXiv preprint arXiv:1408.2156.
[3] Bartholomew, D. J., Knott, M. and Moustaki, I. (2011). Latent variable models and factor analysis: A unified approach, vol. 899. Wiley.
[4] Belloni, A., Chen, D., Chernozhukov, V. and Hansen, C. (2012). Sparse models and methods for optimal instruments with an application to eminent domain. Econometrica 80 2369–2429.
[5] Bickel, P. J., Ritov, Y. and Tsybakov, A. B. (2009). Simultaneous analysis of Lasso and Dantzig selector. Annals of Statistics 37 1705–1732.
[6] Boucheron, S., Lugosi, G. and Massart, P. (2013). Concentration inequalities: A nonasymptotic theory of independence. Oxford University Press.
[7] Cai, T., Liu, W. and Luo, X. (2011). A constrained ℓ₁ minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association 106 594–607.
[8] Candès, E. and Tao, T. (2007). The Dantzig selector: Statistical estimation when p is much larger than n. Annals of Statistics 35 2313–2351.
[9] Chaganty, A. T. and Liang, P. (2013). Spectral experts for estimating mixtures of linear regressions. arXiv preprint arXiv:1306.3729.
[10] Chaudhuri, K., Dasgupta, S. and Vattani, A. (2009). Learning mixtures of Gaussians using the k-means algorithm. arXiv preprint arXiv:0912.0086.
[11] Dasgupta, S. and Schulman, L. (2007). A probabilistic analysis of EM for mixtures of separated, spherical Gaussians. Journal of Machine Learning Research 8 203–226.
[12] Dempster, A. P., Laird, N. M. and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Series B (Statistical Methodology) 39 1–38.
[13] Javanmard, A. and Montanari, A. (2014). Confidence intervals and hypothesis testing for high-dimensional regression. Journal of Machine Learning Research 15 2869–2909.
[14] Khalili, A. and Chen, J. (2007). Variable selection in finite mixture of regression models. Journal of the American Statistical Association 102 1025–1038.
[15] Knight, K. and Fu, W. (2000). Asymptotics for Lasso-type estimators. Annals of Statistics 28 1356–1378.
[16] Lee, J. D., Sun, D. L., Sun, Y. and Taylor, J. E. (2013). Exact inference after model selection via the Lasso. arXiv preprint arXiv:1311.6238.
[17] Lockhart, R., Taylor, J., Tibshirani, R. J. and Tibshirani, R. (2014). A significance test for the Lasso. Annals of Statistics 42 413–468.
[18] McLachlan, G. and Krishnan, T. (2007). The EM algorithm and extensions, vol. 382. Wiley.
[19] Meinshausen, N. and Bühlmann, P. (2010). Stability selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 72 417–473.
[20] Meinshausen, N., Meier, L. and Bühlmann, P. (2009). p-values for high-dimensional regression. Journal of the American Statistical Association 104 1671–1681.
[21] Nesterov, Y. (2004). Introductory lectures on convex optimization: A basic course, vol. 87. Springer.
[22] Nickl, R. and van de Geer, S. (2013). Confidence sets in sparse regression. Annals of Statistics 41 2852–2876.
[23] Städler, N., Bühlmann, P. and van de Geer, S. (2010). ℓ₁-penalization for mixture regression models. TEST 19 209–256.
[24] Taylor, J., Lockhart, R., Tibshirani, R. J. and Tibshirani, R. (2014). Post-selection adaptive inference for least angle regression and the Lasso. arXiv preprint arXiv:1401.3889.
[25] Tseng, P. (2004). An analysis of the EM algorithm and entropy-like proximal point methods. Mathematics of Operations Research 29 27–44.
[26] van de Geer, S., Bühlmann, P., Ritov, Y. and Dezeure, R. (2014). On asymptotically optimal confidence regions and tests for high-dimensional models. Annals of Statistics 42 1166–1202.
[27] van der Vaart, A. W. (2000). Asymptotic statistics, vol. 3. Cambridge University Press.
[28] Vershynin, R. (2010). Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027.
[29] Wasserman, L. and Roeder, K. (2009). High-dimensional variable selection. Annals of Statistics 37 2178–2201.
[30] Wu, C. F. J. (1983). On the convergence properties of the EM algorithm. Annals of Statistics 11 95–103.
[31] Yi, X., Caramanis, C. and Sanghavi, S. (2013). Alternating minimization for mixed linear regression. arXiv preprint arXiv:1310.3745.
[32] Zhang, C.-H. and Zhang, S. S. (2014). Confidence intervals for low dimensional parameters in high dimensional linear models. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 76 217–242.
5,430 | 5,915 | Associative Memory via a Sparse Recovery Model
Arya Mazumdar
Department of ECE
University of Minnesota Twin Cities
arya@umn.edu
Ankit Singh Rawat*
Computer Science Department
Carnegie Mellon University
asrawat@andrew.cmu.edu
Abstract
An associative memory is a structure learned from a dataset M of vectors (signals)
in a way such that, given a noisy version of one of the vectors as input, the nearest
valid vector from M (nearest neighbor) is provided as output, preferably via a fast
iterative algorithm. Traditionally, binary (or q-ary) Hopfield neural networks are
used to model the above structure. In this paper, for the first time, we propose
a model of associative memory based on sparse recovery of signals. Our basic
premise is simple. For a dataset, we learn a set of linear constraints that every
vector in the dataset must satisfy. Provided these linear constraints possess some
special properties, it is possible to cast the task of finding nearest neighbor as
a sparse recovery problem. Assuming generic random models for the dataset,
we show that it is possible to store super-polynomial or exponential number of
n-length vectors in a neural network of size O(n). Furthermore, given a noisy
version of one of the stored vectors corrupted in near-linear number of coordinates,
the vector can be correctly recalled using a neurally feasible algorithm.
1 Introduction
Neural associative memories with exponential storage capacity and large (potentially linear) fraction
of error-correction guarantee have been the topic of extensive research for the past three decades.
A networked associative memory model must have the ability to learn and remember an arbitrary
but specific set of n-length messages. At the same time, when presented with a noisy query, i.e., an
n-length vector close to one of the messages, the system must be able to recall the correct message.
While the first task is called the learning phase, the second one is referred to as the recall phase.
Associative memories are traditionally modeled by what is called binary Hopfield networks [15],
where a weighted graph of size n is considered with each vertex representing a binary state neuron.
The edge-weights of the network are learned from the set of binary vectors to be stored by the
Hebbian learning rule [13]. It has been shown in [22] that, to recover the correct vector in the
presence of a linear (in n) number of errors, it is not possible to store more than O(n/log n) arbitrary
binary vectors in the above model of learning. In the pursuit of networks that can store exponential
(in n) number of messages, some works [26, 12, 21] do show the existence of Hopfield networks that
can store ≈ 1.22ⁿ messages. However, for such Hopfield networks, only a small number of errors
in the query render the recall phase unsuccessful. The Hopfield networks that store non-binary
message vectors are studied in [17, 23], where the storage capacity of such networks against large
fraction of errors is again shown to be linear in n. There have been multiple efforts to increase the
storage capacity of the associative memories to exponential by moving away from the framework
of the Hopfield networks (in terms of both the learning and the recall phases) [14, 11, 19, 25, 18].
These efforts also involve relaxing the requirement of storing the collections of arbitrary messages.
In [11], Gripon and Berrou stored O(n²) sparse message vectors in the form of neural
cliques. Another setting where neurons have been assumed to have a large (albeit constant) number
* This work was done when the author was with the Dept. of ECE, University of Texas at Austin, TX, USA.
of states, and at the same time the message set (or the dataset) is assumed to form a linear subspace
is considered in [19, 25, 18].
[Figure 1: The complete bipartite graph corresponding to the associative memory, with message nodes x₁, x₂, . . . , xₙ and constraint nodes r₁, r₂, . . . , rₘ. Here, we depict only a small fraction of edges. The edge weights of the bipartite graph are obtained from the linear constraints satisfied by the messages. Information can flow in both directions in the graph, i.e., from a message node to a constraint node and from a constraint node to a message node. In the steady state n message nodes store n coordinates of a valid message, and all the m constraint nodes are satisfied, i.e., the weighted sum of the values stored on the neighboring message nodes (according to the associated edge weights) is equal to zero. Note that an edge is relevant for the information flow iff the corresponding edge weight is nonzero.]

The most basic premise of the works on neural associative memory is to design a graph dynamic system such that the vectors to be stored are the steady states of the system. One way to attain this is to learn a set of constraints that every vector in the dataset must satisfy. The inclusion relation between the variables in the vectors and the constraints can be represented by a bipartite graph (cf. Fig. 1). For the recall phase, noise removal can be done by running belief propagation on this bipartite graph. It can be shown that the correct message is recovered successfully under conditions such as sparsity and expansion properties of the graph. This is the main idea that has been explored in [19, 25, 18]. In particular, under the assumption that the messages belong to a linear subspace, [19, 25] propose associative memories that can store exponential number of messages while tolerating at most constant number of errors. This approach is further refined in [18], where each message vector from the dataset is assumed to comprise overlapping sub-vectors which belong to different linear subspaces. The learning phase finds the (sparse) linear constraints for the subspaces associated with these sub-vectors. For the recall phase then belief propagation decoding ideas of error-correcting codes have been used. In [18], Karbasi et al. show that the associative memories obtained in this manner can store exponential (in n) messages. They further show that the recall phase can correct linear (in n) number of random errors provided that the bipartite graph associated
Our work is very closely related to the above principle. Instead of finding a sparse set of constraints,
we aim to find a set of linear constraints that satisfy 1) the coherence property, 2) the null-space
property or 3) the restricted isometry property (RIP). Indeed, for a large class of random signal
models, we show that, such constraints must exists and can be found in polynomial time. Any of
the three above mentioned properties provide sufficient condition for recovery of sparse signals or
vectors [8, 6]. Under the assumption that the noise in the query vector is sparse, denoising can
be done very efficiently via iterative sparse recovery algorithms that are neurally feasible [9]. A
neurally feasible algorithm for our model employs only local computations at the vertices of the
corresponding bipartite graph based on the information obtained from their neighboring nodes.
1.1
Our techniques and results
Our main provable results pertain to two different models of datasets, and are given below.
Theorem 1 (Associative memory with sub-gaussian dataset model). It is possible to store a dataset
of size ? exp(n3/4 ) of n-length vectors in a neural network of size O(n) such that a neurally
feasible algorithm can output the correct vector from the dataset given a noisy version of the vector
corrupted in ?(n1/4 ) coordinates.
Theorem 2 (Associative memory with dataset spanned by random rows of fixed orthonormal basis).
It is possible to store a dataset of size ? exp(r) of n-length vectors in a neural network of size O(n)
such that a neurally feasible algorithm can output the correct vector from the dataset given a noisy
n r
version of the vector corrupted in ?( log
6 n ) coordinates.
Theorem 1 follows from Prop. 1 and Theorem 3, while Theorem 2 follows from Prop. 2 and 3; and
by also noting the fact that all r-vectors over any finite alphabet can be linearly mapped to exp(r)
number of points in a space of dimensionality r. The neural feasibility of the recovery follows
from the discussion of Sec. 5. In contrast with [18], our sparse recovery based approach provides
associative memories that are robust against a stronger error model which comprises adversarial error
patterns as opposed to random error patterns. Even though we demonstrate the associative memories
which have sub-exponential storage capacity and can tolerate sub-linear (but polynomial) number
of errors, neurally feasible recall phase is guaranteed to recover the message vector from adversarial
errors. On the other hand, the recovery guarantees in [18, Theorem 3 and 5] hold if the bipartite
graph obtained during learning phase possesses certain structures (e.g. degree sequence). However,
it is not apparent in their work if the learnt bipartite graph indeed has these structural properties.
Similar to the aforementioned papers, our operations are performed over real numbers. We show the
dimensionality of the dataset to be large enough, as referenced in Theorem 1 and 2. As in previous
works such as [18], we can therefore find a large number of points, exponential in the dimensionality,
with finite (integer) alphabet that can be treated as the message vectors or dataset.
Our main contribution is to bring in the model of sparse recovery in the domain of associative
memory - a very natural connection. The main techniques that we employ are as follows: 1) In
Sec. 3, we present two models of ensembles for the dataset. The dataset belongs to subspaces
that have associated orthogonal subspace with ?good? basis. These good basis for the orthogonal
subspaces satisfy one or more of the conditions introduced in Sec. 2, a section that provides some
background material on sparse recovery and various sufficient conditions relevant to the problem.
2) In Sec. 4, we briefly describe a way to obtain a ?good? null basis for the dataset. The found bases
serve as measurement matrices that allow for sparse recovery. 3) Sec. 5 focus on the recall phases
of the proposed associative memories. The algorithms are for sparse recovery, but stated in a way
that are implementable in a neural network.
In Sec. 6, we present some experimental results showcasing the performance of the proposed associative memories. In Appendix C, we describe another approach to construct associative memory
based on the dictionary learning problem [24].
2
Definition and mathematical preliminaries
Notation: We use lowercase boldface letters to denote vectors and uppercase boldface letters to denote matrices. For a matrix B, $B^T$ denotes the transpose of B. A vector is called k-sparse if it has only k nonzero entries. For a vector $x \in \mathbb{R}^n$ and any set of coordinates $I \subseteq [n] \stackrel{\mathrm{def}}{=} \{1, 2, \dots, n\}$, $x_I$ denotes the projection of x onto the coordinates of I. For any set of coordinates $I \subseteq [n]$, $I^c \stackrel{\mathrm{def}}{=} [n] \setminus I$. Similarly, for a matrix B, $B_I$ denotes the sub-matrix formed by the rows of B indexed by the set I. We use $\mathrm{span}(B)$ to denote the subspace spanned by the columns of B. Given an $m \times n$ matrix B, denote the columns of the matrix by $b_j$, $j = 1, \dots, n$, and assume, for all the matrices in this section, that the columns have unit norm, i.e., $\|b_j\|_2 = 1$.
Definition 1 (Coherence). The mutual coherence of the matrix B is defined to be
$$\mu(B) = \max_{i \neq j} |\langle b_i, b_j \rangle|. \quad (1)$$
Definition 2 (Null-space property). The matrix B is said to satisfy the null-space property with parameters $(k, \gamma < 1)$ if $\|h_I\|_1 \leq \gamma \|h_{I^c}\|_1$ for every vector $h \in \mathbb{R}^n$ with $Bh = 0$ and any set $I \subseteq [n]$, $|I| = k$.
Definition 3 (RIP). A matrix B is said to satisfy the restricted isometry property with parameters k and $\delta$, or the $(k, \delta)$-RIP, if for all k-sparse vectors $x \in \mathbb{R}^n$,
$$(1 - \delta)\|x\|_2^2 \leq \|Bx\|_2^2 \leq (1 + \delta)\|x\|_2^2. \quad (2)$$
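Of these three conditions, only mutual coherence is tractable to evaluate directly (a point we return to in Sec. 4). As a concrete illustration, the following NumPy sketch computes $\mu(B)$ for a matrix with unit-norm columns and the (pessimistic) sparsity level certified by the coherence-based RIP bound stated after Proposition 2 below; the matrix dimensions and seed are arbitrary illustrative choices.

```python
import numpy as np

def mutual_coherence(B):
    """mu(B) = max_{i != j} |<b_i, b_j>| over unit-norm columns (Definition 1)."""
    Bn = B / np.linalg.norm(B, axis=0, keepdims=True)   # normalize the columns
    G = np.abs(Bn.T @ Bn)                               # all pairwise inner products
    np.fill_diagonal(G, 0.0)                            # discard the diagonal <b_i, b_i> = 1
    return G.max()

rng = np.random.default_rng(0)
B = rng.standard_normal((64, 256))
mu = mutual_coherence(B)
# Any matrix is (k, (k - 1) * mu)-RIP, so the (2k, sqrt(2) - 1)-RIP condition of
# Proposition 2 is implied whenever (2k - 1) * mu <= sqrt(2) - 1.
k_max = int((1 + (np.sqrt(2) - 1) / mu) // 2)
print(f"mu = {mu:.3f}; coherence certifies recovery up to k = {k_max}")
```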
Next we list some results pertaining to sparse signal recovery guarantees based on these aforementioned parameters. The sparse recovery problem seeks the solution $\hat{x}$ that has the smallest number of nonzero entries among all solutions of the underdetermined system of equations $Bx = r$, where $B \in \mathbb{R}^{m \times n}$ and $x \in \mathbb{R}^n$. The basis pursuit algorithm for sparse recovery provides the following estimate:
$$\hat{x} = \arg\min_{x \,:\, Bx = r} \|x\|_1. \quad (3)$$
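For intuition, basis pursuit can be solved by any off-the-shelf linear programming solver via the standard split $x = u - v$ with $u, v \geq 0$. A minimal SciPy sketch (an illustrative solver choice, not the neurally feasible recall procedure developed in Sec. 5; problem sizes are arbitrary):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(B, r):
    """Solve min ||x||_1 s.t. Bx = r via the LP reformulation x = u - v, u, v >= 0."""
    m, n = B.shape
    c = np.ones(2 * n)                        # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([B, -B])                 # equality constraint B(u - v) = r
    res = linprog(c, A_eq=A_eq, b_eq=r, bounds=(0, None))
    u, v = res.x[:n], res.x[n:]
    return u - v

# sanity check: recover a 5-sparse vector from 80 random measurements
rng = np.random.default_rng(1)
B = rng.standard_normal((80, 200)) / np.sqrt(80)
x0 = np.zeros(200)
x0[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
x_hat = basis_pursuit(B, B @ x0)
print(np.linalg.norm(x_hat - x0))             # ~ 0 when recovery succeeds
```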
Let $x_k$ denote the projection of x onto its largest k coordinates.
Proposition 1. If B has the null-space property with parameters $(k, \gamma < 1)$, then we have
$$\|\hat{x} - x\|_1 \leq \frac{2(1+\gamma)}{1-\gamma}\, \|x - x_k\|_1. \quad (4)$$
The proof of this is quite standard and is delegated to Appendix A.
Proposition 2 ([5]). The $(2k, \sqrt{2}-1)$-RIP of the sampling matrix implies, for a constant c,
$$\|\hat{x} - x\|_2 \leq \frac{c}{\sqrt{k}}\, \|x - x_k\|_1. \quad (5)$$
Furthermore, it can easily be seen that any matrix is $(k, (k-1)\mu)$-RIP, where $\mu$ is the mutual coherence of the sampling matrix.

3 Properties of the datasets
In this section, we show that, under reasonable random models representing quite general assumptions on the datasets, it is possible to learn linear constraints on the messages that satisfy one of the sufficient properties for sparse recovery: incoherence, the null-space property, or RIP. We mainly consider two models for the dataset: 1) a sub-gaussian model, and 2) the span of a random set of vectors from an orthonormal basis.
3.1 Sub-gaussian model for the dataset and the null-space property
In this section we consider message sets spanned by a basis matrix whose entries are distributed according to a sub-gaussian distribution. Sub-gaussian distributions are prevalent in the machine learning literature and provide a broad class of random models with which to analyze and validate various learning algorithms. We refer the reader to [27, 10] for background on these distributions. Let $A \in \mathbb{R}^{n \times r}$ be an $n \times r$ random matrix with independent zero-mean sub-gaussian random variables as its entries. We assume that the subspace spanned by the columns of the matrix A represents our dataset M. The main result of this section is the following.
Theorem 3. The dataset above satisfies a set of linear constraints that has the null-space property. That is, for any $h \in M \stackrel{\mathrm{def}}{=} \mathrm{span}(A)$, the following holds with high probability:
$$\|h_I\|_1 \leq \gamma \|h_{I^c}\|_1 \quad \text{for all } I \subseteq [n] \text{ such that } |I| \leq k, \quad (6)$$
for $k = O(n^{1/4})$, $r = O(n/k)$ and a constant $\gamma < 1$.
The rest of this section is dedicated to the proof of this theorem. Before we present the proof, we state a result from [27] that we utilize to prove Theorem 3.
Proposition 3 ([27, Theorem 5.39]). Let A be an $s \times r$ matrix whose rows $a_i$ are independent sub-gaussian isotropic random vectors in $\mathbb{R}^r$. Then for every $t \geq 0$, with probability at least $1 - 2\exp(-ct^2)$, one has
$$\sqrt{s} - C\sqrt{r} - t \;\leq\; s_{\min}(A) = \min_{x \in \mathbb{R}^r : \|x\|_2 = 1} \|Ax\|_2 \;\leq\; s_{\max}(A) = \max_{x \in \mathbb{R}^r : \|x\|_2 = 1} \|Ax\|_2 \;\leq\; \sqrt{s} + C\sqrt{r} + t. \quad (7)$$
Here C and c depend on the sub-gaussian norms of the rows of the matrix A.
Proof of Theorem 3: Consider an $n \times r$ matrix A that has independent sub-gaussian isotropic random vectors as its rows. Now, for a given set $I \subseteq [n]$, we can focus on two disjoint sub-matrices of A: 1) $A_1 = A_I$ and 2) $A_2 = A_{I^c}$.
Using Proposition 3 with $s = |I|$, we know that, with probability at least $1 - 2\exp(-ct^2)$, we have
$$s_{\max}(A_1) = \max_{x \in \mathbb{R}^r : \|x\|_2 = 1} \|A_1 x\|_2 \leq \sqrt{|I|} + C\sqrt{r} + t. \quad (8)$$
Since we know that $\|A_1 x\|_1 \leq \sqrt{|I|}\,\|A_1 x\|_2$, using (8) the following holds with probability at least $1 - 2\exp(-ct^2)$:
$$\|(Ax)_I\|_1 = \|A_1 x\|_1 \leq \sqrt{|I|}\,\|A_1 x\|_2 \leq |I| + C\sqrt{|I|\,r} + t\sqrt{|I|} \quad \forall\, x \in \mathbb{R}^r : \|x\|_2 = 1. \quad (9)$$
We now consider $A_2$. It follows from Proposition 3 with $s = |I^c| = n - |I|$ that, with probability at least $1 - 2\exp(-ct^2)$,
$$s_{\min}(A_2) = \min_{x \in \mathbb{R}^r : \|x\|_2 = 1} \|A_2 x\|_2 \geq \sqrt{n - |I|} - C\sqrt{r} - t. \quad (10)$$
Combining (10) with the observation that $\|A_2 x\|_1 \geq \|A_2 x\|_2$, the following holds with probability at least $1 - 2\exp(-ct^2)$:
$$\|(Ax)_{I^c}\|_1 = \|A_2 x\|_1 \geq \|A_2 x\|_2 \geq \sqrt{n - |I|} - C\sqrt{r} - t \quad \text{for all } x \in \mathbb{R}^r : \|x\|_2 = 1. \quad (11)$$
Note that we are interested in showing that, for all $h \in M$, we have
$$\|h_I\|_1 \leq \gamma \|h_{I^c}\|_1 \quad \text{for all } I \subseteq [n] \text{ such that } |I| \leq k. \quad (12)$$
This is equivalent to showing that the following holds for all $x \in \mathbb{R}^r : \|x\|_2 = 1$:
$$\|(Ax)_I\|_1 \leq \gamma \|(Ax)_{I^c}\|_1 \quad \text{for all } I \subseteq [n] \text{ such that } |I| \leq k. \quad (13)$$
For a given $I \subseteq [n]$, we utilize (9) and (11) to guarantee that (13) holds with probability at least $1 - 2\exp(-ct^2)$ as long as
$$|I| + C\sqrt{|I|\,r} + t\sqrt{|I|} \;\leq\; \gamma\Big(\sqrt{n - |I|} - C\sqrt{r} - t\Big). \quad (14)$$
Now, given that $k = |I|$ satisfies (14), (13) holds for all $I \subseteq [n] : |I| = k$ with probability at least
$$1 - 2\binom{n}{k}\exp(-ct^2) \;\geq\; 1 - 2\Big(\frac{en}{k}\Big)^{k}\exp(-ct^2). \quad (15)$$
Let us consider the following set of parameters: $k = O(n^{1/4})$, $r = O(n/k) = O(n^{3/4})$ and $t = \Theta(\sqrt{k\log(n/k)})$. This set of parameters ensures that (14) holds with overwhelming probability (cf. (15)).
Remark 1. In Theorem 3, we specify one particular set of parameters for which the null-space property holds. Using (14) and (15), it can be shown that the null-space property in general holds for the following set of parameters: $k = O(\sqrt{n/\log n})$, $r = O(n/k)$ and $t = \Theta(\sqrt{k\log(n/k)})$. Therefore, it is possible to trade off the number of correctable errors during the recall phase (denoted by k) against the dimension of the dataset (represented by r).
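The property in (6) cannot be certified exhaustively, but it can be probed empirically. A small Monte Carlo sketch under the Gaussian special case of the model (all parameter choices are illustrative; for each sampled direction $h \in \mathrm{span}(A)$ we take I to be the k largest coordinates of |h|, which is the worst-case I for that particular h):

```python
import numpy as np

rng = np.random.default_rng(2)
n, r, k = 2000, 60, 10                       # k ~ n**(1/4) scaling from Theorem 3
A = rng.standard_normal((n, r))              # Gaussian (hence sub-gaussian) basis

worst = 0.0
for _ in range(1000):
    h = A @ rng.standard_normal(r)           # a random direction in span(A)
    I = np.argsort(-np.abs(h))[:k]           # worst-case I for this h: largest |h_i|
    hI = np.abs(h[I]).sum()
    worst = max(worst, hI / (np.abs(h).sum() - hI))
print(f"largest observed ratio ||h_I||_1 / ||h_(I^c)||_1 = {worst:.4f}")  # expect < 1
```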
3.2 Span of a random set of columns of an orthonormal basis
Next, we consider the ensemble of signals spanned by a random subset of rows of a fixed orthonormal basis B. Assume B to be an $n \times n$ matrix with orthonormal rows. Let $\Lambda \subseteq [n]$ be a random index set such that $\mathbb{E}(|\Lambda|) = r$. The vectors in the dataset have the form $h = B_\Lambda^T u$ for some $u \in \mathbb{R}^{|\Lambda|}$. In other words, the dataset $M \subseteq \mathrm{span}(B_\Lambda^T)$.
In this case, $B_{\Lambda^c}$ constitutes a basis matrix for the null space of the dataset. Since we have selected the set $\Lambda$ randomly, the set $\Gamma \stackrel{\mathrm{def}}{=} \Lambda^c$ is also a random set, with $\mathbb{E}(|\Gamma|) = n - \mathbb{E}(|\Lambda|) = n - r$.
Proposition 4 ([7]). Assume that B is an $n \times n$ orthonormal basis for $\mathbb{R}^n$ with the property that $\max_{i,j} |B_{i,j}| \leq \mu$. Consider a random $|\Gamma| \times n$ matrix C obtained by selecting a random set of rows of B indexed by the set $\Gamma \subseteq [n]$ such that $\mathbb{E}(|\Gamma|) = m$. Then the matrix C obeys the $(k, \delta)$-RIP with probability at least $1 - O(n^{-\rho/\alpha})$ for some fixed constant $\rho > 0$, where $k = \Theta\big(\frac{m}{\mu^2 \log^6 n}\big)$.
Therefore, we can invoke Proposition 4 to conclude that the matrix $B_{\Lambda^c}$ obeys the $(k, \delta)$-RIP with $k = \Theta\big(\frac{n-r}{\mu^2 \log^6 n}\big)$, with $\mu$ being the largest absolute value among the entries of $B_{\Lambda^c}$.
4 Learning the constraints: null space with small coherence
In the previous section, we described some random ensembles of datasets that can be stored in an associative memory based on sparse recovery. This approach involves finding a basis for the subspace orthogonal to the message or signal subspace (the dataset). Indeed, our learning algorithm simply finds a null space of the dataset M. While obtaining the basis vectors of null(M), we require them to satisfy the null-space property, RIP, or small mutual coherence, so that a signal can be recovered from its noisy version via the basis pursuit algorithm, which can be neurally implemented (see Sec. 5.2). However, for a given set of message vectors, it is computationally intractable to check whether the obtained (learnt) orthogonal basis has the null-space property or RIP with suitable parameters. Mutual coherence of the orthogonal basis, on the other hand, can indeed be verified in a tractable manner. Further, the more straightforward iterative soft thresholding algorithm will be successful if null(M) has low coherence, which also leads to fast convergence of the recovery algorithm (see Sec. 5.1). Towards this, we describe one approach that ensures the selection of an orthogonal basis with the smallest possible mutual coherence. Subsequently, using the mutual-coherence-based recovery guarantees for sparse recovery, this basis enables an efficient recovery phase for the associative memory.
One underlying assumption we make on the dataset is that it is less than full-dimensional. That is, the dataset must belong to a low-dimensional subspace, so that its null space is not trivial. In practical cases, M is approximately low-dimensional. We use a preprocessing step employing principal component analysis (PCA) to make sure that the dataset is low-dimensional. We do not indulge in a more detailed description of this phase, as it is quite standard (see [18]).
Algorithm 1 Find null-space with low coherence
Input: The dataset M with n-dimensional vectors; an initial coherence $\mu_0$ and a step size $\delta$
Output: An $m \times n$ orthogonal matrix B and coherence $\mu$
  Preprocessing: Perform PCA on M
  Find the $n \times r$ basis matrix A of M
  for $l = 0, 1, 2, \dots$ do
    Find a feasible point of the quadratically constrained quadratic problem (QCQP) below (interior point method): $BA = 0$; $\|b_i\| = 1\ \forall i \in [n]$; $|\langle b_i, b_j \rangle| \leq \mu_l$, where B is $(n-r) \times n$
    if no feasible point is found then
      break
    else
      $\mu \leftarrow \mu_l$; $\mu_{l+1} = \mu_l - \delta$
    end if
  end for
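As a simplified illustration of the learning phase, the sketch below fits the signal subspace by SVD (the PCA step) and returns one orthonormal basis of its orthogonal complement. The coherence-reducing QCQP loop of Algorithm 1 is omitted here, so the returned basis is merely a feasible starting point whose coherence could then be refined; the function name and all sizes are our own illustrative choices.

```python
import numpy as np
from scipy.linalg import null_space

def learn_null_basis(M, r):
    """Learning-phase sketch: fit an r-dimensional subspace to the dataset M
    (columns = messages) via SVD/PCA, then return a basis B of the orthogonal
    complement, so that B @ x ~ 0 for every stored message x."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    A = U[:, :r]                    # n x r basis for the (approximate) signal subspace
    B = null_space(A.T).T           # (n - r) x n; rows span range(A)^perp
    return B

rng = np.random.default_rng(3)
n, r = 256, 20
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, 500))  # rank-r dataset
B = learn_null_basis(M, r)
print(np.abs(B @ M).max())          # ~ 0: the learned constraints annihilate M
```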
5 Recall via neurally feasible algorithms
We now focus on the second aspect of an associative memory, namely the recovery phase. For the signal model considered in this paper, the recovery phase is equivalent to solving a sparse signal recovery problem. Given a noisy vector $y = x + e$ from the dataset, we can use the basis B of the null space associated with our dataset, constructed during the learning phase, to obtain $r = By = Be$. Now, given that e is sufficiently sparse and the matrix B obeys the properties of Sec. 2, we can solve for e using a sparse recovery algorithm. Subsequently, we can remove the error vector e from the noisy signal y to reconstruct the underlying message vector x. There is a plethora of algorithms available in the literature to solve this problem. However, we note that for the purpose of an associative memory, the recovery phase should be neurally feasible and computationally simple. In other words, each node (or storage unit) should be able to recover the coordinate associated with it locally by applying simple computations to the information received from its neighboring nodes (potentially in an iterative manner).
5.1 Recovery via the iterative soft thresholding (IST) algorithm
Among the various sparse recovery algorithms in the literature, the iterative soft thresholding (IST) algorithm is a natural candidate for implementing the recovery phase of the associative memories in our setting. The IST algorithm tries to solve the following unconstrained $\ell_1$-regularized least squares problem, which is closely related to the basis pursuit problem described in (3) and (18):
$$\hat{e} = \arg\min_e\ \lambda\|e\|_1 + \frac{1}{2}\|Be - r\|^2. \quad (16)$$
The $(t+1)$-th iteration of the IST algorithm is described as follows:
$$\text{(IST)} \qquad e^{t+1} = \eta_S\big(e^t - \tau B^T(Be^t - r);\ \theta = \tau\lambda\big). \quad (17)$$
Here, $\tau$ is a constant and $\eta_S(x;\theta) = \big(\mathrm{sgn}(x_1)(|x_1|-\theta)_+,\ \mathrm{sgn}(x_2)(|x_2|-\theta)_+,\ \dots,\ \mathrm{sgn}(x_n)(|x_n|-\theta)_+\big)$ denotes the soft thresholding (or shrinkage) operator. Note that the IST algorithm as described in (17) is neurally feasible, as it involves only 1) matrix-vector multiplications and 2) soft thresholding each coordinate of a vector independently of the values of the other coordinates. In Appendix B, we describe in detail how the IST algorithm can be performed over a bipartite neural network with B as its edge weight matrix. Under a suitable assumption on the coherence of the measurement matrix B, the IST algorithm is also known to converge to the correct k-sparse vector e [20]. In particular, Maleki [20] allows the thresholding parameter to be varied in every iteration such that all but at most the largest k coordinates (in terms of their absolute values) are mapped to zero by the soft thresholding operation. In this setting, Maleki shows that the IST algorithm recovers the correct support of the optimal solution in finitely many steps and subsequently converges to the true solution very fast. However, we are interested in an analysis of the IST algorithm in a setting where the thresholding parameter is kept at a suitable constant depending on the other system parameters, so that the algorithm remains neurally feasible. Towards this, we note that there exists a general analysis of the IST algorithm even without the coherence assumption.
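A direct NumPy rendering of the updates in (17), with the step size chosen as in Proposition 5 below, is given next. This is a sketch: in a neural implementation the two matrix-vector products would be carried out by the bipartite network of Appendix B, and the iteration count is an illustrative choice.

```python
import numpy as np

def ist(B, r_vec, lam, n_iter=500):
    """Iterative soft thresholding for min_e lam*||e||_1 + 0.5*||B e - r||^2.
    Each step is a matrix-vector product followed by a coordinatewise shrinkage,
    which is what makes the recall phase neurally plausible."""
    tau = 1.0 / np.linalg.eigvalsh(B.T @ B).max()   # step size <= 1 / lambda_max(B^T B)
    theta = tau * lam                               # shrinkage threshold from Eq. (17)
    e = np.zeros(B.shape[1])
    for _ in range(n_iter):
        g = e - tau * (B.T @ (B @ e - r_vec))       # gradient step on the smooth part
        e = np.sign(g) * np.maximum(np.abs(g) - theta, 0.0)  # soft thresholding
    return e
```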
Proposition 5 ([4, Theorem 3.1]). Let $\{e^t\}_{t \geq 1}$ be as defined in (17) with $\tau \leq \frac{1}{\lambda_{\max}(B^T B)}$.¹ Then, for any $t \geq 1$,
$$h(e^t) - h(\hat{e}) \leq \frac{\|e^0 - \hat{e}\|^2}{2\tau t}.$$
Here, $h(e) = \frac{1}{2}\|r - Be\|^2 + \lambda\|e\|_1$ is the objective function defined in (16).

5.2 Recovery via the Bregman iterative algorithm
Recall that the basis pursuit algorithm refers to the following optimization problem:
$$\hat{e} = \arg\min_e\ \{\|e\|_1 : r = Be\}. \quad (18)$$
Even though the IST algorithm as described in the previous subsection solves the problem in (16), the parameter value $\lambda$ needs to be set small enough so that the recovered solution $\hat{e}$ nearly satisfies the constraint $Be = r$ in (18). However, if we insist on recovering a solution e that exactly meets the constraint, one can employ the Bregman iterative algorithm from [29]. The Bregman iterative algorithm relies on the Bregman distance $D^p_{\|\cdot\|_1}(\cdot,\cdot)$ based on $\|\cdot\|_1$, which is defined as follows:
$$D^p_{\|\cdot\|_1}(e_1, e_2) = \|e_1\|_1 - \|e_2\|_1 - \langle p, e_1 - e_2 \rangle,$$
where $p \in \partial\|e_2\|_1$ is a sub-gradient of the $\ell_1$-norm at the point $e_2$. The $(t+1)$-th iteration of the Bregman iterative algorithm is then defined as follows:
$$e^{t+1} = \arg\min_e\ D^{p^t}_{\|\cdot\|_1}(e, e^t) + \frac{1}{2}\|Be - r\|^2 = \arg\min_e\ \|e\|_1 - (p^t)^T e + \frac{1}{2}\|Be - r\|^2 - \|e^t\|_1 + (p^t)^T e^t, \quad (19)$$
$$p^{t+1} = p^t - B^T(Be^{t+1} - r). \quad (20)$$
Note that, for the $(t+1)$-th iteration, the objective function in (19) is essentially equivalent to the objective function in (16). Therefore, each iteration of the Bregman iterative algorithm can be solved using the IST algorithm. It is shown in [29] that after a finite number of iterations of the Bregman iterative algorithm, one recovers the solution of the problem in (18) (Theorems 3.2 and 3.3 in [29]).
Remark 2. We know that the IST algorithm is neurally feasible. Furthermore, the step described in (20) is neurally feasible, as it involves only matrix-vector multiplications in the spirit of Eq. (17). Since each iteration of the Bregman iterative algorithm relies only on these two operations, it follows that the Bregman iterative algorithm is neurally feasible as well. It should be noted that the neural feasibility of the Bregman iterative algorithm was also discussed in [16]; however, the neural structures employed by [16] differ from ours.
¹ Note that $\lambda_{\max}(B^T B)$, the maximum eigenvalue of the matrix $B^T B$, serves as a Lipschitz constant for the gradient $\nabla f(e)$ of the function $f(e) = \frac{1}{2}\|r - Be\|_2^2$.
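A sketch of the resulting two-loop procedure follows, with the inner problem (19) solved by the same shrinkage updates as IST and the outer subgradient update (20) applied afterwards. The iteration counts are illustrative and may need tuning.

```python
import numpy as np

def bregman_bp(B, r_vec, outer=15, inner=300):
    """Bregman iterations (Eqs. (19)-(20)) for min ||e||_1 s.t. B e = r.
    The inner problem argmin_e ||e||_1 - <p, e> + 0.5*||B e - r||^2 is solved by
    proximal-gradient (IST-style) steps, so the whole procedure uses only
    matrix-vector products and local thresholding."""
    n = B.shape[1]
    tau = 1.0 / np.linalg.eigvalsh(B.T @ B).max()
    e = np.zeros(n)
    p = np.zeros(n)                     # subgradient of ||.||_1 at the current iterate
    for _ in range(outer):
        for _ in range(inner):          # inner solve of Eq. (19)
            g = e - tau * (B.T @ (B @ e - r_vec) - p)
            e = np.sign(g) * np.maximum(np.abs(g) - tau, 0.0)
        p = p - B.T @ (B @ e - r_vec)   # Eq. (20): subgradient update
    return e
```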
[Figure 2 about here. Four panels plot the probability of failure (y-axis) against the sparsity of the error vector (x-axis, 100-450) for m = 500 and m = 700, comparing the PD and BI algorithms: (a) Gaussian matrix and Gaussian noise; (b) Gaussian matrix and discrete noise; (c) Bernoulli matrix and Gaussian noise; (d) Bernoulli matrix and discrete noise.]
Figure 2: Performance of the proposed associative memory approach during the recall phase. The PD algorithm refers to the primal-dual algorithm for solving the linear program associated with the problem in (18). The BI algorithm refers to the Bregman iterative algorithm described in Sec. 5.2.
6 Experimental results
In this section, we demonstrate the feasibility of the associative memory framework using computer generated data. Along the lines of the discussion in Sec. 3.1, we first sample an $n \times r$ sub-gaussian matrix A with i.i.d. entries. We consider two sub-gaussian distributions: 1) the Gaussian distribution and 2) the Bernoulli distribution over $\{+1, -1\}$. The message vectors to be stored are then assumed to be spanned by the r columns of the sampled matrix. For the learning phase, we find a good basis for the subspace orthogonal to the space spanned by the columns of the matrix A. For noise during the recall phase, we consider two noise models: 1) Gaussian noise and 2) discrete noise, where each nonzero element takes a value in the set $\{-M, -(M-1), \dots, M\} \setminus \{0\}$.
Figure 2 presents our simulation results for n = 1000. For the recall phase, we employ the Bregman iterative (BI) algorithm with the IST algorithm as a subroutine. We also plot the performance of the primal-dual (PD) algorithm, a linear-programming-based solution for the recovery problem of interest (cf. (18)). This allows us to gauge the disadvantage incurred by the restriction to a neurally feasible recovery algorithm, e.g., the BI algorithm in our case. Furthermore, we consider message sets with two different dimensions, corresponding to m = 500 and m = 700; note that the dimension of the message set is n - m. We run 50 iterations of the recovery algorithms for each given set of parameters to obtain estimates of the probability of failure (of exact recovery of the error vector). In Fig. 2a, we focus on the setting with a Gaussian basis matrix (for the message set) and unit-variance zero-mean Gaussian noise during the recall phase. It is evident that the proposed associative memory does allow for exact recovery of error vectors up to a certain sparsity level. This corroborates our findings in Sec. 3. We also note that the performance of the BI algorithm is very close to that of the PD algorithm. Fig. 2b shows the performance of the recall phase for the setting with a Gaussian basis for the message set and the discrete noise model with M = 4. In this case, even though the BI algorithm is able to exactly recover the noise vector up to a particular sparsity level, its performance is worse than that of the PD algorithm. The performance of the recall phase with Bernoulli basis matrices for the message set is shown in Fig. 2c and 2d. The results are similar to those in the case of Gaussian basis matrices for the message sets.
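A toy-scale version of this recall experiment, reusing the learn_null_basis and bregman_bp sketches introduced above (the dimensions and noise magnitudes are scaled down from the n = 1000 experiments reported here, and convergence tolerances are illustrative):

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(4)
n, r, k = 400, 40, 12
A = rng.standard_normal((n, r))                  # Gaussian message-subspace basis
x = A @ rng.standard_normal(r)                   # a stored message
e_true = np.zeros(n)
idx = rng.choice(n, k, replace=False)
e_true[idx] = rng.choice([-4, -3, -2, -1, 1, 2, 3, 4], size=k)   # discrete noise, M = 4
y = x + e_true                                   # corrupted query presented at recall

B = null_space(A.T).T                            # learned constraints: B @ x ~ 0
e_hat = bregman_bp(B, B @ y)                     # B y = B e, then sparse recovery
x_hat = y - e_hat                                # clean message estimate
print("exact recovery:", np.allclose(x_hat, x, atol=1e-3))
```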
References
[1] A. Agarwal, A. Anandkumar, P. Jain, P. Netrapalli, and R. Tandon. Learning sparsely used overcomplete dictionaries via alternating minimization. CoRR, abs/1310.7991, 2013.
[2] S. Arora, R. Ge, T. Ma, and A. Moitra. Simple, efficient, and neural algorithms for sparse coding. CoRR, abs/1503.00778, 2015.
[3] S. Arora, R. Ge, and A. Moitra. New algorithms for learning incoherent and overcomplete dictionaries. arXiv preprint arXiv:1308.6273, 2013.
[4] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.
[5] E. J. Candes. The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique, 346(9):589-592, 2008.
[6] E. J. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. on Inf. Theory, 52(2):489-509, 2006.
[7] E. J. Candes and T. Tao. Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Trans. on Inf. Theory, 52(12):5406-5425, Dec 2006.
[8] D. L. Donoho. Compressed sensing. IEEE Trans. on Inf. Theory, 52(4):1289-1306, 2006.
[9] D. L. Donoho, A. Maleki, and A. Montanari. Message-passing algorithms for compressed sensing. Proceedings of the National Academy of Sciences, 106(45):18914-18919, 2009.
[10] S. Foucart and H. Rauhut. A Mathematical Introduction to Compressive Sensing. Birkhäuser Basel, 2013.
[11] V. Gripon and C. Berrou. Sparse neural networks with large learning diversity. IEEE Transactions on Neural Networks, 22(7):1087-1096, 2011.
[12] D. J. Gross and M. Mézard. The simplest spin glass. Nuclear Physics B, 240(4):431-452, 1984.
[13] D. O. Hebb. The organization of behavior: A neuropsychological theory. Psychology Press, 2005.
[14] C. Hillar and N. M. Tran. Robust exponential memory in Hopfield networks. arXiv preprint arXiv:1411.4625, 2014.
[15] J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79(8):2554-2558, 1982.
[16] T. Hu, A. Genkin, and D. B. Chklovskii. A network of spiking neurons for computing sparse representations in an energy-efficient way. Neural Computation, 24(11):2852-2872, 2012.
[17] S. Jankowski, A. Lozowski, and J. M. Zurada. Complex-valued multistate neural associative memory. IEEE Transactions on Neural Networks, 7(6):1491-1496, Nov 1996.
[18] A. Karbasi, A. H. Salavati, and A. Shokrollahi. Convolutional neural associative memories: Massive capacity with noise tolerance. CoRR, abs/1407.6513, 2014.
[19] K. R. Kumar, A. H. Salavati, and A. Shokrollahi. Exponential pattern retrieval capacity with non-binary associative memory. In 2011 IEEE Information Theory Workshop (ITW), pages 80-84, Oct 2011.
[20] A. Maleki. Coherence analysis of iterative thresholding algorithms. In 47th Annual Allerton Conference on Communication, Control, and Computing, 2009, pages 236-243, Sept 2009.
[21] R. J. McEliece and E. C. Posner. The number of stable points of an infinite-range spin glass memory. Telecommunications and Data Acquisition Progress Report, 83:209-215, 1985.
[22] R. J. McEliece, E. C. Posner, E. R. Rodemich, and S. S. Venkatesh. The capacity of the Hopfield associative memory. Information Theory, IEEE Transactions on, 33(4):461-482, 1987.
[23] M. K. Muezzinoglu, C. Guzelis, and J. M. Zurada. A new design method for the complex-valued multistate Hopfield associative memory. IEEE Transactions on Neural Networks, 14(4):891-899, July 2003.
[24] B. A. Olshausen and D. J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23):3311-3325, 1997.
[25] A. H. Salavati and A. Karbasi. Multi-level error-resilient neural networks. In 2012 IEEE International Symposium on Information Theory Proceedings (ISIT), pages 1064-1068, July 2012.
[26] F. Tanaka and S. F. Edwards. Analytic theory of the ground state properties of a spin glass. I. Ising spin glass. Journal of Physics F: Metal Physics, 10(12):2769, 1980.
[27] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010.
[28] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using $\ell_1$-constrained quadratic programming (lasso). IEEE Trans. Inform. Theory, 55(5):2183-2202, May 2009.
[29] W. Yin, S. Osher, D. Goldfarb, and J. Darbon. Bregman iterative algorithms for $\ell_1$-minimization with applications to compressed sensing. SIAM Journal on Imaging Sciences, 1(1):143-168, 2008.
5,431 | 5,916 | Matrix Completion Under Monotonic Single Index Models
Ravi Ganti
Wisconsin Institutes for Discovery
UW-Madison
gantimahapat@wisc.edu
Laura Balzano
Electrical Engineering and Computer Sciences
University of Michigan Ann Arbor
girasole@umich.edu
Rebecca Willett
Department of Electrical and Computer Engineering
UW-Madison
rmwillett@wisc.edu
Abstract
Most recent results in matrix completion assume that the matrix under consideration is low-rank or that the columns are in a union of low-rank subspaces. In
real-world settings, however, the linear structure underlying these models is distorted by a (typically unknown) nonlinear transformation. This paper addresses
the challenge of matrix completion in the face of such nonlinearities. Given a
few observations of a matrix that are obtained by applying a Lipschitz, monotonic
function to a low-rank matrix, our task is to estimate the remaining unobserved entries. We propose a novel matrix completion method that alternates between low-rank matrix estimation and monotonic function estimation to estimate the missing
matrix elements. Mean squared error bounds provide insight into how well the
matrix can be estimated based on the size, rank of the matrix and properties of the
nonlinear transformation. Empirical results on synthetic and real-world datasets
demonstrate the competitiveness of the proposed approach.
1 Introduction
In matrix completion, one has access to a matrix with only a few observed entries, and the task is to estimate the entire matrix using the observed entries. This problem has a plethora of applications, such as collaborative filtering, recommender systems [1] and sensor networks [2]. Matrix completion has been well studied in machine learning, and we now know how to recover certain matrices given a few observed entries [3, 4] when the matrix is assumed to be low rank. Typical work in matrix completion assumes that the matrix to be recovered is incoherent and low rank, and that entries are sampled uniformly at random [5, 6, 4, 3, 7, 8]. While recent work has focused on relaxing the incoherence and sampling conditions under which matrix completion succeeds, there has been little work on matrix completion when the underlying matrix is of high rank. More specifically, we shall assume that the matrix that we need to complete is obtained by applying some unknown, non-linear function to each element of an unknown low-rank matrix. Because of the application of a non-linear transformation, the resulting ratings matrix tends to have a large rank. To understand the effect of applying a non-linear transformation to a low-rank matrix, consider the following simple experiment. Given an $n \times m$ matrix X, let $X = \sum_{i=1}^{m} \sigma_i u_i v_i^\top$ be its SVD. The rank of the matrix X is the number of non-zero singular values. Given an $\epsilon \in (0, 1)$, define the effective rank of X as follows:
$$r_\epsilon(X) = \min\left\{ k \in \mathbb{N} : \sqrt{\frac{\sum_{j=k+1}^{m} \sigma_j^2}{\sum_{j=1}^{m} \sigma_j^2}} \leq \epsilon \right\}. \quad (1)$$
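The effective rank in (1) is straightforward to compute from the singular values. The sketch below also reproduces the qualitative trend in Figure 1; the random seed and the grid of c values are illustrative choices.

```python
import numpy as np

def effective_rank(X, eps=0.01):
    """r_eps(X) from Eq. (1): the smallest k whose tail singular values carry
    at most an eps fraction of the total spectral energy."""
    s = np.linalg.svd(X, compute_uv=False)
    total = (s ** 2).sum()
    for k in range(len(s) + 1):
        if np.sqrt((s[k:] ** 2).sum() / total) <= eps:
            return k
    return len(s)

rng = np.random.default_rng(0)
Z = rng.standard_normal((30, 5)) @ rng.standard_normal((5, 20))   # rank-5 Z
for c in [1, 5, 10, 25, 50]:
    X = 1.0 / (1.0 + np.exp(-c * Z))             # apply g* elementwise
    print(c, effective_rank(X))                  # grows with c, as in Figure 1
```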
[Figure 1 about here. The plot shows the effective rank (y-axis, roughly 4 to 20) as a function of c (x-axis, 5 to 50).]
Figure 1: The plot shows $r_{0.01}(X)$ as defined in equation (1), obtained by applying a non-linear function $g^\star$ to each element of Z, where $g^\star(z) = \frac{1}{1+\exp(-cz)}$. Z is a $30 \times 20$ matrix of rank 5.
The effective rank of X tells us the rank k of the lowest-rank approximator $\hat{X}$ that satisfies
$$\frac{\|\hat{X} - X\|_F}{\|X\|_F} \leq \epsilon. \quad (2)$$
In Figure 1, we show the effect of applying the non-linear monotonic function $g^\star(z) = \frac{1}{1+\exp(-cz)}$ to the elements of a low-rank matrix Z. As c increases, both the rank of X and its effective rank $r_\epsilon(X)$ grow rapidly, rendering traditional matrix completion methods ineffective even in the presence of mild nonlinearities.
1.1 Our Model and contributions
In this paper we consider the high-rank matrix completion problem where the data generating process is as follows. There is some unknown matrix $Z^\star \in \mathbb{R}^{n \times m}$, with $m \leq n$, of rank $r \ll m$. A non-linear, monotonic, L-Lipschitz function $g^\star$ is applied to each element of the matrix $Z^\star$ to get another matrix $M^\star$. A noisy version of $M^\star$, which we call X, is observed on a subset of indices denoted by $\Omega \subseteq [n] \times [m]$:
$$M^\star_{i,j} = g^\star(Z^\star_{i,j}), \quad \forall i \in [n],\ j \in [m] \quad (3)$$
$$X_\Omega = (M^\star + N)_\Omega. \quad (4)$$
The function $g^\star$ is called the transfer function. We shall assume that $\mathbb{E}[N] = 0$ and that the entries of N are i.i.d. We shall also assume that the index set $\Omega$ is generated uniformly at random with replacement from the set $[n] \times [m]$.¹ Our task is to reliably estimate the entire matrix $M^\star$ given observations of X on $\Omega$. We shall call the above model Monotonic Matrix Completion (MMC).
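For concreteness, a small sampler for the MMC model is sketched below. It uses an i.i.d. Bernoulli observation mask as a stand-in for sampling indices uniformly with replacement, and the logistic transfer function from our experiments; the function name and default parameters are our own illustrative choices.

```python
import numpy as np

def make_mmc_data(n=30, m=20, r=5, c=10.0, p=0.5, noise=0.0, seed=0):
    """Sample (X_Omega, Omega) from the MMC model, Eqs. (3)-(4): a rank-r Z*,
    a monotonic transfer g*(z) = 1/(1 + exp(-c z)), additive zero-mean noise,
    and entries observed independently with probability p (E|Omega| = mnp)."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))   # rank-r Z*
    M = 1.0 / (1.0 + np.exp(-c * Z))              # g* applied elementwise
    X = M + noise * rng.standard_normal((n, m))   # noisy ratings
    mask = rng.random((n, m)) < p                 # Omega: the observed index set
    return X * mask, mask, M, Z
```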
To illustrate our framework, we consider the following two simple examples. In recommender systems, users are required to provide discrete ratings of various objects. For example, in the Netflix problem users are required to rate movies on a scale of 1-5.² These discrete scores can be thought of as obtained by applying a rounding function to some ideal real-valued score matrix given by the users. This real-valued score matrix may be well modeled by a low-rank matrix, but the application of the rounding function³ increases the rank of the original low-rank matrix. Another important example is the completion of Gaussian kernel matrices, which are used in kernel-based learning methods. The Gaussian kernel matrix of a set of n points is an $n \times n$ matrix obtained by applying the Gaussian function to an underlying Euclidean distance matrix. The Euclidean distance matrix is a low-rank matrix [9]. However, in many cases one cannot measure all pairwise distances between objects, resulting in an incomplete Euclidean distance matrix and hence an incomplete kernel matrix. Completing the kernel matrix can then be viewed as completing a matrix of large rank.
In this paper we study this matrix completion problem and provide algorithms with provable error guarantees. Our contributions are as follows:
1. In Section 3 we propose an optimization formulation to estimate matrices in the above-described context. In order to do this we introduce two formulations, one using a squared loss, which we call MMC-LS, and another using a calibrated loss function, which we call MMC-c. For both formulations we minimize with respect to $M^\star$ and $g^\star$. The calibrated loss function has the property that the minimizer of the calibrated loss satisfies equation (3).
2. We propose alternating minimization algorithms to solve our optimization problem. Our proposed algorithms, called MMC-c and MMC-LS, alternate between solving a quadratic program to estimate $g^\star$ and performing projected gradient descent updates to estimate the matrix $Z^\star$. MMC outputs the matrix $\hat{M}$, where $\hat{M}_{i,j} = \hat{g}(\hat{Z}_{i,j})$.
3. In Section 4 we analyze the mean squared error (MSE) of the matrix $\hat{M}$ returned by one step of the MMC-c algorithm. The upper bound on the MSE of the matrix $\hat{M}$ output by MMC depends only on the rank r of the matrix $Z^\star$ and not on the rank of the matrix $M^\star$. This property makes our analysis useful, because the matrix $M^\star$ could potentially be of high rank, and our results imply reliable estimation of a high-rank matrix with error guarantees that depend on the rank of the matrix $Z^\star$.
4. We compare our proposed algorithms to state-of-the-art implementations of low-rank matrix completion on both synthetic and real datasets (Section 5).
¹ By [n] we denote the set $\{1, 2, \dots, n\}$.
² This is typical of many other recommender engines such as Pandora.com, Last.fm and Amazon.com.
³ Technically the rounding function is not a Lipschitz function, but it can be well approximated by a Lipschitz function.
2 Related work
Classical matrix completion with and without noise has been investigated by several authors [5, 6, 4, 3, 7, 8]. The recovery techniques proposed in these papers solve a convex optimization problem that minimizes the nuclear norm of the matrix subject to convex constraints. Progress has also been made on designing efficient algorithms to solve the ensuing convex optimization problem [10, 11, 12, 13]. Recovery techniques based on nuclear norm minimization guarantee matrix recovery under the conditions that a) the matrix is low rank, b) the matrix is incoherent, i.e., not very spiky, and c) the entries are observed uniformly at random. The literature on high-rank matrix completion is relatively sparse. When the columns or rows of the matrix belong to a union of subspaces, the matrix tends to be of high rank. For such high-rank matrix completion problems, algorithms have been proposed that exploit the fact that multiple low-rank subspaces can be learned by clustering the columns or rows and learning a subspace for each of the clusters. While Eriksson et al. [14] suggested looking at the neighbourhood of each incomplete point for completion, [15] used a combination of spectral clustering techniques, as done in [16, 17], along with learning sparse representations via convex optimization, to estimate the incomplete matrix. Singh et al. [18] consider a certain specific class of high-rank matrices that are obtained from ultra-metrics. In [19] the authors consider a model similar to ours, but instead of learning a single monotonic function, they learn multiple monotonic functions, one for each row of the matrix. However, unlike in this paper, their focus is on a ranking problem and their proposed algorithms lack theoretical guarantees.
Davenport et al. [20] studied the one-bit matrix completion problem. Their model is a special case of the matrix completion model considered in this paper. In the one-bit matrix completion problem one assumes that $g^\star$ is known and is the CDF of an appropriate probability distribution, and the matrix X is a boolean matrix where each entry takes the value 1 with probability $M_{i,j}$ and 0 with probability $1 - M_{i,j}$. Since $g^\star$ is known, the focus in one-bit matrix completion problems is accurate estimation of $Z^\star$.
To the best of our knowledge, the MMC model considered in this paper has not been investigated before. The MMC model is inspired by the single-index model (SIM) that has been studied both in statistics [21, 22] and in econometrics for regression problems [23, 24]. Our MMC model can be thought of as an extension of the SIM to matrix completion problems.
3 Algorithms for matrix completion
Our goal is to estimate $g^\star$ and $Z^\star$ from the model in equations (3)-(4). We approach this problem via mathematical optimization. Before we discuss our algorithms, we briefly describe an algorithm for the problem of learning Lipschitz, monotonic functions in one dimension. This algorithm will be used for learning the link function in MMC.
The LPAV algorithm:⁴ Suppose we are given data $(p_1, y_1), \dots, (p_n, y_n)$, where $p_1 \leq p_2 \leq \dots \leq p_n$, and $y_1, \dots, y_n$ are real numbers. Let $G \stackrel{\mathrm{def}}{=} \{g : \mathbb{R} \to \mathbb{R},\ g \text{ is } L\text{-Lipschitz and monotonic}\}$. The LPAV algorithm introduced in [21] outputs the best function $\hat{g}$ in G that minimizes $\sum_{i=1}^{n} (g(p_i) - y_i)^2$. In order to do this, the LPAV first solves the following optimization problem:
$$\hat{z} = \arg\min_{z \in \mathbb{R}^n} \|z - y\|_2^2 \quad \text{s.t. } 0 \leq z_j - z_i \leq L(p_j - p_i) \text{ if } p_i \leq p_j, \quad (5)$$
where $\hat{g}(p_i) \stackrel{\mathrm{def}}{=} \hat{z}_i$. This gives us the value of $\hat{g}$ on the discrete set of points $p_1, \dots, p_n$. To extend $\hat{g}$ everywhere else on the real line, we simply perform linear interpolation as follows:
$$\hat{g}(\theta) = \begin{cases} \hat{z}_1, & \text{if } \theta \leq p_1 \\ \hat{z}_n, & \text{if } \theta \geq p_n \\ \alpha\hat{z}_i + (1-\alpha)\hat{z}_{i+1}, & \text{if } \theta = \alpha p_i + (1-\alpha) p_{i+1}. \end{cases} \quad (6)$$
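Since the $p_i$ are sorted, the pairwise constraints in (5) are implied by the adjacent ones, which telescope. Reparametrizing z by its first value and its nonnegative increments then turns LPAV into a bounded least-squares problem. A SciPy sketch follows (the dense design matrix makes this suitable only for modest n; the epsilon slack for tied abscissae is an implementation convenience of ours):

```python
import numpy as np
from scipy.optimize import lsq_linear

def lpav(p, y, L=1.0):
    """LPAV sketch: min_z ||z - y||^2 s.t. 0 <= z_{i+1} - z_i <= L (p_{i+1} - p_i)
    on the sorted abscissae (adjacent constraints telescope to those in Eq. (5)).
    Change of variables: z = w[0] * 1 + T @ w[1:], with box constraints on the
    increments w[1:], so the problem is a bounded least squares."""
    order = np.argsort(p)
    p_s, y_s = p[order], y[order]
    n = len(p_s)
    A = np.zeros((n, n))
    A[:, 0] = 1.0                           # the free base value z_1
    for i in range(1, n):
        A[i:, i] = 1.0                      # increment d_i contributes to z_i, ..., z_n
    lb = np.concatenate([[-np.inf], np.zeros(n - 1)])
    ub = np.concatenate([[np.inf], np.maximum(L * np.diff(p_s), 1e-12)])
    w = lsq_linear(A, y_s, bounds=(lb, ub)).x
    z = np.empty(n)
    z[order] = A @ w                        # undo the sort
    return z
```

Evaluating the fitted function at new points, as in Eq. (6), is then a call to np.interp on the sorted (p, z) pairs.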
3.1 Squared loss minimization
A natural approach to the monotonic matrix completion problem is to learn $g^\star$ and $Z^\star$ via squared loss minimization. In order to do this we need to solve the following optimization problem:
$$\min_{g, Z} \sum_{(i,j) \in \Omega} (g(Z_{i,j}) - X_{i,j})^2 \quad \text{s.t. } g : \mathbb{R} \to \mathbb{R} \text{ is } L\text{-Lipschitz and monotonic}, \quad \mathrm{rank}(Z) \leq r. \quad (7)$$
The problem is a non-convex optimization problem in each of the parameters g and Z individually. A reasonable approach to solving this optimization problem is to optimize with respect to each variable while keeping the other fixed. For instance, in iteration t, while estimating Z one would keep g fixed, say at $g^{t-1}$, and then perform projected gradient descent with respect to Z. This leads to the following updates for Z:
$$Z^t_{i,j} \leftarrow Z^{t-1}_{i,j} - \eta\,\big(g^{t-1}(Z^{t-1}_{i,j}) - X_{i,j}\big)\,(g^{t-1})'(Z^{t-1}_{i,j}), \quad \forall (i,j) \in \Omega \quad (8)$$
$$Z^t \leftarrow P_r(Z^t) \quad (9)$$
where $\eta > 0$ is the step size used in our projected gradient descent procedure, and $P_r$ is the projection onto the rank-r cone. The above update involves both the function $g^{t-1}$ and its derivative $(g^{t-1})'$. Since our link function is monotonic, one can use the LPAV algorithm to estimate the link function $g^{t-1}$. Furthermore, since LPAV estimates $g^{t-1}$ as a piecewise linear function, the function has a sub-differential everywhere, and the sub-differential $(g^{t-1})'$ can be obtained very cheaply. Hence, the projected gradient update shown in equation (8), along with the LPAV algorithm, can be used iteratively to learn estimates of $Z^\star$ and $g^\star$. We shall call this algorithm MMC-LS. Incorrect estimation of $g^{t-1}$ will also lead to incorrect estimation of the derivative $(g^{t-1})'$. Hence, we would expect MMC-LS to be less accurate than a learning algorithm that does not have to estimate $(g^{t-1})'$. We next outline an approach that provides a principled way to derive updates for $Z^t$ and $g^t$ that does not require us to estimate derivatives of the transfer function, unlike MMC-LS.
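The projection $P_r$ used in (9), and again in (15) below, is computed by truncating the SVD; a two-line sketch:

```python
import numpy as np

def proj_rank(Z, r):
    """P_r: project onto the set of rank-<= r matrices by SVD truncation."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]   # keep the top r singular triplets
```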
3.2 Minimization of a calibrated loss function and the MMC algorithm
Let $\Phi : \mathbb{R} \to \mathbb{R}$ be a differentiable function that satisfies $\Phi' = g^\star$. Furthermore, since $g^\star$ is a monotonic function, $\Phi$ will be a convex loss function. Now, suppose $g^\star$ (and hence $\Phi$) is known. Consider the following function of Z:
$$L(Z; \Phi, \Omega) = \mathbb{E}_X\Big[\sum_{(i,j) \in \Omega} \Phi(Z_{i,j}) - X_{i,j} Z_{i,j}\Big]. \quad (10)$$
The above loss function is convex in Z, since $\Phi$ is convex. Differentiating the expression on the R.H.S. of equation (10) with respect to Z and setting it to 0, we get
$$\sum_{(i,j) \in \Omega} g^\star(Z_{i,j}) - \mathbb{E}X_{i,j} = 0. \quad (11)$$
⁴ LPAV stands for Lipschitz Pool Adjacent Violator.
The MMC model shown in equation (3) satisfies equation (11) and is therefore a minimizer of the loss function $L(Z; \Phi, \Omega)$. Hence, the loss function (10) is "calibrated" for the MMC model that we are interested in. The idea of using calibrated loss functions was first introduced for learning single index models [25]. When the transfer function is the identity, $\Phi$ is a quadratic function and we recover the squared loss approach discussed in Section 3.1.
The above discussion assumes that $g^\star$ is known. However, in the MMC model this is not the case. To get around this problem, we consider the following optimization problem:
$$\min_{\Phi, Z} L(\Phi, Z; \Omega) = \min_{\Phi, Z} \mathbb{E}_X \sum_{(i,j) \in \Omega} \Phi(Z_{i,j}) - X_{i,j} Z_{i,j}, \quad (12)$$
where $\Phi : \mathbb{R} \to \mathbb{R}$ is a convex function with $\Phi' = g$, and $Z \in \mathbb{R}^{n \times m}$ is a low-rank matrix. Since we know that $g^\star$ is a Lipschitz, monotonic function, we solve a constrained optimization problem that enforces Lipschitz constraints on g and low-rank constraints on Z. We consider the sample version of the optimization problem shown in equation (12):
$$\min_{\Phi,\ \mathrm{rank}(Z) \leq r} L(\Phi, Z; \Omega) = \min \sum_{(i,j) \in \Omega} \Phi(Z_{i,j}) - X_{i,j} Z_{i,j}. \quad (13)$$
The pseudo-code of our algorithm MMC, which solves the above optimization problem (13), is shown in Algorithm 1. MMC optimizes over $\Phi$ and Z alternately, fixing one variable and updating the other.
At the start of iteration t, we have at our disposal the iterates $\hat{g}^{t-1}$ and $\hat{Z}^{t-1}$. To update our estimate of Z, we perform gradient descent with $\Phi$ fixed such that $\Phi' = \hat{g}^{t-1}$. Notice that the objective in equation (13) is convex with respect to Z. This is in contrast to the least squares formulation, where the objective in equation (7) is non-convex with respect to Z. The gradient of $L(Z; \Omega)$ with respect to $Z_{i,j}$, for $(i,j) \in \Omega$, is
$$\nabla_{Z_{i,j}} L(Z; \Omega) = \hat{g}^{t-1}(\hat{Z}^{t-1}_{i,j}) - X_{i,j}. \quad (14)$$
Gradient descent on $\hat{Z}^{t-1}$ using the above gradient calculation leads to an update of the form
$$\hat{Z}^t_{i,j} \leftarrow \hat{Z}^{t-1}_{i,j} - \eta\,\big(\hat{g}^{t-1}(\hat{Z}^{t-1}_{i,j}) - X_{i,j}\big)\,\mathbb{1}_{(i,j) \in \Omega}, \qquad \hat{Z}^t \leftarrow P_r(\hat{Z}^t). \quad (15)$$
Equation (15) projects the matrix $\hat{Z}^t$ onto the cone of matrices of rank r. This entails performing an SVD of $\hat{Z}^t$ and retaining the top r singular vectors and singular values while discarding the rest. This is done in steps 4 and 5 of Algorithm 1. As can be seen from the above equation, we do not need to estimate the derivative of $\hat{g}^{t-1}$. This, along with the convexity of the optimization problem in equation (13) with respect to Z for a given $\Phi$, are two of the key advantages of using a calibrated loss function over the previously proposed squared loss minimization formulation.
Optimization over $\Phi$. In round t of Algorithm 1, we have $\hat{Z}^t$ after performing steps 4 and 5. Differentiating the objective function in equation (13) with respect to Z, we get that the optimal $\Phi$ should satisfy
$$\sum_{(i,j) \in \Omega} \hat{g}^t(\hat{Z}^t_{i,j}) - X_{i,j} = 0, \quad (16)$$
where $\Phi' = \hat{g}^t$. This provides us with a strategy to calculate $\hat{g}^t$. Let $\hat{X}_{i,j} \stackrel{\mathrm{def}}{=} \hat{g}^t(\hat{Z}^t_{i,j})$. Then solving the optimization problem in (16) is equivalent to solving the following optimization problem:
$$\min_{\hat{X}} \sum_{(i,j) \in \Omega} (\hat{X}_{i,j} - X_{i,j})^2 \quad \text{subject to: } 0 \leq \hat{X}_{k,l} - \hat{X}_{i,j} \leq L(\hat{Z}^t_{k,l} - \hat{Z}^t_{i,j}) \text{ if } \hat{Z}^t_{i,j} \leq \hat{Z}^t_{k,l},\ (i,j) \in \Omega,\ (k,l) \in \Omega, \quad (17)$$
where L is the Lipschitz constant of $g^\star$. We shall assume that L is known and does not need to be estimated. The gradient of the objective function in equation (17) with respect to $\hat{X}$, when set to zero, is the same as equation (16). The constraints enforce monotonicity of $\hat{g}^t$ and the Lipschitz property of $\hat{g}^t$. The above optimization routine is exactly the LPAV algorithm. The solution $\hat{X}$ obtained from solving the LPAV problem can be used to define $\hat{g}^t$ on $\hat{Z}^t_\Omega$. These two steps are repeated for T iterations. After T iterations we have $\hat{g}^T$ defined on $\hat{Z}^T_\Omega$. In order to define $\hat{g}^T$ everywhere else on the real line, we perform linear interpolation as shown in equation (6).
Algorithm 1 Monotonic Matrix Completion (MMC)
Input: parameters $\eta > 0$, $T > 0$, r; data: $X_\Omega$, $\Omega$
Output: $\hat{M} = \hat{g}^T(\hat{Z}^T)$
1: Initialize $\hat{Z}^0 = \frac{mn}{|\Omega|} X_\Omega$, where $X_\Omega$ is the matrix X with zeros filled in at the unobserved locations.
2: Initialize $\hat{g}^0(z) = \frac{|\Omega|}{mn}\, z$
3: for $t = 1, \dots, T$ do
4:   $\hat{Z}^t_{i,j} \leftarrow \hat{Z}^{t-1}_{i,j} - \eta\,\big(\hat{g}^{t-1}(\hat{Z}^{t-1}_{i,j}) - X_{i,j}\big)\,\mathbb{1}_{(i,j) \in \Omega}$
5:   $\hat{Z}^t \leftarrow P_r(\hat{Z}^t)$
6:   Solve the optimization problem in (17) to get $\hat{X}$
7:   Set $\hat{g}^t(\hat{Z}^t_{i,j}) = \hat{X}_{i,j}$ for all $(i,j) \in \Omega$
8: end for
9: Obtain $\hat{g}^T$ on the entire real line using the linear interpolation shown in equation (6).
Let us now explain our initialization procedure. Define $X_\Omega \stackrel{\mathrm{def}}{=} \sum_{j=1}^{|\Omega|} X \circ \Delta_j$, where each $\Delta_j$ is a boolean mask with zeros everywhere and a 1 at the index of an observed entry, and $A \circ B$ denotes the Hadamard (entry-wise) product of the matrices A and B. We have $|\Omega|$ such boolean masks, one for each observed entry. We initialize $\hat{Z}^0$ to $\frac{mn}{|\Omega|} X_\Omega = \frac{mn}{|\Omega|} \sum_{j=1}^{|\Omega|} X \circ \Delta_j$. Because each observed index is assumed to be sampled uniformly at random with replacement, our initialization is guaranteed to be an unbiased estimate of X.
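A condensed sketch of Algorithm 1 follows, reusing the proj_rank and lpav sketches introduced earlier; the link estimate is stored as (knot, value) pairs on the observed entries and evaluated elsewhere by the interpolation rule (6). The step size, the choice L = 1, and the handling of tied knots are illustrative, and the dense LPAV sketch keeps this practical only at toy scale.

```python
import numpy as np

def mmc(X_obs, mask, r, eta=0.5, T=50):
    """Sketch of Algorithm 1 (MMC): alternate a projected gradient step on Z
    (Eq. (15)) with an LPAV refit of the link function (Eq. (17))."""
    n, m = X_obs.shape
    p_hat = mask.sum() / (n * m)
    Z = proj_rank(X_obs / p_hat, r)                  # step 1: rescaled, rank-r init
    g = lambda z: p_hat * z                          # step 2: g0(z) = (|Omega|/mn) z
    ij = np.where(mask)
    for t in range(T):
        grad = np.zeros_like(Z)
        grad[ij] = g(Z[ij]) - X_obs[ij]              # Eq. (14), restricted to Omega
        Z = proj_rank(Z - eta * grad, r)             # Eq. (15)
        knots = Z[ij]
        vals = lpav(knots, X_obs[ij], L=1.0)         # Eq. (17): refit monotone g
        order = np.argsort(knots)
        ks, vs = knots[order], vals[order]
        g = lambda z, ks=ks, vs=vs: np.interp(z, ks, vs)   # Eq. (6) interpolation
    return g(Z), Z, g                                # M-hat = g-hat(Z-hat)
```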
4 MSE Analysis of MMC
We analyze our algorithm, MMC, for the case T = 1, under the modeling assumptions shown in equations (3) and (4). Additionally, we assume that the matrices $Z^\star$ and $M^\star$ have entries bounded in absolute value by 1. When T = 1, the MMC algorithm estimates $\hat{Z}$, $\hat{g}$ and $\hat{M}$ as follows:
$$\hat{Z} = P_r\Big(\frac{mn\,X_\Omega}{|\Omega|}\Big). \quad (18)$$
$\hat{g}$ is obtained by solving the LPAV problem from equation (17) with the $\hat{Z}$ shown in equation (18). This allows us to define $\hat{M}_{i,j} = \hat{g}(\hat{Z}_{i,j})$ for all $i \in [n]$, $j \in [m]$.
Define the mean squared error (MSE) of our estimate $\hat{M}$ as
$$MSE(\hat{M}) = \mathbb{E}\Big[\frac{1}{mn} \sum_{i=1}^{n} \sum_{j=1}^{m} (\hat{M}_{i,j} - M^\star_{i,j})^2\Big]. \quad (19)$$
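The T = 1 estimator analyzed here is simple enough to state in a few lines, reusing make_mmc_data, proj_rank and lpav from the sketches above (sizes are kept small because the dense LPAV sketch scales quadratically in $|\Omega|$, and L = 1.0 is an illustrative choice):

```python
import numpy as np

X_obs, mask, M_true, _ = make_mmc_data(n=60, m=40, r=5, c=10.0, p=0.5, seed=1)
scale = X_obs.size / mask.sum()                    # mn / |Omega|
Z1 = proj_rank(scale * X_obs, r=5)                 # Eq. (18): one projected step
ij = np.where(mask)
g_vals = lpav(Z1[ij], X_obs[ij], L=1.0)            # LPAV fit on observed entries
order = np.argsort(Z1[ij])
M1 = np.interp(Z1, Z1[ij][order], g_vals[order])   # extend g-hat via Eq. (6)
print("empirical MSE (Eq. (19)):", np.mean((M1 - M_true) ** 2))
```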
Denote by $\|M\|$ the spectral norm of a matrix M. We need the following additional technical assumptions:
A1. $\|Z^\star\| = O(\sqrt{n})$.
A2. $\sigma_{r+1}(X) = \tilde{O}(\sqrt{n})$ with probability at least $1 - \delta$, where $\tilde{O}$ hides terms logarithmic in $1/\delta$.
$Z^\star$ has entries bounded in absolute value by 1. This means that, in the worst case, $\|Z^\star\| = \sqrt{mn}$. Assumption A1 requires that the spectral norm of $Z^\star$ not be very large. Assumption A2 is a weak assumption on the decay of the spectrum of $M^\star$. By assumption, $X = M^\star + N$. Applying Weyl's inequality, we get $\sigma_{r+1}(X) \leq \sigma_{r+1}(M^\star) + \sigma_1(N)$. Since N is a zero-mean noise matrix with independent bounded entries, N is a matrix with sub-Gaussian entries. This means that $\sigma_1(N) = \tilde{O}(\sqrt{n})$ with high probability. Hence, assumption A2 can be interpreted as imposing the condition $\sigma_{r+1}(M^\star) = O(\sqrt{n})$. This means that while $M^\star$ could be full rank, its $(r+1)$-th singular value cannot be too large.
Theorem 1. Let $\gamma_1 \stackrel{\mathrm{def}}{=} \mathbb{E}\|N\|$, $\gamma_2 \stackrel{\mathrm{def}}{=} \mathbb{E}\|N\|^2$, and let $\Delta = \|M^\star - Z^\star\|$. Then, under assumptions A1 and A2, the MSE of the estimator output by MMC with T = 1 is given by
$$MSE(\hat{M}) = O\Bigg( \sqrt{\frac{r}{m}} + \frac{mn\log(n)}{|\Omega|} + \frac{r\gamma_1}{m\sqrt{n}} + \frac{\gamma_1}{n} + \frac{m\sqrt{n\gamma_2}}{|\Omega|^{3/2}} + \sqrt{\frac{r\Delta}{m\sqrt{n}}}\Big(1 + \frac{\Delta}{\sqrt{n}}\Big) + \sqrt{\frac{rmn\log^2(n)}{|\Omega|^2}} \Bigg), \quad (20)$$
where the $O(\cdot)$ notation hides universal constants and the Lipschitz constant L of $g^\star$. We would like to mention that the result derived for MMC-1 can be made to hold for T > 1 by an additional large-deviation argument.
Interpretation of our results: Our upper bound on the MSE of MMC depends on the quantity $\Delta = \|M^\star - Z^\star\|$ and on $\gamma_1, \gamma_2$. Since the matrix N has independent zero-mean entries that are bounded in absolute value by 1, N is a sub-Gaussian matrix with independent entries. For such matrices, $\gamma_1 = O(\sqrt{n})$ and $\gamma_2 = O(n)$ (see Theorem 5.39 in [26]). With these settings we can simplify the expression in equation (20) to
$$MSE(\hat{M}) = O\Bigg( \sqrt{\frac{r}{m}} + \frac{mn\log(n)}{|\Omega|} + \sqrt{\frac{r\Delta}{m\sqrt{n}}}\Big(1 + \frac{\Delta}{\sqrt{n}}\Big) + \sqrt{\frac{rmn\log^2(n)}{|\Omega|^2}} + \frac{mn}{|\Omega|^{3/2}} \Bigg).$$
A remarkable fact about our sample complexity results is that the sample complexity is independent of the rank of the matrix $M^\star$, which could be large. Instead, it depends on the rank of the matrix $Z^\star$, which we assume to be small. The dependence on $M^\star$ is via the term $\Delta = \|M^\star - Z^\star\|$. From the above expression it is evident that the best error guarantees are obtained when $\Delta = O(\sqrt{n})$. For such values of $\Delta$ the expression reduces to
$$MSE(\hat{M}) = O\Bigg( \sqrt{\frac{r}{m}} + \frac{mn\log(n)}{|\Omega|} + \sqrt{\frac{rmn\log^2(n)}{|\Omega|^2}} + \frac{mn}{|\Omega|^{3/2}} \Bigg).$$
This result can be converted into a sample complexity bound as follows. If we are given $|\Omega| = \tilde{O}\big((mn/\epsilon)^{2/3}\big)$, then $MSE(\hat{M}) \leq \sqrt{\frac{r}{m}} + \epsilon$. It is important to note that the floor of the MSE is $\sqrt{\frac{r}{m}}$, which depends on the rank of $Z^\star$ and not on $\mathrm{rank}(M^\star)$, which can be much larger than r.
5 Experimental results
We compare the performance of MMC-1, MMC-c, MMC-LS, and nuclear-norm-based low-rank matrix completion (LRMC) [4] on various synthetic and real-world datasets. The objective metric that we use to compare the different algorithms is the root mean squared error (RMSE) on the unobserved, test indices of the incomplete matrix.
5.1 Synthetic experiments
For our synthetic experiments we generated a random $30 \times 20$ matrix $Z^\star$ of rank 5 by taking the product of two random Gaussian matrices of sizes $n \times r$ and $r \times m$, with n = 30, m = 20, r = 5. The matrix $M^\star$ was generated as $M^\star_{i,j} = g^\star(Z^\star_{i,j}) = 1/(1 + \exp(-cZ^\star_{i,j}))$, where c > 0. By increasing c, we increase the Lipschitz constant of the function $g^\star$, making the matrix completion task harder. For large enough c, $M_{i,j} \approx \mathrm{sgn}(Z_{i,j})$. We consider the noiseless version of the problem, where $X = M^\star$. Each entry of the matrix X was sampled with probability p, and the sampled entries are observed; this makes $\mathbb{E}|\Omega| = mnp$. In our implementations we assume that r is unknown and estimate it either (i) via the use of a dedicated validation set, in the case of MMC-1, or (ii) adaptively, by progressively increasing the estimate of the rank until a sufficient decrease in error over the training set is achieved [13]. For an implementation of the LRMC algorithm we used a standard off-the-shelf implementation from TFOCS [27]. In order to speed up the running time of MMC, we also keep track of the training set error and terminate the iterations if the relative residual on the training set goes below a certain threshold.⁵ In the supplement we provide a plot demonstrating that, for MMC-c, the RMSE on the training dataset has a decreasing trend and reaches the required threshold in at most 50 iterations. Hence, we set T = 50.
[Figure 2 about here. Three panels of bar charts show the RMSE on test data of LRMC, MMC-LS, MMC-1 and MMC-c for p in {0.2, 0.35, 0.5, 0.7}, at c = 1.0, c = 10 and c = 40.]
Figure 2: RMSE of the different methods at different values of c.
Figure 2 shows the RMSE of each method for different values of p and c. As one can see from Figure 2, the RMSE of all the methods improves, for any given c, as p increases. This is expected, since as p increases, $\mathbb{E}|\Omega| = pmn$ also increases. As c increases, $g^\star$ becomes steeper, increasing the effective rank of X and making the matrix completion task harder. For small p, such as p = 0.2, MMC-1 is competitive with MMC-c and MMC-LS and is often the best. In fact, for small p, irrespective of the value of c, LRMC is far inferior to the other methods. For larger p, MMC-c works the best, achieving smaller RMSE than the other methods.
5.2 Experiments on real datasets
We performed experimental comparisons on four real-world datasets: paper recommendation (PaperReco), Jester-3, ML-100k, and Cameraman. All of the above datasets, except the Cameraman dataset, are ratings datasets, where users have rated a few of several different items. For the Jester-3 dataset we used 5 randomly chosen ratings from each user for training, 5 randomly chosen ratings for validation, and the remaining ratings for testing. ML-100k comes with its own training and testing datasets; we used 20% of the training data for validation. For the Cameraman and paper recommendation datasets, 20% of the data was used for training, 20% for validation, and the rest for testing. The baseline algorithm chosen for low-rank matrix completion is LMaFit-A [13].⁶
For each of the datasets we report the RMSE of MMC-1, MMC-c, and LMaFit-A on the test sets. We excluded MMC-LS from these experiments because in all of our datasets the number of observed entries is a very small fraction of the total number of entries, and from our results on synthetic datasets we know that MMC-LS is not the best-performing algorithm in such cases. Table 1 shows the RMSE over the test set of the different matrix completion methods. As can be seen, the RMSE of MMC-c is the smallest among all the methods, surpassing LMaFit-A by a large margin.
Table 1: RMSE of different methods on real datasets.

Dataset    | Dimensions  | |Omega| | r_{0.01}(X) | LMaFit-A | MMC-1  | MMC-c
PaperReco  | 3426 x 50   | 34294   | 47          | 0.4026   | 0.4247 | 0.2965
Jester-3   | 24938 x 100 | 124690  | 66          | 6.8728   | 5.327  | 5.2348
ML-100k    | 1682 x 943  | 64000   | 391         | 3.3101   | 1.388  | 1.1533
Cameraman  | 1536 x 512  | 157016  | 393         | 0.0754   | 0.1656 | 0.06885

6 Conclusions and future work
We have investigated a new framework for high-rank matrix completion problems, called monotonic matrix completion, and proposed new algorithms. In the future we would like to investigate
whether one could relax the assumptions and improve the theoretical results.
5. For our experiments this threshold is set to 0.001.
6. http://lmafit.blogs.rice.edu/. The parameter k in the LMaFit algorithm was set to the effective rank, and we used est_rank=1 for LMaFit-A.
References
[1] Prem Melville and Vikas Sindhwani. Recommender systems. In Encyclopedia of Machine Learning. Springer, 2010.
[2] Mihai Cucuringu. Graph realization and low-rank matrix completion. PhD thesis, Princeton University, 2012.
[3] Benjamin Recht. A simpler approach to matrix completion. JMLR, 12:3413–3430, 2011.
[4] Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. FOCM, 9(6):717–772, 2009.
[5] Emmanuel J. Candès and Yaniv Plan. Matrix completion with noise. Proceedings of the IEEE, 98(6):925–936, 2010.
[6] Sahand Negahban and Martin J. Wainwright. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. The Journal of Machine Learning Research, 13(1):1665–1697, 2012.
[7] Raghunandan H. Keshavan, Andrea Montanari, and Sewoong Oh. Matrix completion from a few entries. Information Theory, IEEE Transactions on, 56(6):2980–2998, 2010.
[8] David Gross. Recovering low-rank matrices from few coefficients in any basis. Information Theory, IEEE Transactions on, 57(3):1548–1566, 2011.
[9] Jon Dattorro. Convex Optimization & Euclidean Distance Geometry. Lulu.com, 2010.
[10] Bart Vandereycken. Low-rank matrix completion by Riemannian optimization. SIAM Journal on Optimization, 23(2):1214–1236, 2013.
[11] Mingkui Tan, Ivor W. Tsang, Li Wang, Bart Vandereycken, and Sinno J. Pan. Riemannian pursuit for big matrix recovery. In ICML, pages 1539–1547, 2014.
[12] Zheng Wang, Ming-Jun Lai, Zhaosong Lu, Wei Fan, Hasan Davulcu, and Jieping Ye. Rank-one matrix pursuit for matrix completion. In ICML, pages 91–99, 2014.
[13] Zaiwen Wen, Wotao Yin, and Yin Zhang. Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. Mathematical Programming Computation, 2012.
[14] Brian Eriksson, Laura Balzano, and Robert Nowak. High-rank matrix completion. In AISTATS, 2012.
[15] Congyuan Yang, Daniel Robinson, and Rene Vidal. Sparse subspace clustering with missing entries. In ICML, 2015.
[16] Mahdi Soltanolkotabi, Emmanuel J. Candès, et al. A geometric analysis of subspace clustering with outliers. The Annals of Statistics, 40(4):2195–2238, 2012.
[17] Ehsan Elhamifar and Rene Vidal. Sparse subspace clustering: Algorithm, theory, and applications. TPAMI, 2013.
[18] Aarti Singh, Akshay Krishnamurthy, Sivaraman Balakrishnan, and Min Xu. Completion of high-rank ultrametric matrices using selective entries. In SPCOM, pages 1–5. IEEE, 2012.
[19] Oluwasanmi Koyejo, Sreangsu Acharyya, and Joydeep Ghosh. Retargeted matrix factorization for collaborative filtering. In Proceedings of the 7th ACM Conference on Recommender Systems, pages 49–56. ACM, 2013.
[20] Mark A. Davenport, Yaniv Plan, Ewout van den Berg, and Mary Wootters. 1-bit matrix completion. Information and Inference, 3(3):189–223, 2014.
[21] Sham M. Kakade, Varun Kanade, Ohad Shamir, and Adam Kalai. Efficient learning of generalized linear and single index models with isotonic regression. In NIPS, 2011.
[22] Adam Tauman Kalai and Ravi Sastry. The Isotron algorithm: High-dimensional isotonic regression. In COLT, 2009.
[23] Hidehiko Ichimura. Semiparametric least squares (SLS) and weighted SLS estimation of single-index models. Journal of Econometrics, 58(1):71–120, 1993.
[24] Joel L. Horowitz and Wolfgang Härdle. Direct semiparametric estimation of single-index models with discrete covariates. Journal of the American Statistical Association, 91(436):1632–1640, 1996.
[25] Alekh Agarwal, Sham Kakade, Nikos Karampatziakis, Le Song, and Gregory Valiant. Least squares revisited: Scalable approaches for multi-class prediction. In ICML, pages 541–549, 2014.
[26] Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010.
[27] Stephen Becker, E. Candes, and M. Grant. TFOCS: Flexible first-order methods for rank minimization. In Low-rank Matrix Optimization Symposium, SIAM Conference on Optimization, 2011.
5,432 | 5,917 | Sparse Linear Programming via
Primal and Dual Augmented Coordinate Descent
Ian E.H. Yen†   Kai Zhong†   Cho-Jui Hsieh‡   Pradeep Ravikumar†   Inderjit S. Dhillon†
† University of Texas at Austin   ‡ University of California at Davis
{ianyen,pradeepr,inderjit}@cs.utexas.edu   zhongkai@ices.utexas.edu   chohsieh@ucdavis.edu
Abstract
Over the past decades, Linear Programming (LP) has been widely used in different
areas and considered as one of the mature technologies in numerical optimization.
However, the complexity offered by state-of-the-art algorithms (i.e. interior-point
method and primal, dual simplex methods) is still unsatisfactory for problems in
machine learning with huge numbers of variables and constraints. In this paper,
we investigate a general LP algorithm based on the combination of Augmented
Lagrangian and Coordinate Descent (AL-CD), giving an iteration complexity of
O((log(1/ε))²) with O(nnz(A)) cost per iteration, where nnz(A) is the number
of non-zeros in the m × n constraint matrix A; in practice, one can further reduce the
cost per iteration to the order of non-zeros in the columns (rows) corresponding
to the active primal (dual) variables through an active-set strategy. The algorithm
thus yields a tractable alternative to standard LP methods for large-scale problems
with sparse solutions and nnz(A) ≪ mn. We conduct experiments on large-scale
LP instances from ℓ1-regularized multi-class SVM, Sparse Inverse Covariance Estimation, and Nonnegative Matrix Factorization, where the proposed approach
finds solutions of 10⁻³ precision orders of magnitude faster than state-of-the-art
implementations of interior-point and simplex methods.¹
1 Introduction
Linear Programming (LP) has been studied since the early 19th century and has become one of
the representative tools of numerical optimization with wide applications in machine learning such
as ℓ1-regularized SVM [1], MAP inference [2], nonnegative matrix factorization [3], exemplar-based clustering [4, 5], sparse inverse covariance estimation [6], and Markov Decision Processes [7].
However, as the demand for scalability keeps increasing, the scalability of existing LP solvers has
become unsatisfactory. In particular, most algorithms in machine learning targeting large-scale data
have a complexity linear to the data size [8, 9, 10], while the complexity of state-of-the-art LP
solvers (i.e. Interior-Point method and Primal, Dual Simplex methods) is still at least quadratic in
the number of variables or constraints [11].
The quadratic complexity comes from the need to solve each linear system exactly in both simplex
and interior point methods. In particular, the simplex method, when traversing from one corner point
to another, requires the solution of a linear system whose dimension is linear in the number of variables
or constraints, while in an Interior-Point method, finding the Newton direction requires solving a
linear system of similar size. While there are sparse variants of LU and Cholesky decomposition that
can utilize the sparsity pattern of the matrix in a linear system, the worst-case complexity for solving
such a system is at least quadratic in the dimension, except for very special cases such as a tri-diagonal
or band-structured matrix.
1. Our solver has been released here: http://www.cs.utexas.edu/~ianyen/LPsparse/
For the interior point method (IPM), one remedy for the high complexity is to employ an iterative method
such as Conjugate Gradient (CG) to solve each linear system inexactly. However, this can hardly
tackle the ill-conditioned linear systems produced by IPM when iterates approach the boundary of the constraints [12]. Though substantial research has been devoted to the development of preconditioners
that can help iterative methods mitigate the effect of ill-conditioning [12, 13], creating a preconditioner of tractable size is a challenging problem in itself [13]. Most commercial LP software thus
still relies on exact methods to solve the linear system.
On the other hand, some dual or primal (stochastic) sub-gradient descent methods have a cheap cost
per iteration, but require O(1/ε²) iterations to find a solution of ε precision, and in practice
can hardly find even a feasible solution satisfying all constraints [14].
The Augmented Lagrangian Method (ALM) was invented as early as 1969, and since then several
works have developed Linear Program solvers based on ALM [15, 16, 17]. However, the challenge
of ALM is that it produces a series of bound-constrained quadratic problems that, in the traditional
sense, are harder to solve than the linear systems produced by IPM or Simplex methods [17]. Specifically,
in a Projected-CG approach [18], one needs to solve several linear systems via CG to find a solution
to the bound-constrained quadratic program, with no guarantee on how many iterations
this requires. On the other hand, the Projected Gradient Method (PGM), despite its guaranteed iteration
complexity, converges very slowly in practice. More recently, Multi-block ADMM [19, 20]
was proposed as a variant of ALM that, in each iteration, only updates one pass (or even less)
of the blocks of primal variables before each dual update; this, however, requires a much smaller step
size in the dual update to ensure convergence [20, 21] and thus a large number of iterations
to converge to moderate precision. To our knowledge, there is still no report of a significant
improvement of ALM-based methods over IPM or the Simplex method for Linear Programming.
In recent years, the Coordinate Descent (CD) method has demonstrated its efficiency in many machine
learning problems with bound constraints or other non-smooth terms [9, 10, 22, 23, 24, 25], and has
a solid analysis of its iteration complexity [26, 27]. In this work, we show that the CD algorithm can
be naturally combined with ALM to solve Linear Programs more efficiently than existing methods
on large-scale problems. We provide an O((log(1/ε))²) iteration complexity of the Augmented
Lagrangian with Coordinate Descent (AL-CD) algorithm that bounds the total number of CD updates required for an ε-precise solution, and describe an implementation of AL-CD that has cost
O(nnz(A)) for each pass of CD. In practice, an active-set strategy is introduced to further reduce
cost of each iteration to the active size of variables and constraints for primal-sparse and dual-sparse
LP respectively, where a primal-sparse LP has most of variables being zero, and a dual-sparse LP
has few binding constraints at the optimal solution. Note, unlike in IPM, the conditioning of each
subproblem in ALM does not worsen over iterations [15, 16]. The AL-CD framework thus provides
an alternative to interior point and simplex methods when it is infeasible to exactly solve an n × n
(or m × m) linear system.
2 Sparse Linear Program
We are interested in solving linear programs of the form
$$\min_{x \in \mathbb{R}^n} \; f(x) = c^T x \quad \text{s.t.} \quad A_I x \le b_I, \;\; A_E x = b_E, \;\; x_j \ge 0, \; j \in [n_b] \tag{1}$$
where A_I is an m_I × n matrix of coefficients and A_E is m_E × n. Without loss of generality, we
assume non-negativity constraints are imposed on the first n_b variables, denoted x_b, so that
x = [x_b; x_f] and c = [c_b; c_f]. The inequality and equality coefficient matrices can then be
partitioned as A_I = [A_{I,b} A_{I,f}] and A_E = [A_{E,b} A_{E,f}]. The dual problem of (1) then takes the
form
$$\min_{y \in \mathbb{R}^m} \; g(y) = b^T y \quad \text{s.t.} \quad -A_b^T y \le c_b, \;\; -A_f^T y = c_f, \;\; y_i \ge 0, \; i \in [m_I]. \tag{2}$$
where m = m_I + m_E, b = [b_I; b_E], A_b = [A_{I,b}; A_{E,b}], A_f = [A_{I,f}; A_{E,f}], and y = [y_I; y_E]. In
most LPs occurring in machine learning, m and n are both of order 10⁵–10⁶, for which an
algorithm with cost O(mn), O(n²) or O(m²) is unacceptable. Fortunately, there are usually various
types of sparsity present in the problem that can be utilized to lower the complexity.
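As a concrete illustration of the primal-dual pair (1)–(2), the following minimal sketch solves a tiny instance of (1) with an off-the-shelf solver; the data are illustrative, and the dual-value attributes noted in the comment are an assumption about recent SciPy versions.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance of (1): two nonnegative variables (n_b = 2) and one free one.
c = np.array([1.0, -2.0, 0.5])
A_I, b_I = np.array([[1.0, 1.0, 0.0]]), np.array([4.0])
A_E, b_E = np.array([[0.0, 1.0, -1.0]]), np.array([1.0])
res = linprog(c, A_ub=A_I, b_ub=b_I, A_eq=A_E, b_eq=b_E,
              bounds=[(0, None), (0, None), (None, None)], method="highs")
print(res.x)  # primal solution of (1)
# Recent SciPy exposes dual values for (2) as res.ineqlin.marginals and
# res.eqlin.marginals (sign conventions may differ from the paper's).
```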
First, the constraint matrix A = [A_I; A_E] is usually quite sparse, in the sense that nnz(A) ≪ mn,
and one can compute the matrix-vector product Ax in O(nnz(A)). However, most current LP
solvers must solve not only matrix-vector products but also linear systems involving A,
which in general cost much more than O(nnz(A)) and can be up to O(min(n³, m³)) in the
worst case. In particular, simplex-type methods, when moving from one corner to another,
require solving a linear system that involves a sub-matrix of A with columns corresponding to the
basic variables [11], while in an interior point method (IPM), one also needs to solve a normal-equation system with matrix A D_t A^T to obtain the Newton direction, where D_t is a diagonal matrix
that gradually enforces complementary slackness as the IPM iteration t grows [11]. While one remedy
for the high complexity is to employ an iterative method such as Conjugate Gradient (CG) to solve the
system inexactly within IPM, this approach can hardly handle the ill-conditioning that occurs when
IPM iterates approach the boundary [12]. On the other hand, the Augmented Lagrangian approach
does not suffer from such asymptotic ill-conditioning, and thus an iterative method with complexity
linear in O(nnz(A)) can be used to produce a sufficiently accurate solution for each sub-problem.
Besides sparsity in the constraint matrix A, two other types of structure, which we term primal
and dual sparsity, are also prevalent in the context of machine learning. A primal-sparse LP refers
to an LP whose optimal solution x* comprises only a few non-zero elements, while a dual-sparse LP
refers to an LP with few binding constraints at the optimal solution, which correspond to the non-zero dual
variables. In the following, we give two examples of sparse LPs.
L1-Regularized Support Vector Machine The problem of L1-regularized multi-class Support
Vector Machine [1]
$$\min_{w_m, \xi_i} \; \lambda \sum_{m=1}^{k} \|w_m\|_1 + \sum_{i=1}^{l} \xi_i \quad \text{s.t.} \quad w_{y_i}^T x_i - w_m^T x_i \ge e_i^m - \xi_i, \;\; \forall (i,m) \tag{3}$$
where e_i^m = 0 if y_i = m and e_i^m = 1 otherwise. The task is dual-sparse since, among all samples i and
classes m, only the pairs that lead to misclassification become binding constraints. Problem (3) is
also primal-sparse since it performs feature selection through the ℓ1 penalty. Note that the constraint matrix in
(3) is also sparse, since each constraint involves only two weight vectors, and the patterns x_i can themselves be
sparse.
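To make the LP structure of (3) explicit, the following sketch assembles its inequality constraint matrix in sparse form; the function name and the dense handling of X are illustrative simplifications, and the standard split w_m = w_m^+ - w_m^- is used to make the ℓ1 norm linear.

```python
import numpy as np
import scipy.sparse as sp

def l1svm_as_lp(X, y, k, lam):
    """Assemble (3) as an LP in nonnegative variables [w+; w-; xi].
    Constraints for m = y_i reduce to xi_i >= 0 and are dropped."""
    l, p = X.shape
    nvar = 2 * k * p + l
    c = np.concatenate([lam * np.ones(2 * k * p), np.ones(l)])
    data, rows, cols = [], [], []
    r = 0
    for i in range(l):
        nz = np.flatnonzero(X[i])
        for m in range(k):
            if m == y[i]:
                continue
            # -(w_{y_i} - w_m)^T x_i - xi_i <= -1   (from (3) with e_i^m = 1)
            for blk, sgn in ((y[i], -1.0), (m, +1.0)):
                for j in nz:
                    v = sgn * X[i, j]
                    data += [v, -v]                  # w+ entry, then w- entry
                    rows += [r, r]
                    cols += [blk * p + j, k * p + blk * p + j]
            data.append(-1.0); rows.append(r); cols.append(2 * k * p + i)
            r += 1
    A_ub = sp.coo_matrix((data, (rows, cols)), shape=(r, nvar)).tocsr()
    return c, A_ub, -np.ones(r)      # all variables are bounded below by 0
```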
Sparse Inverse Covariance Estimation The Sparse Inverse Covariance Estimation problem aims to find
a sparse matrix Θ that approximates the inverse of the covariance matrix. One of the most popular
approaches solves a program of the form [6]
$$\min_{\Theta \in \mathbb{R}^{d \times d}} \; \|\Theta\|_1 \quad \text{s.t.} \quad \|S\Theta - I_d\|_{\max} \le \lambda \tag{4}$$
which is primal-sparse due to the ‖·‖₁ penalty. The problem has a dense constraint matrix, which
however has special structure: the coefficient matrix S can be decomposed into a product of
two low-rank and (possibly) sparse n × d matrices, S = Z^T Z. In case Z is sparse or n ≪ d, this
decomposition can be utilized to solve the Linear Program much more efficiently. We discuss
how to utilize such structure in Section 4.3.
on how to utilize such structure in section 4.3.
3
Primal and Dual Augmented Coordinate Descent
In this section, we describe an Augmented Lagrangian method (ALM) that carefully tackles the
sparsity in a LP. The choice between Primal and Dual ALM depends on the type of sparsity present
in the LP. In particular, a primal AL method can solve a problem of few non-zero variables more
efficiently, while dual ALM will be more efficient for problem with few binding constraints. In the
following, we describe the algorithm only from the primal point of view, while the dual version can
be obtained by exchanging the roles of primal (1) and dual (2).
Algorithm 1 (Primal) Augmented Lagrangian Method
Initialization: y⁰ ∈ R^m and η₀ > 0.
repeat
  1. Solve (6) to obtain (x^{t+1}, ξ^{t+1}) from y^t.
  2. Update y^{t+1} = y^t + η_t [ A_I x^{t+1} − b_I + ξ^{t+1} ; A_E x^{t+1} − b_E ].
  3. t = t + 1.
  4. Increase η_t by a constant factor if necessary.
until ‖[A_I x^t − b_I]_+‖∞ ≤ ε_p and ‖A_E x^t − b_E‖∞ ≤ ε_p.
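A minimal sketch of this outer loop follows; `inner_solve` stands in for any solver of subproblem (6) (e.g. the RCD routine of Algorithm 2), the slack update uses the closed form (8) derived below, and parameter names and the growth factor are illustrative.

```python
import numpy as np

def alm_lp(A_I, b_I, A_E, b_E, n, inner_solve,
           eta0=1.0, eps=1e-3, grow=2.0, max_outer=100):
    """Outer loop of Algorithm 1 (primal ALM); a minimal sketch."""
    y_I, y_E = np.zeros(b_I.size), np.zeros(b_E.size)
    x, eta = np.zeros(n), eta0
    for t in range(max_outer):
        x = inner_solve(x, y_I, y_E, eta)              # step 1: solve (6)
        xi = np.maximum(b_I - A_I @ x - y_I / eta, 0)  # slack, closed form (8)
        y_I = y_I + eta * (A_I @ x - b_I + xi)         # step 2: dual update
        y_E = y_E + eta * (A_E @ x - b_E)
        if (np.max(np.maximum(A_I @ x - b_I, 0)) <= eps
                and np.max(np.abs(A_E @ x - b_E)) <= eps):
            break                                      # stopping rule
        eta *= grow                                    # step 4 (optional)
    return x, y_I, y_E
```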
3.1 Augmented Lagrangian Method (Dual Proximal Method)
Let g(y) be the dual objective function (2), taking value +∞ if y is infeasible. The primal AL algorithm
can be interpreted as a dual proximal point algorithm [16] that for each iteration t solves
$$y^{t+1} = \arg\min_{y} \; g(y) + \frac{1}{2\eta_t}\,\|y - y^t\|^2. \tag{5}$$
Since g(y) is nonsmooth, (5) is not easier to solve than the original dual problem. However, the dual
of (5) takes the form:
$$\min_{x,\xi} \; F(x,\xi) = c^T x + \frac{\eta_t}{2}\left\| \begin{bmatrix} A_I x - b_I + \xi \\ A_E x - b_E \end{bmatrix} + \frac{1}{\eta_t}\begin{bmatrix} y_I^t \\ y_E^t \end{bmatrix} \right\|^2 \quad \text{s.t.} \;\; x_b \ge 0, \; \xi \ge 0, \tag{6}$$
which is a bound-constrained quadratic problem. Note that, given (x, ξ) as Lagrange multipliers of (5),
the corresponding y minimizing the Lagrangian L(x, ξ, y) is
$$y(x,\xi) = \eta_t \begin{bmatrix} A_I x - b_I + \xi \\ A_E x - b_E \end{bmatrix} + \begin{bmatrix} y_I^t \\ y_E^t \end{bmatrix}, \tag{7}$$
and thus one can solve for (x*, ξ*) from (6) and find y^{t+1} through (7). The resulting algorithm is
sketched in Algorithm 1. For problems of medium scale, (6) is not easier to solve than a linear
system, due to the non-negativity constraints, and thus ALM is not preferred to IPM in the traditional
sense. However, for large-scale problems with m × n ≫ nnz(A), ALM becomes advantageous
since: (i) the conditioning of (6) does not worsen over iterations, and thus allows iterative methods
to solve it approximately in time proportional to O(nnz(A)); (ii) for a primal-sparse (dual-sparse)
problem, most of the primal (dual) variables become binding at zero as the iterates approach the optimal
solution, which yields a potentially much smaller subproblem.
3.2 Solving Subproblem via Coordinate Descent
Given a dual solution y^t, we employ a variant of the Randomized Coordinate Descent (RCD) method
to solve subproblem (6). First, we note that, given x, the variables ξ can be minimized in
closed form as
$$\xi(x) = \big[\, b_I - A_I x - y_I^t/\eta_t \,\big]_+ \,, \tag{8}$$
where the function [v]_+ truncates each element of the vector v to be non-negative, i.e., [v]_{+,i} = max{v_i, 0}.
Then (6) can be re-written as
$$\min_{x} \; \tilde{F}(x) = c^T x + \frac{\eta_t}{2}\left\| \begin{bmatrix} [A_I x - b_I + y_I^t/\eta_t]_+ \\ A_E x - b_E + y_E^t/\eta_t \end{bmatrix} \right\|^2 \quad \text{s.t.} \;\; x_b \ge 0. \tag{9}$$
Algorithm 2 RCD for subproblem (6)
INPUT: η_t > 0 and (x^{t,0}, w^{t,0}, v^{t,0}) satisfying relations (11), (12).
OUTPUT: (x^{t,k}, w^{t,k}, v^{t,k})
repeat
  1. Pick a coordinate j uniformly at random.
  2. Compute ∇_j F̃(x), ∇²_j F̃(x).
  3. Obtain the Newton direction d*_j.
  4. Do line search (15) to find the step size.
  5. Update x_j^{t,k+1} ← x_j^{t,k} + β^r d*_j.
  6. Maintain relations (11), (12).
  7. k ← k + 1.
until ‖d*(x)‖∞ ≤ ε_t.

Algorithm 3 PN-CG for subproblem (6)
INPUT: η_t > 0 and (x^{t,0}, w^{t,0}, v^{t,0}) satisfying relations (11), (12).
OUTPUT: (x^{t,k}, w^{t,k}, v^{t,k})
repeat
  1. Identify the active variables A^{t,k}.
  2. Compute [∇F(x)]_{A^{t,k}} and the set D^{t,k}.
  3. Find the Newton direction d*_{A^{t,k}} with CG.
  4. Find the step size via projected line search.
  5. Update x^{t,k+1} ← [x^{t,k} + β^r d*_{A^{t,k}}]_+.
  6. Maintain relations (11), (12).
  7. k ← k + 1.
until ‖d*_{A^{t,k}}‖∞ ≤ ε_t.
Denote the objective function of (9) by F̃(x). Its gradient can be expressed as
$$\nabla \tilde{F}(x) = c + \eta_t A_I^T [w]_+ + \eta_t A_E^T v \tag{10}$$
where
$$w = A_I x - b_I + y_I^t/\eta_t \tag{11}$$
$$v = A_E x - b_E + y_E^t/\eta_t \,, \tag{12}$$
and the (generalized) Hessian of (9) is
$$\nabla^2 \tilde{F}(x) = \eta_t A_I^T D(w) A_I + \eta_t A_E^T A_E \,, \tag{13}$$
where D(w) is an m_I × m_I diagonal matrix with D_ii(w) = 1 if w_i > 0 and D_ii(w) = 0 otherwise.
The RCD algorithm then proceeds as follows. In each iteration k, it picks a coordinate j ∈ {1, ..., n}
uniformly at random and minimizes with respect to that coordinate. The minimization is conducted
by a single-variable Newton step, which first finds the Newton direction d*_j by minimizing a
quadratic approximation
$$d^*_j = \arg\min_{d} \; \nabla_j \tilde{F}(x^{t,k})\, d + \frac{1}{2}\,\nabla^2_j \tilde{F}(x^{t,k})\, d^2 \quad \text{s.t.} \;\; x_j^{t,k} + d \ge 0, \tag{14}$$
and then conducts a line search to find the smallest r ∈ {0, 1, 2, ...} satisfying
$$\tilde{F}(x^{t,k} + \beta^r d^*_j e_j) - \tilde{F}(x^{t,k}) \le \sigma \beta^r \big(\nabla_j \tilde{F}(x^{t,k})\, d^*_j\big) \tag{15}$$
for some line-search parameters σ ∈ (0, 1/2], β ∈ (0, 1), where e_j denotes the vector with only the jth
element equal to 1 and all others equal to 0. Note that the single-variable problem (14) has the closed-form
solution
$$d^*_j = \Big[\, x_j^{t,k} - \nabla_j \tilde{F}(x^{t,k}) \,/\, \nabla^2_j \tilde{F}(x^{t,k}) \,\Big]_+ - \, x_j^{t,k}, \tag{16}$$
which, in a naive implementation, takes O(nnz(A)) time due to the computation of (11) and (12).
However, in a careful implementation, one can maintain the relations (11), (12) as follows whenever
a coordinate x_j is updated by β^r d*_j:
$$\begin{bmatrix} w^{t,k+1} \\ v^{t,k+1} \end{bmatrix} = \begin{bmatrix} w^{t,k} \\ v^{t,k} \end{bmatrix} + \beta^r d^*_j \begin{bmatrix} a^I_j \\ a^E_j \end{bmatrix}, \tag{17}$$
where a_j = [a^I_j; a^E_j] denotes the jth column of [A_I; A_E]. Then the gradient and (generalized)
second derivative of the jth coordinate,
$$\nabla_j \tilde{F}(x) = c_j + \eta_t \langle a^I_j, [w]_+ \rangle + \eta_t \langle a^E_j, v \rangle$$
$$\nabla^2_j \tilde{F}(x) = \eta_t \Big( \sum_{i:\, w_i > 0} (a^I_{i,j})^2 + \sum_{i} (a^E_{i,j})^2 \Big), \tag{18}$$
can be computed in O(nnz(a_j)) time. Similarly, for each coordinate update, one can evaluate the
difference in function value, F̃(x^{t,k} + d*_j e_j) − F̃(x^{t,k}), in O(nnz(a_j)) by computing only the terms
related to the jth variable.
The overall procedure for solving the subproblem is summarized in Algorithm 2. In practice, a random
permutation is used instead of uniform sampling, to ensure that every coordinate is updated once
before proceeding to the next round, which speeds up convergence and eases the checking of the stopping condition ‖d*(x)‖∞ ≤ ε_t, and an active-set strategy is employed to avoid updating variables
with d*_j = 0. We describe the details in Section 4.
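The following minimal sketch implements one pass of Algorithm 2 over a random permutation, with the residuals (11)–(12) maintained in place via (17) so that each coordinate step costs O(nnz(a_j)); the constants, the backtracking guard, and the tiny curvature tolerance are illustrative choices rather than the released solver's.

```python
import numpy as np
import scipy.sparse as sp

def rcd_pass(A_I, A_E, c, eta, x, w, v, n_b, sigma=0.01, beta=0.5):
    """One pass of RCD (Algorithm 2). A_I, A_E are scipy CSC matrices so that
    column j is available in O(nnz(a_j)); w, v are the residuals (11)-(12)."""
    for j in np.random.permutation(x.size):
        aI, aE = A_I[:, j], A_E[:, j]
        iw, dw = aI.indices, aI.data
        iv, dv = aE.indices, aE.data
        # gradient and generalized second derivative of coordinate j, as in (18)
        g = c[j] + eta * np.dot(dw, np.maximum(w[iw], 0.0)) + eta * np.dot(dv, v[iv])
        pos = dw[w[iw] > 0]
        h = eta * (np.dot(pos, pos) + np.dot(dv, dv))
        if h <= 1e-12:
            continue
        d = max(x[j] - g / h, 0.0) - x[j] if j < n_b else -g / h   # (16)
        if d == 0.0:
            continue
        def dF(s):  # F~(x + s e_j) - F~(x), touching only the nnz(a_j) terms
            wn, vn = w[iw] + s * dw, v[iv] + s * dv
            return (c[j] * s
                    + 0.5 * eta * (np.sum(np.maximum(wn, 0.0) ** 2)
                                   - np.sum(np.maximum(w[iw], 0.0) ** 2))
                    + 0.5 * eta * (np.sum(vn ** 2) - np.sum(v[iv] ** 2)))
        step, bt = 1.0, 0
        while dF(step * d) > sigma * step * g * d and bt < 30:   # Armijo (15)
            step *= beta
            bt += 1
        x[j] += step * d
        w[iw] += step * d * dw     # maintain (11)-(12) via (17)
        v[iv] += step * d * dv
    return x, w, v
```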
3.3 Convergence Analysis
In this section, we prove the iteration complexity of the AL-CD method. Existing analysis [26, 27]
shows that Randomized Coordinate Descent can be up to n times faster than gradient-based methods
under certain conditions. However, to prove a global linear rate of convergence, the analysis requires the
objective function to be strongly convex, which is not true for our sub-problem (6). Here we follow
the approach in [28, 29] to show global linear convergence of Algorithm 2 by utilizing the fact that,
when restricted to a constant subspace, (6) is strongly convex. All proofs are included in the
appendix.
Theorem 1 (Linear Convergence). Denote by F* the optimum of (6) and let x̄ = [x; ξ], with n̄ the
dimension of x̄. The iterates {x̄^k}_{k=0}^∞ of the RCD Algorithm 2 satisfy
$$\mathbb{E}[F(\bar{x}^{k+1})] - F^* \le \Big(1 - \frac{1}{\gamma \bar{n}}\Big)\big(\mathbb{E}[F(\bar{x}^{k})] - F^*\big), \tag{19}$$
where
$$\gamma = \max\big\{\, 16\,\eta_t M \theta (F^0 - F^*),\; 2M\theta(1 + 4L_g^2),\; 6 \,\big\},$$
M = max_{j∈[n̄]} ‖ā_j‖² is an upper bound on the coordinate-wise second derivative, L_g is the local
Lipschitz constant of the function g(z) = η_t ‖z − b + y^t/η_t‖², and θ is the constant of Hoffman's
bound [31], which depends on the polyhedron formed by the set of optimal solutions.
The following theorem then gives a bound on the number of iterations required to find an ε₀-precise
solution in terms of the proximal minimization (5).
Theorem 2 (Inner Iteration Complexity). Denote by y(x̄^k) the dual solution (7) corresponding to
the primal iterate x̄^k. To guarantee
$$\|y(\bar{x}^k) - y^{t+1}\| \le \epsilon_0 \tag{20}$$
with probability 1 − p, it suffices to run the RCD Algorithm 2 for a number of iterations
$$k \ge 2\gamma\bar{n}\, \log\!\left( \sqrt{\frac{2\big(F(\bar{x}^0) - F^*\big)}{\eta_t}}\;\; \frac{1}{p\,\epsilon_0} \right).$$
Now we prove the overall iteration complexity of AL-CD. Note that the existing linear convergence
analysis of ALM on Linear Programs [16] assumes exact solutions of subproblem (6), which is
not possible in practice. Our next theorem extends the linear convergence result to the case when
subproblems are solved inexactly and, in particular, bounds the total number of coordinate descent
updates required to find an ε-accurate solution.
Theorem 3 (Iteration Complexity). Denote by {ȳ^t}_{t=1}^∞ the sequence of iterates obtained from inexact dual proximal updates, by {y^t}_{t=1}^∞ that generated by exact updates, and by y_{S*} the projection
of y onto the set of optimal dual solutions. To guarantee ‖ȳ^t − ȳ^t_{S*}‖ ≤ 2ε with probability 1 − p, it
suffices to run Algorithm 1 for
$$T = \Big(1 + \frac{1}{\omega}\Big)\log\frac{LR}{\epsilon} \tag{21}$$
outer iterations with η_t = (1 + ω)L, and to solve each sub-problem (6) by running Algorithm 2 for
$$k \ge 2\gamma\bar{n}\left[\frac{3}{2}\log\frac{\Gamma}{\epsilon} + \log\Big(\big(1 + \tfrac{1}{\omega}\big)\log\frac{LR}{\epsilon}\Big)\right] \tag{22}$$
inner iterations, where L is a constant depending on the polyhedral set of optimal solutions,
$\Gamma = \sqrt{2(1+\omega)L(F^0 - F^*)/p}$, R = ‖prox_{η_t g}(y⁰) − y⁰‖, and F⁰, F* are upper and lower bounds on the
initial and optimal function values of the subproblems, respectively.
3.4 Fast Asymptotic Convergence via Projected Newton-CG
The RCD algorithm converges to a solution of moderate precision efficiently, but in some problems
a higher precision might be required. In such cases, we switch the subproblem solver from RCD
to a Projected Newton-CG (PN-CG) method once the iterates are close enough to the optimum. Note that
the Projected Newton method does not have a global iteration complexity guarantee, but converges fast for
iterates very close to the optimum.
Denote by F(x) the objective in (9). Each iterate of PN-CG begins by finding the set of active
variables, defined as
$$\mathcal{A}^{t,k} = \big\{\, j \;\big|\; x_j^{t,k} > 0 \;\vee\; \nabla_j F(x^{t,k}) < 0 \,\big\}. \tag{23}$$
Then the algorithm fixes x_j^{t,k} = 0 for all j ∉ A^{t,k} and solves a Newton linear system with respect to j ∈ A^{t,k},
$$[\nabla^2_{\mathcal{A}^{t,k}} F(x^{t,k})]\, d = -[\nabla_{\mathcal{A}^{t,k}} F(x^{t,k})], \tag{24}$$
to obtain a direction d* for the current active variables. Let d_{A^{t,k}} denote the size-n vector taking the
values of d* for j ∈ A^{t,k} and the value 0 for j ∉ A^{t,k}. The algorithm then conducts a projected line
search to find the smallest r ∈ {0, 1, 2, ...} satisfying
$$F\big([x^{t,k} + \beta^r d_{\mathcal{A}^{t,k}}]_+\big) - F(x^{t,k}) \le \sigma\beta^r\big(\nabla F(x^{t,k})^T d_{\mathcal{A}^{t,k}}\big), \tag{25}$$
and updates x by x^{t,k+1} ← [x^{t,k} + β^r d_{A^{t,k}}]_+. Compared to an interior point method, a key to the
tractability of this approach lies in the conditioning of the linear system (24), which does not worsen
as the outer iteration t increases, so an iterative Conjugate Gradient (CG) method can be used to obtain an
accurate solution without factorizing the Hessian matrix. The only operation required within CG is
the Hessian-vector product
$$[\nabla^2_{\mathcal{A}^{t,k}} F(x^{t,k})]\, s = \eta_t\big[A_I^T D(w^{t,k}) A_I + A_E^T A_E\big]_{\mathcal{A}^{t,k}}\, s, \tag{26}$$
where the operator [·]_{A^{t,k}} takes the sub-matrix with row and column indices belonging to A^{t,k}. For
a primal- or dual-sparse LP, the product (26) can be evaluated very efficiently, since it only involves the
non-zero elements in the columns of A_I, A_E belonging to the active set, and the rows of A_I corresponding
to the binding constraints for which D_ii(w^{t,k}) > 0. The overall cost of the product (26) is only
$$O\big(\, \mathrm{nnz}([A_I]_{D^{t,k},\,\mathcal{A}^{t,k}}) + \mathrm{nnz}([A_E]_{:,\,\mathcal{A}^{t,k}}) \,\big),$$
where D^{t,k} = {i | w_i^{t,k} > 0} is the set of currently binding constraints. Considering that the computational bottleneck of PN-CG is the CG iterations for solving the linear system (24), the efficient
computation of the product (26) reduces the overall complexity of PN-CG significantly. The whole
procedure is summarized in Algorithm 3.
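The CG solve of (24) only needs the matrix-vector product (26), which the following sketch implements; the generalized Hessian may be only positive semidefinite, so the plain CG call here is a heuristic approximation, and the slicing strategy is an illustrative choice.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

def pn_cg_direction(A_I, A_E, w, grad, active, eta, cg_iters=100):
    """Newton direction (24) on the active set via CG with products (26)."""
    D = w > 0.0                               # binding inequality rows D^{t,k}
    AIa = sp.csr_matrix(A_I)[D][:, active]    # binding rows, active columns
    AEa = sp.csr_matrix(A_E)[:, active]
    def hv(s):                                # (26): eta*(A_I^T D A_I + A_E^T A_E) s
        return eta * (AIa.T @ (AIa @ s) + AEa.T @ (AEa @ s))
    H = LinearOperator((active.size, active.size), matvec=hv)
    d, _ = cg(H, -grad[active], maxiter=cg_iters)
    return d
```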
4 Practical Issues
4.1 Precision of Subproblem Minimization
In practice, it is unnecessary to solve subproblem (6) to high precision, especially in the early
iterations of ALM. In our implementation, we employ a two-phase strategy: in the
first phase we limit the cost spent on each sub-problem (6) to a constant multiple of nnz(A),
while in the second phase we dynamically increase the AL parameter η_t and tighten the inner precision ε_t to
ensure sufficient decrease in the primal and dual infeasibility, respectively. The two-phase strategy
is particularly useful for primal- or dual-sparse problems, where the sub-problems in the latter phase have
smaller active sets, resulting in lower computational cost even when solved to high precision.
4.2 Active-Set Strategy
Our implementation of Algorithm 2 maintains an active set of variables A, which initially contains
all variables; during the RCD iterations, any variable x_j binding at 0 with gradient ∇_j F̃ greater
than a threshold δ is excluded from A until the end of the current subproblem solve. A is
re-initialized after each dual proximal update (7). Note that in the initial phase, the cost spent on each
subproblem is a constant multiple of nnz(A), so if |A| is small one spends more iterations on
the active variables, achieving faster convergence. A sketch of the shrinking rule is given below.
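The shrinking rule itself is a one-liner; in this sketch, `g` is the current gradient of F̃, `delta` the shrinking threshold, and only the first n_b (bound-constrained) coordinates are eligible for removal; all names are illustrative.

```python
import numpy as np

# Freeze coordinates clamped at their bound with strongly positive gradient;
# free (unbounded) coordinates beyond n_b always stay active.
frozen = np.zeros(x.size, dtype=bool)
frozen[:n_b] = (x[:n_b] == 0.0) & (g[:n_b] > delta)
active = np.flatnonzero(~frozen)    # re-initialized after each dual update (7)
```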
4.3 Dealing with Decomposable Constraint Matrix
When we have an m × n constraint matrix A = U V^T that decomposes into the product of an
m × r matrix U and an r × n matrix V^T, and r ≪ min{m, n} or nnz(U) + nnz(V) ≪ nnz(A), we
can reformulate the constraint Ax ≤ b as U z ≤ b, V^T x = z, with auxiliary variables z ∈ R^r.
This new representation reduces the cost of the Hessian-vector product in Algorithm 3 and the cost of
each pass of CD in Algorithm 2 from O(nnz(A)) to O(nnz(U) + nnz(V)).
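The saving comes from never forming U V^T explicitly; a minimal sketch of the two matrix-vector products that every CD or CG step needs under this reformulation:

```python
import numpy as np

def Ax(U, V, x):
    """Compute (U V^T) x in O(nnz(U) + nnz(V)) without forming A."""
    return U @ (V.T @ x)

def ATy(U, V, y):
    """Compute (U V^T)^T y = V (U^T y) at the same cost."""
    return V @ (U.T @ y)
```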
5 Numerical Experiments
Table 1: Timing Results (in sec. unless specified o.w.) on Multiclass L1-regularized SVM
Data         n_b          m_I       P-Simp.   D-Simp.   Barrier   D-ALCD   P-ALCD
rcv1         4,833,738    778,200   > 48hr    > 48hr    > 48hr    3,452    3,155
news         2,498,415    302,765   > 48hr    37,912    > 48hr    148      395
sector       11,597,992   666,848   > 48hr    9,282     > 48hr    1,419    2,029
mnist        75,620       540,000   6,454     2,556     73,036    146      7,207
cod-rna.rf   69,537       59,535    86,130    5,738     > 48hr    3,130    2,676
vehicle      79,429       157,646   3,296     143.33    8,858     31       598
real-sim     114,227      72,309    > 48hr    49,405    89,476    179      297
Table 2: Timing Results (in sec. unless specified o.w.) on Sparse Inverse Covariance Estimation
Data       n_b      m_I      m_E      n_f      P-Simp   D-Simp   Barrier   D-ALCD   P-ALCD
textmine   60,876   60,876   43,038   43,038   > 48hr   > 48hr   > 48hr    43,096   18,507
E2006      55,834   55,834   32,174   32,174   > 48hr   > 48hr   94,623    > 48hr   4,207
dorothea   47,232   47,232   1,600    1,600    3,980    103      82        47       38
Table 3: Timing Results (in sec. unless specified o.w.) for Nonnegative Matrix Factorization.
Data        n_b         m_I          P-Simp.   D-Simp.   Barrier   D-ALCD   P-ALCD
micromass   2,896,770   4,107,438    > 96hr    > 96hr    280,230   12,966   12,119
ocr         6,639,433   13,262,864   > 96hr    > 96hr    284,530   40,242   > 96hr
In this section, we compare the AL-CD algorithm with state-of-the-art implementations of interior
point and primal, dual Simplex methods in the commercial LP solver CPLEX, which is among the most efficient
LP solvers as investigated in [30]. For all experiments, the stopping criterion requires
both primal and dual infeasibility (in the ℓ∞-norm) to be smaller than 10⁻³, and we set the initial
subproblem tolerance ε_t = 10⁻² and η_t = 1. The LP instances are generated from L1-SVM (3), Sparse
Inverse Covariance Estimation (4), and Nonnegative Matrix Factorization [3]. For the Sparse Inverse
Covariance Estimation problem, we use the technique introduced in Section 4.3 to decompose the low-rank matrix S, and since (4) results in d independent problems, one for each column of the estimated
matrix, we report results on only one of them. The data sources and statistics are included in the
appendix.
Among all experiments, we observe that the proposed primal and dual AL-CD methods become particularly advantageous when the matrix A is sparse. For example, for the text data sets rcv1, real-sim and
news in Table 1, the matrix A is particularly sparse and AL-CD can be orders of magnitude faster
than other approaches by avoiding solving an n × n linear system exactly. In addition, dual-ALCD
(and dual simplex) is more efficient on the L1-SVM problem due to the problem's strong dual sparsity,
while primal-ALCD is more efficient on the primal-sparse Inverse Covariance Estimation problem. For the Nonnegative Matrix Factorization problem, both the dual and primal LP solutions are
not particularly sparse due to the choice of matrix-approximation tolerance (1% of #samples), but
the AL-CD approach is still comparably more efficient.
Acknowledgement We acknowledge the support of ARO via W911NF-12-1-0390, and the support
of NSF via grants CCF-1320746, CCF-1117055, IIS-1149803, IIS-1320894, IIS-1447574, DMS1264033, and NIH via R01 GM117594-01 as part of the Joint DMS/NIGMS Initiative to Support
Research at the Interface of the Biological and Mathematical Sciences.
References
[1] J. Zhu, S. Rosset, T. Hastie, and R. Tibshirani. 1-norm support vector machines. NIPS, 2004.
[2] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[3] N. Gillis and R. Luce. Robust near-separable nonnegative matrix factorization using linear optimization. JMLR, 2014.
[4] A. Nellore and R. Ward. Recovery guarantees for exemplar-based clustering. arXiv, 2013.
[5] I. Yen, X. Lin, K. Zhong, P. Ravikumar, and I. Dhillon. A convex exemplar-based approach to MAD-Bayes Dirichlet process mixture models. In ICML, 2015.
[6] M. Yuan. High dimensional inverse covariance matrix estimation via linear programming. JMLR, 2010.
[7] D. Bello and G. Riano. Linear programming solvers for Markov decision processes. In Systems and Information Engineering Design Symposium, pages 90–95, 2006.
[8] T. Joachims. Training linear SVMs in linear time. In KDD. ACM, 2006.
[9] C. Hsieh, K. Chang, C. Lin, S.S. Keerthi, and S. Sundararajan. A dual coordinate descent method for large-scale linear SVM. In ICML, volume 307. ACM, 2008.
[10] G. Yuan, K. Chang, C. Hsieh, and C. Lin. A comparison of optimization methods and software for large-scale l1-regularized linear classification. JMLR, 11, 2010.
[11] J. Nocedal and S.J. Wright. Numerical Optimization. Springer, 2006.
[12] J. Gondzio. Interior point methods 25 years later. EJOR, 2012.
[13] J. Gondzio. Matrix-free interior point method. Computational Optimization and Applications, 2012.
[14] V. Eleuterio and D. Lucia. Finding approximate solutions for large scale linear programs. Thesis, 2009.
[15] Yu.G. Evtushenko, A.I. Golikov, and N. Mollaverdy. Augmented Lagrangian method for large-scale linear programming problems. Optimization Methods and Software, 20(4-5):515–524, 2005.
[16] F. Delbos and J.C. Gilbert. Global linear convergence of an augmented Lagrangian algorithm for solving convex quadratic optimization problems. 2003.
[17] O. Güler. Augmented Lagrangian algorithms for linear programming. Journal of Optimization Theory and Applications, 75(3):445–470, 1992.
[18] J.J. Moré and G. Toraldo. On the solution of large quadratic programming problems with bound constraints. SIAM Journal on Optimization, 1(1):93–113, 1991.
[19] M. Hong and Z. Luo. On linear convergence of the alternating direction method of multipliers. arXiv, 2012.
[20] H. Wang, A. Banerjee, and Z. Luo. Parallel direction method of multipliers. In NIPS, 2014.
[21] C. Chen, B. He, Y. Ye, and X. Yuan. The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent. Mathematical Programming, 2014.
[22] I. Dhillon, P. Ravikumar, and A. Tewari. Nearest neighbor based greedy coordinate descent. In NIPS, 2011.
[23] I. Yen, C. Chang, T. Lin, S. Lin, and S. Lin. Indexed block coordinate descent for large-scale linear classification with limited memory. In KDD. ACM, 2013.
[24] I. Yen, S. Lin, and S. Lin. A dual-augmented block minimization framework for learning with limited memory. In NIPS, 2015.
[25] K. Zhong, I. Yen, I. Dhillon, and P. Ravikumar. Proximal quasi-Newton for computationally intensive l1-regularized m-estimators. In NIPS, 2014.
[26] P. Richtárik and M. Takáč. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Mathematical Programming, 144(1-2):1–38, 2014.
[27] Y. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341–362, 2012.
[28] P. Wang and C. Lin. Iteration complexity of feasible descent methods for convex optimization. The Journal of Machine Learning Research, 15(1):1523–1548, 2014.
[29] I. Yen, C. Hsieh, P. Ravikumar, and I.S. Dhillon. Constant nullspace strong convexity and fast convergence of proximal methods under high-dimensional settings. In NIPS, 2014.
[30] B. Meindl and M. Templ. Analysis of commercial and free and open source solvers for linear optimization problems. Eurostat and Statistics Netherlands, 2012.
[31] A.J. Hoffman. On approximate solutions of systems of linear inequalities. Journal of Research of the National Bureau of Standards, 49(4):263–265, 1952.
[32] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, 2007.
[33] I. Yen, T. Lin, S. Lin, P. Ravikumar, and I. Dhillon. Sparse random feature algorithm as coordinate descent in Hilbert space. In NIPS, 2014.
5,433 | 5,918 | Convergence rates of sub-sampled Newton methods
Murat A. Erdogdu
Department of Statistics
Stanford University
erdogdu@stanford.edu
Andrea Montanari
Department of Statistics
and Electrical Engineering
Stanford University
montanari@stanford.edu
Abstract
We consider the problem of minimizing a sum of n functions via projected iterations onto a convex parameter set C ⊆ R^p, where n ≫ p ≫ 1. In this regime,
algorithms which utilize sub-sampling techniques are known to be effective. In
this paper, we use sub-sampling techniques together with low-rank approximation
to design a new randomized batch algorithm which possesses comparable convergence rate to Newton's method, yet has much smaller per-iteration cost. The
proposed algorithm is robust in terms of starting point and step size, and enjoys
a composite convergence rate, namely, quadratic convergence at start and linear
convergence when the iterate is close to the minimizer. We develop its theoretical
analysis which also allows us to select near-optimal algorithm parameters. Our
theoretical results can be used to obtain convergence rates of previously proposed
sub-sampling based algorithms as well. We demonstrate how our results apply to
well-known machine learning problems. Lastly, we evaluate the performance of
our algorithm on several datasets under various scenarios.
1 Introduction
We focus on the following minimization problem,
$$\text{minimize} \;\; f(\theta) := \frac{1}{n}\sum_{i=1}^{n} f_i(\theta), \tag{1.1}$$
where f_i : R^p → R. Most machine learning models can be expressed as above, where each function
fi corresponds to an observation. Examples include logistic regression, support vector machines,
neural networks and graphical models.
Many optimization algorithms have been developed to solve the above minimization problem
[Bis95, BV04, Nes04]. For a given convex set C ⊆ R^p, we denote the Euclidean projection onto this
set by P_C. We consider updates of the form
$$\hat{\theta}^{t+1} = P_C\Big(\hat{\theta}^{t} - \eta_t Q^t \nabla_\theta f(\hat{\theta}^{t})\Big), \tag{1.2}$$
where η_t is the step size and Q^t is a suitable scaling matrix that provides curvature information.
Updates of the form Eq. (1.2) have been extensively studied in the optimization literature (for simplicity, we assume C = R^p throughout the introduction). The case where Q^t is equal to the identity
matrix corresponds to Gradient Descent (GD), which, under smoothness assumptions, achieves a linear convergence rate with O(np) per-iteration cost. More precisely, GD with ideal step size yields
$\|\hat{\theta}^{t+1} - \theta^*\|_2 \le \xi^{t}_{1,\mathrm{GD}}\,\|\hat{\theta}^{t} - \theta^*\|_2$, where $\lim_{t\to\infty}\xi^{t}_{1,\mathrm{GD}} = 1 - (\lambda_p/\lambda_1)$, and λ_i is the i-th largest
eigenvalue of the Hessian of f(θ) at the minimizer θ*.
Second order methods such as Newton's Method (NM) and Natural Gradient Descent (NGD)
[Ama98] can be recovered by taking Q^t to be the inverse Hessian and the inverse Fisher information evaluated at the current iterate, respectively. Such methods may achieve quadratic convergence rates with
O(np² + p³) per-iteration cost [Bis95, Nes04]. In particular, for t large enough, Newton's method
yields $\|\hat{\theta}^{t+1} - \theta^*\|_2 \le \xi_{2,\mathrm{NM}}\,\|\hat{\theta}^{t} - \theta^*\|_2^2$, and it is insensitive to the condition number of the Hessian.
However, when the number of samples grows large, computing Qt becomes extremely expensive.
A popular line of research tries to construct the matrix Qt in a way that the update is computationally feasible, yet still provides sufficient second order information. Such attempts resulted in
Quasi-Newton methods, in which only gradients and iterates are utilized, resulting in an efficient update on Qt . A celebrated Quasi-Newton method is the Broyden-Fletcher-Goldfarb-Shanno (BFGS)
algorithm which requires O(np + p2 ) per-iteration cost [Bis95, Nes04].
An alternative approach is to use sub-sampling techniques, where the scaling matrix Q^t is based on
a randomly selected set of data points [Mar10, BCNN11, VP12, Erd15]. Sub-sampling is widely
used in the first order methods, but is not as well studied for approximating the scaling matrix. In
particular, theoretical guarantees are still missing.
A key challenge is that the sub-sampled Hessian is close to the actual Hessian along the directions
corresponding to large eigenvalues (large curvature directions in f(θ)), but is a poor approximation
in the directions corresponding to small eigenvalues (flatter directions in f(θ)). In order to overcome
this problem, we use low-rank approximation. More precisely, we treat all the eigenvalues below
the r-th as if they were equal to the (r + 1)-th. This yields the desired stability with respect to the
sub-sample: we call our algorithm NewSamp. In this paper, we establish the following:
  1. NewSamp has a composite convergence rate: quadratic at start and linear near the minimizer, as illustrated in Figure 1. Formally, we prove a bound of the form $\|\hat{\theta}^{t+1} - \theta^*\|_2 \le \xi_1^t\,\|\hat{\theta}^{t} - \theta^*\|_2 + \xi_2^t\,\|\hat{\theta}^{t} - \theta^*\|_2^2$ with coefficients that are explicitly given (and are computable from data).
  2. The asymptotic behavior of the linear convergence coefficient is $\lim_{t\to\infty}\xi_1^t = 1 - (\lambda_p/\lambda_{r+1}) + \delta$, for δ small. The condition number $(\lambda_1/\lambda_p)$, which controls the convergence of GD, has been replaced by the milder $(\lambda_{r+1}/\lambda_p)$. For datasets with strong spectral features, this can be a large improvement, as shown in Figure 1.
  3. The above results are achieved without tuning the step size, in particular, by setting η_t = 1.
  4. The complexity per iteration of NewSamp is O(np + |S|p²), with |S| the sample size.
  5. Our theoretical results can be used to obtain convergence rates of previously proposed sub-sampling algorithms.
The rest of the paper is organized as follows: Section 1.1 surveys the related work. In Section 2,
we describe the proposed algorithm and provide the intuition behind it. Next, we present our theoretical results in Section 3, i.e., convergence rates corresponding to different sub-sampling schemes,
followed by a discussion on how to choose the algorithm parameters. Two applications of the algorithm are discussed in Section 4. We compare our algorithm with several existing methods on
various datasets in Section 5. Finally, in Section 6, we conclude with a brief discussion.
1.1 Related Work
Even a synthetic review of optimization algorithms for large-scale machine learning would go beyond the page limits of this paper. Here, we emphasize that the method of choice depends crucially
on the amount of data to be used, and their dimensionality (i.e., respectively, on the parameters n
and p). In this paper, we focus on a regime in which n and p are large but not so large as to make
gradient computations (of order np) and matrix manipulations (of order p3 ) prohibitive.
Online algorithms are the option of choice for very large n since the computation per update is
independent of n. In the case of Stochastic Gradient Descent (SGD), the descent direction is formed
by a randomly selected gradient. Improvements to SGD have been developed by incorporating the
previous gradient directions in the current update equation [SRB13, Bot10, DHS11].
Batch algorithms, on the other hand, can achieve faster convergence and exploit second order information. They are competitive for intermediate n. Several methods in this category aim at quadratic,
or at least super-linear convergence rates. In particular, Quasi-Newton methods have proven effective [Bis95, Nes04]. Another approach towards the same goal is to utilize sub-sampling to form an
approximate Hessian [Mar10, BCNN11, VP12, Erd15]. If the sub-sampled Hessian is close to the
true Hessian, these methods can approach NM in terms of convergence rate; nevertheless, they enjoy much smaller complexity per update. No convergence rate analysis is available for these methods: this analysis is the main contribution of our paper. To the best of our knowledge, the best result in this direction is proven in [BCNN11], which establishes asymptotic convergence without quantitative bounds (exploiting general theory from [GNS09]).
Regarding further improvements of sub-sampling algorithms, a common approach is to use Conjugate Gradient (CG) methods and/or Krylov sub-spaces [Mar10, BCNN11, VP12]. Lastly, there are various hybrid algorithms that combine two or more techniques to increase the performance. Examples include sub-sampling and Quasi-Newton [BHNS14], SGD and GD [FS12], NGD and NM [LRF10], and NGD and low-rank approximation [LRMB08].

Algorithm 1 NewSamp
Input: θ̂_0, r, ε, {η_t}_t, t = 0.
1. Define: P_C(θ) = argmin_{θ′ ∈ C} ‖θ − θ′‖_2, the Euclidean projection onto C; [U_k, Λ_k] = TruncatedSVD_k(H), the rank-k truncated SVD of H with Λ_ii = λ_i.
2. while ‖θ̂_{t+1} − θ̂_t‖_2 ≥ ε do
     Sub-sample a set of indices S_t ⊂ [n].
     Let H_{S_t} = (1/|S_t|) Σ_{i ∈ S_t} ∇²_θ f_i(θ̂_t), and [U_{r+1}, Λ_{r+1}] = TruncatedSVD_{r+1}(H_{S_t}).
     Q_t = (1/λ_{r+1}) I_p + U_r (Λ_r^{-1} − (1/λ_{r+1}) I_r) U_r^T,
     θ̂_{t+1} = P_C(θ̂_t − η_t Q_t ∇_θ f(θ̂_t)),
     t ← t + 1.
3. end while
Output: θ̂_t.
2 NewSamp: Newton-Sampling method via rank thresholding
In the regime we consider, n ≫ p, there are two main drawbacks associated with the classical
second order methods such as Newton's method. The dominant issue is the computation of the Hessian matrix, which requires O(np²) operations, and the other issue is inverting the Hessian, which requires O(p³) computation. Sub-sampling is an effective and efficient way of tackling the first issue. Recent empirical studies show that sub-sampling the Hessian provides significant improvement in terms of computational cost, yet preserves the fast convergence rate of second order methods [Mar10, VP12]. If a uniform sub-sample is used, the sub-sampled Hessian will be a random matrix with expected value at the true Hessian, which can be considered as a sample estimator of the mean. Recent advances in statistics have shown that the performance of various estimators can be significantly improved by simple procedures such as shrinkage and/or thresholding [CCS10, DGJ13]. To this extent, we use low-rank approximation, as the important second order information is generally contained in the largest few eigenvalues/vectors of the Hessian.
NewSamp is presented as Algorithm 1. At iteration step t, the sub-sampled set of indices, its size and the corresponding sub-sampled Hessian are denoted by S_t, |S_t| and H_{S_t}, respectively. Assuming that the functions f_i are convex, the eigenvalues of the symmetric matrix H_{S_t} are non-negative. Therefore, SVD and eigenvalue decomposition coincide. The operation TruncatedSVD_k(H_{S_t}) = [U_k, Λ_k] is the best rank-k approximation, i.e., it takes H_{S_t} as input and returns the largest k eigenvalues Λ_k ∈ R^{k×k} with the corresponding k eigenvectors U_k ∈ R^{p×k}. This procedure requires O(kp²) computation [HMT11]. The operator P_C projects the current iterate onto the feasible set C using Euclidean projection. We assume that this projection can be done efficiently. To construct the curvature matrix [Q_t]^{-1}, instead of using the basic rank-r approximation, we fill its 0 eigenvalues with the (r+1)-th eigenvalue of the sub-sampled Hessian, which is the largest eigenvalue below the threshold. If we compute a truncated SVD with k = r+1 and Λ_ii = λ_i, the described operation results in
Q_t = (1/λ_{r+1}) I_p + U_r (Λ_r^{-1} − (1/λ_{r+1}) I_r) U_r^T,   (2.1)
which is simply the sum of a scaled identity matrix and a rank-r matrix. Note that the low-rank approximation that is suggested to improve the curvature estimation has been further utilized to reduce the cost of computing the inverse matrix. The final per-iteration cost of NewSamp is O(np + (|S_t| + r)p²) ≈ O(np + |S_t|p²). NewSamp takes the parameters {η_t, |S_t|}_t and r as inputs. We discuss in Section 3.4 how to choose them optimally, based on the theory in Section 3.
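To make the per-iteration structure concrete, the following is a minimal NumPy sketch of one NewSamp step; it is an illustration, not the authors' reference implementation. It forms Q_t explicitly and uses a full eigendecomposition for clarity, whereas the algorithm only needs a rank-(r+1) truncated SVD at O((r+1)p²) cost; the callbacks grad_f and hess_i are assumed to be supplied by the user.

import numpy as np

def newsamp_step(theta, grad_f, hess_i, n, sample_size, r, eta=1.0):
    """One NewSamp iteration for the unconstrained case (P_C = identity)."""
    # Scheme S1: a fresh uniform sub-sample of indices.
    S = np.random.choice(n, size=sample_size, replace=False)
    H_S = sum(hess_i(theta, i) for i in S) / sample_size  # sub-sampled Hessian

    # Eigendecomposition in descending order (H_S is symmetric PSD, so
    # SVD and eigendecomposition coincide); a truncated SVD would suffice.
    lam, U = np.linalg.eigh(H_S)
    lam, U = lam[::-1], U[:, ::-1]
    lam_top, U_top = lam[:r], U[:, :r]
    lam_thresh = lam[r]  # the (r+1)-th eigenvalue

    # Q_t = (1/lam_{r+1}) I_p + U_r (Lam_r^{-1} - (1/lam_{r+1}) I_r) U_r^T, Eq. (2.1)
    p = theta.shape[0]
    Q = np.eye(p) / lam_thresh \
        + U_top @ np.diag(1.0 / lam_top - 1.0 / lam_thresh) @ U_top.T
    return theta - eta * Q @ grad_f(theta)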
[Figure 1 appears here: left panel "Convergence Rate" plots log(Error) vs. iterations for sub-sample sizes |S_t| = 100, 200, 500; right panel "Convergence Coefficients" plots the values of the linear and quadratic coefficients vs. the number of kept eigenvalues (rank).]
Figure 1: The left plot demonstrates the convergence rate of NewSamp, which starts with a quadratic rate and transitions into linear convergence near the true minimizer. The right plot shows the effect of eigenvalue thresholding on the convergence coefficients, up to a scaling constant; the x-axis shows the number of kept eigenvalues. Plots are obtained using the Covertype dataset.
By the construction of Q_t, NewSamp will always be a descent algorithm. It enjoys a quadratic convergence rate at the start, which transitions into a linear rate in the neighborhood of the minimizer. This behavior can be observed in Figure 1. The left plot in Figure 1 shows the convergence behavior of NewSamp over different sub-sample sizes. We observe that large sub-samples result in better convergence rates, as expected. As the sub-sample size increases, the slope of the linear phase decreases, getting closer to that of the quadratic phase. We will explain this phenomenon in Section 3, through Theorems 3.2 and 3.3. The right plot in Figure 1 demonstrates how the coefficients of the two phases depend on the thresholded rank. The coefficient of the quadratic phase increases with the rank threshold, whereas for the linear phase the relation is reversed.
3 Theoretical results
In this section, we provide the convergence analysis of NewSamp based on two different sub-sampling schemes, illustrated in the sketch below:
S1: Independent sub-sampling: At each iteration t, S_t is uniformly sampled from [n] = {1, 2, ..., n}, independently from the sets {S_τ}_{τ<t}, with or without replacement.
S2: Sequentially dependent sub-sampling: At each iteration t, S_t is sampled from [n], based on a distribution which might depend on the previous sets {S_τ}_{τ<t}, but not on any randomness in the data.
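As a small illustration of the two schemes (a sketch; the fixed sub-sample is only the simplest S2 instance):

import numpy as np

rng = np.random.default_rng(0)

def sample_S1(n, size):
    # Scheme S1: a fresh uniform sub-sample at every iteration,
    # independent of all previously drawn sets.
    return rng.choice(n, size=size, replace=False)

def make_S2_fixed(n, size):
    # One instance of scheme S2: the sub-sample is drawn once and then
    # reused at every iteration (sequentially dependent, but independent
    # of any randomness in the data).
    S = rng.choice(n, size=size, replace=False)
    return lambda: S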
The first sub-sampling scheme is simple and commonly used in optimization. One drawback is
that the sub-sampled set at the current iteration is independent of the previous sub-samples, hence
does not consider which of the samples were previously used to form the approximate curvature
information. In order to prevent cycles and obtain better performance near the optimum, one might
want to increase the sample size as the iteration advances [Mar10], including previously unused
samples. This process results in a sequence of dependent sub-samples which falls into the sub-sampling scheme S2. In our theoretical analysis, we make the following assumptions:
Assumption 1 (Lipschitz continuity). For any subset S ⊂ [n], there exists a constant M_{|S|}, depending on the size of S, such that ∀θ, θ′ ∈ C,
‖H_S(θ) − H_S(θ′)‖_2 ≤ M_{|S|} ‖θ − θ′‖_2.
Assumption 2 (Bounded Hessian). ∀i ∈ [n], ∇²_θ f_i(θ) is upper bounded by a constant K, i.e.,
max_{i≤n} ‖∇²_θ f_i(θ)‖_2 ≤ K.
3.1 Independent sub-sampling
In this section, we assume that S_t ⊂ [n] is sampled according to the sub-sampling scheme S1. In fact, many stochastic algorithms assume that S_t is a uniform subset of [n], because in this case the sub-sampled Hessian is an unbiased estimator of the full Hessian. That is, ∀θ ∈ C, E[H_{S_t}(θ)] = H_{[n]}(θ), where the expectation is over the randomness in S_t. We next show that for any scaling matrix Q_t that is formed from the sub-samples S_t, iterations of the form Eq. (1.2) will have a composite convergence rate, i.e., a combination of a linear and a quadratic phase.
Lemma 3.1. Assume that the parameter set C is convex and S_t ⊂ [n] is based on sub-sampling scheme S1 and sufficiently large. Further, let Assumptions 1 and 2 hold and θ_* ∈ C. Then, for an absolute constant c > 0, with probability at least 1 − 2/p, the updates of the form Eq. (1.2) satisfy
‖θ̂_{t+1} − θ_*‖_2 ≤ ξ_1^t ‖θ̂_t − θ_*‖_2 + ξ_2^t ‖θ̂_t − θ_*‖_2^2,
for coefficients ξ_1^t and ξ_2^t defined as
ξ_1^t = ‖I − η_t Q^t H_{S_t}(θ̂_t)‖_2 + η_t cK ‖Q^t‖_2 √(log(p)/|S_t|),   ξ_2^t = η_t (M_n/2) ‖Q^t‖_2.
Remark 1. If the initial point θ̂_0 is close to θ_*, the algorithm will start with a quadratic rate of convergence, which will transform into a linear rate later, in the close neighborhood of the optimum.
The above lemma holds for any matrix Q_t. In particular, if we choose Q_t = H_{S_t}^{-1}, we obtain a bound for the simple sub-sampled Hessian method. In this case, the coefficients ξ_1^t and ξ_2^t depend on ‖Q_t‖_2 = 1/λ_p^t, where λ_p^t is the smallest eigenvalue of the sub-sampled Hessian. Note that λ_p^t can be arbitrarily small, which might blow up both of the coefficients. In the following, we will see how NewSamp remedies this issue.
Theorem 3.2. Let the assumptions in Lemma 3.1 hold. Denote by λ_i^t the i-th eigenvalue of H_{S_t}(θ̂_t), where θ̂_t is given by NewSamp at iteration step t. If the step size satisfies
η_t ≤ 2 / (1 + λ_p^t/λ_{r+1}^t),   (3.1)
then we have, with probability at least 1 − 2/p,
‖θ̂_{t+1} − θ_*‖_2 ≤ ξ_1^t ‖θ̂_t − θ_*‖_2 + ξ_2^t ‖θ̂_t − θ_*‖_2^2,
for an absolute constant c > 0, where the coefficients ξ_1^t and ξ_2^t are defined as
ξ_1^t = 1 − η_t λ_p^t/λ_{r+1}^t + η_t (cK/λ_{r+1}^t) √(log(p)/|S_t|),   ξ_2^t = η_t M_n/(2 λ_{r+1}^t).
NewSamp has a composite convergence rate where ξ_1^t and ξ_2^t are the coefficients of the linear and the quadratic terms, respectively (see the right plot in Figure 1). We observe that the sub-sampling size has a significant effect on the linear term, whereas the quadratic term is governed by the Lipschitz constant. We emphasize that the case η_t = 1 is feasible for the conditions of Theorem 3.2.
3.2 Sequentially dependent sub-sampling
Here, we assume that the sub-sampling scheme S2 is used to generate {S_τ}_{τ≥1}. The distributions of the sub-sampled sets may depend on each other, but not on any randomness in the dataset. Examples include fixed sub-samples as well as sub-samples of increasing size, sequentially covering unused data. In addition to Assumptions 1-2, we assume the following.
Assumption 3 (i.i.d. observations). Let z_1, z_2, ..., z_n ∈ Z be i.i.d. observations from a distribution D. For a fixed θ ∈ R^p and ∀i ∈ [n], we assume that the functions {f_i}_{i=1}^n satisfy f_i(θ) = φ(z_i, θ), for some function φ : Z × R^p → R.
Most statistical learning algorithms can be formulated as above, e.g., in classification problems one has access to i.i.d. samples {(y_i, x_i)}_{i=1}^n, where y_i and x_i denote the class label and the covariate, and φ measures the classification error (see Section 4 for examples). For sub-sampling scheme S2, an analogue of Lemma 3.1 is stated in the Appendix as Lemma B.1, which leads to the following result.
Theorem 3.3. Assume that the parameter set C is convex and S_t ⊂ [n] is based on the sub-sampling scheme S2. Further, let Assumptions 1, 2 and 3 hold, almost surely. Conditioned on the event E = {θ_* ∈ C}, if the step size satisfies Eq. 3.1, then for θ̂_t given by NewSamp at iteration t, with probability at least 1 − c_E e^{-p} for c_E = c/P(E), we have
‖θ̂_{t+1} − θ_*‖_2 ≤ ξ_1^t ‖θ̂_t − θ_*‖_2 + ξ_2^t ‖θ̂_t − θ_*‖_2^2,
for the coefficients ξ_1^t and ξ_2^t defined as
ξ_1^t = 1 − η_t λ_p^t/λ_{r+1}^t + η_t (c′K/λ_{r+1}^t) √( (p/|S_t|) log( diam(C)² (M_n + M_{|S_t|})² |S_t| / K² ) ),   ξ_2^t = η_t M_n/(2 λ_{r+1}^t),
where c, c′ > 0 are absolute constants and λ_i^t denotes the i-th eigenvalue of H_{S_t}(θ̂_t).
Compared to Theorem 3.2, we observe that the coefficient of the quadratic term does not change. This is due to Assumption 1. However, the bound on the linear term is worse, since we use the uniform bound over the convex parameter set C.
3.3 Dependence of coefficients on t and convergence guarantees
The coefficients ξ_1^t and ξ_2^t depend on the iteration step t, which is an undesirable aspect of the above results. However, these constants can be well approximated by their analogues ξ_1^* and ξ_2^* evaluated at the optimum, which are defined by simply replacing λ_j^t with λ_j^* in their definition, where the latter is the j-th eigenvalue of the full Hessian at θ_*. For the sake of simplicity, we only consider the case where the functions θ → f_i(θ) are quadratic.
Theorem 3.4. Assume that the functions f_i(θ) are quadratic, S_t is based on scheme S1 and η_t = 1. Let the full Hessian at θ_* be lower bounded by a constant k. Then for sufficiently large |S_t| and absolute constants c_1, c_2, with probability 1 − 2/p,
|ξ_1^t − ξ_1^*| ≤ c_1 K √(log(p)/|S_t|) / ( k (k − c_2 K √(log(p)/|S_t|)) ) =: δ.
Theorem 3.4 implies that, when the sub-sampling size is sufficiently large, ξ_1^t will concentrate around ξ_1^*. Generalizing the above theorem to non-quadratic functions is straightforward, in which case one would get additional terms involving the difference ‖θ̂_t − θ_*‖_2. In the case of scheme S2, if one uses fixed sub-samples, then the coefficient ξ_1^t does not depend on t. The following corollary gives a sufficient condition for convergence. A detailed discussion on the number of iterations until convergence and further local convergence properties can be found in [Erd15, EM15].
Corollary 3.5. Assume that ξ_1^t and ξ_2^t are well-approximated by ξ_1^* and ξ_2^* with an error bound of δ, i.e., ξ_i^t ≤ ξ_i^* + δ for i = 1, 2, as in Theorem 3.4. For the initial point θ̂_0, a sufficient condition for convergence is
‖θ̂_0 − θ_*‖_2 < (1 − ξ_1^* − δ) / (ξ_2^* + δ).
3.4 Choosing the algorithm parameters
Step size: Let δ = O(log(p)/|S_t|). We suggest the following step size for NewSamp at iteration t:
η_t(δ) = 2 / (1 + λ_p^t/λ_{r+1}^t + δ).   (3.2)
Note that η_t(0) is the upper bound in Theorems 3.2 and 3.3, and it minimizes the first component of ξ_1^t. The other terms in ξ_1^t and ξ_2^t depend linearly on η_t. To compensate for that, we shrink η_t(0) towards 1. Contrary to most algorithms, the optimal step size of NewSamp is larger than 1. A rigorous derivation of Eq. 3.2 can be found in [EM15].
Sample size: By Theorem 3.2, a sub-sample of size O((K/λ_p)² log(p)) should be sufficient to obtain a small coefficient for the linear phase. Also note that the sub-sample size |S_t| scales quadratically with the condition number.
Rank threshold: For a full Hessian with effective rank R (trace divided by the largest eigenvalue), it suffices to use O(R log(p)) samples [Ver10]. The effective rank is upper bounded by the dimension p. Hence, one can use p log(p) samples to approximate the full Hessian and choose a rank threshold which retains the important curvature information.
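These parameter choices are easy to evaluate in code; the following sketch computes the suggested step size (3.2), with the hidden O(log(p)/|S_t|) constant set to 1 purely for illustration.

import numpy as np

def newsamp_step_size(lam_p, lam_r1, sample_size, p):
    # eta_t(delta) = 2 / (1 + lam_p^t / lam_{r+1}^t + delta), Eq. (3.2),
    # with delta = O(log(p)/|S_t|); the hidden constant is taken to be 1 here.
    delta = np.log(p) / sample_size
    return 2.0 / (1.0 + lam_p / lam_r1 + delta)

# Example: with a well-conditioned sub-sampled Hessian, the suggested
# step size is larger than 1, as noted above.
print(newsamp_step_size(lam_p=0.5, lam_r1=1.0, sample_size=1000, p=100))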
4 Examples
4.1 Generalized Linear Models (GLM)
Maximum likelihood estimation in a GLM setting is equivalent to minimizing the negative log-likelihood ℓ(θ),
minimize_{θ∈C} f(θ) = (1/n) Σ_{i=1}^n [ Φ(⟨x_i, θ⟩) − y_i ⟨x_i, θ⟩ ],   (4.1)
where Φ is the cumulant generating function, x_i ∈ R^p denote the rows of the design matrix X ∈ R^{n×p}, and θ ∈ R^p is the coefficient vector. Here, ⟨x, θ⟩ denotes the inner product between the vectors x and θ. The function Φ defines the type of GLM, i.e., Φ(z) = z² gives ordinary least squares (OLS) and Φ(z) = log(1 + e^z) gives logistic regression (LR). Using the results from Section 3, we perform a convergence analysis of our algorithm on a GLM problem.
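For concreteness, the per-sample quantities NewSamp needs in the logistic-regression case, Φ(z) = log(1 + e^z) with y_i ∈ {0, 1}, can be sketched as follows; note that Φ″ ≤ 1/4, consistent with the boundedness assumption in Corollary 4.1 below.

import numpy as np

def logistic_f_i(theta, x_i, y_i):
    # f_i(theta) = Phi(<x_i, theta>) - y_i <x_i, theta>, Phi(z) = log(1 + e^z)
    z = x_i @ theta
    return np.logaddexp(0.0, z) - y_i * z

def logistic_grad_i(theta, x_i, y_i):
    # f_i'(theta) = (Phi'(z) - y_i) x_i, with Phi'(z) = sigmoid(z)
    z = x_i @ theta
    return (1.0 / (1.0 + np.exp(-z)) - y_i) * x_i

def logistic_hess_i(theta, x_i, y_i):
    # f_i''(theta) = Phi''(z) x_i x_i^T, with Phi''(z) = sigmoid(z)(1 - sigmoid(z))
    z = x_i @ theta
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s) * np.outer(x_i, x_i)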
Corollary 4.1. Let S_t ⊂ [n] be a uniform sub-sample, and C = R^p be the parameter set. Assume that the second derivative of the cumulant generating function, Φ⁽²⁾, is bounded by 1, and that it is Lipschitz continuous with Lipschitz constant L. Further, assume that the covariates are contained in a ball of radius √R_x, i.e., max_{i∈[n]} ‖x_i‖_2 ≤ √R_x. Then, for θ̂_t given by NewSamp with constant step size η_t = 1 at iteration t, with probability at least 1 − 2/p, we have
‖θ̂_{t+1} − θ_*‖_2 ≤ ξ_1^t ‖θ̂_t − θ_*‖_2 + ξ_2^t ‖θ̂_t − θ_*‖_2^2,
for constants ξ_1^t and ξ_2^t defined as
ξ_1^t = 1 − λ_p^t/λ_{r+1}^t + (cR_x/λ_{r+1}^t) √(log(p)/|S_t|),   ξ_2^t = L R_x^{3/2} / (2 λ_{r+1}^t),
where c > 0 is an absolute constant and λ_i^t is the i-th eigenvalue of H_{S_t}(θ̂_t).
4.2 Support Vector Machines (SVM)
A linear SVM provides a separating hyperplane which maximizes the margin, i.e., the distance between the hyperplane and the support vectors. Although the vast majority of the literature focuses on the dual problem [SS02], SVMs can be trained using the primal as well. Since the dual problem does not scale well with the number of data points (some approaches get O(n³) complexity), the primal might be better suited for optimization of linear SVMs [Cha07]. The primal problem for the linear SVM can be written as
minimize_{θ∈C} f(θ) = (1/2)‖θ‖_2^2 + (C/2) Σ_{i=1}^n ℓ(y_i, ⟨θ, x_i⟩),   (4.2)
where (y_i, x_i) denote the data samples, θ defines the separating hyperplane, C > 0, and ℓ could be any loss function. The most commonly used loss functions include the Hinge-p loss, the Huber loss and their smoothed versions [Cha07]. Smoothing or approximating such losses with more stable functions is sometimes crucial in optimization. In the case of NewSamp, which requires the loss function to be twice differentiable (almost everywhere), we suggest either the smoothed Huber loss or the Hinge-2 loss [Cha07]. In the case of the Hinge-2 loss, i.e., ℓ(y, ⟨θ, x⟩) = max{0, 1 − y⟨θ, x⟩}², by combining the offset and the normal vector of the hyperplane into a single parameter vector θ, and denoting by SV_t the set of indices of all the support vectors at iteration t, we may write the Hessian
∇²_θ f(θ) = (1/|SV_t|) { I + C Σ_{i∈SV_t} x_i x_i^T },   where   SV_t = {i : y_i ⟨θ_t, x_i⟩ < 1}.
When |SV_t| is large, the problem falls into our setup and can be solved efficiently using NewSamp. Note that, unlike in the GLM setting, the Lipschitz condition of our theorems does not apply here. However, we empirically demonstrate that NewSamp works regardless of such assumptions.
5 Experiments
In this section, we validate the performance of NewSamp through numerical studies. We experimented on two optimization problems, namely Logistic Regression (LR) and SVM. LR minimizes Eq. 4.1 for the logistic function, whereas SVM minimizes Eq. 4.2 for the Hinge-2 loss. In the following, we briefly describe the algorithms that are used in the experiments:
1. Gradient Descent (GD), at each iteration, takes a step proportional to the negative of the full gradient evaluated at the current iterate. Under certain regularity conditions, GD exhibits a linear convergence rate.
2. Accelerated Gradient Descent (AGD) was proposed by Nesterov [Nes83]; it improves over gradient descent by using a momentum term.
3. Newton's Method (NM) achieves a quadratic convergence rate by utilizing the inverse Hessian evaluated at the current iterate.
4. Broyden-Fletcher-Goldfarb-Shanno (BFGS) is the most popular and stable Quasi-Newton method. Q_t is formed by accumulating the information from iterates and gradients.
5. Limited Memory BFGS (L-BFGS) is a variant of BFGS which uses only the recent iterates and gradients to construct Q_t, providing an improvement in terms of memory usage.
6. Stochastic Gradient Descent (SGD) is a simplified version of GD where, at each iteration, a randomly selected gradient is used. We follow the guidelines of [Bot10] for the step size.
[Figure 2 appears here: a 2×3 grid of plots of log(Error) vs. time (sec) for the datasets Synthetic, MSD and CT Slices; the top row shows Logistic Regression (rank = 3, 60, 60) and the bottom row SVM (rank = 3, 60, 60); methods compared: NewSamp, BFGS, LBFGS, Newton, GD, AGD, SGD, AdaGrad.]
Figure 2: Performance of several algorithms on different datasets. NewSamp is represented in red.
7. Adaptive Gradient Scaling (AdaGrad) uses an adaptive learning rate based on the previous gradients. AdaGrad significantly improves the performance and stability of SGD [DHS11].
For batch algorithms, we used a constant step size, and for all the algorithms, the step size that provides the fastest convergence was chosen. For stochastic algorithms, we optimized over the parameters that define the step size. Parameters of NewSamp were selected following the guidelines in Section 3.4. We experimented over the various datasets given in Table 1. Each dataset consists of a design matrix X ∈ R^{n×p} and the corresponding observations (classes) y ∈ R^n. The synthetic data is generated from a multivariate Gaussian distribution. As a methodological choice, we selected moderate values of p, for which Newton's method can still be implemented, and for which we can nevertheless demonstrate an improvement. For larger values of p, the comparison is even more favorable to our approach.
The effects of the sub-sampling size |S_t| and the rank threshold are demonstrated in Figure 1. A thorough comparison of the aforementioned optimization techniques is presented in Figure 2. In the case of LR, we observe that stochastic methods enjoy fast convergence at the start, but slow down after several epochs. The algorithm that comes closest to NewSamp in terms of performance is BFGS. In the case of SVM, NM is the closest algorithm to NewSamp. Note that the global convergence of BFGS is not better than that of GD [Nes04]. The condition for a super-linear rate is Σ_t ‖θ_t − θ_*‖_2 < ∞, for which an initial point close to the optimum is required [DM77]. This condition can rarely be satisfied in practice, which also affects the performance of other second order methods. For NewSamp, even though rank thresholding provides a level of robustness, we found that the initial point is still an important factor. Details about Figure 2 and additional experiments can be found in Appendix C.
Dataset     n        p    r   Reference
CT slices   53500    386  60  [GKS+11, Lic13]
Covertype   581012   54   20  [BD99, Lic13]
MSD         515345   90   60  [MEWL, Lic13]
Synthetic   500000   300  3   -
Table 1: Datasets used in the experiments.
6 Conclusion
In this paper, we proposed a sub-sampling based second order method utilizing low-rank Hessian estimation. The proposed method has the target regime n ≫ p and has O(np + |S|p²) complexity per iteration. We showed that the convergence rate of NewSamp is composite for two widely used sub-sampling schemes, i.e., it starts as quadratic convergence and transforms into linear convergence near the optimum. Convergence behavior under other sub-sampling schemes is an interesting line of research. Numerical experiments demonstrate the performance of the proposed algorithm, which we compared to classical optimization methods.
References
[Ama98] Shun-Ichi Amari, Natural gradient works efficiently in learning, Neural Computation 10 (1998).
[BCNN11] Richard H. Byrd, Gillian M. Chin, Will Neveitt, and Jorge Nocedal, On the use of stochastic Hessian information in optimization methods for machine learning, SIAM Journal on Optimization (2011).
[BD99] Jock A. Blackard and Denis J. Dean, Comparative accuracies of artificial neural networks and discriminant analysis in predicting forest cover types from cartographic variables, Computers and Electronics in Agriculture (1999).
[BHNS14] Richard H. Byrd, S. L. Hansen, Jorge Nocedal, and Yoram Singer, A stochastic quasi-Newton method for large-scale optimization, arXiv preprint arXiv:1401.7020 (2014).
[Bis95] Christopher M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, 1995.
[Bot10] Léon Bottou, Large-scale machine learning with stochastic gradient descent, COMPSTAT, 2010.
[BV04] Stephen Boyd and Lieven Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[CCS10] Jian-Feng Cai, Emmanuel J. Candès, and Zuowei Shen, A singular value thresholding algorithm for matrix completion, SIAM Journal on Optimization 20 (2010), no. 4, 1956-1982.
[Cha07] Olivier Chapelle, Training a support vector machine in the primal, Neural Computation (2007).
[DE15] Lee H. Dicker and Murat A. Erdogdu, Flexible results for quadratic forms with applications to variance components estimation, arXiv preprint arXiv:1509.04388 (2015).
[DGJ13] David L. Donoho, Matan Gavish, and Iain M. Johnstone, Optimal shrinkage of eigenvalues in the spiked covariance model, arXiv preprint arXiv:1311.0851 (2013).
[DHS11] John Duchi, Elad Hazan, and Yoram Singer, Adaptive subgradient methods for online learning and stochastic optimization, J. Mach. Learn. Res. 12 (2011), 2121-2159.
[DM77] John E. Dennis, Jr. and Jorge J. Moré, Quasi-Newton methods, motivation and theory, SIAM Review 19 (1977), 46-89.
[EM15] Murat A. Erdogdu and Andrea Montanari, Convergence rates of sub-sampled Newton methods, arXiv preprint arXiv:1508.02810 (2015).
[Erd15] Murat A. Erdogdu, Newton-Stein method: A second order method for GLMs via Stein's lemma, NIPS, 2015.
[FS12] Michael P. Friedlander and Mark Schmidt, Hybrid deterministic-stochastic methods for data fitting, SIAM Journal on Scientific Computing 34 (2012), no. 3, A1380-A1405.
[GKS+11] Franz Graf, Hans-Peter Kriegel, Matthias Schubert, Sebastian Pölsterl, and Alexander Cavallaro, 2D image registration in CT images using radial image descriptors, MICCAI 2011, Springer, 2011.
[GN10] David Gross and Vincent Nesme, Note on sampling without replacing from a finite collection of matrices, arXiv preprint arXiv:1001.2738 (2010).
[GNS09] Igor Griva, Stephen G. Nash, and Ariela Sofer, Linear and Nonlinear Optimization, SIAM, 2009.
[HMT11] Nathan Halko, Per-Gunnar Martinsson, and Joel A. Tropp, Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions, SIAM Review 53 (2011), no. 2, 217-288.
[Lic13] M. Lichman, UCI machine learning repository, 2013.
[LRF10] Nicolas Le Roux and Andrew W. Fitzgibbon, A fast natural Newton method, ICML, 2010.
[LRMB08] Nicolas Le Roux, Pierre-Antoine Manzagol, and Yoshua Bengio, Topmoumoute online natural gradient algorithm, NIPS, 2008.
[Mar10] James Martens, Deep learning via Hessian-free optimization, ICML, 2010, pp. 735-742.
[MEWL] Thierry Bertin-Mahieux, Daniel P. W. Ellis, Brian Whitman, and Paul Lamere, The million song dataset, ISMIR-11.
[Nes83] Yurii Nesterov, A method for unconstrained convex minimization problem with the rate of convergence O(1/k²), Doklady AN SSSR, vol. 269, 1983, pp. 543-547.
[Nes04] Yurii Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, vol. 87, Springer, 2004.
[SRB13] Mark Schmidt, Nicolas Le Roux, and Francis Bach, Minimizing finite sums with the stochastic average gradient, arXiv preprint arXiv:1309.2388 (2013).
[SS02] Bernhard Schölkopf and Alexander J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, 2002.
[Tro12] Joel A. Tropp, User-friendly tail bounds for sums of random matrices, Foundations of Computational Mathematics (2012).
[Ver10] Roman Vershynin, Introduction to the non-asymptotic analysis of random matrices, arXiv:1011.3027 (2010).
[VP12] Oriol Vinyals and Daniel Povey, Krylov subspace descent for deep learning, AISTATS, 2012.
5,434 | 5,919 | Variance Reduced Stochastic Gradient Descent
with Neighbors
Aurelien Lucchi
Department of Computer Science
ETH Zurich, Switzerland
Thomas Hofmann
Department of Computer Science
ETH Zurich, Switzerland
Simon Lacoste-Julien
INRIA - Sierra Project-Team
?
Ecole
Normale Sup?erieure, Paris, France
Brian McWilliams
Department of Computer Science
ETH Zurich, Switzerland
Abstract
Stochastic Gradient Descent (SGD) is a workhorse in machine learning, yet its
slow convergence can be a computational bottleneck. Variance reduction techniques such as SAG, SVRG and SAGA have been proposed to overcome this
weakness, achieving linear convergence. However, these methods are either based
on computations of full gradients at pivot points, or on keeping per-data-point corrections in memory. Therefore, speed-ups relative to SGD may need a minimal
number of epochs in order to materialize. This paper investigates algorithms that
can exploit neighborhood structure in the training data to share and re-use information about past stochastic gradients across data points, which offers advantages
in the transient optimization phase. As a side-product we provide a unified convergence analysis for a family of variance reduction algorithms, which we call
memorization algorithms. We provide experimental results supporting our theory.
1 Introduction
We consider a general problem that is pervasive in machine learning, namely optimization of an empirical or regularized convex risk function. Given a convex loss l and a μ-strongly convex regularizer Ω, one aims at finding a parameter vector w which minimizes the (empirical) expectation:
w^* = argmin_w f(w),   f(w) = (1/n) Σ_{i=1}^n f_i(w),   f_i(w) := l(w, (x_i, y_i)) + Ω(w).   (1)
We assume throughout that each f_i has L-Lipschitz-continuous gradients. Steepest descent can find the minimizer w^*, but requires repeated computations of full gradients f′(w), which becomes prohibitive for massive data sets. Stochastic gradient descent (SGD) is a popular alternative, in particular in the context of large-scale learning [2, 10]. SGD updates only involve f_i′(w) for an index i chosen uniformly at random, providing an unbiased gradient estimate, since E f_i′(w) = f′(w).
It is a surprising recent finding [11, 5, 9, 6] that the finite sum structure of f allows for significantly faster convergence in expectation. Instead of the standard O(1/t) rate of SGD for strongly-convex functions, it is possible to obtain linear convergence with geometric rates. While SGD requires asymptotically vanishing learning rates, often chosen to be O(1/t) [7], these more recent methods introduce corrections that ensure convergence for constant learning rates.
Based on the work mentioned above, the contributions of our paper are as follows: First, we define a family of variance reducing SGD algorithms, called memorization algorithms, which includes SAGA and SVRG as special cases, and develop a unifying analysis technique for it. Second, we show geometric rates for all step sizes γ < 1/(4L), including a universal (μ-independent) step size choice, providing the first μ-adaptive convergence proof for SVRG. Third, based on the above analysis, we present new insights into the trade-offs between freshness and biasedness of the corrections computed from previous stochastic gradients. Fourth, we propose a new class of algorithms that resolves this trade-off by computing corrections based on stochastic gradients at neighboring points. We experimentally show its benefits in the regime of learning with a small number of epochs.
2 Memorization Algorithms
2.1 Algorithms
Variance Reduced SGD. Given an optimization problem as in (1), we investigate a class of stochastic gradient descent algorithms that generates an iterate sequence w^t (t ≥ 0) with updates taking the form:
w^+ = w − γ g_i(w),   g_i(w) = f_i′(w) − ᾱ_i   with   ᾱ_i := α_i − ᾱ,   (2)
where ᾱ := (1/n) Σ_{j=1}^n α_j. Here w is the current and w^+ the new parameter vector, γ is the step size, and i is an index selected uniformly at random. The ᾱ_i are variance correction terms such that E ᾱ_i = 0, which guarantees unbiasedness E g_i(w) = f′(w). The aim is to define updates of asymptotically vanishing variance, i.e. g_i(w) → 0 as w → w^*, which requires ᾱ_i → f_i′(w^*). This implies that corrections need to be designed in a way to exactly cancel out the stochasticity of f_i′(w^*) at the optimum. How the memory α_j is updated distinguishes the different algorithms that we consider.
SAGA. The SAGA algorithm [4] maintains variance corrections α_i by memorizing stochastic gradients. The update rule is α_i^+ = f_i′(w) for the selected i, and α_j^+ = α_j for j ≠ i. Note that these corrections will be used the next time the same index i gets sampled. Setting ᾱ_i := α_i − ᾱ guarantees unbiasedness. Obviously, ᾱ can be updated incrementally. SAGA reuses the stochastic gradient f_i′(w) computed at step t to update w as well as ᾱ_i.
q-SAGA. We also consider q-SAGA, a method that updates q ≥ 1 randomly chosen α_j variables at each iteration. This is a convenient reference point to investigate the advantages of "fresher" corrections. Note that in SAGA the corrections will be on average n iterations old. In q-SAGA this can be controlled to be n/q at the expense of additional gradient computations.
SVRG. We reformulate a variant of SVRG [5] in our framework using a randomization argument similar to (but simpler than) the one suggested in [6]. Fix q > 0 and draw in each iteration r ∼ Uniform[0; 1). If r < q/n, a complete update, α_j^+ = f_j′(w) (∀j), is performed, otherwise all α_j are left unchanged. While q-SAGA updates exactly q variables in each iteration, SVRG occasionally updates all α variables by triggering an additional sweep through the data. There is an option to not maintain the α variables explicitly and to save on space by storing only ᾱ = f′(w) and w.
Uniform Memorization Algorithms. Motivated by SAGA and SVRG, we define a class of algorithms, which we call uniform memorization algorithms.
Definition 1. A uniform q-memorization algorithm evolves iterates w according to Eq. (2) and selects in each iteration a random index set J of memory locations to update according to
α_j^+ := f_j′(w) if j ∈ J, and α_j^+ := α_j otherwise,   (3)
such that any j has the same probability q/n of being updated, i.e. ∀j, Σ_{J∋j} P{J} = q/n.
Note that q-SAGA and the above SVRG are special cases. For q-SAGA: P{J} = 1/(n choose q) if |J| = q, and P{J} = 0 otherwise. For SVRG: P{∅} = 1 − q/n, P{[1 : n]} = q/n, and P{J} = 0 otherwise.
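The following sketch implements the generic update (2)-(3); q-SAGA and the randomized SVRG above correspond to different choices of sample_J. The gradient oracle grad(j, w) and the in-place memory array alpha are assumptions of this illustration; a practical implementation would maintain the average ᾱ incrementally rather than recomputing it.

import numpy as np

def memorization_step(w, alpha, grad, gamma, sample_J, rng=np.random):
    """One step of a uniform q-memorization algorithm (Eqs. 2-3)."""
    n = alpha.shape[0]
    i = rng.randint(n)
    alpha_bar = alpha.mean(axis=0)  # maintain incrementally in practice
    g_i = grad(i, w)
    w_new = w - gamma * (g_i - alpha[i] + alpha_bar)  # Eq. (2)
    for j in sample_J(n):                             # Eq. (3)
        alpha[j] = g_i if j == i else grad(j, w)      # reuse f_i'(w) for j = i
    return w_new

# q-SAGA and SVRG as special cases of the memory-location sampler:
q_saga_J = lambda n, q=1: np.random.choice(n, size=q, replace=False)
svrg_J = lambda n, q=1: (range(n) if np.random.rand() < q / n else ())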
N-SAGA. Because we need it in Section 3, we will also define an algorithm, which we call N-SAGA, which makes use of a neighborhood system N_i ⊆ {1, ..., n} and which selects neighborhoods uniformly, i.e. P{N_i} = 1/n. Note that Definition 1 requires |{i : j ∈ N_i}| = q (∀j).
Finally, note that for generalized linear models, where f_i depends on x_i only through ⟨w, x_i⟩, we get f_i′(w) = ξ_i′(w) x_i, i.e. the update direction is determined by x_i, whereas the effective step length depends on the derivative of a scalar function ξ_i(w). As used in [9], this leads to significant memory savings, as one only needs to store the scalars ξ_i′(w); x_i is always given when performing an update.
2.2 Analysis
Recurrence of Iterates. The evolution equation (2) in expectation implies the recurrence (by crucially using the unbiasedness condition E g_i(w) = f′(w)):
E‖w^+ − w^*‖² = ‖w − w^*‖² − 2γ ⟨f′(w), w − w^*⟩ + γ² E‖g_i(w)‖².   (4)
Here and in the rest of this paper, expectations are always taken only with respect to i (conditioned on the past). We utilize a number of bounds (see [4]), which exploit strong convexity of f (wherever μ appears) as well as Lipschitz continuity of the f_i-gradients (wherever L appears):
⟨f′(w), w − w^*⟩ ≥ f(w) − f(w^*) + (μ/2) ‖w − w^*‖²,   (5)
E‖g_i(w)‖² ≤ 2 E‖f_i′(w) − f_i′(w^*)‖² + 2 E‖ᾱ_i − f_i′(w^*)‖²,   (6)
‖f_i′(w) − f_i′(w^*)‖² ≤ 2L h_i(w),   h_i(w) := f_i(w) − f_i(w^*) − ⟨w − w^*, f_i′(w^*)⟩,   (7)
E‖f_i′(w) − f_i′(w^*)‖² ≤ 2L f^δ(w),   f^δ(w) := f(w) − f(w^*),   (8)
E‖ᾱ_i − f_i′(w^*)‖² = E‖α_i − f_i′(w^*)‖² − ‖ᾱ‖² ≤ E‖α_i − f_i′(w^*)‖².   (9)
Eq. (6) can be generalized [4] using ‖x + y‖² ≤ (1 + β)‖x‖² + (1 + β^{-1})‖y‖² with β > 0. However, for the sake of simplicity, we sacrifice tightness and choose β = 1. Applying all of the above yields:
Lemma 1. For the iterate sequence of any algorithm that evolves solutions according to Eq. (2), the following holds for a single update step, in expectation over the choice of i:
‖w − w^*‖² − E‖w^+ − w^*‖² ≥ γμ ‖w − w^*‖² − 2γ² E‖α_i − f_i′(w^*)‖² + (2γ − 4γ²L) f^δ(w).
All proofs are deferred to the Appendix.
Ideal and Approximate Variance Correction. Note that in the ideal case of α_i = f_i′(w^*), we would immediately get a condition for a contraction by choosing γ = 1/(2L), yielding a rate of 1 − ρ with ρ = γμ = μ/(2L), which is half the inverse of the condition number κ := L/μ.
How can we further bound E‖α_i − f_i′(w^*)‖² in the case of "non-ideal" variance-reducing SGD? A key insight is that for memorization algorithms, we can apply the smoothness bound in Eq. (7):
‖α_i − f_i′(w^*)‖² = ‖f_i′(w^{τ_i}) − f_i′(w^*)‖² ≤ 2L h_i(w^{τ_i}),   (where w^{τ_i} is the old w at which α_i was last updated).   (10)
Note that if we only had approximations β_i in the sense that ‖α_i − β_i‖² ≤ ε_i (see Section 3), then we can use ‖x + y‖² ≤ 2‖x‖² + 2‖y‖² to get the somewhat worse bound:
‖β_i − f_i′(w^*)‖² ≤ 2‖α_i − f_i′(w^*)‖² + 2‖α_i − β_i‖² ≤ 4L h_i(w^{τ_i}) + 2ε_i.   (11)
Lyapunov Function. Ideally, we would like to show that for a suitable choice of γ, each iteration results in a contraction E‖w^+ − w^*‖² ≤ (1 − ρ)‖w − w^*‖², where 0 < ρ ≤ 1. However, the main challenge arises from the fact that the quantities α_i represent stochastic gradients from previous iterations. This requires a somewhat more complex proof technique. Adapting the Lyapunov function method from [4], we define upper bounds H_i ≥ ‖α_i − f_i′(w^*)‖² such that H_i → 0 as w → w^*. We start with α_i^0 = 0 and (conceptually) initialize H_i = ‖f_i′(w^*)‖², and then update H_i in sync with α_i:
H_i^+ := 2L h_i(w) if α_i is updated, and H_i^+ := H_i otherwise,   (12)
so that we always maintain valid bounds ‖α_i − f_i′(w^*)‖² ≤ H_i and E‖α_i − f_i′(w^*)‖² ≤ H̄, with H̄ := (1/n) Σ_{i=1}^n H_i. The H_i are quantities showing up in the analysis, but need not be computed. We now define a σ-parameterized family of Lyapunov functions¹
L_σ(w, H) := ‖w − w^*‖² + S σ H̄,   with S := (γn)/(Lq) and 0 ≤ σ ≤ 1.   (13)
¹This is a simplified version of the one appearing in [4], as we assume f′(w^*) = 0 (unconstrained regime).
In expectation under a random update, the Lyapunov function L_σ changes as E L_σ(w^+, H^+) = E‖w^+ − w^*‖² + S σ E H̄^+. We can readily apply Lemma 1 to bound the first part. The second part is due to (12), which mirrors the update of the α variables. By crucially using the property that any α_j has the same probability of being updated in (3), we get the following result:
Lemma 2. For a uniform q-memorization algorithm, it holds that
E H̄^+ = ((n − q)/n) H̄ + (2Lq/n) f^δ(w).   (14)
Note that in expectation the shrinkage does not depend on the location of previous iterates w^τ, and the new increment is proportional to the sub-optimality of the current iterate w. Technically, this is how the possibly complicated dependency on previous iterates is dealt with in an effective manner.
Convergence Analysis. We first state our main lemma about Lyapunov function contractions:
Lemma 3. Fix c ∈ (0; 1] and σ ∈ [0; 1] arbitrarily. For any uniform q-memorization algorithm with sufficiently small step size γ such that
γ ≤ (1/(2L)) min{ Kσ/(K + 2cσ), 1 − σ },   with K := (4qL)/(nμ),   (15)
we have that
E L_σ(w^+, H^+) ≤ (1 − ρ) L_σ(w, H),   with ρ := cγμ.   (16)
Note that γ < (1/(2L)) max_{σ∈[0,1]} min{σ, 1 − σ} = 1/(4L) (in the c → 0 limit).
By maximizing the bounds in Lemma 3 over the choices of c and σ, we obtain our main result that provides guaranteed geometric rates for all step sizes up to 1/(4L).
Theorem 1. Consider a uniform q-memorization algorithm. For any step size γ = a/(4L) with a < 1, the algorithm converges at a geometric rate of at least (1 − ρ(γ)) with
ρ(γ) = (q/n) (1 − a)/(1 − a/2) = (μ/(4L)) K (1 − a)/(1 − a/2), if γ ≥ γ*(K), and otherwise ρ(γ) = γμ,   (17)
where
γ*(K) := a*(K)/(4L),   a*(K) := 2K/(1 + K + √(1 + K²)),   K := (4qL)/(nμ) = (4q/n) κ.   (18)
We would like to provide more insights into this result.
Corollary 1. In Theorem 1, ρ is maximized for γ = γ*(K). We can write ρ*(K) = ρ(γ*) as
ρ*(K) = (μ/(4L)) a*(K) = (q/n) (a*(K)/K) = (q/n) · 2/(1 + K + √(1 + K²)).   (19)
In the big data regime, ρ* = (q/n)(1 − K/2 + O(K³)), whereas in the ill-conditioned case, ρ* = (μ/(4L))(1 − K^{-1}/2 + O(K^{-3})).
The guaranteed rate is bounded by μ/(4L) in the regime where the condition number dominates n (large K) and by q/n in the opposite regime of large data (small K). Note that if K ≤ 1, we have ρ* = β q/n with β ∈ [2/(2 + √2); 1] ≈ [0.585; 1]. So for q ≤ nμ/(4L), it pays off to increase freshness, as it affects the rate proportionally. In the ill-conditioned regime (κ > n), the influence of q vanishes.
Note that for γ ∈ [γ*(K); 1/(4L)) the rate decreases monotonically, yet the decrease is only minor. With the exception of a small neighborhood around 1/(4L), the entire range of γ ∈ [γ*; 1/(4L)) results in very similar rates. Underestimating γ*, however, leads to a (significant) slow-down by a factor γ/γ*.
As the optimal choice of γ depends on K, i.e. on μ, we would prefer step sizes that are μ-independent, thus giving rates that adapt to the local curvature (see [9]). It turns out that by choosing a step size that maximizes min_K ρ(γ)/ρ*(K), we obtain a K-agnostic step size with a rate off by at most 1/2:
Corollary 2. Choosing γ = (2 − √2)/(4L) leads to ρ(γ) ≥ (2 − √2) ρ*(K) > (1/2) ρ*(K) for all K.
To gain more insights into the trade-offs for these fixed large universal step sizes, the following corollary details the range of rates obtained:
Corollary 3. Choosing γ = a/(4L) with a < 1 yields ρ = min{ ((1 − a)/(1 − a/2)) (q/n), a μ/(4L) }. In particular, for the choice γ = 1/(5L), we have ρ = min{ (1/3)(q/n), (1/5)(μ/L) } (roughly matching the rate given in [4] for q = 1).
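These closed-form rates are easy to evaluate numerically; the following is a small sketch of K, γ*(K) and ρ*(K) from Theorem 1 and Corollary 1.

import numpy as np

def guaranteed_rate(n, q, L, mu):
    # K = 4qL/(n mu); gamma*(K) = a*(K)/(4L); rho*(K) = (q/n) * 2/(1+K+sqrt(1+K^2)).
    K = 4 * q * L / (n * mu)
    a_star = 2 * K / (1 + K + np.sqrt(1 + K ** 2))
    gamma_star = a_star / (4 * L)
    rho_star = (q / n) * 2 / (1 + K + np.sqrt(1 + K ** 2))
    return K, gamma_star, rho_star

# Big-data regime (K << 1): rho* approaches q/n.
print(guaranteed_rate(n=100_000, q=1, L=1.0, mu=0.1))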
3 Sharing Gradient Memory
3.1 ε-Approximation Analysis
As we have seen, fresher gradient memory, i.e. a larger choice for q, affects the guaranteed convergence rate as ρ ∼ q/n. However, as long as one step of a q-memorization algorithm is as expensive as q steps of a 1-memorization algorithm, this insight does not lead to practical improvements per se. Yet, it raises the question of whether we can accelerate these methods, in particular N-SAGA, by approximating the gradients stored in the α_i variables. Note that we are always using the correct stochastic gradients in the current update, and by assuring Σ_i ᾱ_i = 0, we will not introduce any bias in the update direction. Rather, we lose the guarantee of asymptotically vanishing variance at w^*. However, as we will show, it is possible to retain geometric rates up to an ε-ball around w^*.
We will focus on SAGA-style updates for concreteness and investigate an algorithm that mirrors N-SAGA, with the only difference that it maintains approximations β_i of the true α_i variables. We aim to guarantee E‖α_i − β_i‖² ≤ ε and will use Eq. (11) to modify the right-hand side of Lemma 1. We see that approximation errors ε_i are multiplied by γ², which implies that we should aim for small learning rates, ideally without compromising the N-SAGA rate. From Theorem 1 and Corollary 1 we can see that we can choose γ ≲ q/(μn) for n sufficiently large, which indicates that there is hope to dampen the effects of the approximations. We now make this argument more precise.
Theorem 2. Consider a uniform q-memorization algorithm with β-updates that are on average ε-accurate (i.e. E‖α_i − β_i‖² ≤ ε). For any step size γ ≤ γ̃(K), where γ̃ is given by Corollary 5 in the appendix (note that γ̃(K) ≥ (2/3) γ*(K) and γ̃(K) → γ*(K) as K → 0), we get
Ē L(w^t, H^t) ≤ (1 − μγ)^t L^0 + (4γε)/μ,   with L^0 := ‖w^0 − w^*‖² + s(γ) Ē‖f_i′(w^*)‖²,   (20)
where Ē denotes the (unconditional) expectation over histories (in contrast to E, which is conditional), and s(γ) := (4γ/(Kμ))(1 − 2Lγ).
Corollary 4. With γ = min{ε², γ̃(K)} we have
(4γε)/μ ≤ (4ε³)/μ,   with a rate ρ = min{με², μγ̃}.   (21)
In the relevant case of ε ∼ 1/√n, we thus converge towards some ε-ball around w^* at a similar rate as for the exact method. For ε ∼ n^{-1}, we have to reduce the step size significantly to compensate for the extra variance and to still converge to an ε-ball, resulting in the slower rate ρ ∼ n^{-2} instead of ρ ∼ n^{-1}.
We also note that the geometric convergence of SGD with a constant step size to a neighborhood of the solution (also proven in [8]) can arise as a special case in our analysis: by setting α_i = 0 in Lemma 1, we can take ε = Ē‖f_i′(w^*)‖² for SGD. An approximate q-memorization algorithm can thus be interpreted as making ε an algorithmic parameter, rather than a fixed value as in SGD.
3.2 Algorithms
Sharing Gradient Memory. We now discuss our proposal of using neighborhoods for sharing gradient information between close-by data points. Thereby we avoid an increase in gradient computations relative to q- or N-SAGA, at the expense of suffering an approximation bias. This leads to a new tradeoff between freshness and approximation quality, which can be resolved in non-trivial ways, depending on the desired final optimization accuracy.
We distinguish two types of quantities. First, the gradient memory α_i as defined by the reference algorithm N-SAGA. Second, the shared gradient memory state β_i, which is used in a modified update rule in Eq. (2), i.e. w^+ = w − γ(f_i′(w) − β_i + β̄). Assume that we select an index i for the weight update; then we generalize Eq. (3) as follows:
β_j^+ := f_i′(w) if j ∈ N_i, and β_j^+ := β_j otherwise;   β̄ := (1/n) Σ_{i=1}^n β_i,   β̄_i := β_i − β̄.   (22)
In the important case of generalized linear models, where one has f_i′(w) = ξ_i′(w) x_i, we can modify the relevant case in Eq. (22) to β_j^+ := ξ_i′(w) x_j. This has the advantage of using the correct direction, while reducing storage requirements.
Approximation Bounds. For our analysis, we need to control the error ‖α_i − β_i‖² ≤ ε_i. This obviously requires problem-specific investigations.
Let us first look at the case of ridge regression. f_i(w) := (1/2)(⟨x_i, w⟩ − y_i)² + (λ/2)‖w‖², and thus f_i′(w) = ξ_i′(w) x_i + λw with ξ_i′(w) := ⟨x_i, w⟩ − y_i. Considering j ∈ N_i being updated, we have
‖β_j^+ − α_j^+‖ = |ξ_j′(w) − ξ_i′(w)| ‖x_j‖ ≤ (δ_ij ‖w‖ + |y_j − y_i|) ‖x_j‖ =: ε_ij(w),   (23)
where δ_ij := ‖x_i − x_j‖. Note that this can be pre-computed, with the exception of the norm ‖w‖, which we only know at the time of an update.
Similarly, for regularized logistic regression with y ∈ {−1, 1}, we have ξ_i′(w) = y_i/(1 + e^{y_i ⟨x_i, w⟩}). With the requirement on neighbors that y_i = y_j, we get
‖β_j^+ − α_j^+‖ ≤ ((e^{δ_ij ‖w‖} − 1)/(1 + e^{−⟨x_i, w⟩})) ‖x_j‖ =: ε_ij(w).   (24)
Again, we can pre-compute δ_ij and ‖x_j‖. In addition to ξ_i′(w), we can also store ⟨x_i, w⟩.
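Both bounds are cheap to evaluate at update time once δ_ij and ‖x_j‖ are pre-computed; a sketch for the ridge case (23):

import numpy as np

def eps_ij_ridge(w_norm, x_i, x_j, y_i, y_j):
    # Bound (23): ||beta_j^+ - alpha_j^+|| <= (||x_i - x_j|| ||w|| + |y_j - y_i|) ||x_j||.
    # In practice delta_ij = ||x_i - x_j|| and ||x_j|| are pre-computed once;
    # only ||w|| changes between updates.
    delta_ij = np.linalg.norm(x_i - x_j)
    return (delta_ij * w_norm + abs(y_j - y_i)) * np.linalg.norm(x_j)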
ε-N-SAGA. We can use these bounds in two ways. First, assuming that the iterates stay within a norm-ball (e.g. an L₂-ball), we can derive upper bounds
ε_j(r) ≥ max{ε_ij(w) : j ∈ N_i, ‖w‖ ≤ r},   ε(r) = (1/n) Σ_j ε_j(r).   (25)
Obviously, the more compact the neighborhoods are, the smaller ε(r). This is most useful for the analysis. Second, we can specify a target accuracy ε and then prune neighborhoods dynamically. This approach is more practically relevant, as it allows us to directly control ε. However, a dynamically varying neighborhood violates Definition 1. We fix this in a sound manner by modifying the memory updates as follows:
β_j^+ := f_i′(w) if j ∈ N_i and ε_ij(w) ≤ ε;   β_j^+ := f_j′(w) if j ∈ N_i and ε_ij(w) > ε;   β_j^+ := β_j otherwise.   (26)
This allows us to interpolate between sharing more aggressively (saving computation) and performing more computations in an exact manner. In the limit of ε → 0, we recover N-SAGA; as ε → ∞, we recover the first variant mentioned.
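A sketch of the sharing step (26), assuming a pre-computed neighborhood table and a bound oracle eps_ij(i, j, w) as in (23)-(24):

def shared_memory_update(w, beta, grad, neighbors, eps, eps_ij, i):
    # Refresh the shared memory for all neighbors of the sampled index i:
    # share f_i'(w) when the error bound is within the tolerance eps,
    # otherwise fall back to an exact gradient computation, Eq. (26).
    g_i = grad(i, w)
    for j in neighbors[i]:
        if eps_ij(i, j, w) <= eps:
            beta[j] = g_i           # approximate sharing, saves a gradient
        else:
            beta[j] = grad(j, w)    # exact fallback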
Computing Neighborhoods. Note that the pairwise Euclidean distances show up in the bounds in Eqs. (23) and (24). In the classification case we also require y_i = y_j, whereas in the ridge regression case we also want |y_i − y_j| to be small. Thus, modulo filtering, this suggests the use of Euclidean distances as the metric for defining neighborhoods. Standard approximation techniques for finding (near-)nearest neighbors can be used. This comes with a computational overhead, yet the additional costs will amortize over multiple runs or multiple data analysis tasks.
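For instance, label-respecting neighborhoods can be pre-computed once with exact k-nearest-neighbor search; this sketch uses scikit-learn, which is an assumption of the illustration rather than part of the original setup, and a kNN graph only satisfies the uniform-coverage requirement of Definition 1 approximately.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_neighborhoods(X, y, q):
    # For every point, find its q nearest Euclidean neighbors with the same
    # label (each point is included as its own nearest neighbor).
    neighbors = np.empty((X.shape[0], q), dtype=int)
    for label in np.unique(y):
        idx = np.flatnonzero(y == label)
        nn = NearestNeighbors(n_neighbors=q).fit(X[idx])
        _, nbrs = nn.kneighbors(X[idx])
        neighbors[idx] = idx[nbrs]
    return neighbors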
4 Experimental Results
Algorithms. We present experimental results on the performance of the different variants of memorization algorithms for variance reduced SGD as discussed in this paper. SAGA has been uniformly superior to SVRG in our experiments, so we compare SAGA and ε-N-SAGA (from Eq. (26)), alongside SGD as a straw man and q-SAGA as a point of reference for speed-ups. We have chosen q = 20 for q-SAGA and ε-N-SAGA. The same setting was used across all data sets and experiments.
Data Sets. As special cases for the choice of the loss function and regularizer in Eq. (1), we consider two commonly occurring problems in machine learning, namely least-squares regression and ℓ₂-regularized logistic regression. We apply least-squares regression on the million song year regression dataset from the UCI repository. This dataset contains n = 515,345 data points, each described by d = 90 input features. We apply logistic regression on the cov and ijcnn1 datasets obtained from the libsvm website². The cov dataset contains n = 581,012 data points, each described by d = 54 input features. The ijcnn1 dataset contains n = 49,990 data points, each described by d = 22 input features. We added an ℓ₂-regularizer Ω(w) = λ‖w‖_2^2 to ensure the objective is strongly convex.
²http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets
[Figure 1 appears here: suboptimality vs. epochs for the datasets (a) Cov, (b) Ijcnn1 and (c) Year, in four rows: gradient evaluations with λ = 10⁻¹ and λ = 10⁻³, and datapoint evaluations with λ = 10⁻¹ and λ = 10⁻³; methods compared: SGD cst, SGD, SAGA, q-SAGA, and ε-N-SAGA with ε ∈ {2, 1, 0.5, 0.1, 0.05, 0.01} (dataset-dependent).]
Figure 1: Comparison of N -SAGA, q-SAGA, SAGA and SGD (with decreasing and constant step
size) on three datasets. The top two rows show the suboptimality as a function of the number
of gradient evaluations for two different values of ? = 10?1 , 10?3 . The bottom two rows show
the suboptimality as a function of the number of datapoint evaluations (i.e. number of stochastic
updates) for two different values of ? = 10?1 , 10?3 .
Experimental Protocol We have run the algorithms in question in an i.i.d. sampling setting and averaged the results over 5 runs. Figure 1 shows the evolution of the suboptimality $f - f^*$ of the objective as a function of two different metrics: (1) in terms of the number of update steps performed ("datapoint evaluation"), and (2) in terms of the number of gradient computations ("gradient evaluation"). Note that SGD and SAGA compute one stochastic gradient per update step, unlike q-SAGA, which is included here not as a practically relevant algorithm, but as an indication of potential improvements that could be achieved by fresher corrections. A step size $\gamma = \frac{q}{\lambda n}$ was used everywhere, except for "plain SGD". Note that, as $K \ll 1$ in all cases, this is close to the optimal value suggested by our analysis; moreover, using a step size of $\sim \frac{1}{L}$ for SAGA as suggested in previous work [9] did not appear to give better results. For plain SGD, we used a schedule of the form $\gamma_t = \gamma_0/t$ with constants optimized coarsely via cross-validation. The x-axis is expressed in units of $n$ (suggestively called "epochs").
SAGA vs. SGD cst As we can see, if we run SGD with the same constant step size as SAGA, it takes several epochs until SAGA really shows a significant gain. The constant step-size variant of SGD is faster in the early stages until it converges to a neighborhood of the optimum, where individual runs start showing very noisy behavior.
SAGA vs. q-SAGA q-SAGA outperforms plain SAGA quite consistently when counting stochastic update steps. This establishes optimistic reference curves of what we can expect to achieve with
N-SAGA. The actual speed-up is somewhat data set dependent.
N-SAGA vs. SAGA and q-SAGA N-SAGA with sufficiently small ε can realize much of the possible freshness gains of q-SAGA and performs very similarly for a few (2-10) epochs, where it traces nicely between the SAGA and q-SAGA curves. We see solid speed-ups on all three datasets for both λ = 0.1 and λ = 0.001.
Asymptotics It should be clearly stated that running N-SAGA at a fixed ε for longer will not result in good asymptotics on the empirical risk. This is because, as the theory predicts, N-SAGA cannot drive the suboptimality to zero, but rather levels off at a point determined by ε. In our experiments, the cross-over point with SAGA was typically after 5-15 epochs. Note that the gains in the first epochs can be significant, though. In practice, one will either define a desired accuracy level and choose ε accordingly, or one will switch to SAGA for accurate convergence.
5 Conclusion
We have generalized variance reduced SGD methods under the name of memorization algorithms and presented a corresponding analysis, which commonly applies to all such methods. We have investigated in detail the range of safe step sizes with their corresponding geometric rates as guaranteed by our theory. This has delivered a number of new insights, for instance about the trade-offs between small ($\sim \frac{1}{n}$) and large ($\sim \frac{1}{4L}$) step sizes in different regimes, as well as about the role of the freshness of stochastic gradients evaluated at past iterates.
We have also investigated and quantified the effect of additional errors in the variance correction terms on the convergence behavior. Depending on how $\epsilon$ scales with $n$, we have shown that such errors can be tolerated, yet, for small $\lambda$, they may have a negative effect on the convergence rate, as much smaller step sizes are needed to still guarantee convergence to a small region. We believe this result to be relevant for a number of approximation techniques in the context of variance reduced SGD.
Motivated by these insights and the results of our analysis, we have proposed N-SAGA, a modification of SAGA that exploits similarities between training data points by defining a neighborhood system. Approximate versions of per-data point gradients are then computed by sharing information among neighbors. This opens up the possibility of variance reduction in a streaming data setting, where each data point is only seen once. We believe this to be a promising direction for future work. Empirically, we have been able to achieve consistent speed-ups for the initial phase of regularized risk minimization. This shows that approximate computation of variance correction terms constitutes a promising approach to trading off computation with solution accuracy.
Acknowledgments We would like to thank Yannic Kilcher, Martin Jaggi, Rémi Leblond and the anonymous reviewers for helpful suggestions and corrections.
References
[1] A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. Commun. ACM, 51(1):117-122, 2008.
[2] L. Bottou. Large-scale machine learning with stochastic gradient descent. In COMPSTAT, pages 177-186. Springer, 2010.
[3] S. Dasgupta and K. Sinha. Randomized partition trees for nearest neighbor search. Algorithmica, 72(1):237-263, 2015.
[4] A. Defazio, F. Bach, and S. Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems, pages 1646-1654, 2014.
[5] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315-323, 2013.
[6] J. Konečný and P. Richtárik. Semi-stochastic gradient descent methods. arXiv preprint arXiv:1312.1666, 2013.
[7] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400-407, 1951.
[8] M. Schmidt. Convergence rate of stochastic gradient with constant step size. UBC Technical Report, 2014.
[9] M. Schmidt, N. L. Roux, and F. Bach. Minimizing finite sums with the stochastic average gradient. arXiv preprint arXiv:1309.2388, 2013.
[10] S. Shalev-Shwartz, Y. Singer, N. Srebro, and A. Cotter. Pegasos: Primal estimated sub-gradient solver for SVM. Mathematical Programming, 127(1):3-30, 2011.
[11] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss. The Journal of Machine Learning Research, 14(1):567-599, 2013.
5,435 | 592 | Predicting Complex Behavior
in Sparse Asymmetric Networks
Ali A. Minai and William B. Levy
Department of Neurosurgery
Box 420, Health Sciences Center
University of Virginia
Charlottesville, VA 22908
Abstract
Recurrent networks of threshold elements have been studied intensively as associative memories and pattern-recognition devices. While
most research has concentrated on fully-connected symmetric networks. which relax to stable fixed points. asymmetric networks show
richer dynamical behavior. and can be used as sequence generators or
flexible pattern-recognition devices. In this paper. we approach the
problem of predicting the complex global behavior of a class of random asymmetric networks in terms of network parameters. These networks can show fixed-point. cyclical or effectively aperiodic behavior.
depending on parameter values. and our approach can be used to set
parameters. as necessary. to obtain a desired complexity of dynamics.
The approach also provides qualitative insight into why the system
behaves as it does and suggests possible applications.
1 INTRODUCTION
Recurrent neural networks of threshold elements have been intensively investigated in
recent years. in part because of their interesting dynamics. Most of the interest has
focused on networks with symmetric connections. which always relax to stable fixed
points (Hopfield. 1982) and can be used as associative memories or pattern-recognition
devices. Networks with asymmetric connections. however. have the potential for much
richer dynamic behavior and may be used for learning sequences (see, e.g., Amari, 1972;
Sompolinsky and Kanter, 1986).
In this paper, we introduce an approach for predicting the complex global behavior of an
interesting class of random sparse asymmetric networks in terms of network parameters.
This approach can be used to set parameter values, as necessary, to obtain a desired
activity level and qualitatively different varieties of dynamic behavior.
2 NETWORK PARAMETERS AND EQUATIONS
A network consists of $n$ identical 0/1 neurons with threshold $\theta$. The fixed pattern of excitatory connectivity between neurons is generated prior to simulation by a Bernoulli process with a probability $p$ of connection from neuron $j$ to neuron $i$. All excitatory connections have the fixed value $w$, and there is a global inhibition that is linear in the number of active neurons. If $m(t)$ is the number of active neurons at time $t$, $K$ the inhibitory weight, $y_i(t)$ the net excitation and $z_i(t)$ the firing status of neuron $i$ at $t$, and $C_{ij}$ a 0/1 variable indicating the presence or absence of a connection from $j$ to $i$, then the equations for $i$ are:
$$y_i(t) = \frac{w \sum_j C_{ij} z_j(t-1)}{w \sum_j C_{ij} z_j(t-1) + K\, m(t-1)}, \qquad 1 \le m(t-1) \le n \qquad (1)$$
$$z_i(t) = \begin{cases} 1 & \text{if } y_i(t) \ge \theta \\ 0 & \text{otherwise} \end{cases}, \qquad 0 < \theta < 1 \qquad (2)$$
If $m(t-1) = 0$, $y_i(t) = 0\ \forall i$. Equation (1) is a simple variant of the shunting inhibition neuron model studied by several researchers, and the network is similar to the one proposed by Marr (Marr, 1971). Note that (1) and (2) can be combined to write the neuron equations in a more familiar subtractive inhibition format. Defining $\alpha := \theta K / ((1-\theta)\, w)$,
$$z_i(t) = \begin{cases} 1 & \text{if } \sum_j C_{ij} z_j(t-1) - \alpha \sum_j z_j(t-1) \ge 0 \\ 0 & \text{otherwise} \end{cases} \qquad (3)$$
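As a concrete illustration, here is a minimal NumPy simulation of update rule (3); the default parameter values mirror those used in the figures below, and all names are ours.

```python
import numpy as np

def simulate(n=120, p=0.2, theta=0.85, K=0.016, w=0.4, steps=500, seed=0):
    rng = np.random.default_rng(seed)
    C = (rng.random((n, n)) < p).astype(float)   # C[i, j] = 1 if j projects to i
    alpha = theta * K / ((1.0 - theta) * w)      # composite parameter from Eq. (3)
    z = (rng.random(n) < 0.5).astype(float)      # random initial firing pattern
    activity = []
    for _ in range(steps):
        M = z.sum()
        # fire iff the excitatory drive meets the activity-scaled threshold
        z = (C @ z >= alpha * M).astype(float) if M > 0 else np.zeros(n)
        activity.append(int(z.sum()))
    return activity
```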
3 NETWORK BEHAVIOR
In this paper, we study the evolution of total activity, $m(t)$, as the system relaxes. From Equation (3), the firing condition for neuron $i$ at time $t$, given the activity $m(t-1) = M$ at time $t-1$, is: $e_i(t) := \sum_j C_{ij} z_j(t-1) \ge \alpha M$. Thus, in order to fire at time $t$, neuron $i$ must have at least $\lceil \alpha M \rceil$ active inputs. This allows us to calculate the average firing probability of a neuron given the prior activity $M$ as:
$$P\{\#\text{ active inputs} \ge \lceil \alpha M \rceil\} = \sum_{k=\lceil \alpha M \rceil}^{M} \binom{M}{k} p^k (1-p)^{M-k} =: p(M; n, p, \alpha) \qquad (4)$$
If $M$ is large enough, we can use a Gaussian approximation to the binomial distribution and a hyperbolic tangent approximation to the error function to get
$$p(M; n, p, \alpha) = \frac{1}{2}\left[1 - \mathrm{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right] \approx \frac{1}{2}\left[1 - \tanh\!\left(\sqrt{\tfrac{2}{\pi}}\, x\right)\right] \qquad (5)$$
where
$$x := \frac{\lceil \alpha M \rceil - Mp}{\sqrt{Mp(1-p)}}.$$
Finally, when $M$ is large enough to assume $\lceil \alpha M \rceil = \alpha M$, we get an even simpler form:
$$p(M; n, p, \alpha) = \frac{1}{2}\left[1 - \tanh\frac{\sqrt{M}}{T}\right] \qquad (6)$$
where
$$T := \frac{1}{\alpha - p}\sqrt{\frac{\pi p (1-p)}{2}}, \qquad \alpha \ge p.$$
Assuming that neurons fire independently, as they will tend to do in such large, sparse networks (Minai and Levy, 1992a,b), the network's activity at time $t$ is distributed as
$$P\{m(t) = N \mid m(t-1) = M\} = \binom{n}{N}\, p(M)^N \,(1 - p(M))^{n-N} \qquad (7)$$
which leads to a stochastic return map for the activity:
$$m(t) = n\, p(m(t-1)) + O(\sqrt{n}) \qquad (8)$$
In Figure 1, we plot $m(t)$ against $m(t-1)$ for a 120 neuron network and two different values of $\alpha$. The vertical bars show two standard deviations on either side of $n\, p(m(t-1))$. It is clear that the network's activity falls within the range predicted by (8). After an initial transient period, the system either switches off permanently (corresponding to the zero activity fixed point) or gets trapped in an $O(\sqrt{n})$ region around the point $\bar m$ defined by $m(t) = m(t-1)$. We call this the attracting region of the map. The size and location of the attracting region are determined by $\alpha$ and largely dictate the qualitative dynamic behavior of the network.
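To make (4) and (8) concrete, a short sketch that evaluates the firing probability with SciPy's binomial survival function and iterates the deterministic part of the return map (the $O(\sqrt{n})$ fluctuation is dropped); names are ours.

```python
import numpy as np
from scipy.stats import binom

def firing_prob(M, p, alpha):
    # p(M; n, p, alpha) from Eq. (4): at least ceil(alpha * M) of M inputs active
    if M <= 0:
        return 0.0
    k = int(np.ceil(alpha * M))
    return binom.sf(k - 1, M, p)      # P(X >= k) for X ~ Binomial(M, p)

def return_map(m0, n, p, alpha, steps=200):
    # deterministic skeleton of Eq. (8): m(t) = n * p(m(t-1))
    m = float(m0)
    for _ in range(steps):
        m = n * firing_prob(int(round(m)), p, alpha)
    return m                          # approximates the attracting level m-bar
```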
As $\alpha$ ranges from 0 to 1, networks show three kinds of behavior: fixed points, short cycles, and effectively aperiodic dynamics. Before describing these behaviors, however, we introduce the notion of available neurons. Let $k_i$ be the number of input connections to neuron $i$ (the fan-in of neuron $i$). Given $m(t-1) = M$, if $k_i < \lceil \alpha M \rceil$, neuron $i$ cannot possibly meet the firing criterion at time $t$. Such a neuron is said to be disabled by activity $M$. The group of neurons not disabled are considered available neurons. At any specific activity $M$, there is a unique set, $N_a(M)$, of available neurons in a given network, and only neurons from this set can be active at the next time step. Clearly, $N_a(M_1) \subseteq N_a(M_2)$ if $M_1 \ge M_2$. The average size of the available set at a given activity $M$ is
$$n_a(M; n, p, \alpha) := n\left[1 - P\{k_i < \lceil \alpha M \rceil\}\right] = n \sum_{k=\lceil \alpha M \rceil}^{n} \binom{n}{k} p^k (1-p)^{n-k} \qquad (9)$$
[Figure 1 shows $m(t)$ plotted against $m(t-1)$, with predicted error bars and empirical data points, for two 120-neuron networks: (a) effectively aperiodic behavior, $\theta = 0.85$, $K = 0.016$, $w = 0.4 \Rightarrow \alpha = 0.227$ (roughly 2000 observed data points); (b) a high activity cycle, $\theta = 0.85$, $K = 0.012$, $w = 0.4 \Rightarrow \alpha = 0.11$.]
Figure 1: Predicted distribution of $m(t+1)$ given $m(t)$, and empirical data (o) for two networks A and B. The vertical bars represent 4 standard deviations of the predicted distribution for each $m(t)$. Note that the empirical values fall in the predicted range.
[Figure 2 shows activity time-series $m(t)$ for a 120-neuron network in three regimes: (a) effectively aperiodic behavior, $\theta = 0.85$, $K = 0.016$, $w = 0.4 \Rightarrow \alpha = 0.227$; (b) a high activity cycle, $\theta = 0.85$, $K = 0.012$, $w = 0.4 \Rightarrow \alpha = 0.11$; (c) a low activity cycle, $\theta = 0.85$, $K = 0.0241$, $w = 0.4 \Rightarrow \alpha = 0.35$.]
Figure 2: Activity time-series for three kinds of behavior shown by a 120 neuron network. Graphs (a) and (b) correspond to the data shown in Figure 1.
It can be shown that $n_a(M) \ge n\, p(M)$, so there are usually enough neurons available to achieve the average activity as per (8).
We now describe the three kinds of dynamic behavior exhibited by our networks.
(1) Fixed Point Behavior: If $\alpha$ is very small, $\bar m$ is close to $n$, inhibition is not strong enough to control activity and almost all neurons switch on permanently. If $\alpha$ is too large, $\bar m$ is close to 0 and the stochastic dynamics eventually finds, and remains at, the zero activity fixed point.
(2) Effectively Aperiodic Behavior: While deterministic, finite state systems such as our networks cannot show truly aperiodic or chaotic behavior, the time to repetition can be so long as to make the dynamics effectively aperiodic. This occurs when the attracting region is at a moderate activity level, well below the ceiling defined by the number of available neurons. In such a situation, the network, starting from an initial condition, successively visits a very large number of different states, and the activity, $m(t)$, yields an effectively aperiodic time-series of amplitude $O(\sqrt{n})$, as shown in Figure 2(a).
(3) Cyclical Behavior: If the attracting region is at a high activity level, most of the available neurons must fire at every time step in order to maintain the activity predicted by (8). This forces network states to be very similar to each other, which, in turn, leads to even more similar successor states, and the network settles into a relatively short limit cycle of high activity (Figure 2(b)). When the attracting region is at an activity level just above switch-off, the network can get into a low-activity limit cycle mediated by a very small group of high fan-in neurons (Figure 2(c)). This effect, however, is unstable with regard to initial conditions and the value of $\alpha$; it is expected to become less significant with increasing network size.
[Figure 3 shows two histograms of individual neuron firing probabilities (x-axis: firing probability, 0 to 1): (a) mean 0.287, variance 0.0689; (b) mean 0.310, variance 0.0003.]
Figure 3: Neuron firing probability histograms for two 120-neuron networks in the effectively aperiodic phase ($\alpha \approx 0.227$). Graph (a) is for a network with random connectivity generated through a Bernoulli process with $p = 0.2$, while Graph (b) is for a network with a fixed fan-in of exactly 24, which corresponds to the mean fan-in for $p = 0.2$.
One interesting issue that arises in the context of effectively aperiodic behavior is that of state-space sampling within the $O(\sqrt{n})$ constraint on activity. We assess this by looking at the histogram of individual neuron firing rates. Figure 3(a) shows the histogram for a 120 neuron network in the effectively aperiodic phase. Clearly, some subspaces are being sampled much more than others and the histogram is very broad. This is mainly due to differences in the fan-in of individual neurons, and will diminish in larger networks. Figure 3(b) shows the neuron firing histogram for a 120 neuron network where each neuron has a fan-in of 24. The sampling is clearly much more "ergodic" and the dynamics less biased towards certain subspaces.
[Figure 4 is a 2-dimensional Y-Y scatter plot of the activation space; both axes range over (0, 1).]
Figure 4: The complete set of non-zero activation values available to two identical neurons $i$ and $j$ with fan-in 24 in a 120-neuron network.
4 ACTIVATION DYNAMICS
While our modeling so far has focused on neural firing, it is instructive to look at the underlying neuron activation values, $y_i$. If $m(t-1) = M$, the possible $y_i(t)$ values for a neuron $i$ with fan-in $k_i$ are given by the set
$$Y(M, k_i) = \left\{ \frac{wq}{wq + KM} \;:\; \max(0,\, k_i - n + M) \le q \le \min(M,\, k_i) \right\}, \qquad M > 0 \qquad (10)$$
with $Y(0, k_i) := \{0\}$. Here $q$ represents the number of active inputs to $i$, and the set $Y_i := \bigcup_{M=0}^{n} Y(M, k_i)$ represents the set of all possible activation values for the neuron. The network's $n$-dimensional activation state, $y(t) := [y_1, y_2, \ldots, y_n]$, evolves upon the activation space $Y_1 \times Y_2 \times \cdots \times Y_n$, which is an extremely complex but regular object. In Figure 4, we plot a 2-dimensional subspace projection (called a Y-Y plot) of the activation space for a 120-neuron network, excluding the zero states. Both neurons shown have a fan-in of 24. In actuality, only a small subset of the activation space is sampled due to the constraining effects of the dynamics and the improbability of most $q$ values.
5 RELATING THE ACTIVITY LEVEL TO α
From a practical standpoint, it would be useful to know how the average activity in a network is related to its $\alpha$ parameter. This can be done using the hyperbolic tangent approximation of Equation (6). First, we define the activity level at time $t$ as $r(t) := n^{-1} m(t)$, i.e., the proportion of active neurons. This is a macrostate variable in the sense of (Amari, 1974). In the long term, the activity level becomes confined to an $O(1/\sqrt{n})$ region around the value corresponding to the activity fixed point. Thus, it is reasonable to use $\bar r$ as an estimate for the time-averaged activity level $\langle r \rangle$. To relate $\bar m$ (and thus $\bar r$) to $\alpha$, we must solve the fixed point equation $\bar m = n\, p(\bar m)$. Substituting this and the definition of $\bar r$ into (6) gives:
$$\alpha(\bar r) = p + \sqrt{\frac{\pi p (1-p)}{2 n \bar r}}\;\tanh^{-1}(1 - 2\bar r) \qquad (11)$$
While $\alpha$ can range from 0 to 1, the approximation of (11) breaks down at very high or very small values of $\bar r$. However, the range of its applicability gets wider as $n$ increases. Figure 5 shows the performance of (11) in predicting the average activity level in a 1000-neuron network. Note that $\alpha = p$ always leads to $\bar r = 0.5$ by Equation (11).
[Figure 5 plots the predicted curve and empirical activity levels against $\alpha$, over roughly 0.04 to 0.1.]
Figure 5: Predicted and empirical activities for 1000 neuron networks with $p = 0.05$. Each data point is averaged over 7 networks.
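Equation (11) is a one-line computation; a minimal sketch (np.arctanh is $\tanh^{-1}$, and the function name is ours):

```python
import numpy as np

def alpha_for_activity(r, n, p):
    # Eq. (11): the alpha that yields average activity level r in (0, 1)
    return p + np.sqrt(np.pi * p * (1.0 - p) / (2.0 * n * r)) * np.arctanh(1.0 - 2.0 * r)
```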
6 CONCLUSION
We have studied a general class of asymmetric networks and have developed a statistical model to relate its dynamical behavior to its parameters. This behavior, which is largely characterized by a composite parameter $\alpha$, is richly varied. Understanding such behavior provides insight into the complex possibilities offered by sparse asymmetric networks, especially with regard to modeling such brain regions as the hippocampal CA3 area in mammals. The complex behavior of random asymmetric networks has been discussed before by Parisi (Parisi, 1986), Nützel (Nützel, 1991), and others. We show how to control this complexity in our networks by setting parameters appropriately.
Acknowledgements: This research was supported by NIMH MH00622 and NIMH MH48161 to WBL, and by the Department of Neurosurgery, University of Virginia, Dr. John A. Jane, Chairman.
References
S. Amari (1972). Learning Patterns and Pattern Sequences by Self-Organizing Nets of Threshold Elements. IEEE Trans. on Computers C-21, 1197-1206.
S. Amari (1974). A Method of Statistical Neurodynamics. Kybernetik 14, 201-215.
J.J. Hopfield (1982). Neural Networks and Physical Systems with Emergent Collective Computational Abilities. Proc. Nat. Acad. Sci. USA 79, 2554-2558.
D. Marr (1971). Simple Memory: A Theory for Archicortex. Phil. Trans. R. Soc. Lond. B 262, 23-81.
A.A. Minai and W.B. Levy (1992a). The Dynamics of Sparse Random Networks. In Review.
A.A. Minai and W.B. Levy (1992b). Setting the Activity Level in Sparse Random Networks. In Review.
K. Nützel (1991). The Length of Attractors in Asymmetric Random Neural Networks with Deterministic Dynamics. J. Phys. A: Math. Gen. 24, L151-L157.
G. Parisi (1986). Asymmetric Neural Networks and the Process of Learning. J. Phys. A: Math. Gen. 19, L675-L680.
H. Sompolinsky and I. Kanter (1986). Temporal Association in Asymmetric Neural Networks. Phys. Rev. Lett. 57, 2861-2864.
5,436 | 5,920 | Non-convex Statistical Optimization for Sparse
Tensor Graphical Model
Wei Sun
Yahoo Labs
Sunnyvale, CA
sunweisurrey@yahoo-inc.com
Zhaoran Wang
Department of Operations Research
and Financial Engineering
Princeton University
Princeton, NJ
zhaoran@princeton.edu
Han Liu
Department of Operations Research
and Financial Engineering
Princeton University
Princeton, NJ
hanliu@princeton.edu
Guang Cheng
Department of Statistics
Purdue University
West Lafayette, IN
chengg@stat.purdue.edu
Abstract
We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation
of the precision matrix corresponding to each way of the tensor, we assume the
data follow a tensor normal distribution whose covariance has a Kronecker product
structure. The penalized maximum likelihood estimation of this model involves
minimizing a non-convex objective function. In spite of the non-convexity of this
estimation problem, we prove that an alternating minimization algorithm, which
iteratively estimates each sparse precision matrix while fixing the others, attains
an estimator with the optimal statistical rate of convergence as well as consistent
graph recovery. Notably, such an estimator achieves estimation consistency with
only one tensor sample, which is unobserved in previous work. Our theoretical
results are backed by thorough numerical studies.
1 Introduction
High-dimensional tensor-valued data are prevalent in many fields such as personalized recommendation systems and brain imaging research [1, 2]. Traditional recommendation systems are mainly
based on the user-item matrix, whose entry denotes each user?s preference for a particular item. To
incorporate additional information into the analysis, such as the temporal behavior of users, we need
to consider a user-item-time tensor. For another example, functional magnetic resonance imaging
(fMRI) data can be viewed as a three way (third-order) tensor since it contains the brain measurements
taken on different locations over time for various experimental conditions. Also, in the example of
microarray study for aging [3], thousands of gene expression measurements are recorded on 16 tissue
types on 40 mice with varying ages, which forms a four way gene-tissue-mouse-age tensor.
In this paper, we study the estimation of conditional independence structure within tensor data. For
example, in the microarray study for aging we are interested in the dependency structure across different genes, tissues, ages and even mice. Assuming data are drawn from a tensor normal distribution,
a straightforward way to estimate this structure is to vectorize the tensor and estimate the underlying
Gaussian graphical model associated with the vector. Such an approach ignores the tensor structure
and requires estimating a rather high dimensional precision matrix with insufficient sample size. For
instance, in the aforementioned fMRI application the sample size is one if we aim to estimate the
dependency structure across different locations, time and experimental conditions. To address such a
problem, a popular approach is to assume the covariance matrix of the tensor normal distribution is
separable in the sense that it is the Kronecker product of small covariance matrices, each of which
corresponds to one way of the tensor. Under this assumption, our goal is to estimate the precision
matrix corresponding to each way of the tensor. See §1.1 for a detailed survey of previous work.
Despite the fact that the assumption of the Kronecker product structure of covariance makes the
statistical model much more parsimonious, it poses significant challenges. In particular, the penalized
negative log-likelihood function is non-convex with respect to the unknown sparse precision matrices.
Consequently, there exists a gap between computational and statistical theory. More specifically,
as we will show in §1.1, existing literature mostly focuses on establishing the existence of a local
optimum that has desired statistical guarantees, rather than offering efficient algorithmic procedures
that provably achieve the desired local optima. In contrast, we analyze an alternating minimization algorithm which iteratively minimizes the non-convex objective function with respect to each
individual precision matrix while fixing the others. The established theoretical guarantees of the
proposed algorithm are as follows. Suppose that we have n observations from a K-th order tensor
normal distribution. We denote by mk , sk , dk (k = 1, . . . , K) the dimension, sparsity, and max
number of non-zero entries in each row of the precision matrix corresponding to the $k$-th way of the tensor. Besides, we define $m = \prod_{k=1}^K m_k$. The $k$-th precision matrix estimator from our alternating minimization algorithm achieves a $\sqrt{m_k(m_k + s_k)\log m_k/(nm)}$ statistical rate of convergence in Frobenius norm, which is minimax-optimal since this is the best rate one can obtain even when the rest $K-1$ true precision matrices are known [4]. Furthermore, under an extra irrepresentability condition, we establish a $\sqrt{m_k \log m_k/(nm)}$ rate of convergence in max norm, which is also optimal, and a $d_k\sqrt{m_k \log m_k/(nm)}$ rate of convergence in spectral norm. These estimation consistency results and a sufficiently large signal strength condition further imply the model selection consistency of recovering all the edges. A notable implication of these results is that, when $K \ge 3$, our alternating
minimization algorithm can achieve estimation consistency in Frobenius norm even if we only have
access to one tensor sample, which is often the case in practice. This phenomenon is unobserved in
previous work. Finally, we conduct extensive experiments to evaluate the numerical performance of
the proposed alternating minimization method. Under the guidance of theory, we propose a way to
significantly accelerate the algorithm without sacrificing the statistical accuracy.
1.1 Related work and our contribution
A special case of our sparse tensor graphical model when K = 2 is the sparse matrix graphical
model, which is studied by [5-8]. In particular, [5] and [6] only establish the existence of a local
optima with desired statistical guarantees. Meanwhile, [7] considers an algorithm that is similar to
ours. However, the statistical rates of convergence obtained by [6, 7] are much slower than ours
when K = 2. See Remark 3.6 in ?3.1 for a detailed comparison. For K = 2, our statistical rate of
convergence in Frobenius norm recovers the result of [5]. In other words, our theory confirms that the
desired local optimum studied by [5] not only exists, but is also attainable by an efficient algorithm. In
addition, for matrix graphical model, [8] establishes the statistical rates of convergence in spectral and
Frobenius norms for the estimator attained by a similar algorithm. Their results achieve estimation
consistency in spectral norm with only one matrix observation. However, their rate is slower than
ours with K = 2. See Remark 3.11 in ?3.2 for a detailed discussion. Furthermore, we allow K to
increase and establish estimation consistency even in Frobenius norm for n = 1. Most importantly,
all these results focus on matrix graphical model and can not handle the aforementioned motivating
applications such as the gene-tissue-mouse-age tensor dataset.
In the context of sparse tensor graphical model with a general K, [9] shows the existence of a
local optimum with desired rates, but does not prove whether there exists an efficient algorithm
that provably attains such a local optimum. In contrast, we prove that our alternating minimization
algorithm achieves an estimator with desired statistical rates. To achieve it, we apply a novel theoretical
framework to separately consider the population and sample optimizers, and then establish the one-step convergence for the population optimizer (Theorem 3.1) and the optimal rate of convergence
for the sample optimizer (Theorem 3.4). A new concentration result (Lemma B.1) is developed for
this purpose, which is also of independent interest. Moreover, we establish additional theoretical
2
guarantees including the optimal rate of convergence in max norm, the estimation consistency in
spectral norm, and the graph recovery consistency of the proposed sparse precision matrix estimator.
In addition to the literature on graphical models, our work is also closely related to a recent line of
research on alternating minimization for non-convex optimization problems [10-13]. These existing
results mostly focus on problems such as dictionary learning, phase retrieval and matrix decomposition.
Hence, our statistical model and analysis are completely different from theirs. Also, our paper is
related to a recent line of work on tensor decomposition. See, e.g., [14-17] and the references therein.
Compared with them, our work focuses on the graphical model structure within tensor-valued data.
Notation: For a matrix $A = (A_{i,j}) \in \mathbb{R}^{d \times d}$, we denote $\|A\|_\infty$, $\|A\|_2$, $\|A\|_F$ as its max, spectral, and Frobenius norm, respectively. We define $\|A\|_{1,\mathrm{off}} := \sum_{i \ne j} |A_{i,j}|$ as its off-diagonal $\ell_1$ norm and $|||A|||_\infty := \max_i \sum_j |A_{i,j}|$ as the maximum absolute row sum. Denote $\mathrm{vec}(A)$ as the vectorization of $A$, which stacks the columns of $A$. Let $\mathrm{tr}(A)$ be the trace of $A$. For an index set $S = \{(i,j),\ i, j \in \{1, \ldots, d\}\}$, we define $[A]_S$ as the matrix whose entry indexed by $(i,j) \in S$ is equal to $A_{i,j}$, and zero otherwise. We denote $1_d$ as the identity matrix with dimension $d \times d$. Throughout this paper, we use $C, C_1, C_2, \ldots$ to denote generic absolute constants, whose values may vary from line to line.
2 Sparse tensor graphical model
2.1 Preliminary
We employ the tensor notations used by [18]. Throughout this paper, higher order tensors are denoted by boldface Euler script letters, e.g. $\mathcal{T}$. We consider a $K$-th order tensor $\mathcal{T} \in \mathbb{R}^{m_1 \times m_2 \times \cdots \times m_K}$. When $K = 1$ it reduces to a vector and when $K = 2$ it reduces to a matrix. The $(i_1, \ldots, i_K)$-th element of the tensor $\mathcal{T}$ is denoted by $\mathcal{T}_{i_1, \ldots, i_K}$. Meanwhile, we define the vectorization of $\mathcal{T}$ as $\mathrm{vec}(\mathcal{T}) := (\mathcal{T}_{1,1,\ldots,1}, \ldots, \mathcal{T}_{m_1,1,\ldots,1}, \ldots, \mathcal{T}_{1,m_2,\ldots,m_K}, \ldots, \mathcal{T}_{m_1,m_2,\ldots,m_K})^\top \in \mathbb{R}^m$ with $m = \prod_k m_k$. In addition, we define the Frobenius norm of a tensor $\mathcal{T}$ as $\|\mathcal{T}\|_F := \big(\sum_{i_1,\ldots,i_K} \mathcal{T}_{i_1,\ldots,i_K}^2\big)^{1/2}$.
For tensors, a fiber refers to the higher order analogue of the row and column of matrices. A fiber is obtained by fixing all but one of the indices of the tensor, e.g., the mode-$k$ fiber of $\mathcal{T}$ is given by $\mathcal{T}_{i_1, \ldots, i_{k-1}, :, i_{k+1}, \ldots, i_K}$. Matricization, also known as unfolding, is the process of transforming a tensor into a matrix. We denote $\mathcal{T}_{(k)}$ as the mode-$k$ matricization of a tensor $\mathcal{T}$, which arranges the mode-$k$ fibers to be the columns of the resulting matrix. Another useful operation on tensors is the $k$-mode product. The $k$-mode product of a tensor $\mathcal{T} \in \mathbb{R}^{m_1 \times m_2 \times \cdots \times m_K}$ with a matrix $A \in \mathbb{R}^{J \times m_k}$ is denoted as $\mathcal{T} \times_k A$ and is of the size $m_1 \times \cdots \times m_{k-1} \times J \times m_{k+1} \times \cdots \times m_K$. Its entry is defined as $(\mathcal{T} \times_k A)_{i_1, \ldots, i_{k-1}, j, i_{k+1}, \ldots, i_K} := \sum_{i_k=1}^{m_k} \mathcal{T}_{i_1, \ldots, i_K} A_{j, i_k}$. In addition, for a list of matrices $\{A_1, \ldots, A_K\}$ with $A_k \in \mathbb{R}^{m_k \times m_k}$, $k = 1, \ldots, K$, we define $\mathcal{T} \times \{A_1, \ldots, A_K\} := \mathcal{T} \times_1 A_1 \times_2 \cdots \times_K A_K$.
2.2 Model
A tensor $\mathcal{T} \in \mathbb{R}^{m_1 \times m_2 \times \cdots \times m_K}$ follows the tensor normal distribution with zero mean and covariance matrices $\Sigma_1, \ldots, \Sigma_K$, denoted as $\mathcal{T} \sim \mathrm{TN}(0; \Sigma_1, \ldots, \Sigma_K)$, if its probability density function is
$$p(\mathcal{T} \mid \Sigma_1, \ldots, \Sigma_K) = (2\pi)^{-m/2} \left\{\prod_{k=1}^{K} |\Sigma_k|^{-m/(2m_k)}\right\} \exp\big(-\|\mathcal{T} \times \Sigma^{-1/2}\|_F^2 / 2\big), \qquad (2.1)$$
where $m = \prod_{k=1}^K m_k$ and $\Sigma^{-1/2} := \{\Sigma_1^{-1/2}, \ldots, \Sigma_K^{-1/2}\}$. When $K = 1$, this tensor normal distribution reduces to the vector normal distribution with zero mean and covariance $\Sigma_1$. According to [9, 18], it can be shown that $\mathcal{T} \sim \mathrm{TN}(0; \Sigma_1, \ldots, \Sigma_K)$ if and only if $\mathrm{vec}(\mathcal{T}) \sim \mathrm{N}(\mathrm{vec}(0); \Sigma_K \otimes \cdots \otimes \Sigma_1)$, where $\mathrm{vec}(0) \in \mathbb{R}^m$ and $\otimes$ is the matrix Kronecker product.
We consider the parameter estimation for the tensor normal model. Assume that we observe independently and identically distributed tensor samples $\mathcal{T}_1, \ldots, \mathcal{T}_n$ from $\mathrm{TN}(0; \Sigma_1^*, \ldots, \Sigma_K^*)$. We aim to estimate the true covariance matrices $(\Sigma_1^*, \ldots, \Sigma_K^*)$ and their corresponding true precision matrices $(\Omega_1^*, \ldots, \Omega_K^*)$ where $\Omega_k^* = (\Sigma_k^*)^{-1}$ ($k = 1, \ldots, K$). To address the identifiability issue in the parameterization of the tensor normal distribution, we assume that $\|\Omega_k^*\|_F = 1$ for $k = 1, \ldots, K$. This renormalization assumption does not change the graph structure of the original precision matrix.
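To illustrate the vectorized characterization above, here is a minimal NumPy sketch for drawing one tensor normal sample (names are ours; forming the full Kronecker covariance is only feasible for small $m$, and a practical sampler would instead apply $\Sigma_k^{1/2}$ mode by mode).

```python
import numpy as np

def sample_tensor_normal(Sigmas, rng=None):
    # Sigmas: list [Sigma_1, ..., Sigma_K]; vec(T) ~ N(0, Sigma_K ⊗ ... ⊗ Sigma_1)
    rng = rng or np.random.default_rng()
    dims = [S.shape[0] for S in Sigmas]
    cov = Sigmas[-1]
    for S in reversed(Sigmas[:-1]):
        cov = np.kron(cov, S)                    # builds Sigma_K ⊗ ... ⊗ Sigma_1
    vecT = rng.multivariate_normal(np.zeros(cov.shape[0]), cov)
    # vec(.) stacks with the first index varying fastest, i.e. column-major
    return vecT.reshape(dims, order="F")
```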
A standard approach to estimate $\Omega_k^*$, $k = 1, \ldots, K$, is to use the maximum likelihood method via (2.1). Up to a constant, the negative log-likelihood function of the tensor normal distribution is $\mathrm{tr}[S(\Omega_K \otimes \cdots \otimes \Omega_1)] - \sum_{k=1}^K (m/m_k) \log|\Omega_k|$, where $S := \frac{1}{n}\sum_{i=1}^n \mathrm{vec}(\mathcal{T}_i)\,\mathrm{vec}(\mathcal{T}_i)^\top$. To encourage the sparsity of each precision matrix in the high-dimensional scenario, we consider a penalized log-likelihood estimator, which is obtained by minimizing
$$q_n(\Omega_1, \ldots, \Omega_K) := \frac{1}{m}\,\mathrm{tr}[S(\Omega_K \otimes \cdots \otimes \Omega_1)] - \sum_{k=1}^{K} \frac{1}{m_k} \log|\Omega_k| + \sum_{k=1}^{K} P_{\lambda_k}(\Omega_k), \qquad (2.2)$$
where $P_{\lambda_k}(\cdot)$ is a penalty function indexed by the tuning parameter $\lambda_k$. In this paper, we focus on the lasso penalty [19], i.e., $P_{\lambda_k}(\Omega_k) = \lambda_k \|\Omega_k\|_{1,\mathrm{off}}$. This estimation procedure applies similarly to a broad family of other penalty functions.
We name the penalized model from (2.2) the sparse tensor graphical model. It reduces to the sparse vector graphical model [20, 21] when $K = 1$, and the sparse matrix graphical model [5-8] when $K = 2$. Our framework generalizes them to fulfill the demand of capturing the graphical structure of higher order tensor-valued data.
2.3 Estimation
This section introduces the estimation procedure for the sparse tensor graphical model. A computationally efficient algorithm is provided to estimate the precision matrix for each way of the tensor.
Recall that in (2.2), $q_n(\Omega_1, \ldots, \Omega_K)$ is jointly non-convex with respect to $\Omega_1, \ldots, \Omega_K$. Nevertheless, $q_n(\Omega_1, \ldots, \Omega_K)$ is a bi-convex problem since $q_n(\Omega_1, \ldots, \Omega_K)$ is convex in $\Omega_k$ when the rest $K-1$ precision matrices are fixed. The bi-convex property plays a critical role in our algorithm construction and its theoretical analysis in §3.
According to its bi-convex property, we propose to solve this non-convex problem by alternately updating one precision matrix with the other matrices fixed. Note that, for any $k = 1, \ldots, K$, minimizing (2.2) with respect to $\Omega_k$ while fixing the rest $K-1$ precision matrices is equivalent to minimizing
$$L(\Omega_k) := \frac{1}{m_k}\,\mathrm{tr}(S_k \Omega_k) - \frac{1}{m_k} \log|\Omega_k| + \lambda_k \|\Omega_k\|_{1,\mathrm{off}}. \qquad (2.3)$$
Here $S_k := \frac{m_k}{nm} \sum_{i=1}^n V_i^k (V_i^k)^\top$, where $V_i^k := \big[\mathcal{T}_i \times \{\Omega_1^{1/2}, \ldots, \Omega_{k-1}^{1/2}, 1_{m_k}, \Omega_{k+1}^{1/2}, \ldots, \Omega_K^{1/2}\}\big]_{(k)}$ with $\times$ the tensor product operation and $[\cdot]_{(k)}$ the mode-$k$ matricization operation defined in §2.1. The result in (2.3) can be shown by noting that $V_i^k = [\mathcal{T}_i]_{(k)} \big(\Omega_K^{1/2} \otimes \cdots \otimes \Omega_{k+1}^{1/2} \otimes \Omega_{k-1}^{1/2} \otimes \cdots \otimes \Omega_1^{1/2}\big)^\top$ according to the properties of mode-$k$ matricization shown by [18]. Hereafter, we drop the superscript $k$ of $V_i^k$ if there is no confusion. Note that minimizing (2.3) corresponds to estimating a vector-valued Gaussian graphical model and can be solved efficiently via the glasso algorithm [21].
Algorithm 1 Solve sparse tensor graphical model via Tensor lasso (Tlasso)
1: Input: Tensor samples $\mathcal{T}_1, \ldots, \mathcal{T}_n$, tuning parameters $\lambda_1, \ldots, \lambda_K$, max number of iterations $T$.
2: Initialize $\Omega_1^{(0)}, \ldots, \Omega_K^{(0)}$ randomly as symmetric and positive definite matrices and set $t = 0$.
3: Repeat:
4:   $t = t + 1$.
5:   For $k = 1, \ldots, K$:
6:     Given $\Omega_1^{(t)}, \ldots, \Omega_{k-1}^{(t)}, \Omega_{k+1}^{(t-1)}, \ldots, \Omega_K^{(t-1)}$, solve (2.3) for $\Omega_k^{(t)}$ via glasso [21].
7:     Normalize $\Omega_k^{(t)}$ such that $\|\Omega_k^{(t)}\|_F = 1$.
8:   End For
9: Until $t = T$.
10: Output: $\widehat\Omega_k = \Omega_k^{(T)}$ ($k = 1, \ldots, K$).
The details of our Tensor lasso (Tlasso) algorithm are shown in Algorithm 1. It starts with a random initialization and then alternately updates each precision matrix until it converges. In §3, we will illustrate that the statistical properties of the obtained estimator are insensitive to the choice of the initialization (see the discussion following Theorem 3.5).
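A compact NumPy/scikit-learn sketch of Algorithm 1 follows, using sklearn's graphical_lasso as the inner solver for (2.3); the helper names are ours, and the code is an illustration rather than the authors' implementation. Since (2.3) scales the trace and log-determinant terms by $1/m_k$, the penalty handed to a standard glasso solver is $m_k \lambda_k$.

```python
import numpy as np
from scipy.linalg import sqrtm
from sklearn.covariance import graphical_lasso

def mode_k_product(T, A, k):
    # (T x_k A): multiply every mode-k fiber of T by A
    return np.moveaxis(np.tensordot(A, T, axes=([1], [k])), 0, k)

def unfold(T, k):
    # mode-k matricization; column ordering is irrelevant for Vk @ Vk.T
    return np.moveaxis(T, k, 0).reshape(T.shape[k], -1)

def tlasso(samples, lams, T_iter=1):
    dims, n = samples[0].shape, len(samples)
    K, m = len(dims), int(np.prod(dims))
    Omega = [np.eye(mk) / np.sqrt(mk) for mk in dims]  # SPD, unit Frobenius norm
    for _ in range(T_iter):
        for k in range(K):
            roots = [np.real(sqrtm(O)) for O in Omega]  # Omega_j^{1/2}
            Sk = np.zeros((dims[k], dims[k]))
            for X in samples:
                V = X
                for j in range(K):
                    if j != k:                  # identity at mode k leaves it untouched
                        V = mode_k_product(V, roots[j], j)
                Vk = unfold(V, k)
                Sk += dims[k] / (n * m) * (Vk @ Vk.T)
            _, Omega[k] = graphical_lasso(Sk, alpha=dims[k] * lams[k])
            Omega[k] /= np.linalg.norm(Omega[k])  # normalize: ||Omega_k||_F = 1
    return Omega
```

Running with T_iter = 1 corresponds to the one-iteration setting of Theorem 3.5 below, while T_iter = 2 matches the setting of Theorem 3.9.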
3 Theory of statistical optimization
We first prove the estimation errors in Frobenius norm, max norm, and spectral norm, and then provide the model selection consistency of our Tlasso estimator. We defer all the proofs to the appendix.
3.1 Estimation error in Frobenius norm
Based on the penalized log-likelihood in (2.2), we define the population log-likelihood function as
$$q(\Omega_1, \ldots, \Omega_K) := \frac{1}{m}\,\mathbb{E}\big\{\mathrm{tr}\big[\mathrm{vec}(\mathcal{T})\mathrm{vec}(\mathcal{T})^\top (\Omega_K \otimes \cdots \otimes \Omega_1)\big]\big\} - \sum_{k=1}^{K} \frac{1}{m_k} \log|\Omega_k|. \qquad (3.1)$$
By minimizing $q(\Omega_1, \ldots, \Omega_K)$ with respect to $\Omega_k$, $k = 1, \ldots, K$, we obtain the population minimization function with the parameter $\Omega_{[K]\setminus k} := \{\Omega_1, \ldots, \Omega_{k-1}, \Omega_{k+1}, \ldots, \Omega_K\}$, i.e.,
$$M_k(\Omega_{[K]\setminus k}) := \operatorname*{argmin}_{\Omega_k}\; q(\Omega_1, \ldots, \Omega_K). \qquad (3.2)$$
Theorem 3.1. For any $k = 1, \ldots, K$, if $\Omega_j$ ($j \ne k$) satisfies $\mathrm{tr}(\Sigma_j^* \Omega_j) \ne 0$, then the population minimization function in (3.2) satisfies $M_k(\Omega_{[K]\setminus k}) = \big[m\, \big(m_k \prod_{j \ne k} \mathrm{tr}(\Sigma_j^* \Omega_j)\big)^{-1}\big]\, \Omega_k^*$.
Theorem 3.1 shows a surprising phenomenon: the population minimization function recovers the true precision matrix up to a constant in only one iteration. If $\Omega_j = \Omega_j^*$, $j \ne k$, then $M_k(\Omega_{[K]\setminus k}) = \Omega_k^*$. Otherwise, after a normalization such that $\|M_k(\Omega_{[K]\setminus k})\|_F = 1$, the normalized population minimization function still fully recovers $\Omega_k^*$. This observation suggests that setting $T = 1$ in Algorithm 1 is sufficient. Such a suggestion will be further supported by our numerical results.
In practice, when (3.1) is unknown, we can approximate it via its sample version $q_n(\Omega_1, \ldots, \Omega_K)$ defined in (2.2), which gives rise to the statistical error in the estimation procedure. Analogously to (3.2), we define the sample-based minimization function with parameter $\Omega_{[K]\setminus k}$ as
$$\widehat M_k(\Omega_{[K]\setminus k}) := \operatorname*{argmin}_{\Omega_k}\; q_n(\Omega_1, \ldots, \Omega_K). \qquad (3.3)$$
In order to prove the estimation error, it remains to quantify the statistical error induced from finite samples. The following two regularity conditions are assumed for this purpose.
Condition 3.2 (Bounded Eigenvalues). For any $k = 1, \ldots, K$, there is a constant $C_1 > 0$ such that
$$0 < C_1 \le \lambda_{\min}(\Sigma_k^*) \le \lambda_{\max}(\Sigma_k^*) \le 1/C_1 < \infty,$$
where $\lambda_{\min}(\Sigma_k^*)$ and $\lambda_{\max}(\Sigma_k^*)$ refer to the minimal and maximal eigenvalue of $\Sigma_k^*$, respectively.
Condition 3.2 requires the uniform boundedness of the eigenvalues of the true covariance matrices $\Sigma_k^*$. It has been commonly assumed in the graphical model literature [22].
Condition 3.3 (Tuning). For any $k = 1, \ldots, K$ and some constant $C_2 > 0$, the tuning parameter $\lambda_k$ satisfies $(1/C_2)\sqrt{\log m_k/(n m\, m_k)} \le \lambda_k \le C_2 \sqrt{\log m_k/(n m\, m_k)}$.
Condition 3.3 specifies the choice of the tuning parameters. In practice, a data-driven tuning procedure [23] can be performed to approximate the optimal choice of the tuning parameters.
Before characterizing the statistical error, we define a sparsity parameter for $\Omega_k^*$, $k = 1, \ldots, K$. Let $S_k := \{(i, j) : [\Omega_k^*]_{i,j} \ne 0\}$. Denote the sparsity parameter $s_k := |S_k| - m_k$, which is the number of nonzero entries in the off-diagonal component of $\Omega_k^*$. For each $k = 1, \ldots, K$, we define $\mathbb{B}(\Omega_k^*)$ as the set containing $\Omega_k^*$ and its neighborhood for some sufficiently large constant radius $\alpha > 0$, i.e.,
$$\mathbb{B}(\Omega_k^*) := \{\Omega \in \mathbb{R}^{m_k \times m_k} : \Omega = \Omega^\top;\ \Omega \succ 0;\ \|\Omega - \Omega_k^*\|_F \le \alpha\}. \qquad (3.4)$$
Theorem 3.4. Assume Conditions 3.2 and 3.3 hold. For any $k = 1, \ldots, K$, the statistical error of the sample-based minimization function defined in (3.3) satisfies that, for any fixed $\Omega_j \in \mathbb{B}(\Omega_j^*)$ ($j \ne k$),
$$\big\|\widehat M_k(\Omega_{[K]\setminus k}) - M_k(\Omega_{[K]\setminus k})\big\|_F = O_P\!\left(\sqrt{\frac{m_k (m_k + s_k) \log m_k}{nm}}\right), \qquad (3.5)$$
where $M_k(\Omega_{[K]\setminus k})$ and $\widehat M_k(\Omega_{[K]\setminus k})$ are defined in (3.2) and (3.3), and $m = \prod_{k=1}^K m_k$.
Theorem 3.4 establishes the statistical error associated with $\widehat M_k(\Omega_{[K]\setminus k})$ for arbitrary $\Omega_j \in \mathbb{B}(\Omega_j^*)$ with $j \ne k$. In comparison, previous work on the existence of a local solution with the desired statistical property only establishes theorems similar to Theorem 3.4 for $\Omega_j = \Omega_j^*$ with $j \ne k$. The extension to an arbitrary $\Omega_j \in \mathbb{B}(\Omega_j^*)$ involves non-trivial technical barriers. Particularly, we first establish the rate of convergence of the difference between a sample-based quadratic form and its expectation (Lemma B.1) via concentration of Lipschitz functions of Gaussian random variables [24]. This result is also of independent interest. We then carefully characterize the rate of convergence of $S_k$ defined in (2.3) (Lemma B.2). Finally, we develop (3.5) using the results for vector-valued graphical models developed by [25].
According to Theorem 3.1 and Theorem 3.4, we obtain the rate of convergence of the Tlasso estimator in terms of Frobenius norm, which is our main result.
Theorem 3.5. Assume that Conditions 3.2 and 3.3 hold. For any $k = 1, \ldots, K$, if the initialization satisfies $\Omega_j^{(0)} \in \mathbb{B}(\Omega_j^*)$ for any $j \ne k$, then the estimator $\widehat\Omega_k$ from Algorithm 1 with $T = 1$ satisfies
$$\big\|\widehat\Omega_k - \Omega_k^*\big\|_F = O_P\!\left(\sqrt{\frac{m_k (m_k + s_k) \log m_k}{nm}}\right), \qquad (3.6)$$
where $m = \prod_{k=1}^K m_k$ and $\mathbb{B}(\Omega_j^*)$ is defined in (3.4).
Theorem 3.5 suggests that as long as the initialization is within a constant distance of the truth, our Tlasso algorithm attains a consistent estimator after only one iteration. This initialization condition $\Omega_j^{(0)} \in \mathbb{B}(\Omega_j^*)$ trivially holds since for any $\Omega_j^{(0)}$ that is positive definite and has unit Frobenius norm, we have $\|\Omega_j^{(0)} - \Omega_j^*\|_F \le 2$ by noting that $\|\Omega_k^*\|_F = 1$ ($k = 1, \ldots, K$) for the identifiability of the tensor normal distribution. In the literature, [9] shows that there exists a local minimizer of (2.2) whose convergence rate can achieve (3.6). However, it is unknown if their algorithm can find such a minimizer since there could be many other local minimizers.
A notable implication of Theorem 3.5 is that, when $K \ge 3$, the estimator from our Tlasso algorithm can achieve estimation consistency even if we only have access to one observation, i.e., $n = 1$, which is often the case in practice. To see it, suppose that $K = 3$ and $n = 1$. When the dimensions $m_1, m_2$, and $m_3$ are of the same order of magnitude and $s_k = O(m_k)$ for $k = 1, 2, 3$, all three error rates corresponding to $k = 1, 2, 3$ in (3.6) converge to zero.
This result indicates that the estimation of the $k$-th precision matrix takes advantage of the information from the $j$-th way ($j \ne k$) of the tensor data. Consider a simple case where $K = 2$ and one precision matrix $\Omega_1^* = 1_{m_1}$ is known. In this scenario the rows of the matrix data are independent and hence the effective sample size for estimating $\Omega_2^*$ is in fact $nm_1$. The optimality result for the vector-valued graphical model [4] implies that the optimal rate for estimating $\Omega_2^*$ is $\sqrt{(m_2 + s_2)\log m_2/(nm_1)}$, which matches our result in (3.6). Therefore, the rate in (3.6) obtained by our Tlasso estimator is minimax-optimal since it is the best rate one can obtain even when $\Omega_j^*$ ($j \ne k$) are known. As far as we know, this phenomenon has not been discovered by any previous work in tensor graphical models.
Remark 3.6. For $K = 2$, our tensor graphical model reduces to the matrix graphical model with Kronecker product covariance structure [5-8]. In this case, the rate of convergence of $\widehat\Omega_1$ in (3.6) reduces to $\sqrt{(m_1 + s_1)\log m_1/(nm_2)}$, which is much faster than $\sqrt{m_2(m_1 + s_1)(\log m_1 + \log m_2)/n}$ established by [6] and $\sqrt{(m_1 + m_2)\log[\max(m_1, m_2, n)]/(nm_2)}$ established by [7]. In the literature, [5] shows that there exists a local minimizer of the objective function whose estimation errors match ours. However, it is unknown if their estimator can achieve such a convergence rate. On the other hand, our theorem confirms that our algorithm is able to find such an estimator with the optimal rate of convergence.
3.2 Estimation error in max norm and spectral norm
We next show the estimation error in max norm and spectral norm. Trivially, these estimation errors are bounded by that in Frobenius norm shown in Theorem 3.5. To develop improved rates of convergence in max and spectral norms, we need to impose stronger conditions on the true parameters.
We first introduce some important notation. Denote by $d_k$ the maximum number of non-zeros in any row of the true precision matrix $\Omega_k^*$, that is,
$$d_k := \max_{i \in \{1,\ldots,m_k\}} \big|\{j \in \{1,\ldots,m_k\} : [\Omega_k^*]_{i,j} \neq 0\}\big|, \qquad (3.7)$$
with $|\cdot|$ the cardinality of the set. For each covariance matrix $\Sigma_k^*$, we define $\kappa_{\Sigma_k^*} := \|\Sigma_k^*\|_\infty$. Denote the Hessian matrix $\Gamma_k := \Omega_k^{*-1} \otimes \Omega_k^{*-1} \in \mathbb{R}^{m_k^2 \times m_k^2}$, whose entry $[\Gamma_k]_{(i,j),(s,t)}$ corresponds to the second-order partial derivative of the objective function with respect to $[\Omega_k]_{i,j}$ and $[\Omega_k]_{s,t}$. We define its sub-matrix indexed by the index set $S_k$ as $[\Gamma_k]_{S_k,S_k} = [\Omega_k^{*-1} \otimes \Omega_k^{*-1}]_{S_k,S_k}$, which is the $|S_k| \times |S_k|$ matrix with rows and columns of $\Gamma_k$ indexed by $S_k$. Moreover, we define $\kappa_{\Gamma_k} := \|([\Gamma_k]_{S_k,S_k})^{-1}\|_\infty$. In order to establish the rate of convergence in max norm, we need to impose an irrepresentability condition on the Hessian matrix.
Condition 3.7 (Irrepresentability). For each $k = 1, \ldots, K$, there exists some $\alpha_k \in (0, 1]$ such that
$$\max_{e \in S_k^c} \big\|[\Gamma_k]_{e,S_k}\big([\Gamma_k]_{S_k,S_k}\big)^{-1}\big\|_1 \le 1 - \alpha_k.$$
Condition 3.7 controls the influence of the non-connected terms in $S_k^c$ on the connected edges in $S_k$. This condition has been widely applied in lasso-penalized models [26, 27].
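Condition 3.7 is straightforward to check numerically for a given precision matrix. The following minimal sketch (our own construction, not part of the paper) forms $\Gamma = \Omega^{-1} \otimes \Omega^{-1}$, takes $S$ to be the support of $\mathrm{vec}(\Omega)$, and evaluates the left-hand side of the condition for a chain graph:

import numpy as np

def irrepresentability_lhs(Omega):
    # Gamma = inv(Omega) kron inv(Omega), the Hessian of the Gaussian objective.
    Sigma = np.linalg.inv(Omega)
    Gamma = np.kron(Sigma, Sigma)
    S = np.flatnonzero(Omega.ravel() != 0)          # support of vec(Omega)
    Sc = np.setdiff1d(np.arange(Gamma.shape[0]), S)
    M = Gamma[np.ix_(Sc, S)] @ np.linalg.inv(Gamma[np.ix_(S, S)])
    return np.abs(M).sum(axis=1).max()              # max row-wise l1 norm

# Example: a tridiagonal (chain-graph) precision matrix.
m = 6
Omega = np.eye(m) + 0.3 * (np.eye(m, k=1) + np.eye(m, k=-1))
lhs = irrepresentability_lhs(Omega)
print(f"lhs = {lhs:.3f};", "Condition 3.7 holds" if lhs < 1 else "violated")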
Condition 3.8 (Bounded Complexity). For each $k = 1, \ldots, K$, the parameters $\kappa_{\Sigma_k^*}$ and $\kappa_{\Gamma_k}$ are bounded, and the parameter $d_k$ in (3.7) satisfies $d_k = o\big(\sqrt{nm/(m_k \log m_k)}\big)$.
Theorem 3.9. Suppose Conditions 3.2, 3.3, 3.7 and 3.8 hold. Assume $s_k = O(m_k)$ for $k = 1, \ldots, K$, and assume the $m_k$ are of the same order, i.e., $m_1 \asymp m_2 \asymp \cdots \asymp m_K$. For each $k$, if the initialization satisfies $\Omega_j^{(0)} \in B(\Omega_j^*)$ for any $j \ne k$, then the estimator $\widehat{\Omega}_k$ from Algorithm 1 with $T = 2$ satisfies
$$\big\|\widehat{\Omega}_k - \Omega_k^*\big\|_\infty = O_P\left(\sqrt{\frac{m_k \log m_k}{nm}}\right). \qquad (3.8)$$
In addition, the edge set of $\widehat{\Omega}_k$ is a subset of the true edge set of $\Omega_k^*$, that is, $\mathrm{supp}(\widehat{\Omega}_k) \subseteq \mathrm{supp}(\Omega_k^*)$.
Theorem 3.9 shows that our Tlasso estimator achieves the optimal rate of convergence in max norm [4]. Here we consider the estimator obtained after two iterations, since we require a new concentration inequality (Lemma B.3) for the sample covariance matrix, which is built upon the estimator in Theorem 3.5. A direct consequence of Theorem 3.9 is the estimation error in spectral norm.
Corollary 3.10. Suppose the conditions of Theorem 3.9 hold. For any $k = 1, \ldots, K$, we have
$$\big\|\widehat{\Omega}_k - \Omega_k^*\big\|_2 = O_P\left(d_k\sqrt{\frac{m_k \log m_k}{nm}}\right). \qquad (3.9)$$
Remark 3.11. We now compare our rate of convergence in spectral norm for $K = 2$ with that established in the sparse matrix graphical model literature. In particular, [8] establishes the rate $O_P\big(\sqrt{m_k(s_k \vee 1)\log(m_1 \vee m_2)/(nm_k)}\big)$ for $k = 1, 2$. Therefore, when $d_k^2 \lesssim (s_k \vee 1)$, which holds for example for bounded-degree graphs, our rate is faster. However, this faster rate comes at the price of assuming the irrepresentability condition. Using recent advances in nonconvex regularization [28], we can eliminate the irrepresentability condition; we leave this to future work.
3.3 Model selection consistency
Theorem 3.9 ensures that the estimated precision matrix correctly excludes all non-informative edges and includes all true edges $(i, j)$ with $|[\Omega_k^*]_{i,j}| > C\sqrt{m_k \log m_k/(nm)}$ for some constant $C > 0$. Therefore, in order to achieve model selection consistency, a sufficient condition is to assume that, for each $k = 1, \ldots, K$, the minimal signal $\theta_k := \min_{(i,j) \in \mathrm{supp}(\Omega_k^*)} |[\Omega_k^*]_{i,j}|$ is not too small.
Theorem 3.12. Under the conditions of Theorem 3.9, if $\theta_k \ge C\sqrt{m_k \log m_k/(nm)}$ for some constant $C > 0$, then for any $k = 1, \ldots, K$, $\mathrm{sign}(\widehat{\Omega}_k) = \mathrm{sign}(\Omega_k^*)$ with high probability.
Theorem 3.12 indicates that our Tlasso estimator is able to correctly recover the graphical structure of each way of the high-dimensional tensor data. To the best of our knowledge, this is the first model selection consistency result for high-dimensional tensor graphical models.
4 Simulations
We compare the proposed Tlasso estimator with two alternatives. The first is the direct graphical lasso (Glasso) approach [21], which applies the glasso to the vectorized tensor data to estimate $\Omega_1^* \otimes \cdots \otimes \Omega_K^*$ directly. The second is the iterative penalized maximum likelihood method (P-MLE) proposed by [9], whose termination condition is set to be
$$\frac{1}{K}\sum_{k=1}^{K} \big\|\widehat{\Omega}_k^{(t)} - \widehat{\Omega}_k^{(t-1)}\big\|_F \le 0.001.$$
For simplicity, in our Tlasso algorithm we set the initialization of the $k$-th precision matrix as the identity $I_{m_k}$ for each $k = 1, \ldots, K$ and the total number of iterations as $T = 1$. The tuning parameter is set as $\lambda_k = 20\sqrt{\log m_k/(nm\, m_k)}$. For a fair comparison, the same tuning parameter is applied in the P-MLE method. In the direct Glasso approach, the tuning parameter is chosen by cross-validation via the huge package [29].
We consider two simulations with a third-order tensor, i.e., $K = 3$. In Simulation 1 we construct a triangle graph, while in Simulation 2 we construct a four-nearest-neighbor graph for each precision matrix. An illustration of the generated graphs is shown in Figure 1. In each simulation we consider three scenarios: s1: $n = 10$ and $(m_1, m_2, m_3) = (10, 10, 10)$; s2: $n = 50$ and $(m_1, m_2, m_3) = (10, 10, 10)$; s3: $n = 10$ and $(m_1, m_2, m_3) = (100, 5, 5)$. We repeat each example 100 times and compute the averaged computational time, the averaged estimation error of the Kronecker product of precision matrices $(m_1 m_2 m_3)^{-1}\|\widehat{\Omega}_1 \otimes \cdots \otimes \widehat{\Omega}_K - \Omega_1^* \otimes \cdots \otimes \Omega_K^*\|_F$, the true positive rate (TPR), and the true negative rate (TNR). More specifically, we denote by $a_{i,j}^*$ the $(i,j)$-th entry of $\Omega_1^* \otimes \cdots \otimes \Omega_K^*$, and define $\mathrm{TPR} := \sum_{i,j} \mathbf{1}(\widehat{a}_{i,j} \ne 0, a_{i,j}^* \ne 0)/\sum_{i,j} \mathbf{1}(a_{i,j}^* \ne 0)$ and $\mathrm{TNR} := \sum_{i,j} \mathbf{1}(\widehat{a}_{i,j} = 0, a_{i,j}^* = 0)/\sum_{i,j} \mathbf{1}(a_{i,j}^* = 0)$.
As shown in Figure 1, our Tlasso is dramatically faster than both alternative methods. In Scenario s3, Tlasso takes about five seconds per replicate, P-MLE takes about 500 seconds, and the direct Glasso method takes more than one hour and is therefore omitted from the plot. The Tlasso algorithm is not only computationally efficient but also enjoys superior estimation accuracy. In all examples, the direct Glasso method has significantly larger errors than Tlasso because it ignores the tensor graphical structure. Tlasso outperforms P-MLE in Scenarios s1 and s2 and is comparable to it in Scenario s3.
Figure 1: Left two plots: illustrations of the generated graphs; Middle two plots: computational time;
Right two plots: estimation errors. In each group of two plots, the left (right) is for Simulation 1 (2).
Table 1 shows the variable selection performance. Our Tlasso identifies almost all edges in these six examples, while the Glasso and P-MLE methods miss several true edges. On the other hand, Tlasso tends to include more non-connected edges than the other methods.
Table 1: A comparison of variable selection performance. Here TPR and TNR denote the true positive
rate and true negative rate.
Scenarios    Glasso TPR    Glasso TNR    P-MLE TPR     P-MLE TNR     Tlasso TPR    Tlasso TNR
Sim 1 s1     0.27 (0.002)  0.96 (0.000)  1 (0)         0.89 (0.002)  1 (0)         0.76 (0.004)
Sim 1 s2     0.34 (0.000)  0.93 (0.000)  1 (0)         0.89 (0.002)  1 (0)         0.76 (0.004)
Sim 1 s3     /             /             1 (0)         0.93 (0.001)  1 (0)         0.70 (0.004)
Sim 2 s1     0.08 (0.000)  0.96 (0.000)  0.93 (0.004)  0.88 (0.002)  1 (0)         0.65 (0.005)
Sim 2 s2     0.15 (0.000)  0.92 (0.000)  1 (0)         0.85 (0.002)  1 (0)         0.63 (0.005)
Sim 2 s3     /             /             0.82 (0.001)  0.93 (0.001)  0.99 (0.001)  0.38 (0.002)
Acknowledgement
We would like to thank the anonymous reviewers for their helpful comments. Han Liu is grateful for the support of NSF CAREER Award DMS1454377, NSF IIS1408910, NSF IIS1332109, NIH R01MH102339, NIH R01GM083084, and NIH R01HG06841. Guang Cheng's research is sponsored by NSF CAREER Award DMS1151692, NSF DMS1418042, a Simons Fellowship in Mathematics, ONR N00014-15-1-2331, and a grant from the Indiana Clinical and Translational Sciences Institute.
References
[1] S. Rendle and L. Schmidt-Thieme. Pairwise interaction tensor factorization for personalized tag recommendation. In International Conference on Web Search and Data Mining, 2010.
[2] G. I. Allen. Sparse higher-order principal components analysis. In International Conference on Artificial Intelligence and Statistics, 2012.
[3] J. Zahn, S. Poosala, A. Owen, D. Ingram, et al. AGEMAP: A gene expression database for aging in mice. PLOS Genetics, 3:2326-2337, 2007.
[4] T. Cai, W. Liu, and H. H. Zhou. Estimating sparse precision matrix: Optimal rates of convergence and adaptive estimation. Annals of Statistics, 2015.
[5] C. Leng and C. Y. Tang. Sparse matrix graphical models. Journal of the American Statistical Association, 107:1187-1200, 2012.
[6] J. Yin and H. Li. Model selection and estimation in the matrix normal graphical model. Journal of Multivariate Analysis, 107:119-140, 2012.
[7] T. Tsiligkaridis, A. O. Hero, and S. Zhou. On convergence of Kronecker graphical Lasso algorithms. IEEE Transactions on Signal Processing, 61:1743-1755, 2013.
[8] S. Zhou. Gemini: Graph estimation with matrix variate normal instances. Annals of Statistics, 42:532-562, 2014.
[9] S. He, J. Yin, H. Li, and X. Wang. Graphical model selection and estimation for high dimensional tensor data. Journal of Multivariate Analysis, 128:165-185, 2014.
[10] P. Jain, P. Netrapalli, and S. Sanghavi. Low-rank matrix completion using alternating minimization. In Symposium on Theory of Computing, pages 665-674, 2013.
[11] P. Netrapalli, P. Jain, and S. Sanghavi. Phase retrieval using alternating minimization. In Advances in Neural Information Processing Systems, pages 2796-2804, 2013.
[12] J. Sun, Q. Qu, and J. Wright. Complete dictionary recovery over the sphere. arXiv:1504.06785, 2015.
[13] S. Arora, R. Ge, T. Ma, and A. Moitra. Simple, efficient, and neural algorithms for sparse coding. arXiv:1503.00778, 2015.
[14] A. Anandkumar, R. Ge, D. Hsu, S. Kakade, and M. Telgarsky. Tensor decompositions for learning latent variable models. Journal of Machine Learning Research, 15:2773-2832, 2014.
[15] W. Sun, J. Lu, H. Liu, and G. Cheng. Provable sparse tensor decomposition. arXiv:1502.01425, 2015.
[16] S. Zhe, Z. Xu, X. Chu, Y. Qi, and Y. Park. Scalable nonparametric multiway data analysis. In International Conference on Artificial Intelligence and Statistics, 2015.
[17] S. Zhe, Z. Xu, Y. Qi, and P. Yu. Sparse Bayesian multiview learning for simultaneous association discovery and diagnosis of Alzheimer's disease. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
[18] T. Kolda and B. Bader. Tensor decompositions and applications. SIAM Review, 51:455-500, 2009.
[19] R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B, 58:267-288, 1996.
[20] M. Yuan and Y. Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94:19-35, 2007.
[21] J. Friedman, H. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical Lasso. Biostatistics, 9:432-441, 2008.
[22] A. J. Rothman, P. J. Bickel, E. Levina, and J. Zhu. Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2:494-515, 2008.
[23] W. Sun, J. Wang, and Y. Fang. Consistent selection of tuning parameters via variable selection stability. Journal of Machine Learning Research, 14:3419-3440, 2013.
[24] M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. Springer, 2011.
[25] J. Fan, Y. Feng, and Y. Wu. Network exploration via the adaptive Lasso and SCAD penalties. Annals of Statistics, 3:521-541, 2009.
[26] P. Zhao and B. Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7:2541-2567, 2006.
[27] P. Ravikumar, M. J. Wainwright, G. Raskutti, and B. Yu. High-dimensional covariance estimation by minimizing l1-penalized log-determinant divergence. Electronic Journal of Statistics, 5:935-980, 2011.
[28] Z. Wang, H. Liu, and T. Zhang. Optimal computational and statistical rates of convergence for sparse nonconvex learning problems. Annals of Statistics, 42:2164-2201, 2014.
[29] T. Zhao, H. Liu, K. Roeder, J. Lafferty, and L. Wasserman. The huge package for high-dimensional undirected graph estimation in R. Journal of Machine Learning Research, 13:1059-1062, 2012.
[30] A. Gupta and D. Nagar. Matrix Variate Distributions. Chapman and Hall/CRC Press, 2000.
[31] P. Hoff. Separable covariance arrays via the Tucker product, with applications to multivariate relational data. Bayesian Analysis, 6:179-196, 2011.
[32] A. P. Dawid. Some matrix-variate distribution theory: Notational considerations and a Bayesian application. Biometrika, 68:265-274, 1981.
[33] S. Negahban and M. J. Wainwright. Estimation of (near) low-rank matrices with noise and high-dimensional scaling. Annals of Statistics, 39:1069-1097, 2011.
Convergence Rates of Active Learning
for Maximum Likelihood Estimation
Kamalika Chaudhuri
Sham M. Kakade
Praneeth Netrapalli
Sujay Sanghavi
Abstract
An active learner is given a class of models, a large set of unlabeled examples, and
the ability to interactively query labels of a subset of these examples; the goal of
the learner is to learn a model in the class that fits the data well.
Previous theoretical work has rigorously characterized label complexity of active
learning, but most of this work has focused on the PAC or the agnostic PAC model.
In this paper, we shift our attention to a more general setting ? maximum likelihood estimation. Provided certain conditions hold on the model class, we provide
a two-stage active learning algorithm for this problem. The conditions we require are fairly general, and cover the widely popular class of Generalized Linear
Models, which in turn, include models for binary and multi-class classification,
regression, and conditional random fields.
We provide an upper bound on the label requirement of our algorithm, and a lower
bound that matches it up to lower order terms. Our analysis shows that unlike
binary classification in the realizable case, just a single extra round of interaction is
sufficient to achieve near-optimal performance in maximum likelihood estimation.
On the empirical side, the recent work in [12] and [13] (on active linear and
logistic regression) shows the promise of this approach.
1
Introduction
In active learning, we are given a sample space X , a label space Y, a class of models that map X to
Y, and a large set U of unlabelled samples. The goal of the learner is to learn a model in the class
with small target error while interactively querying the labels of as few of the unlabelled samples as
possible.
Most theoretical work on active learning has focussed on the PAC or the agnostic PAC model, where
the goal is to learn binary classifiers that belong to a particular hypothesis class [2, 14, 10, 7, 3, 4, 22],
and there has been only a handful of exceptions [19, 9, 20]. In this paper, we shift our attention to
a more general setting: maximum likelihood estimation (MLE), where $\Pr(Y|X)$ is described by a model $\theta$ belonging to a model class $\Theta$. We show that when data is generated by a model in this class, we can do active learning provided the model class $\Theta$ has the following simple property: the Fisher information matrix for any model $\theta \in \Theta$ at any $(x, y)$ depends only on $x$ and $\theta$. This condition is
satisfied in a number of widely applicable model classes, such as Linear Regression and Generalized
Linear Models (GLMs), which in turn includes models for Multiclass Classification and Conditional
Random Fields. Consequently, we can provide active learning algorithms for maximum likelihood
estimation in all these model classes.
The standard solution to active MLE estimation in the statistics literature is to select samples for
label query by optimizing a class of summary statistics of the asymptotic covariance matrix of the
Dept. of CS, University of California at San Diego. Email: kamalika@cs.ucsd.edu
Dept. of CS and of Statistics, University of Washington. Email: sham@cs.washington.edu
Microsoft Research New England. Email: praneeth@microsoft.com
Dept. of ECE, The University of Texas at Austin. Email: sanghavi@mail.utexas.edu
estimator [6]. The literature, however, does not provide any guidance towards which summary statistic should be used, or any analysis of the solution quality when a finite number of labels or samples
are available. There has also been some recent work in the machine learning community [12, 13, 19]
on this problem; but these works focus on simple special cases (such as linear regression [19, 12] or
logistic regression [13]), and only [19] involves a consistency and finite sample analysis.
In this work, we consider the problem in its full generality, with the goal of minimizing the expected
log-likelihood error over the unlabelled data. We provide a two-stage active learning algorithm for
this problem. In the first stage, our algorithm queries the labels of a small number of random samples
from the data distribution in order to construct a crude estimate ?1 of the optimal parameter ?? . In
the second stage, we select a set of samples for label query by optimizing a summary statistic of the
covariance matrix of the estimator at ?1 ; however, unlike the experimental design work, our choice
of statistic is directly motivated by our goal of minimizing the expected log-likelihood error, which
guides us towards the right objective.
We provide a finite sample analysis of our algorithm when some regularity conditions hold and
when the negative log likelihood function is convex. Our analysis is still fairly general, and applies
to Generalized Linear Models, for example. We match our upper bound with a corresponding lower
bound, which shows that the convergence rate of our algorithm is optimal (except for lower order
terms); the finite sample convergence rate of any algorithm that uses (perhaps multiple rounds of)
sample selection and maximum likelihood estimation is either the same or higher than that of our
algorithm. This implies that unlike what is observed in learning binary classifiers, a single round of
interaction is sufficient to achieve near-optimal log likelihood error for ML estimation.
1.1
Related Work
Previous theoretical work on active learning has focussed on learning a classifier belonging to a
hypothesis class H in the PAC model. Both the realizable and non-realizable cases have been considered. In the realizable case, a line of work [7, 18] has looked at a generalization of binary search;
while their algorithms enjoy low label complexity, this style of algorithms is inconsistent in the
presence of noise. The two main styles of algorithms for the non-realizable case are disagreementbased active learning [2, 10, 4], and margin or confidence-based active learning [3, 22]. While active
learning in the realizable case has been shown to achieve an exponential improvement in label complexity over passive learning [2, 7, 14], in the agnostic case, the gains are more modest (sometimes
a constant factor) [14, 10, 8]. Moreover, lower bounds [15] show that the label requirement of any
agnostic active learning algorithm is always at least $\Omega(\nu^2/\epsilon^2)$, where $\nu$ is the error of the best hypothesis in the class, and $\epsilon$ is the target error. In contrast, our setting is much more general than
binary classification, and includes regression, multi-class classification and certain kinds of conditional random fields that are not covered by previous work.
[19] provides an active learning algorithm for linear regression problem under model mismatch.
Their algorithm attempts to learn the location of the mismatch by fitting increasingly refined partitions of the domain, and then uses this information to reweight the examples. If the partition is
highly refined, then the computational complexity of the resulting algorithm may be exponential
in the dimension of the data domain. In contrast, our algorithm applies to a more general setting,
and while we do not address model mismatch, our algorithm has polynomial time complexity. [1]
provides an active learning algorithm for Generalized Linear Models in an online selective sampling
setting; however, unlike ours, their input is a stream of unlabelled examples, and at each step, they
need to decide whether the label of the current example should be queried.
Our work is also related to the classical statistical work on optimal experiment design, which mostly
considers maximum likelihood estimation [6]. For uni-variate estimation, they suggest selecting
samples to maximize the Fisher information which corresponds to minimizing the variance of the
regression coefficient. When ? is multi-variate, the Fisher information is a matrix; in this case, there
are multiple notions of optimal design which correspond to maximizing different parameters of the
Fisher information matrix. For example, D-optimality maximizes the determinant, and A-optimality
maximizes the trace of the Fisher information. In contrast with this work, we directly optimize
the expected log-likelihood over the unlabelled data which guides us to the appropriate objective
function; moreover, we provide consistency and finite sample guarantees.
2
Finally, on the empirical side, [13] and [12] derive algorithms similar to ours for logistic and linear
regression based on projected gradient descent. Notably, these works provide promising empirical
evidence for this approach to active learning; however, no consistency guarantees or convergence
rates are provided (the rates presented in these works are not stated in terms of the sample size). In
contrast, our algorithm applies more generally, and we provide consistency guarantees and convergence rates. Moreover, unlike [13], our logistic regression algorithm uses a single extra round of
interaction, and our results illustrate that a single round is sufficient to achieve a convergence rate
that is optimal except for lower order terms.
2 The Model
We begin with some notation. We are given a pool $U = \{x_1, \ldots, x_n\}$ of $n$ unlabelled examples drawn from some instance space $\mathcal{X}$, and the ability to interactively query labels belonging to a label space $\mathcal{Y}$ of $m$ of these examples. In addition, we are given a family of models $\mathcal{M} = \{p(y|x, \theta), \theta \in \Theta\}$ parameterized by $\theta \in \Theta \subseteq \mathbb{R}^d$. We assume that there exists an unknown parameter $\theta^* \in \Theta$ such that querying the label of an $x_i \in U$ generates a $y_i$ drawn from the distribution $p(y|x_i, \theta^*)$. We also abuse notation and use $U$ to denote the uniform distribution over the examples in $U$.
We consider the fixed-design (or transductive) setting, where our goal is to minimize the error on the fixed set of points $U$. For any $x \in \mathcal{X}$, $y \in \mathcal{Y}$ and $\theta \in \Theta$, we define the negative log-likelihood function $L(y|x, \theta)$ as:
$$L(y|x, \theta) = -\log p(y|x, \theta).$$
Our goal is to find a $\widehat{\theta}$ to minimize $L_U(\widehat{\theta})$, where
$$L_U(\theta) = \mathbb{E}_{X \sim U,\; Y \sim p(Y|X, \theta^*)}[L(Y|X, \theta)],$$
by interactively querying labels for a subset of $U$ of size $m$, where we allow label queries with replacement, i.e., the label of an example may be queried multiple times.
An additional quantity of interest to us is the Fisher information matrix, or the Hessian of the negative log-likelihood $L(y|x, \theta)$, which determines the convergence rate. For our active learning procedure to work correctly, we require the following condition.
Condition 1. For any $x \in \mathcal{X}$, $y \in \mathcal{Y}$, $\theta \in \Theta$, the Fisher information $\frac{\partial^2 L(y|x,\theta)}{\partial \theta^2}$ is a function of only $x$ and $\theta$ (and does not depend on $y$).
Condition 1 is satisfied by a number of models of practical interest; examples include linear regression and generalized linear models. Section 5.1 provides a brief derivation of Condition 1 for generalized linear models.
For any $x$, $y$ and $\theta$, we use $I(x, \theta)$ to denote the Hessian $\frac{\partial^2 L(y|x,\theta)}{\partial \theta^2}$; observe that by Condition 1, this is just a function of $x$ and $\theta$. Let $\lambda$ be any distribution over the unlabelled samples in $U$; for any $\theta \in \Theta$, we write
$$I_\lambda(\theta) = \mathbb{E}_{X \sim \lambda}[I(X, \theta)].$$
3 Algorithm
The main idea behind our algorithm is to sample $x_i$ from a well-designed distribution $\lambda$ over $U$, query the labels of these samples, and perform ML estimation over them. To ensure good performance, $\lambda$ should be chosen carefully, and our choice of $\lambda$ is motivated by Lemma 1. Suppose the labels $y_i$ are generated according to $y_i \sim p(y|x_i, \theta^*)$. Lemma 1 states that the expected log-likelihood error of the ML estimate with respect to $m$ samples from $\lambda$ in this case is essentially $\mathrm{Tr}\big(I_\lambda(\theta^*)^{-1} I_U(\theta^*)\big)/m$.
This suggests selecting $\lambda$ as the distribution $\lambda^*$ that minimizes $\mathrm{Tr}\big(I_{\lambda^*}(\theta^*)^{-1} I_U(\theta^*)\big)$. Unfortunately, we cannot do this as $\theta^*$ is unknown. We resolve this problem through a two-stage algorithm: in the first stage, we use a small number $m_1$ of samples to construct a crude estimate $\theta_1$ of $\theta^*$ (Steps 1-2). In the second stage, we calculate a distribution $\lambda_1$ which minimizes $\mathrm{Tr}\big(I_{\lambda_1}(\theta_1)^{-1} I_U(\theta_1)\big)$ and draw samples from (a slight modification of) this distribution for a finer estimation of $\theta^*$ (Steps 3-5).
Algorithm 1 ActiveSetSelect
Input: Samples $x_i$, for $i = 1, \ldots, n$
1: Draw $m_1$ samples u.a.r. from $U$, and query their labels to get $S_1$.
2: Use $S_1$ to solve the MLE problem:
$$\theta_1 = \arg\min_{\theta \in \Theta} \sum_{(x_i, y_i) \in S_1} L(y_i|x_i, \theta).$$
3: Solve the following SDP (see Lemma 3):
$$a^* = \arg\min_a \; \mathrm{Tr}\big(S^{-1} I_U(\theta_1)\big) \quad \text{s.t.} \quad S = \sum_i a_i I(x_i, \theta_1), \quad 0 \le a_i \le 1, \quad \sum_i a_i = m_2.$$
4: Draw $m_2$ examples using the probability distribution $\widetilde{\lambda} = \gamma \lambda_1 + (1 - \gamma) U$, where $\lambda_{1,i} = a_i^*/m_2$ and $\gamma = 1 - m_2^{-1/6}$. Query their labels to get $S_2$.
5: Use $S_2$ to solve the MLE problem:
$$\theta_2 = \arg\min_{\theta \in \Theta} \sum_{(x_i, y_i) \in S_2} L(y_i|x_i, \theta).$$
Output: $\theta_2$
The distribution $\lambda_1$ is modified slightly to $\widetilde{\lambda}$ (in Step 4) to ensure that $I_{\widetilde{\lambda}}(\theta^*)$ is well conditioned with respect to $I_U(\theta^*)$.
The algorithm is formally presented in Algorithm 1.
Finally, note that Steps 1-2 are necessary because $I_U$ and $I_\lambda$ are functions of $\theta$. In certain special cases such as linear regression, $I_U$ and $I_\lambda$ are independent of $\theta$. In those cases, Steps 1-2 are unnecessary, and we may skip directly to Step 3.
4 Performance Guarantees
The following regularity conditions are essentially a quantified version of the standard Local Asymptotic Normality (LAN) conditions for studying maximum likelihood estimation (see [5, 21]).
Assumption 1. (Regularity conditions for LAN)
1. Smoothness: The first three derivatives of $L(y|x,\theta)$ exist in all interior points of $\Theta \subseteq \mathbb{R}^d$.
2. Compactness: $\Theta$ is compact and $\theta^*$ is an interior point of $\Theta$.
3. Strong Convexity: $I_U(\theta^*) = \frac{1}{n}\sum_{i=1}^n I(x_i, \theta^*)$ is positive definite with smallest singular value $\sigma_{\min} > 0$.
4. Lipschitz continuity: There exists a neighborhood $B$ of $\theta^*$ and a constant $L_3$ such that for all $x \in U$, $I(x, \theta)$ is $L_3$-Lipschitz in this neighborhood:
$$\big\|I_U(\theta^*)^{-1/2}\big(I(x, \theta) - I(x, \theta')\big)\, I_U(\theta^*)^{-1/2}\big\|_2 \le L_3\, \|\theta - \theta'\|_{I_U(\theta^*)}$$
for every $\theta, \theta' \in B$.
5. Concentration at $\theta^*$: For any $x \in U$ and $y$, we have (with probability one)
$$\|\nabla L(y|x, \theta^*)\|_{I_U(\theta^*)^{-1}} \le L_1 \quad \text{and} \quad \big\|I_U(\theta^*)^{-1/2}\, I(x, \theta^*)\, I_U(\theta^*)^{-1/2}\big\|_2 \le L_2.$$
6. Boundedness: $\max_{(x,y)} \sup_{\theta \in \Theta} |L(x, y|\theta)| \le R$.
In addition to the above, we need one extra condition, which is essentially a pointwise self-concordance. This condition is satisfied by a vast class of models, including the generalized linear models.
Assumption 2. Point-wise self-concordance:
$$-L_4\, \|\theta - \theta^*\|_2\, I(x, \theta^*) \;\preceq\; I(x, \theta) - I(x, \theta^*) \;\preceq\; L_4\, \|\theta - \theta^*\|_2\, I(x, \theta^*).$$
Definition 1. [Optimal Sampling Distribution $\lambda^*$] We define the optimal sampling distribution $\lambda^*$ over the points in $U$ as the distribution $\lambda^* = (\lambda_1^*, \ldots, \lambda_n^*)$ for which $\lambda_i^* \ge 0$, $\sum_i \lambda_i^* = 1$, and $\mathrm{Tr}\big(I_{\lambda^*}(\theta^*)^{-1} I_U(\theta^*)\big)$ is as small as possible.
Definition 1 is motivated by Lemma 1, which indicates that under some mild regularity conditions, an ML estimate calculated on samples drawn from $\lambda^*$ will provide the best convergence rates (including the right constant factor) for the expected log-likelihood error.
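Evaluating the objective of Definition 1 for any candidate distribution reduces to two weighted averages of per-example Fisher matrices; a short sketch (ours):

import numpy as np

def sampling_objective(lmbda, I_list):
    # Tr(I_lambda(theta*)^{-1} I_U(theta*)) for a candidate distribution lmbda over U.
    I_lam = sum(w * I for w, I in zip(lmbda, I_list))   # I_lambda = E_{X ~ lambda}[I(X)]
    I_U = sum(I_list) / len(I_list)                     # uniform average over U
    return np.trace(np.linalg.solve(I_lam, I_U))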
We now present the main result of our paper. The proof of the following theorem and all the supporting lemmas will be presented in Appendix A.
Theorem 1. Suppose the regularity conditions in Assumptions 1 and 2 hold. Let $\beta > 10$, and let the number of samples used in Step (1) be
$$m_1 \ge O\left(\max\left(L_2^2 \log^2 d,\; \Big(L_1^2 L_3^2 + \frac{L_4^2}{\sigma_{\min}}\Big)\log^2 d,\; \frac{\mathrm{diameter}(\Theta)}{\mathrm{Tr}\big(I_U(\theta^*)^{-1}\big)},\; \beta\,\mathrm{Tr}\big(I_U(\theta^*)\big)\right)\right).$$
Then with probability $1 - \delta$, the expected log-likelihood error of the estimate $\theta_2$ of Algorithm 1 is bounded as:
$$\mathbb{E}[L_U(\theta_2)] - L_U(\theta^*) \le \Big(1 + \frac{1}{\beta}\Big)^4 (1 + \widetilde{\epsilon}_{m_2})\, \frac{\mathrm{Tr}\big(I_{\lambda^*}(\theta^*)^{-1} I_U(\theta^*)\big)}{m_2} + \frac{R}{m_2^2}, \qquad (1)$$
where $\lambda^*$ is the optimal sampling distribution in Definition 1 and $\widetilde{\epsilon}_{m_2} = O\big((L_1 L_3 + \sqrt{L_2})\sqrt{\log(d\, m_2)}/m_2^{1/6}\big)$.
Moreover, for any sampling distribution $\lambda$ satisfying $I_\lambda(\theta^*) \succeq c\, I_U(\theta^*)$ and a label constraint of $m_2$, we have the following lower bound on the expected log-likelihood error of the ML estimate:
$$\mathbb{E}\big[L_U(\widehat{\theta}_\lambda)\big] - L_U(\theta^*) \ge (1 - \epsilon_{m_2})\, \frac{\mathrm{Tr}\big(I_\lambda(\theta^*)^{-1} I_U(\theta^*)\big)}{m_2} - \frac{L_1^2}{c\, m_2^2}, \qquad (2)$$
where $\epsilon_{m_2} \stackrel{\mathrm{def}}{=} \widetilde{\epsilon}_{m_2}/(c^2 m_2^{1/3})$.
Remark 1. (Restricting to Maximum Likelihood Estimation) Our restriction to maximum likelihood estimators is minor, as this is close to minimax optimal (see [16]). Minor improvements with certain kinds of estimators, such as the James-Stein estimator, are possible.
4.1 Discussions
Several remarks about Theorem 1 are in order.
The high probability bound in Theorem 1 is with respect to the samples drawn in $S_1$; provided these samples are representative (which happens with probability $1 - \delta$), the output $\theta_2$ of Algorithm 1 will satisfy (1). Additionally, Theorem 1 assumes that the labels are sampled with replacement; in other words, we can query the label of a point $x_i$ multiple times. Removing this assumption is an avenue for future work.
Second, the highest-order term in both (1) and (2) is $\mathrm{Tr}\big(I_{\lambda^*}(\theta^*)^{-1} I_U(\theta^*)\big)/m$. The terms involving $\epsilon_{m_2}$ and $\widetilde{\epsilon}_{m_2}$ are lower order, as both $\epsilon_{m_2}$ and $\widetilde{\epsilon}_{m_2}$ are $o(1)$. Moreover, if $\beta = \omega(1)$, then the term involving $\beta$ in (1) is of a lower order as well. Observe that $\beta$ also measures the tradeoff between $m_1$ and $m_2$, and as long as $\beta = o(\sqrt{m_2})$, $m_1$ is also of a lower order than $m_2$. Thus, provided $\beta$ is $\omega(1)$ and $o(\sqrt{m_2})$, the convergence rate of our algorithm is optimal except for lower order terms.
Finally, the lower bound (2) applies to distributions $\lambda$ for which $I_\lambda(\theta^*) \succeq c\, I_U(\theta^*)$, where $c$ occurs in the lower order terms of the bound. This constraint is not very restrictive, and does not affect the asymptotic rate. Observe that $I_U(\theta^*)$ is full rank. If $I_\lambda(\theta^*)$ is not full rank, then the expected log-likelihood error of the ML estimate with respect to $\lambda$ will not be consistent, and thus such a $\lambda$ will never achieve the optimal rate. If $I_\lambda(\theta^*)$ is full rank, then there always exists a $c$ for which $I_\lambda(\theta^*) \succeq c\, I_U(\theta^*)$. Thus (2) essentially states that for distributions where $I_\lambda(\theta^*)$ is close to being rank-deficient, the asymptotic convergence rate of $O\big(\mathrm{Tr}(I_\lambda(\theta^*)^{-1} I_U(\theta^*))/m_2\big)$ is achieved only at larger values of $m_2$.
4.2 Proof Outline
Our main result relies on the following three steps.
4.2.1 Bounding the Log-likelihood Error
First, we characterize the log-likelihood error (with respect to $U$) of the empirical risk minimizer (ERM) estimate obtained using a sampling distribution $\lambda$. Concretely, let $\lambda$ be a distribution on $U$, and let $\widehat{\theta}_\lambda$ be the ERM estimate using the distribution $\lambda$:
$$\widehat{\theta}_\lambda = \arg\min_{\theta \in \Theta} \frac{1}{m_2}\sum_{i=1}^{m_2} L(Y_i|X_i, \theta), \qquad (3)$$
where $X_i \sim \lambda$ and $Y_i \sim p(y|X_i, \theta^*)$. The core of our analysis is Lemma 1, which gives a precise estimate of the log-likelihood error $\mathbb{E}\big[L_U(\widehat{\theta}_\lambda)\big] - L_U(\theta^*)$.
Lemma 1. Suppose $L$ satisfies the regularity conditions in Assumptions 1 and 2. Let $\lambda$ be a distribution on $U$ and $\widehat{\theta}_\lambda$ be the ERM estimate (3) using $m_2$ labeled examples. Suppose further that $I_\lambda(\theta^*) \succeq \frac{1}{c}\, I_U(\theta^*)$ for some constant $c \ge 1$. Then, for any $p \ge 2$ and $m_2$ large enough such that $\epsilon_{m_2} \stackrel{\mathrm{def}}{=} O\big(c^2 (L_1 L_3 + \sqrt{L_2})\sqrt{p \log(d\, m_2)/m_2}\big) < 1$, we have:
$$(1 - \epsilon_{m_2})\, \frac{\sigma_\lambda^2}{m_2} - \frac{L_1^2}{c\, m_2^{p/2}} \;\le\; \mathbb{E}\big[L_U(\widehat{\theta}_\lambda)\big] - L_U(\theta^*) \;\le\; (1 + \epsilon_{m_2})\, \frac{\sigma_\lambda^2}{m_2} + \frac{R}{m_2^p},$$
where $\sigma_\lambda^2 \stackrel{\mathrm{def}}{=} \mathrm{Tr}\big(I_\lambda(\theta^*)^{-1} I_U(\theta^*)\big)$.
4.2.2 Approximating $\lambda^*$
Lemma 1 motivates sampling from the optimal sampling distribution $\lambda^*$ that minimizes $\mathrm{Tr}\big(I_{\lambda^*}(\theta^*)^{-1} I_U(\theta^*)\big)$. However, this quantity depends on $\theta^*$, which we do not know. To resolve this issue, our algorithm first queries the labels of a small fraction of points ($m_1$) and solves an ML estimation problem to obtain a coarse estimate $\theta_1$ of $\theta^*$.
How close should $\theta_1$ be to $\theta^*$? Our analysis indicates that it is sufficient for $\theta_1$ to be close enough that, for any $x$, $I(x, \theta_1)$ is a constant-factor spectral approximation to $I(x, \theta^*)$; the number of samples needed to achieve this is analyzed in Lemma 2.
Lemma 2. Suppose $L$ satisfies the regularity conditions in Assumptions 1 and 2. If the number of samples used in the first step satisfies
$$m_1 > O\left(\max\left(L_2^2 \log^2 d,\; \Big(L_1^2 L_3^2 + \frac{L_4^2}{\sigma_{\min}}\Big)\log^2 d,\; \frac{\mathrm{diameter}(\Theta)}{\mathrm{Tr}\big(I_U(\theta^*)^{-1}\big)},\; \beta\,\mathrm{Tr}\big(I_U(\theta^*)\big)\right)\right),$$
then we have
$$\Big(1 - \frac{1}{\beta}\Big) I(x, \theta^*) \preceq I(x, \theta_1) \preceq \Big(1 + \frac{1}{\beta}\Big) I(x, \theta^*) \quad \text{for all } x \in \mathcal{X}$$
with probability greater than $1 - \delta$.
4.2.3 Computing $\lambda_1$
Third, we are left with the task of obtaining a distribution $\lambda_1$ that minimizes the log-likelihood error. We now pose this optimization problem as an SDP.
From Lemmas 1 and 2, it is clear that we should aim to obtain a sampling distribution $\lambda = \big(\frac{a_i}{m_2} : i \in [n]\big)$ minimizing $\mathrm{Tr}\big(I_\lambda(\theta_1)^{-1} I_U(\theta_1)\big)$. Let $I_U(\theta_1) = \sum_j \sigma_j v_j v_j^\top$ be the singular value decomposition (SVD) of $I_U(\theta_1)$. Since $\mathrm{Tr}\big(I_\lambda(\theta_1)^{-1} I_U(\theta_1)\big) = \sum_{j=1}^d \sigma_j\, v_j^\top I_\lambda(\theta_1)^{-1} v_j$, this is equivalent to solving:
$$\min_{a,c} \sum_{j=1}^d \sigma_j c_j \quad \text{s.t.} \quad S = \sum_i a_i I(x_i, \theta_1), \quad v_j^\top S^{-1} v_j \le c_j, \quad a_i \in [0, 1], \quad \sum_i a_i = m_2. \qquad (4)$$
Among the above constraints, the constraint $v_j^\top S^{-1} v_j \le c_j$ seems problematic. However, the Schur complement formula tells us that
$$\begin{pmatrix} c_j & v_j^\top \\ v_j & S \end{pmatrix} \succeq 0 \;\Longleftrightarrow\; S \succeq 0 \;\text{ and }\; v_j^\top S^{-1} v_j \le c_j.$$
In our case, we know that $S \succeq 0$, since it is a sum of positive semidefinite matrices. The above argument proves the following lemma.
Lemma 3. The following two optimization programs are equivalent:
$$\min_a\; \mathrm{Tr}\big(S^{-1} I_U(\theta_1)\big) \quad \text{s.t.} \quad S = \sum_i a_i I(x_i, \theta_1), \quad a_i \in [0, 1], \quad \sum_i a_i = m_2,$$
and
$$\min_{a,c}\; \sum_{j=1}^d \sigma_j c_j \quad \text{s.t.} \quad S = \sum_i a_i I(x_i, \theta_1), \quad \begin{pmatrix} c_j & v_j^\top \\ v_j & S \end{pmatrix} \succeq 0, \quad a_i \in [0, 1], \quad \sum_i a_i = m_2,$$
where $I_U(\theta_1) = \sum_j \sigma_j v_j v_j^\top$ denotes the SVD of $I_U(\theta_1)$.
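The LMI form of Lemma 3 can be handed directly to an off-the-shelf convex solver. A minimal sketch using CVXPY (our illustration; the variable names and the default solver are assumptions):

import numpy as np
import cvxpy as cp

def solve_step3(I_list, I_U, m2):
    # Step 3 of Algorithm 1 via the SDP of Lemma 3.
    # I_list: per-example Fisher matrices I(x_i, theta_1); I_U: their uniform average.
    n, d = len(I_list), I_U.shape[0]
    sigma, V = np.linalg.eigh(I_U)                  # I_U = sum_j sigma_j v_j v_j^T
    a = cp.Variable(n, nonneg=True)
    c = cp.Variable(d)
    S = sum(a[i] * I_list[i] for i in range(n))
    constraints = [a <= 1, cp.sum(a) == m2]
    for j in range(d):                              # Schur-complement LMIs
        vj = V[:, j].reshape(-1, 1)
        constraints.append(cp.bmat([[cp.reshape(c[j], (1, 1)), vj.T],
                                    [vj, S]]) >> 0)
    cp.Problem(cp.Minimize(sigma @ c), constraints).solve()
    return a.value / m2                             # the distribution lambda_1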
5 Illustrative Examples
We next present some examples that illustrate Theorem 1. We begin by showing that Condition 1 is
satisfied by the popular class of Generalized Linear Models.
5.1 Derivations for Generalized Linear Models
A generalized linear model is specified by three components: a linear model, a sufficient statistic, and a member of the exponential family. Let $\eta$ be a linear model: $\eta = \theta^\top X$. Then, in a Generalized Linear Model (GLM), $Y$ is drawn from an exponential family distribution with parameter $\eta$. Specifically, $p(Y = y|\eta) = e^{\eta^\top t(y) - A(\eta)}$, where $t(\cdot)$ is the sufficient statistic and $A(\eta)$ is the log-partition function. From properties of the exponential family, the log-likelihood is written as $\log p(y|\eta) = \eta^\top t(y) - A(\eta)$. If we take $\eta = \theta^\top x$ and differentiate with respect to $\theta$, we have $\frac{\partial \log p(y|\theta,x)}{\partial \theta} = x\, t(y) - x\, A'(\theta^\top x)$. Taking derivatives again gives $\frac{\partial^2 \log p(y|\theta,x)}{\partial \theta^2} = -x x^\top A''(\theta^\top x)$, which is independent of $y$.
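Condition 1 can be sanity-checked numerically: the finite-difference Hessian of the GLM negative log-likelihood agrees with $x x^\top A''(\theta^\top x)$ and does not change with $y$. A small sketch for the logistic case (our illustration):

import numpy as np

def nll(theta, x, y):
    # Logistic GLM: -y * eta + A(eta) with A(eta) = log(1 + e^eta), t(y) = y.
    eta = x @ theta
    return -y * eta + np.logaddexp(0.0, eta)

def fd_hessian(f, theta, eps=1e-4):
    # Central finite-difference Hessian of a scalar function.
    d = theta.size
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            def g(si, sj):
                t = theta.copy()
                t[i] += si * eps
                t[j] += sj * eps
                return f(t)
            H[i, j] = (g(1, 1) - g(1, -1) - g(-1, 1) + g(-1, -1)) / (4 * eps ** 2)
    return H

rng = np.random.default_rng(0)
x, theta = rng.normal(size=3), rng.normal(size=3)
H0 = fd_hessian(lambda t: nll(t, x, 0.0), theta)
H1 = fd_hessian(lambda t: nll(t, x, 1.0), theta)
p = 1.0 / (1.0 + np.exp(-(x @ theta)))
print(np.allclose(H0, H1, atol=1e-5))                             # same for y = 0 and y = 1
print(np.allclose(H0, p * (1 - p) * np.outer(x, x), atol=1e-5))   # equals A''(eta) x x^T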
5.2 Specific Examples
We next present three illustrative examples of problems that our algorithm may be applied to.
Linear Regression. Our first example is linear regression. In this case, $x \in \mathbb{R}^d$ and $Y \in \mathbb{R}$ are generated according to the distribution $Y = \theta^{*\top} X + \epsilon$, where $\epsilon$ is a noise variable drawn from $N(0, 1)$. In this case, the negative log-likelihood function is $L(y|x, \theta) = \frac{1}{2}(y - \theta^\top x)^2$, and the corresponding Fisher information matrix is $I(x, \theta) = x x^\top$. Observe that in this (very special) case, the Fisher information matrix does not depend on $\theta$; as a result we can eliminate the first two steps of the algorithm and proceed directly to Step 3. If $\Sigma = \frac{1}{n}\sum_i x_i x_i^\top$ is the covariance matrix of $U$, then Theorem 1 tells us that we need to query labels from a distribution $\lambda^*$ with covariance matrix $\Sigma_{\lambda^*}$ such that $\mathrm{Tr}\big(\Sigma_{\lambda^*}^{-1} \Sigma\big)$ is minimized.
We illustrate the advantages of active learning through a simple example. Suppose $U$ is the unlabelled distribution
$$x_i = \begin{cases} e_1 & \text{w.p. } 1 - \frac{d-1}{d^2}, \\ e_j & \text{w.p. } \frac{1}{d^2} \text{ for } j \in \{2, \ldots, d\}, \end{cases}$$
where $e_j$ is the standard unit vector in the $j$-th direction. The covariance matrix $\Sigma$ of $U$ is a diagonal matrix with $\Sigma_{11} = 1 - \frac{d-1}{d^2}$ and $\Sigma_{jj} = \frac{1}{d^2}$ for $j \ge 2$. For passive learning over $U$, we query labels of examples drawn from $U$, which gives us a convergence rate of $\frac{\mathrm{Tr}(\Sigma^{-1}\Sigma)}{m} = \frac{d}{m}$. On the other hand, active learning chooses to sample examples from the distribution $\lambda^*$ such that
$$x_i = \begin{cases} e_1 & \text{w.p. } \approx 1 - \frac{d-1}{2d}, \\ e_j & \text{w.p. } \approx \frac{1}{2d} \text{ for } j \in \{2, \ldots, d\}, \end{cases}$$
where $\approx$ indicates that the probabilities hold up to $O\big(\frac{1}{d^2}\big)$. This has a diagonal covariance matrix $\Sigma_{\lambda^*}$ with $\Sigma_{\lambda^*,11} \approx 1 - \frac{d-1}{2d} = \frac{d+1}{2d}$ and $\Sigma_{\lambda^*,jj} \approx \frac{1}{2d}$ for $j \ge 2$, and a convergence rate of
$$\frac{\mathrm{Tr}\big(\Sigma_{\lambda^*}^{-1}\Sigma\big)}{m} \approx \frac{1}{m}\left(\frac{2d}{d+1}\Big(1 - \frac{d-1}{d^2}\Big) + (d-1) \cdot 2d \cdot \frac{1}{d^2}\right) \approx \frac{4}{m},$$
which does not grow with $d$!
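The two rates are easy to verify numerically (our sketch; both covariances are diagonal, so each trace is a sum of ratios):

import numpy as np

for d in (10, 100, 1000):
    p_passive = np.r_[1 - (d - 1) / d**2, np.full(d - 1, 1 / d**2)]   # covariance of U
    p_active = np.r_[1 - (d - 1) / (2 * d), np.full(d - 1, 1 / (2 * d))]
    rate_passive = (p_passive / p_passive).sum()      # Tr(Sigma^{-1} Sigma) = d
    rate_active = (p_passive / p_active).sum()        # Tr(Sigma_lambda^{-1} Sigma) -> 4
    print(d, rate_passive, round(rate_active, 3))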
Logistic Regression. Our second example is logistic regression for binary classification. In this case, $x \in \mathbb{R}^d$, $Y \in \{-1, 1\}$, and the negative log-likelihood function is $L(y|x, \theta) = \log\big(1 + e^{-y\theta^\top x}\big)$; the corresponding Fisher information is
$$I(x, \theta) = \frac{e^{\theta^\top x}}{\big(1 + e^{\theta^\top x}\big)^2}\, x x^\top.$$
For illustration, suppose $\|\theta^*\|_2$ and $\|x\|_2$ are bounded by a constant and the covariance matrix $\Sigma$ is sandwiched between two multiples of the identity in the PSD ordering, i.e., $\frac{c}{d} I \preceq \Sigma \preceq \frac{C}{d} I$ for some constants $c$ and $C$. Then the regularity Assumptions 1 and 2 are satisfied for constant values of $L_1$, $L_2$, $L_3$ and $L_4$. In this case, Theorem 1 states that choosing $m_1$ to be $\omega(d)$ gives us the optimal convergence rate of $(1 + o(1))\,\frac{\mathrm{Tr}\big(I_{\lambda^*}(\theta^*)^{-1} I_U(\theta^*)\big)}{m_2}$.
Multinomial Logistic Regression. Our third example is multinomial logistic regression for multiclass classification. In this case, $Y \in \{1, \ldots, K\}$, $x \in \mathbb{R}^d$, and the parameter matrix $\theta \in \mathbb{R}^{(K-1) \times d}$. The negative log-likelihood function is written as $L(y|x, \theta) = -\theta_y^\top x + \log\big(1 + \sum_{k=1}^{K-1} e^{\theta_k^\top x}\big)$ if $y \ne K$, and $L(y = K|x, \theta) = \log\big(1 + \sum_{k=1}^{K-1} e^{\theta_k^\top x}\big)$ otherwise. The corresponding Fisher information matrix is a $(K-1)d \times (K-1)d$ matrix, which is obtained as follows. Let $F$ be the $(K-1) \times (K-1)$ matrix with:
$$F_{ii} = \frac{e^{\theta_i^\top x}\big(1 + \sum_{k \ne i} e^{\theta_k^\top x}\big)}{\big(1 + \sum_k e^{\theta_k^\top x}\big)^2}, \qquad F_{ij} = -\frac{e^{\theta_i^\top x + \theta_j^\top x}}{\big(1 + \sum_k e^{\theta_k^\top x}\big)^2}.$$
Then, $I(x, \theta) = F \otimes x x^\top$.
Similar to the logistic regression example, suppose $\|\theta_y^*\|_2$ and $\|x\|_2$ are bounded by a constant and the covariance matrix $\Sigma$ satisfies $\frac{c}{d} I \preceq \Sigma \preceq \frac{C}{d} I$ for some constants $c$ and $C$. Since $F = \mathrm{diag}(p_i) - p p^\top$, where $p_i = P(y = i|x, \theta^*)$, the boundedness of $\|\theta_y^*\|_2$ and $\|x\|_2$ implies that $\widetilde{c}\, I \preceq F \preceq \widetilde{C}\, I$ for some constants $\widetilde{c}$ and $\widetilde{C}$ (depending on $K$). This means that $\frac{c\widetilde{c}}{d} I \preceq I(x, \theta^*) \preceq \frac{C\widetilde{C}}{d} I$, and so the regularity Assumptions 1 and 2 are satisfied with $L_1$, $L_2$, $L_3$ and $L_4$ being constants. Theorem 1 again tells us that using $\omega(d)$ samples in the first step gives us the optimal convergence rate of maximum likelihood error.
6 Conclusion
In this paper, we provide an active learning algorithm for maximum likelihood estimation which provably achieves the optimal convergence rate (up to lower order terms) and uses only two rounds of interaction. Our algorithm applies in a very general setting, which includes Generalized Linear Models.
There are several avenues of future work. Our algorithm involves solving an SDP, which is computationally expensive; an open question is whether there is a more efficient, perhaps greedy, algorithm that achieves the same rate. A second open question is whether it is possible to remove the with-replacement sampling assumption. A final question is what happens if $I_U(\theta^*)$ has a high condition number. In this case, our algorithm will require a large number of samples in the first stage; an open
question is whether we can use a more sophisticated procedure in the first stage to reduce the label
requirement.
Acknowledgements. KC thanks NSF under IIS 1162581 for research support.
References
[1] A. Agarwal. Selective sampling algorithms for cost-sensitive multiclass prediction. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, pages 1220-1228, 2013.
[2] M.-F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. J. Comput. Syst. Sci., 75(1):78-89, 2009.
[3] M.-F. Balcan and P. M. Long. Active and passive learning of linear separators under log-concave distributions. In COLT, 2013.
[4] A. Beygelzimer, D. Hsu, J. Langford, and T. Zhang. Agnostic active learning without constraints. In NIPS, 2010.
[5] L. Cam and G. Yang. Asymptotics in Statistics: Some Basic Concepts. Springer Series in Statistics. Springer New York, 2000.
[6] J. Cornell. Experiments with Mixtures: Designs, Models, and the Analysis of Mixture Data (third ed.). Wiley, 2002.
[7] S. Dasgupta. Coarse sample complexity bounds for active learning. In NIPS, 2005.
[8] S. Dasgupta. Two faces of active learning. Theor. Comput. Sci., 412(19), 2011.
[9] S. Dasgupta and D. Hsu. Hierarchical sampling for active learning. In ICML, 2008.
[10] S. Dasgupta, D. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. In NIPS, 2007.
[11] R. Frostig, R. Ge, S. M. Kakade, and A. Sidford. Competing with the empirical risk minimizer in a single pass. arXiv preprint arXiv:1412.6606, 2014.
[12] Q. Gu, T. Zhang, C. Ding, and J. Han. Selective labeling via error bound minimization. In Proc. of Advances in Neural Information Processing Systems (NIPS) 25, Lake Tahoe, Nevada, United States, 2012.
[13] Q. Gu, T. Zhang, and J. Han. Batch-mode active learning via error bound minimization. In 30th Conference on Uncertainty in Artificial Intelligence (UAI), 2014.
[14] S. Hanneke. A bound on the label complexity of agnostic active learning. In ICML, 2007.
[15] M. Kaariainen. Active learning in the non-realizable case. In ALT, 2006.
[16] L. Le Cam. Asymptotic Methods in Statistical Decision Theory. Springer, 1986.
[17] E. L. Lehmann and G. Casella. Theory of Point Estimation, volume 31. Springer Science & Business Media, 1998.
[18] R. D. Nowak. The geometry of generalized binary search. IEEE Transactions on Information Theory, 57(12):7893-7906, 2011.
[19] S. Sabato and R. Munos. Active regression through stratification. In NIPS, 2014.
[20] R. Urner, S. Wulff, and S. Ben-David. PLAL: Cluster-based active learning. In COLT, 2013.
[21] A. W. van der Vaart. Asymptotic Statistics. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2000.
[22] C. Zhang and K. Chaudhuri. Beyond disagreement-based agnostic active learning. In Proc. of Neural Information Processing Systems, 2014.
When are Kalman-Filter Restless Bandits Indexable?
Christopher Dance and Tomi Silander
Xerox Research Centre Europe
6 chemin de Maupertuis, Meylan, Isère, France
{dance,silander}@xrce.xerox.com
Abstract
We study the restless bandit associated with an extremely simple scalar Kalman
filter model in discrete time. Under certain assumptions, we prove that the problem is indexable in the sense that the Whittle index is a non-decreasing function of
the relevant belief state. In spite of the long history of this problem, this appears
to be the first such proof. We use results about Schur-convexity and mechanical
words, which are particular binary strings intimately related to palindromes.
1 Introduction
We study the problem of monitoring several time series so as to maintain a precise belief while minimising the cost of sensing. Such problems can be viewed as POMDPs with belief-dependent rewards [3] and their applications include active sensing [7], attention mechanisms for multiple-object
tracking [22], as well as online summarisation of massive data from time-series [4]. Specifically, we
discuss the restless bandit [24] associated with the discrete-time Kalman filter [19]. Restless bandits
generalise bandit problems [6, 8] to situations where the state of each arm (project, site or target)
continues to change even if the arm is not played. As with bandit problems, the states of the arms
evolve independently given the actions taken, suggesting that there might be efficient algorithms for
large-scale settings, based on calculating an index for each arm, which is a real number associated
with the (belief-)state of that arm alone. However, while bandits always have an optimal index policy (select the arm with the largest index), it is known that no index policy can be optimal for some
discrete-state restless bandits [17] and such problems are in general PSPACE-hard even to approximate to any non-trivial factor [10]. Further, in this paper we address restless bandits with real-valued
rather than discrete states. On the other hand, Whittle proposed a natural index policy for restless
bandits [24], but this policy only makes sense when the restless bandit is indexable (Section 2).
Briefly, a restless bandit is said to be indexable when an optimal solution to a relaxed version of the
problem consists in playing all arms whose indices exceed a given threshold. (The relaxed version
of the problem relaxes the constraint on the number of arms pulled per turn to a constraint on the
average number of arms pulled per turn). Under certain conditions, indexability implies a form of
asymptotic optimality of Whittle?s policy for the original problem [23, 20].
Restless bandits associated with scalar Kalman(-Bucy) filters in continuous time were recently
shown to be indexable [12] and the corresponding discrete-time problem has attracted considerable
attention over a long period [15, 11, 16, 21]. However, that attention has produced no satisfactory
proof of indexability, even for scalar time-series and even if we assume that there is a monotone
optimal policy for the single-arm problem, which is a policy that plays the arm if and only if the
relevant belief-state exceeds some threshold (here the relevant belief-state is a posterior variance).
Theorem 1 of this paper addresses that gap. After formalising the problem (Section 2), we describe the concepts and intuition (Section 3) behind the main result (Section 4). The main tools
are mechanical words (which are not sufficiently well-known) and Schur convexity. As these tools
are associated with rather general theorems, we believe that future work (Section 5) should enable
substantial generalisation of our results.
2 Problem and Index
We consider the problem of tracking N time-series, which we call arms, in discrete time. The state Z_{i,t} ∈ R of arm i at time t ∈ Z_+ evolves as a standard-normal random walk independent of everything but its immediate past (Z_+, R_- and R_+ all include zero). The action space is U := {1, . . . , N}. Action u_t = i makes an expensive observation Y_{i,t} of arm i, which is normally-distributed about Z_{i,t} with precision b_i ∈ R_+, and we receive cheap observations Y_{j,t} of each other arm j with precision a_j ∈ R_+, where a_j < b_j and a_j = 0 means no observation at all.

Let Z_t, Y_t, H_t, F_t be the state, observation, history and observed history, so that Z_t := (Z_{1,t}, . . . , Z_{N,t}), Y_t := (Y_{1,t}, . . . , Y_{N,t}), H_t := ((Z_0, u_0, Y_0), . . . , (Z_t, u_t, Y_t)) and F_t := ((u_0, Y_0), . . . , (u_t, Y_t)). Then we formalise the above as (1{·} is the indicator function)

Z_{i,0} ~ N(0, 1),   Z_{i,t+1} | H_t ~ N(Z_{i,t}, 1),   Y_{i,t} | H_{t−1}, Z_t, u_t ~ N( Z_{i,t}, 1{u_t ≠ i}/a_i + 1{u_t = i}/b_i ).

Note that this setting is readily generalised to E[(Z_{i,t+1} − Z_{i,t})²] ≠ 1 by a change of variables. Thus the posterior belief is given by the Kalman filter as Z_{i,t} | F_t ~ N(Ẑ_{i,t}, x_{i,t}), where the posterior mean is Ẑ_{i,t} ∈ R and the error variance x_{i,t} ∈ R_+ satisfies

x_{i,t+1} = φ_{i,1{u_{t+1}=i}}(x_{i,t})   where   φ_{i,0}(x) := (x + 1)/(a_i x + a_i + 1)   and   φ_{i,1}(x) := (x + 1)/(b_i x + b_i + 1).   (1)

Problem KF1. Let π be a policy, so that u_t = π(F_{t−1}). Let x^π_{i,t} be the error variance under π. The problem is to choose π so as to minimise the following objective for discount factor β ∈ [0, 1). The objective consists of a weighted sum of error variances x^π_{i,t} with weights w_i ∈ R_+ plus observation costs h_i ∈ R_+ for i = 1, . . . , N:

E[ Σ_{t=0}^∞ Σ_{i=1}^N β^t ( h_i 1{u_t = i} + w_i x_{i,t} ) ] = Σ_{t=0}^∞ Σ_{i=1}^N β^t ( h_i 1{u_t = i} + w_i x^π_{i,t} ),

where the equality follows as (1) is a deterministic mapping (and assuming π is deterministic).
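To make the recursion (1) concrete, here is a minimal Python sketch of the error-variance updates; the function and variable names are ours, not the paper's.

```python
def make_phi(c):
    """Variance map of equation (1) for an observation of precision c:
    phi_c(x) = (x + 1) / (c*x + c + 1). With c = a_i this is the cheap
    map phi_{i,0}; with c = b_i it is the expensive map phi_{i,1}."""
    return lambda x: (x + 1.0) / (c * x + c + 1.0)

def step(x, u, a, b):
    """One transition of the error variances of all N arms when arm u
    receives the expensive observation and every other arm a cheap one."""
    return [make_phi(b[i] if i == u else a[i])(xi) for i, xi in enumerate(x)]
```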
Single-Arm Problem and Whittle Index. Now fix an arm i and write x^π_t, φ_0(·), . . . instead of x^π_{i,t}, φ_{i,0}(·), . . . . Say there are now two actions u_t = 0, 1, corresponding to cheap and expensive observations respectively, and the expensive observation now costs h + λ where λ ∈ R. The single-arm problem is to choose a policy, which here is an action sequence, π := (u_0, u_1, . . . ), so as to minimise

V^π(x|λ) := Σ_{t=0}^∞ β^t { (h + λ)u_t + w x^π_t }   where x_0 = x.   (2)

Let Q(x, υ|λ) be the optimal cost-to-go in this problem if the first action must be υ, and let π* be an optimal policy, so that

Q(x, υ|λ) := (h + λ)υ + wx + β V^{π*}(φ_υ(x)|λ).

For any fixed x ∈ R_+, the value of λ for which actions u_0 = 0 and u_0 = 1 are both optimal is known as the Whittle index λ^W(x), assuming it exists and is unique. In other words,

the Whittle index λ^W(x) is the solution to Q(x, 0|λ^W(x)) = Q(x, 1|λ^W(x)).   (3)

Let us consider a policy which takes action u_0 = υ then acts optimally, producing actions u^{υ*}_t(x) and error variances x^{υ*}_t(x). Then (3) gives

Σ_{t=0}^∞ β^t [ (h + λ^W(x)) u^{0*}_t(x) + w x^{0*}_t(x) ] = Σ_{t=0}^∞ β^t [ (h + λ^W(x)) u^{1*}_t(x) + w x^{1*}_t(x) ].

Solving this linear equation for the index λ^W(x) gives

λ^W(x) = w · ( Σ_{t=1}^∞ β^t (x^{0*}_t(x) − x^{1*}_t(x)) ) / ( Σ_{t=0}^∞ β^t (u^{1*}_t(x) − u^{0*}_t(x)) ) − h.   (4)

Whittle [24] recognised that for his index policy (play the arm with the largest λ^W(x)) to make sense, any arm which receives an expensive observation for added cost λ must also receive an expensive observation for added cost λ' < λ. Such problems are said to be indexable. The question resolved by this paper is whether Problem KF1 is indexable. Equivalently, is λ^W(x) non-decreasing in x ∈ R_+?
[Figure 1 shows two panels plotting x_{t+1} against x_t, with the curves φ_0(x) and φ_1(x) and the staircase paths of the two x-threshold orbits x^{0*}_t(x) and x^{1*}_t(x).]

Figure 1: Orbit x^{0*}_t(x) traces the path ABCDE. . . for the word 01w = 01101. Orbit x^{1*}_t(x) traces the path FGHIJ. . . for the word 10w = 10101. Word w = 101 is a palindrome.
3 Main Result, Key Concepts and Intuition
We make the following intuitive assumption about threshold (monotone) policies.

A1. For some x̄ ∈ R_+ depending on λ ∈ R, the policy u_t = 1{x_t ≥ x̄} is optimal for problem (2).

Note that under A1, definition (3) means the policy u_t = 1{x_t > x̄} is also optimal, so we can choose

u^{0*}_t(x) := 0 if x^{0*}_{t−1}(x) ≤ x, and 1 otherwise;   x^{0*}_t(x) := φ_0(x^{0*}_{t−1}(x)) if x^{0*}_{t−1}(x) ≤ x, and φ_1(x^{0*}_{t−1}(x)) otherwise;   (5)
u^{1*}_t(x) := 0 if x^{1*}_{t−1}(x) < x, and 1 otherwise;   x^{1*}_t(x) := φ_0(x^{1*}_{t−1}(x)) if x^{1*}_{t−1}(x) < x, and φ_1(x^{1*}_{t−1}(x)) otherwise,

where x^{0*}_0(x) = x^{1*}_0(x) = x. We refer to x^{0*}_t(x), x^{1*}_t(x) as the x-threshold orbits (Figure 1).
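The recursion (5) is straightforward to simulate; a minimal Python sketch with our own names (variant 0 reproduces u^{0*}, x^{0*}, and variant 1 reproduces u^{1*}, x^{1*}):

```python
def threshold_orbit(x, variant, a, b, T):
    """x-threshold orbit of equation (5): play cheap when the previous
    variance is <= x (variant 0, first action 0) or strictly < x
    (variant 1, first action 1), starting from x_0 = x."""
    phi0 = lambda v: (v + 1.0) / (a * v + a + 1.0)
    phi1 = lambda v: (v + 1.0) / (b * v + b + 1.0)
    us, xs, v = [], [], x
    for _ in range(T):
        cheap = (v <= x) if variant == 0 else (v < x)
        us.append(0 if cheap else 1)
        v = phi0(v) if cheap else phi1(v)
        xs.append(v)
    return us, xs
```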
We are now ready to state our main result.

Theorem 1. Suppose a threshold policy (A1) is optimal for the single-arm problem (2). Then Problem KF1 is indexable. Specifically, for any b > a ≥ 0 let

φ_0(x) := (x + 1)/(ax + a + 1),   φ_1(x) := (x + 1)/(bx + b + 1),

and for any w ∈ R_+, h ∈ R and 0 < β < 1, let

λ^W(x) := w · ( Σ_{t=1}^∞ β^t (x^{0*}_t(x) − x^{1*}_t(x)) ) / ( Σ_{t=0}^∞ β^t (u^{1*}_t(x) − u^{0*}_t(x)) ) − h,   (6)

in which the action sequences u^{0*}_t(x), u^{1*}_t(x) and error variance sequences x^{0*}_t(x), x^{1*}_t(x) are given in terms of φ_0, φ_1 by (5). Then λ^W(x) is a continuous and non-decreasing function of x ∈ R_+.
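Truncating the two series in (6) gives a direct numerical check of the theorem. This sketch reuses threshold_orbit from above; the placement of the time origin (first variance discounted by β¹, first action by β⁰) is our reading of (5) and (6), so treat it as an assumption.

```python
def whittle_index(x, a, b, w, h, beta, T=2000):
    u0, x0 = threshold_orbit(x, 0, a, b, T)
    u1, x1 = threshold_orbit(x, 1, a, b, T)
    num = sum(beta ** (t + 1) * (p - q) for t, (p, q) in enumerate(zip(x0, x1)))
    den = sum(beta ** t * (p - q) for t, (p, q) in enumerate(zip(u1, u0)))
    return w * num / den - h

# Theorem 1 predicts a non-decreasing index on any grid of x values.
vals = [whittle_index(0.1 * k, a=0.2, b=2.0, w=1.0, h=0.0, beta=0.9)
        for k in range(1, 50)]
assert all(v2 >= v1 - 1e-9 for v1, v2 in zip(vals, vals[1:]))
```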
We are now ready to describe the key concepts underlying this result.

Words. In this paper, a word w is a string on {0, 1}* with k-th letter w_k and w_{i:j} := w_i w_{i+1} . . . w_j. The empty word is ε, the concatenation of words u, v is uv, the word that is the n-fold repetition of w is w^n, the infinite repetition of w is w^∞, and w̃ is the reverse of w, so w = w̃ means w is a palindrome. The length of w is |w| and |w|_u is the number of times that word u appears in w, overlaps included.

Christoffel, Sturmian and Mechanical Words. It turns out that the action sequences in (5) are given by such words, so the following definitions are central to this paper.
[Figure 2 shows the top five levels of the Christoffel tree: the root (0, 1); its children (0, 01) and (01, 1); further descendants such as (0, 001), (001, 01), (01, 011) and (011, 1); down to pairs like (00001, 0001) and (01111, 1).]

Figure 2: Part of the Christoffel tree.
The Christoffel tree (Figure 2) is an infinite complete binary tree [5] in which each node is labelled with a pair (u, v) of words. The root is (0, 1) and the children of (u, v) are (u, uv) and (uv, v). The Christoffel words are the words 0, 1 and the concatenations uv for all (u, v) in that tree. The fractions |uv|_1/|uv|_0 form the Stern-Brocot tree [9], which contains each positive rational number exactly once. Also, infinite paths in the Stern-Brocot tree converge to the positive irrational numbers. Analogously, Sturmian words could be thought of as infinitely-long Christoffel words.

Alternatively, among many known characterisations, the Christoffel words can be defined as the words 0, 1 and the words 0w1 where a := |0w1|_1/|0w1| and

(01w)_n := ⌊(n + 1)a⌋ − ⌊na⌋

for any relatively prime natural numbers |0w1|_0 and |0w1|_1 and for n = 1, 2, . . . , |0w1|. The Sturmian words are then the infinite words 0w_1w_2· · · where, for n = 1, 2, . . . and a ∈ (0, 1)\Q,

(01w_1w_2· · ·)_n := ⌊(n + 1)a⌋ − ⌊na⌋.

We use the notation 0w1 for Sturmian words although they are infinite.

The set of mechanical words is the union of the Christoffel and Sturmian words [13]. (Note that the mechanical words are sometimes defined in terms of infinite repetitions of the Christoffel words.)
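The floor formula is easy to implement; the following sketch builds a Christoffel word and also checks the palindrome property of Proposition 3 below.

```python
from math import floor, gcd

def christoffel_word(p, q):
    """Christoffel word 0w1 with p zeros and q ones (p, q coprime), via
    (01w)_n = floor((n+1)a) - floor(na) with a = q/(p+q)."""
    assert gcd(p, q) == 1
    a = q / (p + q)
    onew = [floor((n + 1) * a) - floor(n * a) for n in range(1, p + q + 1)]
    return [0] + onew[2:] + [1]          # rotate 01w into 0w1

word = christoffel_word(3, 2)            # 0w1 = 00101, so w = 010
inner = word[1:-1]
print(word, inner == inner[::-1])        # w is a palindrome
```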
Majorisation. As in [14], let x, y ∈ R^m and let x_{(i)} and y_{(i)} be their elements sorted in ascending order. We say x is weakly supermajorised by y and write x ≺^w y if

Σ_{k=1}^j x_{(k)} ≥ Σ_{k=1}^j y_{(k)}   for all j = 1, . . . , m.

If this is an equality for j = m, we say x is majorised by y and write x ≺ y. It turns out that

x ≺ y  ⟺  Σ_{k=1}^j x_{[k]} ≤ Σ_{k=1}^j y_{[k]} for j = 1, . . . , m − 1, with equality for j = m,

where x_{[k]}, y_{[k]} are the sequences sorted in descending order. For x, y ∈ R^m we have [14]

x ≺ y  ⟺  Σ_{i=1}^m f(x_i) ≤ Σ_{i=1}^m f(y_i) for all convex functions f : R → R.

More generally, a real-valued function ψ defined on a subset A of R^m is said to be Schur-convex on A if x ≺ y implies that ψ(x) ≤ ψ(y).
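The weak supermajorisation test amounts to comparing partial sums of ascending rearrangements; a short sketch, assuming numpy:

```python
import numpy as np

def weakly_supermajorised_by(x, y):
    """True if x is weakly supermajorised by y: every partial sum of
    sorted-ascending x dominates the corresponding partial sum of y."""
    return bool(np.all(np.cumsum(np.sort(x)) >= np.cumsum(np.sort(y))))
```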
Möbius Transformations. Let φ_A(x) denote the Möbius transformation φ_A(x) := (A_{11}x + A_{12})/(A_{21}x + A_{22}), where A ∈ R^{2×2}. Möbius transformations such as φ_0(·), φ_1(·) are closed under composition, so for any word w we define φ_w(x) := φ_{w_{|w|}} ∘ · · · ∘ φ_{w_2} ∘ φ_{w_1}(x) and φ_ε(x) := x.
Intuition. Here is the intuition behind our main result.

For any x ∈ R_+, the orbits in (5) correspond to a particular mechanical word 0, 1 or 0w1, depending on the value of x (Figure 1). Specifically, for any word u, let y_u be the fixed point of the mapping φ_u on R_+, so that φ_u(y_u) = y_u and y_u ∈ R_+. Then the word corresponding to x is 1 for 0 ≤ x ≤ y_1, 0w1 for x ∈ [y_{01w}, y_{10w}], and 0 for y_0 ≤ x < ∞. In passing we note that these fixed points are sorted in ascending order by the ratio ρ := |01w|_0/|01w|_1 of counts of 0s to counts of 1s, as illustrated by Figure 3.
[Figure 3 plots y_{01w} and φ_w(0) (vertical axis, 0 to 100) against the ratio |01w|_0/|01w|_1 (horizontal axis, 0.1 to 0.9).]

Figure 3: Lower fixed points y_{01w} of Christoffel words (black dots), majorisation points for those words (black circles) and the tree of φ_w(0) (blue).
Interestingly, it turns out that the ratio ρ is a piecewise-constant yet continuous function of x, reminiscent of the Cantor function.

Also, composition of Möbius transformations is homeomorphic to matrix multiplication, so that

φ_A ∘ φ_B(x) = φ_{AB}(x)   for any A, B ∈ R^{2×2}.

Thus, the index (6) can be written in terms of the orbits of a linear system (11) given by 0, 1 or 0w1. Further, if A ∈ R^{2×2} and det(A) = 1, then the gradient of the corresponding Möbius transformation is the convex function

dφ_A(x)/dx = 1/(A_{21}x + A_{22})².

So the gradient of the index is the difference of the sums of a convex function of the linear-system orbits. However, such sums are Schur-convex functions and it follows that the index is increasing because one orbit weakly supermajorises the other, as we now show for the case 0w1 (noting that the proof is easier for words 0, 1). As 0w1 is a mechanical word, w is a palindrome. Further, if w is a palindrome, it turns out that the difference between the linear-system orbits increases with x. So, we might define the majorisation point for w as the x for which one orbit majorises the other. Quite remarkably, if w is a palindrome then the majorisation point is φ_w(0) (Proposition 7). Indeed the black circles and blue dots of Figure 3 coincide. Finally, φ_w(0) is less than or equal to y_{01w}, which is the least x for which the orbits correspond to the word 0w1. Indeed, the blue dots of Figure 3 are below the corresponding black dots. Thus one orbit does indeed supermajorise the other.
4 Proof of Main Result

4.1 Mechanical Words

The Möbius transformations of (1) satisfy the following assumption for I := R_+. We prove that the fixed point y_w of word w (the solution to φ_w(x) = x on I) is unique in the supplementary material.
Assumption A2. Functions φ_0 : I → I and φ_1 : I → I, where I is an interval of R, are increasing and non-expansive, so for all x, y ∈ I with x < y and for k ∈ {0, 1} we have

φ_k(x) < φ_k(y)  (increasing)   and   φ_k(y) − φ_k(x) < y − x  (non-expansive).

Furthermore, the fixed points y_0, y_1 of φ_0, φ_1 on I satisfy y_1 < y_0.

Hence the following two propositions (supplementary material) apply to φ_0, φ_1 of (1) on I = R_+.
Proposition 1. Suppose A2 holds, x ∈ I and w is a non-empty word. Then

x < φ_w(x) ⟺ φ_w(x) < y_w ⟺ x < y_w   and   x > φ_w(x) ⟺ φ_w(x) > y_w ⟺ x > y_w.

For a given x, in the notation of (5), we call the shortest word u such that (u^{1*}_1, u^{1*}_2, . . . ) = u^∞ the x-threshold word. Proposition 2 generalises a recent result about x-threshold words in a setting where φ_0, φ_1 are linear [18].

Proposition 2. Suppose A2 holds and 0w1 is a mechanical word. Then

0w1 is the x-threshold word ⟺ x ∈ [y_{01w}, y_{10w}].

Also, if x_0, x_1 ∈ I with x_0 ≥ y_0 and x_1 ≤ y_1, then the x_0- and x_1-threshold words are 0 and 1.

We also use the following very interesting fact (Proposition 4.2 on p. 28 of [5]).

Proposition 3. Suppose 0w1 is a mechanical word. Then w is a palindrome.
4.2 Properties of the Linear-System Orbits M(w) and Prefix Sums S(w)

Definition. Assume that a, b ∈ R_+ and a < b. Consider the matrices (we write 2×2 matrices row by row, with rows separated by semicolons)

F := [ 1, 1 ; a, 1+a ],   G := [ 1, 1 ; b, 1+b ]   and   K := [ −1, −1 ; 0, 1 ],

so that the Möbius transformations φ_F, φ_G are the functions φ_0, φ_1 of (1) and GF − FG = (b − a)K. Given any word w ∈ {0, 1}*, we define the matrix product M(w)

M(w) := M(w_{|w|}) · · · M(w_1),   where M(ε) := I, M(0) := F and M(1) := G,

where I ∈ R^{2×2} is the identity, and the prefix sum S(w) as the matrix polynomial

S(w) := Σ_{k=1}^{|w|} M(w_{1:k}),   where S(ε) := 0 (the all-zero matrix).   (7)

For any A ∈ R^{2×2}, let tr(A) be the trace of A, let A_{ij} = [A]_{ij} be the entries of A, and let A ≥ 0 indicate that all entries of A are non-negative.

Remark. Clearly, det(F) = det(G) = 1, so that det(M(w)) = 1 for any word w. Also, S(w) corresponds to the partial sums of the linear-system orbits, as hinted in the previous section.
The following proposition captures the role of palindromes (proof in the supplementary material).

Proposition 4. Suppose w is a word, p is a palindrome and n ∈ Z_+. Then

1. M(p) = [ (fh+1)/(h+f), f ; (h²−1)/(h+f), h ] for some f, h ∈ R,
2. tr(M(10p)) = tr(M(01p)),
3. if u ∈ {p(10p)^n, (10p)^n 10}, then M(u) − M(ũ) = κK for some κ ∈ R_−,
4. if w is a prefix of p, then [M(p(10p)^n 10w)]_{22} ≤ [M(p(01p)^n 01w)]_{22},
5. [M((10p)^n 10w)]_{21} ≥ [M((01p)^n 01w)]_{21},
6. [M((10p)^n 1)]_{21} ≥ [M((01p)^n 0)]_{21}.
We now demonstrate a surprisingly simple relation between S(w) and M(w).

Proposition 5. Suppose w is a palindrome. Then

S_{21}(w) = M_{22}(w) − 1   and   S_{22}(w) = M_{12}(w) + S_{21}(w).   (8)

Furthermore, if Δ_k := [ S(10w) M(w(10w)^k) − S(01w) M(w(01w)^k) ]_{22}, then

Δ_k = 0 for all k ∈ Z_+.   (9)
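Both identities are easy to verify numerically before reading the proof; a sketch in the conventions of (7), assuming numpy:

```python
import numpy as np

def M(word, a, b):
    """M(w) = M(w_|w|) ... M(w_1) with M(0) = F, M(1) = G."""
    F = np.array([[1.0, 1.0], [a, 1.0 + a]])
    G = np.array([[1.0, 1.0], [b, 1.0 + b]])
    out = np.eye(2)
    for c in word:
        out = (G if c == 1 else F) @ out
    return out

def S(word, a, b):
    """Prefix sum S(w) = sum_k M(w_{1:k}) of equation (7)."""
    return sum(M(word[:k], a, b) for k in range(1, len(word) + 1))

a, b, w = 0.3, 1.7, [1, 0, 1]                      # w = 101 is a palindrome
Mw, Sw = M(w, a, b), S(w, a, b)
assert np.isclose(Sw[1, 0], Mw[1, 1] - 1)          # S21 = M22 - 1
assert np.isclose(Sw[1, 1], Mw[0, 1] + Sw[1, 0])   # S22 = M12 + S21
for k in range(3):                                 # identity (9)
    A = S([1, 0] + w, a, b) @ M(w + ([1, 0] + w) * k, a, b)
    B = S([0, 1] + w, a, b) @ M(w + ([0, 1] + w) * k, a, b)
    assert abs(A[1, 1] - B[1, 1]) <= 1e-8 * abs(A[1, 1])
```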
Proof. Let us write M := M(w), S := S(w). We prove (8) by induction on |w|. In the base case w ∈ {ε, 0, 1}. For w = ε we have M_{22} − 1 = 0 = S_{21} and M_{12} + S_{21} = 0 = S_{22}. For w ∈ {0, 1} we have M_{22} − 1 = c = S_{21} and M_{12} + S_{21} = 1 + c = S_{22} for some c ∈ {a, b}. For the inductive step, in accordance with Claim 1 of Proposition 4, assume w ∈ {0v0, 1v1} for some word v satisfying

M(v) = [ (fh+1)/(h+f), f ; (h²−1)/(h+f), h ],   S(v) = [ c, d ; h−1, f+h−1 ]   for some c, d, f, h ∈ R.

For w = 1v1, M := M(1v1) = GM(v)G and S := S(1v1) = GM(v)G + S(v)G + G. Calculating the corresponding matrix products and sums gives

S_{21} = (bh + h + bf − 1)(bh + 2h + bf + f + 1)(h + f)^{−1} = M_{22} − 1,
S_{22} − S_{21} = bh + 2h + bf + f = M_{12},

as claimed. For w = 0v0 the claim also holds, as F = G|_{b=a}. This completes the proof of (8).

Furthermore Part. Let A := S(w)FG + FG + G and B := S(w)GF + GF + F. Then

Δ_k = [ (A(M(w)FG)^k − B(M(w)GF)^k) M(w) ]_{22}   (10)

by definition of S(·). By Claim 1 of Proposition 4 and (8) we know that

M(w) = [ (fh+1)/(h+f), f ; (h²−1)/(h+f), h ],   S(w) = [ c, d ; h−1, f+h−1 ]   for some c, d, f, h ∈ R.

Substituting these expressions and the definitions of F, G into the definitions of A, B and then into (10) for k ∈ {0, 1} directly gives Δ_0 = Δ_1 = 0 (although this calculation is long).

Now consider the case k ≥ 2. Claim 2 of Proposition 4 says tr(M(10w)) = tr(M(01w)), and clearly det(M(10w)) = det(M(01w)) = 1. Thus we can diagonalise as

M(w)FG =: UDU^{−1},   M(w)GF =: VDV^{−1},   D := diag(τ, 1/τ) for some τ ≥ 1,

so that Δ_k = [AUD^k U^{−1} M(w) − BVD^k V^{−1} M(w)]_{22} =: δ_1 τ^k + δ_2 τ^{−k}. So, if τ = 1 then Δ_k = δ_1 + δ_2 = Δ_0, and we already showed that Δ_0 = 0. Otherwise τ ≠ 1, so Δ_0 = Δ_1 = 0 implies δ_1 + δ_2 = δ_1 τ + δ_2 τ^{−1} = 0, which gives δ_1 = δ_2 = 0. Thus for any k ∈ Z_+ we have Δ_k = δ_1 τ^k + δ_2 τ^{−k} = 0.

4.3 Majorisation
The following is a straightforward consequence of results in [14], proved in the supplementary material. We emphasize that the notation ≺^w has nothing to do with the notion of w as a word.

Proposition 6. Suppose x, y ∈ R_+^m and f : R → R is a symmetric function that is convex and decreasing on R_+. Then x ≺^w y and β ∈ [0, 1] imply Σ_{i=1}^m β^i f(x_{(i)}) ≤ Σ_{i=1}^m β^i f(y_{(i)}).
For any x ∈ R and any fixed word w, define the sequences, for n ∈ Z_+ and k = 1, . . . , m,

x_{nm+k}(x) := [M((10w)^n (10w)_{1:k}) v(x)]_2,   χ_x^{(n)} := (x_{nm+1}(x), . . . , x_{nm+m}(x)),
y_{nm+k}(x) := [M((01w)^n (01w)_{1:k}) v(x)]_2,   χ_y^{(n)} := (y_{nm+1}(x), . . . , y_{nm+m}(x)),   (11)

where m := |10w| and v(x) := (x, 1)^T.

Proposition 7. Suppose w is a palindrome and x ≥ φ_w(0). Then χ_x^{(n)} and χ_y^{(n)} are ascending sequences on R_+ and χ_x^{(n)} ≺^w χ_y^{(n)} for any n ∈ Z_+.

Proof. Clearly φ_w(0) ≥ 0, so x ≥ 0 and hence v(x) ≥ 0. So for any word u and letter c ∈ {0, 1} we have M(uc)v(x) = M(c)M(u)v(x) ≥ M(u)v(x) ≥ 0, as M(c) ≥ I. Thus x_{k+1}(x) ≥ x_k(x) ≥ 0 and y_{k+1}(x) ≥ y_k(x) ≥ 0. In conclusion, χ_x^{(n)} and χ_y^{(n)} are ascending sequences on R_+.

Now φ_w(0) = [M(w)]_{12}/[M(w)]_{22}. Thus [A v(φ_w(0))]_2 = [AM(w)]_{22}/[M(w)]_{22} for any A ∈ R^{2×2}. So

x_{nm+k}(φ_w(0)) − y_{nm+k}(φ_w(0)) = (1/[M(w)]_{22}) [ (M((10w)^n (10w)_{1:k}) − M((01w)^n (01w)_{1:k})) M(w) ]_{22} ≤ 0

for k = 2, . . . , m, by Claim 4 of Proposition 4. So all but the first term of the sum T_m(φ_w(0)) is non-positive, where

T_j(x) := Σ_{k=1}^j ( x_{nm+k}(x) − y_{nm+k}(x) ).

Thus T_1(φ_w(0)) ≥ T_2(φ_w(0)) ≥ . . . ≥ T_m(φ_w(0)). But

T_m(φ_w(0)) = (1/[M(w)]_{22}) Σ_{k=1}^m [ (M((10w)^n (10w)_{1:k}) − M((01w)^n (01w)_{1:k})) M(w) ]_{22}
= (1/[M(w)]_{22}) [ S(10w)M(w(10w)^n) − S(01w)M(w(01w)^n) ]_{22} = 0,

where the last step follows from (9). So T_j(φ_w(0)) ≥ 0 for j = 1, . . . , m. Yet Claims 5 and 6 of Proposition 4 give dT_j(x)/dx = Σ_{k=1}^j [ M((10w)^n (10w)_{1:k}) − M((01w)^n (01w)_{1:k}) ]_{21} ≥ 0. So for x ≥ φ_w(0) we have T_j(x) ≥ 0 for j = 1, . . . , m, which means that χ_x^{(n)} ≺^w χ_y^{(n)}.
4.4 Indexability

Theorem 1. The index λ^W(x) of (6) is continuous and non-decreasing for x ∈ R_+.
Proof. As weight w is non-negative and cost h is a constant, we only need to prove the result for Λ(x) := λ^W(x)|_{w=1, h=0}, and we can use w to denote a word. By Proposition 2, x ∈ [y_{01w}, y_{10w}] for some mechanical word 0w1. (Cases x ∉ (y_1, y_0) are clarified in the supplementary material.)

Let us show that the hypotheses of Proposition 7 are satisfied by w and x. Firstly, w is a palindrome by Proposition 3. Secondly, φ_{w01}(0) ≥ 0 and, as φ_w(·) is monotonically increasing, it follows that φ_w ∘ φ_{w01}(0) ≥ φ_w(0). Equivalently, φ_{01w} ∘ φ_w(0) ≥ φ_w(0), so that φ_w(0) ≤ y_{01w} by Proposition 1. Hence x ≥ y_{01w} ≥ φ_w(0).

Thus Proposition 7 applies, showing that the sequences χ_x^{(n)} and χ_y^{(n)}, with elements x_{nm+k}(x) and y_{nm+k}(x) as defined in (11), are non-decreasing sequences on R_+ with χ_x^{(n)} ≺^w χ_y^{(n)}. Also, 1/x² is a symmetric function that is convex and decreasing on R_+. Therefore Proposition 6 applies, giving

Σ_{k=1}^m ( β^{nm+k−1}/(x_{nm+k}(x))² − β^{nm+k−1}/(y_{nm+k}(x))² ) ≤ 0   for any n ∈ Z_+, where m := |01w|.   (12)

Also, Proposition 2 shows that the x-threshold orbits are (φ_{u_1}(x), . . . , φ_{u_{1:k}}(x), . . . ) and (φ_{l_1}(x), . . . , φ_{l_{1:k}}(x), . . . ), where u := (01w)^∞ and l := (10w)^∞. So the denominator of (6) is

Σ_{k=0}^∞ β^k ( 1{l_{k+1} = 1} − 1{u_{k+1} = 1} ) = Σ_{k=0}^∞ β^{mk}(1 − β) = (1 − β)/(1 − β^m),

so that

Λ(x) = (1 − β^m)/(1 − β) · Σ_{k=1}^∞ β^{k−1} ( φ_{u_{1:k}}(x) − φ_{l_{1:k}}(x) ).

Note that (d/dx)[(ex + f)/(gx + h)] = 1/(gx + h)² for any eh − fg = 1. Then (12) gives

dΛ(x)/dx = (1 − β^m)/(1 − β) Σ_{n=0}^∞ Σ_{k=1}^m ( β^{nm+k−1}/(y_{nm+k}(x))² − β^{nm+k−1}/(x_{nm+k}(x))² ) ≥ 0.

But Λ(x) is continuous for x ∈ R_+ (as shown in the supplementary material). Therefore we conclude that Λ(x) is non-decreasing for x ∈ R_+.
5 Further Work

One might attempt to prove that assumption A1 holds using general results about monotone optimal policies for two-action MDPs based on submodularity [2] or multimodularity [1]. However, we find counter-examples to the required submodularity condition. Rather, we are optimistic that the ideas of this paper themselves offer an alternative approach to proving A1. It would then be natural to extend our results to settings where the underlying state evolves as Z_{t+1} | H_t ~ N(mZ_t, 1) for some multiplier m ≠ 1, and to cost functions other than the variance. Finally, the question of the indexability of the discrete-time Kalman filter in multiple dimensions remains open.
References
[1] E. Altman, B. Gaujal, and A. Hordijk. Multimodularity, convexity, and optimization properties. Mathematics of Operations Research, 25(2):324-347, 2000.
[2] E. Altman and S. Stidham Jr. Optimality of monotonic policies for two-action Markovian decision processes, with applications to control of queues with delayed information. Queueing Systems, 21(3-4):267-291, 1995.
[3] M. Araya, O. Buffet, V. Thomas, and F. Charpillet. A POMDP extension with belief-dependent rewards. In Neural Information Processing Systems, pages 64-72, 2010.
[4] A. Badanidiyuru, B. Mirzasoleiman, A. Karbasi, and A. Krause. Streaming submodular maximization: Massive data summarization on the fly. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 671-680, 2014.
[5] J. Berstel, A. Lauve, C. Reutenauer, and F. Saliola. Combinatorics on Words: Christoffel Words and Repetitions in Words. CRM Monograph Series, 2008.
[6] S. Bubeck and N. Cesa-Bianchi. Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems. Foundations and Trends in Machine Learning, Vol. 5. NOW, 2012.
[7] Y. Chen, H. Shioi, C. Montesinos, L. P. Koh, S. Wich, and A. Krause. Active detection via adaptive submodularity. In Proceedings of the 31st International Conference on Machine Learning, pages 55-63, 2014.
[8] J. Gittins, K. Glazebrook, and R. Weber. Multi-armed Bandit Allocation Indices. John Wiley & Sons, 2011.
[9] R. Graham, D. Knuth, and O. Patashnik. Concrete Mathematics: A Foundation for Computer Science. Addison-Wesley, 1994.
[10] S. Guha, K. Munagala, and P. Shi. Approximation algorithms for restless bandit problems. Journal of the ACM, 58(1):3, 2010.
[11] B. La Scala and B. Moran. Optimal target tracking with restless bandits. Digital Signal Processing, 16(5):479-487, 2006.
[12] J. Le Ny, E. Feron, and M. Dahleh. Scheduling continuous-time Kalman filters. IEEE Transactions on Automatic Control, 56(6):1381-1394, 2011.
[13] M. Lothaire. Algebraic Combinatorics on Words. Cambridge University Press, 2002.
[14] A. Marshall, I. Olkin, and B. Arnold. Inequalities: Theory of Majorization and Its Applications. Springer Science & Business Media, 2010.
[15] L. Meier, J. Peschon, and R. Dressler. Optimal control of measurement subsystems. IEEE Transactions on Automatic Control, 12(5):528-536, 1967.
[16] J. Niño-Mora and S. Villar. Multitarget tracking via restless bandit marginal productivity indices and Kalman filter in discrete time. In Proceedings of the 48th IEEE Conference on Decision and Control, pages 2905-2910, 2009.
[17] R. Ortner, D. Ryabko, P. Auer, and R. Munos. Regret bounds for restless Markov bandits. In Algorithmic Learning Theory, pages 214-228. Springer, 2012.
[18] B. Rajpathak, H. Pillai, and S. Bandyopadhyay. Analysis of stable periodic orbits in the one dimensional linear piecewise-smooth discontinuous map. Chaos, 22(3):033126, 2012.
[19] T. Thiele. Sur la compensation de quelques erreurs quasi-systématiques par la méthode des moindres carrés. C. A. Reitzel, 1880.
[20] I. Verloop. Asymptotic optimal control of multi-class restless bandits. CNRS Technical Report, hal-00743781, 2014.
[21] S. Villar. Restless bandit index policies for dynamic sensor scheduling optimization. PhD thesis, Statistics Department, Universidad Carlos III de Madrid, 2012.
[22] E. Vul, G. Alvarez, J. B. Tenenbaum, and M. J. Black. Explaining human multiple object tracking as resource-constrained approximate inference in a dynamic probabilistic model. In Neural Information Processing Systems, pages 1955-1963, 2009.
[23] R. R. Weber and G. Weiss. On an index policy for restless bandits. Journal of Applied Probability, pages 637-648, 1990.
[24] P. Whittle. Restless bandits: Activity allocation in a changing world. Journal of Applied Probability, pages 287-298, 1988.
5,439 | 5,923 | Policy Gradient for Coherent Risk Measures
Yinlam Chow
Stanford University
ychow@stanford.edu
Aviv Tamar
UC Berkeley
avivt@berkeley.edu
Mohammad Ghavamzadeh
Adobe Research & INRIA
mohammad.ghavamzadeh@inria.fr
Shie Mannor
Technion
shie@ee.technion.ac.il
Abstract
Several authors have recently developed risk-sensitive policy gradient methods
that augment the standard expected cost minimization problem with a measure of
variability in cost. These studies have focused on specific risk-measures, such as
the variance or conditional value at risk (CVaR). In this work, we extend the policy gradient method to the whole class of coherent risk measures, which is widely
accepted in finance and operations research, among other fields. We consider
both static and time-consistent dynamic risk measures. For static risk measures,
our approach is in the spirit of policy gradient algorithms and combines a standard
sampling approach with convex programming. For dynamic risk measures, our approach is actor-critic style and involves explicit approximation of the value function.
Most importantly, our contribution presents a unified approach to risk-sensitive
reinforcement learning that generalizes and extends previous results.
1 Introduction
Risk-sensitive optimization considers problems in which the objective involves a risk measure of
the random cost, in contrast to the typical expected cost objective. Such problems are important
when the decision-maker wishes to manage the variability of the cost, in addition to its expected
outcome, and are standard in various applications of finance and operations research. In reinforcement learning (RL) [27], risk-sensitive objectives have gained popularity as a means to regularize
the variability of the total (discounted) cost/reward in a Markov decision process (MDP).
Many risk objectives have been investigated in the literature and applied to RL, such as the celebrated Markowitz mean-variance model [16], Value-at-Risk (VaR) and Conditional Value at Risk
(CVaR) [18, 29, 21, 10, 8, 30]. The view taken in this paper is that the preference of one risk measure
over another is problem-dependent and depends on factors such as the cost distribution, sensitivity to
rare events, ease of estimation from data, and computational tractability of the optimization problem.
However, the highly influential paper of Artzner et al. [2] identified a set of natural properties that
are desirable for a risk measure to satisfy. Risk measures that satisfy these properties are termed coherent and have obtained widespread acceptance in financial applications, among others. We focus
on such coherent measures of risk in this work.
For sequential decision problems, such as MDPs, another desirable property of a risk measure is
time consistency. A time-consistent risk measure satisfies a 'dynamic programming' style property:
if a strategy is risk-optimal for an n-stage problem, then the component of the policy from the t-th
time until the end (where t < n) is also risk-optimal (see principle of optimality in [5]). The recently
proposed class of dynamic Markov coherent risk measures [24] satisfies both the coherence and time
consistency properties.
In this work, we present policy gradient algorithms for RL with a coherent risk objective. Our
approach applies to the whole class of coherent risk measures, thereby generalizing and unifying
previous approaches that have focused on individual risk measures. We consider both static coherent
risk of the total discounted return from an MDP and time-consistent dynamic Markov coherent
risk. Our main contribution is formulating the risk-sensitive policy-gradient under the coherent-risk
framework. More specifically, we provide:
• A new formula for the gradient of static coherent risk that is convenient for approximation using sampling.
• An algorithm for the gradient of general static coherent risk that involves sampling with convex programming and a corresponding consistency result.
• A new policy gradient theorem for Markov coherent risk, relating the gradient to a suitable value function and a corresponding actor-critic algorithm.
Several previous results are special cases of the results presented here; our approach allows us to rederive them in greater generality and simplicity.

Related Work. Risk-sensitive optimization in RL for specific risk functions has been studied recently by several authors. [6] studied exponential utility functions, [18], [29], [21] studied mean-variance models, [8], [30] studied CVaR in the static setting, and [20], [9] studied dynamic coherent
risk for systems with linear dynamics. Our paper presents a general method for the whole class of
coherent risk measures (both static and dynamic) and is not limited to a specific choice within that
class, nor to particular system dynamics.
Reference [19] showed that an MDP with a dynamic coherent risk objective is essentially a robust MDP. The planning for large scale MDPs was considered in [31], using an approximation of
the value function. For many problems, approximation in the policy space is more suitable (see,
e.g., [15]). Our sampling-based RL-style approach is suitable for approximations both in the policy
and value function, and scales-up to large or continuous MDPs. We do, however, make use of a
technique of [31] in a part of our method.
Optimization of coherent risk measures was thoroughly investigated by Ruszczynski and
Shapiro [25] (see also [26]) for the stochastic programming case in which the policy parameters
do not affect the distribution of the stochastic system (i.e., the MDP trajectory), but only the reward
function, and thus, this approach is not suitable for most RL problems. For the case of MDPs and
dynamic risk, [24] proposed a dynamic programming approach. This approach does not scale-up
to large MDPs, due to the 'curse of dimensionality'. For further motivation of risk-sensitive policy
gradient methods, we refer the reader to [18, 29, 21, 8, 30].
2 Preliminaries
Consider a probability space (Ω, F, P_θ), where Ω is the set of outcomes (sample space), F is a σ-algebra over Ω representing the set of events we are interested in, and P_θ ∈ B, where B := {ξ : Σ_{ω∈Ω} ξ(ω) = 1, ξ ≥ 0} is the set of probability distributions, is a probability measure over F parameterized by some tunable parameter θ ∈ R^K. In the following, we suppress the notation of θ in θ-dependent quantities.

To ease the technical exposition, in this paper we restrict our attention to finite probability spaces, i.e., Ω has a finite number of elements. Our results can be extended to the L_p-normed spaces without loss of generality, but the details are omitted for brevity.
Denote by Z the space of random variables Z : Ω ↦ (−∞, ∞) defined over the probability space (Ω, F, P_θ). In this paper, a random variable Z ∈ Z is interpreted as a cost, i.e., the smaller the realization of Z, the better. For Z, W ∈ Z, we denote by Z ≤ W the point-wise partial order, i.e., Z(ω) ≤ W(ω) for all ω ∈ Ω. We denote by E_ξ[Z] := Σ_{ω∈Ω} P_θ(ω)ξ(ω)Z(ω) a ξ-weighted expectation of Z.

An MDP is a tuple M = (X, A, C, P, γ, x_0), where X and A are the state and action spaces; C(x) ∈ [−C_max, C_max] is a bounded, deterministic, and state-dependent cost; P(·|x, a) is the transition probability distribution; γ is a discount factor; and x_0 is the initial state.¹ Actions are chosen according to a θ-parameterized stationary Markov² policy μ_θ(·|x). We denote by x_0, a_0, . . . , x_T, a_T a trajectory of length T drawn by following the policy μ_θ in the MDP.
¹ Our results may easily be extended to random costs, state-action dependent costs, and random initial states.
² For Markov coherent risk, the class of optimal policies is stationary Markov [24], while this is not necessarily true for static risk. Our results can be extended to history-dependent policies or stationary Markov policies on a state space augmented with accumulated cost. The latter has been shown to be sufficient for optimizing the CVaR risk [4].
2.1 Coherent Risk Measures

A risk measure is a function ρ : Z → R that maps an uncertain outcome Z to the extended real line R ∪ {+∞, −∞}; examples are the expectation E[Z] and the conditional value-at-risk (CVaR) min_{ν∈R} ( ν + (1/α) E[(Z − ν)_+] ). A risk measure is called coherent if it satisfies the following conditions for all Z, W ∈ Z [2]:
A1 Convexity: ∀λ ∈ [0, 1], ρ(λZ + (1 − λ)W) ≤ λρ(Z) + (1 − λ)ρ(W);
A2 Monotonicity: if Z ≤ W, then ρ(Z) ≤ ρ(W);
A3 Translation invariance: ∀a ∈ R, ρ(Z + a) = ρ(Z) + a;
A4 Positive homogeneity: if λ ≥ 0, then ρ(λZ) = λρ(Z).

Intuitively, these conditions ensure the 'rationality' of single-period risk assessments: A1 ensures that diversifying an investment will reduce its risk; A2 guarantees that an asset with a higher cost for every possible scenario is indeed riskier; A3, also known as 'cash invariance', means that the deterministic part of an investment portfolio does not contribute to its risk; the intuition behind A4 is that doubling a position in an asset doubles its risk. We further refer the reader to [2] for a more detailed motivation of coherent risk.
The following representation theorem [26] shows an important property of coherent risk measures that is fundamental to our gradient-based approach.

Theorem 2.1. A risk measure ρ : Z → R is coherent if and only if there exists a convex, bounded and closed set U ⊂ B such that³

ρ(Z) = max_{ξ : ξP_θ ∈ U(P_θ)} E_ξ[Z].   (1)
The result essentially states that any coherent risk measure is an expectation w.r.t. a worst-case density function ξP_θ, i.e., a re-weighting of P_θ by ξ, chosen adversarially from a suitable set of test density functions U(P_θ), referred to as the risk envelope. Moreover, a coherent risk measure is uniquely represented by its risk envelope. In the sequel, we shall interchangeably refer to coherent risk measures either by their explicit functional representation or by their corresponding risk envelope.
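For a concrete instance of (1), the following sketch evaluates CVaR at level α by maximizing over its risk envelope as a linear program (the CVaR envelope is written out in Section 4.1). The function is ours and assumes scipy is available.

```python
import numpy as np
from scipy.optimize import linprog

def cvar_via_envelope(z, p, alpha):
    """max_{xi} E_xi[Z] over the CVaR envelope
    {xi : 0 <= xi <= 1/alpha, sum_w xi(w) p(w) = 1}, for finite Omega."""
    c = -(p * z)                         # linprog minimizes, so negate
    res = linprog(c, A_eq=p[None, :], b_eq=[1.0],
                  bounds=[(0.0, 1.0 / alpha)] * len(z), method="highs")
    return -res.fun

rng = np.random.default_rng(0)
z = rng.normal(size=50)
p = np.full(50, 1.0 / 50)
print(cvar_via_envelope(z, p, alpha=0.1))  # ~ mean of the worst 10% of costs
```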
In this paper, we assume that the risk envelope U(P_θ) is given in a canonical convex programming formulation, and satisfies the following conditions.

Assumption 2.2 (The General Form of Risk Envelope). For each given policy parameter θ ∈ R^K, the risk envelope U of a coherent risk measure can be written as

U(P_θ) = { ξP_θ : g_e(ξ, P_θ) = 0 ∀e ∈ E,  f_i(ξ, P_θ) ≤ 0 ∀i ∈ I,  Σ_{ω∈Ω} ξ(ω)P_θ(ω) = 1,  ξ(ω) ≥ 0 },   (2)

where each constraint g_e(ξ, P_θ) is an affine function in ξ, each constraint f_i(ξ, P_θ) is a convex function in ξ, and there exists a strictly feasible point ξ̄. E and I here denote the sets of equality and inequality constraints, respectively. Furthermore, for any given ξ ∈ B, f_i(ξ, p) and g_e(ξ, p) are twice differentiable in p, and there exists an M > 0 such that

max{ max_{i∈I} |df_i(ξ, p)/dp(ω)|, max_{e∈E} |dg_e(ξ, p)/dp(ω)| } ≤ M,   ∀ω ∈ Ω.

Assumption 2.2 implies that the risk envelope U(P_θ) is known in an explicit form. From Theorem 6.6 of [26], in the case of a finite probability space, ρ is a coherent risk if and only if U(P_θ) is a convex and compact set. This justifies the affine assumption on g_e and the convex assumption on f_i. Moreover, the additional assumption on the smoothness of the constraints holds for many popular coherent risk measures, such as the CVaR, the mean-semi-deviation, and spectral risk measures [1].
2.2 Dynamic Risk Measures
The risk measures defined above do not take into account any temporal structure that the random
variable might have, such as when it is associated with the return of a trajectory in the case of
MDPs. In this sense, such risk measures are called static. Dynamic risk measures, on the other hand,
³ When we study risk in MDPs, the risk envelope U(P_θ) in Eq. 1 also depends on the state x.
explicitly take into account the temporal nature of the stochastic outcome. A primary motivation for
considering such measures is the issue of time consistency, usually defined as follows [24]: if a
certain outcome is considered less risky in all states of the world at stage t + 1, then it should also
be considered less risky at stage t. Example 2.1 in [13] shows the importance of time consistency
in the evaluation of risk in a dynamic setting. It illustrates that for multi-period decision-making,
optimizing a static measure can lead to 'time-inconsistent' behavior. Similar paradoxical results
could be obtained with other risk metrics; we refer the readers to [24] and [13] for further insights.
Markov Coherent Risk Measures. Markov risk measures were introduced in [24] and constitute a useful class of dynamic time-consistent risk measures that are important to our study of risk in MDPs. For a T-length horizon and MDP M, the Markov coherent risk measure ρ_T(M) is

ρ_T(M) = C(x_0) + γρ( C(x_1) + . . . + γρ( C(x_{T−1}) + γρ(C(x_T)) ) . . . ),   (3)

where ρ is a static coherent risk measure that satisfies Assumption 2.2 and x_0, . . . , x_T is a trajectory drawn from the MDP M under policy μ_θ. It is important to note that in (3), each static coherent risk ρ at state x ∈ X is induced by the transition probability P_θ(·|x) = Σ_{a∈A} P(·|x, a) μ_θ(a|x). We also define ρ_∞(M) := lim_{T→∞} ρ_T(M), which is well-defined since γ < 1 and the cost is bounded. We further assume that ρ in (3) is a Markov risk measure, i.e., the evaluation of each static coherent risk measure ρ is not allowed to depend on the whole past.
3 Problem Formulation
In this paper, we are interested in solving two risk-sensitive optimization problems. Given a random variable Z and a static coherent risk measure ρ as defined in Section 2, the static risk problem (SRP) is given by

min_θ ρ(Z).   (4)

For example, in an RL setting, Z may correspond to the cumulative discounted cost Z = C(x_0) + γC(x_1) + · · · + γ^T C(x_T) of a trajectory induced by an MDP with a policy parameterized by θ.

For an MDP M and a dynamic Markov coherent risk measure ρ_T as defined by Eq. 3, the dynamic risk problem (DRP) is given by

min_θ ρ_∞(M).   (5)
Except for very limited cases, there is no reason to hope that either the SRP in (4) or the DRP in (5) should be a tractable problem, since the dependence of the risk measure on θ may be complex and non-convex. In this work, we aim towards a more modest goal and search for a locally optimal θ. Thus, the main problem that we are trying to solve in this paper is how to calculate the gradients of the SRP's and DRP's objective functions,

∇_θ ρ(Z)   and   ∇_θ ρ_∞(M).

We are interested in non-trivial cases in which the gradients cannot be calculated analytically. In the static case, this would correspond to a non-trivial dependence of Z on θ. For dynamic risk, we also consider cases where the state space is too large for a tractable computation. Our approach for
dealing with such difficult cases is through sampling. We assume that in the static case, we may
obtain i.i.d. samples of the random variable Z. For the dynamic case, we assume that for each state
and action (x, a) of the MDP, we may obtain i.i.d. samples of the next state x' ~ P(·|x, a). We
show that sampling may indeed be used in both cases to devise suitable estimators for the gradients.
To finally solve the SRP and DRP problems, a gradient estimate may be plugged into a standard
stochastic gradient descent (SGD) algorithm for learning a locally optimal solution to (4) and (5).
From the structure of the dynamic risk in Eq. 3, one may think that a gradient estimator for ρ(Z) may help us to estimate the gradient ∇_θ ρ_∞(M). Indeed, we follow this idea and begin with estimating the gradient in the static risk case.
4 Gradient Formula for Static Risk

In this section, we consider a static coherent risk measure ρ(Z) and propose sampling-based estimators for ∇_θ ρ(Z). We make the following assumption on the policy parametrization, which is standard in the policy gradient literature [15].

Assumption 4.1. The likelihood ratio ∇_θ log P(ω) is well-defined and bounded for all ω ∈ Ω.
Moreover, our approach implicitly assumes that given some ω ∈ Ω, ∇_θ log P(ω) may be easily calculated. This is also a standard requirement for policy gradient algorithms [15] and is satisfied in various applications such as queueing systems, inventory management, and financial engineering (see, e.g., the survey by Fu [11]).

Using Theorem 2.1 and Assumption 2.2, for each θ, we have that ρ(Z) is the solution to the convex optimization problem (1) (for that value of θ). The Lagrangian function of (1), denoted by L_θ(ξ, λ^P, λ^E, λ^I), may be written as

L_θ(ξ, λ^P, λ^E, λ^I) = Σ_{ω∈Ω} ξ(ω)P_θ(ω)Z(ω) − λ^P ( Σ_{ω∈Ω} ξ(ω)P_θ(ω) − 1 ) − Σ_{e∈E} λ^E(e) g_e(ξ, P_θ) − Σ_{i∈I} λ^I(i) f_i(ξ, P_θ).   (6)

The convexity of (1) and its strict feasibility due to Assumption 2.2 imply that L_θ(ξ, λ^P, λ^E, λ^I) has a non-empty set of saddle points S. The next theorem presents a formula for the gradient ∇_θ ρ(Z). As we shall subsequently show, this formula is particularly convenient for devising sampling-based estimators for ∇_θ ρ(Z).

Theorem 4.2. Let Assumptions 2.2 and 4.1 hold. For any saddle point (ξ*_θ, λ_θ^{P,*}, λ_θ^{E,*}, λ_θ^{I,*}) ∈ S of (6), we have

∇_θ ρ(Z) = E_{ξ*_θ}[ ∇_θ log P(ω) (Z − λ_θ^{P,*}) ] − Σ_{e∈E} λ_θ^{E,*}(e) ∇_θ g_e(ξ*_θ; P_θ) − Σ_{i∈I} λ_θ^{I,*}(i) ∇_θ f_i(ξ*_θ; P_θ).
The proof of this theorem, given in the supplementary material, involves an application of the Envelope theorem [17] and a standard 'likelihood-ratio' trick. We now demonstrate the utility of Theorem 4.2 with several examples in which we show that it generalizes previously known results, and also enables deriving new useful gradient formulas.
4.1 Example 1: CVaR
The CVaR at level α ∈ [0, 1] of a random variable Z, denoted by ρ_CVaR(Z; α), is a very popular coherent risk measure [23], defined as

ρ_CVaR(Z; α) := inf_{t∈R} { t + α^{−1} E[(Z − t)_+] }.

When Z is continuous, ρ_CVaR(Z; α) is well-known to be the mean of the α-tail distribution of Z, E[Z | Z > q_α], where q_α is a (1 − α)-quantile of Z. Thus, selecting a small α makes CVaR particularly sensitive to rare, but very high, costs.

The risk envelope for CVaR is known to be [26] U = { ξP_θ : ξ(ω) ∈ [0, α^{−1}], Σ_{ω∈Ω} ξ(ω)P_θ(ω) = 1 }. Furthermore, [26] show that the saddle points of (6) satisfy ξ*_θ(ω) = α^{−1} when Z(ω) > λ_θ^{P,*}, and ξ*_θ(ω) = 0 when Z(ω) < λ_θ^{P,*}, where λ_θ^{P,*} is any (1 − α)-quantile of Z. Plugging this result into Theorem 4.2, we can easily show that

∇_θ ρ_CVaR(Z; α) = E[ ∇_θ log P(ω) (Z − q_α) | Z(ω) > q_α ].

This formula was recently proved in [30] for the case of continuous distributions by an explicit calculation of the conditional expectation, and under several additional smoothness assumptions. Here we show that it holds regardless of these assumptions and in the discrete case as well. Our proof is also considerably simpler.
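The formula translates directly into a sample-based estimator; a minimal sketch with our own variable names, assuming numpy:

```python
import numpy as np

def cvar_policy_gradient(costs, grad_log_probs, alpha):
    """Estimate grad_theta rho_CVaR(Z; alpha) from N samples:
    E[grad_theta log P(omega) (Z - q_alpha) | Z > q_alpha].
    costs: (N,) sampled costs; grad_log_probs: (N, K) likelihood ratios."""
    q_alpha = np.quantile(costs, 1.0 - alpha)  # empirical (1-alpha)-quantile
    tail = costs > q_alpha
    if not tail.any():
        return np.zeros(grad_log_probs.shape[1])
    return np.mean(grad_log_probs[tail] * (costs[tail] - q_alpha)[:, None],
                   axis=0)
```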
4.2 Example 2: Mean-Semideviation

The semi-deviation of a random variable Z is defined as SD[Z] := ( E[(Z − E[Z])²_+] )^{1/2}. The semi-deviation captures the variation of the cost only above its mean, and is an appealing alternative to the standard deviation, which does not distinguish between the variability of upside and downside deviations. For some λ ∈ [0, 1], the mean-semideviation risk measure is defined as ρ_MSD(Z; λ) := E[Z] + λ SD[Z], and is a coherent risk measure [26]. We have the following result:

Proposition 4.3. Under Assumption 4.1, with ∇_θ E[Z] = E[∇_θ log P(ω) Z], we have

∇_θ ρ_MSD(Z; λ) = ∇_θ E[Z] + λ · E[ (Z − E[Z])_+ ( ∇_θ log P(ω)(Z − E[Z]) − ∇_θ E[Z] ) ] / SD[Z].

This proposition can be used to devise a sampling-based estimator for ∇_θ ρ_MSD(Z; λ) by replacing all the expectations with sample averages. The algorithm, along with the proof of the proposition, is in the supplementary material. In Section 6 we provide a numerical illustration of optimization with a mean-semideviation objective.
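As a sketch of that sample-average construction (our names, not the exact algorithm from the supplementary material):

```python
import numpy as np

def msd_policy_gradient(z, glp, lam):
    """Sample-average version of Proposition 4.3. z: (N,) sampled costs;
    glp: (N, K) likelihood ratios grad_theta log P(omega_i); lam in [0, 1]."""
    mean = z.mean()
    grad_mean = (glp * z[:, None]).mean(axis=0)        # grad E[Z]
    dev = np.maximum(z - mean, 0.0)                    # (Z - E[Z])_+
    sd = np.sqrt(np.mean(dev ** 2))                    # semi-deviation
    if sd == 0.0:
        return grad_mean
    inner = glp * (z - mean)[:, None] - grad_mean      # bracketed term
    return grad_mean + lam * (dev[:, None] * inner).mean(axis=0) / sd
```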
4.3 General Gradient Estimation Algorithm

In the two previous examples, we obtained a gradient formula by analytically calculating the Lagrangian saddle point of (6) and plugging it into the formula of Theorem 4.2. We now consider a general coherent risk ρ(Z) for which, in contrast to the CVaR and mean-semideviation cases, the Lagrangian saddle-point is not known analytically. We only assume that we know the structure of the risk envelope as given by (2). We show that in this case, ∇_θ ρ(Z) may be estimated using a sample average approximation (SAA; [26]) of the formula in Theorem 4.2.

Assume that we are given N i.i.d. samples ω_i ~ P_θ, i = 1, . . . , N, and let P_{θ;N}(ω) := (1/N) Σ_{i=1}^N 1{ω_i = ω} denote the corresponding empirical distribution. Also, let the sample risk envelope U(P_{θ;N}) be defined according to Eq. 2 with P_θ replaced by P_{θ;N}. Consider the following SAA version of the optimization in Eq. 1:

ρ_N(Z) = max_{ξ : ξP_{θ;N} ∈ U(P_{θ;N})} Σ_{i=1,...,N} P_{θ;N}(ω_i) ξ(ω_i) Z(ω_i).   (7)

Note that (7) defines a convex optimization problem with O(N) variables and constraints. In the following, we assume that a solution to (7) may be computed efficiently using standard convex programming tools such as interior point methods [7]. Let ξ*_{θ;N} denote a solution to (7) and λ^{P,*}_{θ;N}, λ^{E,*}_{θ;N}, λ^{I,*}_{θ;N} denote the corresponding KKT multipliers, which can be obtained from the convex programming algorithm [7]. We propose the following estimator for the gradient, based on Theorem 4.2:

∇_{θ;N} ρ(Z) = Σ_{i=1}^N P_{θ;N}(ω_i) ξ*_{θ;N}(ω_i) ∇_θ log P(ω_i) (Z(ω_i) − λ^{P,*}_{θ;N}) − Σ_{e∈E} λ^{E,*}_{θ;N}(e) ∇_θ g_e(ξ*_{θ;N}; P_{θ;N}) − Σ_{i∈I} λ^{I,*}_{θ;N}(i) ∇_θ f_i(ξ*_{θ;N}; P_{θ;N}).   (8)

Thus, our gradient estimation algorithm is a two-step procedure involving both sampling and convex programming. In the following, we show that under some conditions on the set U(P_θ), ∇_{θ;N} ρ(Z) is a consistent estimator of ∇_θ ρ(Z). The proof has been reported in the supplementary material.
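A minimal sketch of the two-step procedure for the special case of the CVaR envelope, assuming the cvxpy modeling package. For this envelope the g_e and f_i constraints do not depend on θ, so only the likelihood-ratio term of (8) survives; the dual of the normalization constraint plays the role of λ^{P,*}_{θ;N} (up to the solver's dual sign convention).

```python
import numpy as np
import cvxpy as cp

def saa_weights(z, alpha):
    """Step 1: solve the SAA problem (7) over the empirical CVaR envelope;
    return the optimal xi* and the normalization multiplier."""
    n = len(z)
    p = np.full(n, 1.0 / n)
    xi = cp.Variable(n, nonneg=True)
    norm_con = cp.sum(cp.multiply(p, xi)) == 1
    prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(p * z, xi))),
                      [norm_con, xi <= 1.0 / alpha])
    prob.solve()
    return xi.value, float(norm_con.dual_value)

def saa_gradient(z, grad_log_probs, alpha):
    """Step 2: plug xi* and lambda^{P,*} into the estimator (8)."""
    n = len(z)
    xi_star, lam_p = saa_weights(z, alpha)
    weights = (1.0 / n) * xi_star * (z - lam_p)
    return weights @ grad_log_probs
```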
Proposition 4.4. Let Assumptions 2.2 and 4.1 hold. Suppose there exists a compact set C = C_ξ × C_λ such that: (I) the set of Lagrangian saddle points S ⊂ C is non-empty and bounded; (II) the functions g_e(ξ, P_θ) for all e ∈ E and f_i(ξ, P_θ) for all i ∈ I are finite-valued and continuous (in ξ) on C_ξ; (III) for N large enough, the set S_N is non-empty and S_N ⊂ C w.p. 1. Further assume that: (IV) if ξ_N P_{θ;N} ∈ U(P_{θ;N}) and ξ_N converges w.p. 1 to a point ξ, then ξP_θ ∈ U(P_θ). We then have that lim_{N→∞} ρ_N(Z) = ρ(Z) and lim_{N→∞} ∇_{θ;N} ρ(Z) = ∇_θ ρ(Z) w.p. 1.

The set of assumptions for Proposition 4.4 is large, but rather mild. Note that (I) is implied by the Slater condition of Assumption 2.2. For satisfying (III), we need the risk to be well-defined for every empirical distribution, which is a natural requirement. Since P_{θ;N} always converges to P_θ uniformly on Ω, (IV) essentially requires smoothness of the constraints. We remark that in particular, conditions (I) to (IV) are satisfied for the popular CVaR, mean-semideviation, and spectral risk measures.

It is interesting to compare the performance of the SAA estimator (8) with the analytical-solution based estimator, as in Sections 4.1 and 4.2. In the supplementary material, we report an empirical comparison between the two approaches for the case of CVaR risk, which showed that the two approaches performed very similarly. This is well-expected, since in general, both SAA and standard likelihood-ratio based estimators obey a law-of-large-numbers variance bound of order 1/√N [26].

To summarize this section, we have seen that by exploiting the special structure of coherent risk measures in Theorem 2.1 and the envelope-theorem style result of Theorem 4.2, we are able to derive sampling-based, likelihood-ratio style algorithms for estimating the policy gradient ∇_θ ρ(Z) of coherent static risk measures. The gradient estimation algorithms developed here for static risk measures will be used as a sub-routine in our subsequent treatment of dynamic risk measures.
5 Gradient Formula for Dynamic Risk

In this section, we derive a new formula for the gradient of the Markov coherent dynamic risk measure, ∇_θ ρ_∞(M). Our approach is based on combining the static gradient formula of Theorem 4.2 with a dynamic-programming decomposition of ρ_∞(M).
The risk-sensitive value function for an MDP M under the policy θ is defined as V_θ(x) := ρ_∞(M | x_0 = x), where with a slight abuse of notation, ρ_∞(M | x_0 = x) denotes the Markov-coherent dynamic risk in (3) when the initial state x_0 is x. It is shown in [24] that due to the structure of the Markov dynamic risk ρ_∞(M), the value function is the unique solution to the risk-sensitive Bellman equation

V_θ(x) = C(x) + γ max_{ξP_θ(·|x) ∈ U(x, P_θ(·|x))} E_ξ[V_θ(x')],   (9)

where the expectation is taken over the next-state transition. Note that by definition, we have ρ_∞(M) = V_θ(x_0), and thus, ∇_θ ρ_∞(M) = ∇_θ V_θ(x_0).
We now develop a formula for $\nabla_\theta V_\theta(x)$; this formula extends the well-known "policy gradient theorem" [28, 14], developed for the expected return, to Markov-coherent dynamic risk measures. We make a standard assumption, analogous to Assumption 4.1 of the static case.
Assumption 5.1. The likelihood ratio $\nabla_\theta \log \mu_\theta(a|x)$ is well-defined and bounded for all $x \in \mathcal{X}$ and $a \in \mathcal{A}$.
For each state $x \in \mathcal{X}$, let $(\xi^*_{\theta,x}, \lambda^{*,P}_{\theta,x}, \lambda^{*,E}_{\theta,x}, \lambda^{*,I}_{\theta,x})$ denote a saddle point of (6) corresponding to the state $x$, with $P_\theta(\cdot|x)$ replacing $P_\theta$ in (6) and $V_\theta$ replacing $Z$. The next theorem presents a formula for $\nabla_\theta V_\theta(x)$; the proof is in the supplementary material.
Theorem 5.2. Under Assumptions 2.2 and 5.1, we have
$$\nabla_\theta V_\theta(x) = \mathbb{E}_{\xi^*}\left[\,\sum_{t=0}^{\infty} \gamma^t\, \nabla_\theta \log \mu_\theta(a_t|x_t)\, h_\theta(x_t, a_t) \;\middle|\; x_0 = x\right],$$
where $\mathbb{E}_{\xi^*}[\cdot]$ denotes the expectation w.r.t. trajectories generated by the Markov chain with transition probabilities $P_\theta(\cdot|x)\,\xi^*_{\theta,x}(\cdot)$, and the stage-wise cost function $h_\theta(x, a)$ is defined as
$$h_\theta(x, a) = C(x) + \sum_{x' \in \mathcal{X}} P(x'|x, a)\,\xi^*_{\theta,x}(x')\left[\gamma V_\theta(x') - \lambda^{*,P}_{\theta,x} - \sum_{i\in\mathcal{I}} \lambda^{*,I}_{\theta,x}(i)\,\frac{d f_i(\xi^*_{\theta,x}, p)}{d p(x')} - \sum_{e\in\mathcal{E}} \lambda^{*,E}_{\theta,x}(e)\,\frac{d g_e(\xi^*_{\theta,x}, p)}{d p(x')}\right].$$
Theorem 5.2 may be used to develop an actor-critic style [28, 14] sampling-based algorithm for solving the DRP problem (5), composed of two interleaved procedures:
Critic: for a given policy $\theta$, calculate the risk-sensitive value function $V_\theta$; and
Actor: using the critic's $V_\theta$ and Theorem 5.2, estimate $\nabla_\theta\,\rho_\theta(M)$ and update $\theta$.
Space limitations prevent us from specifying the full details of our actor-critic algorithm and its analysis. In the following, we highlight only the key ideas and results; for the full details, we refer the reader to the full version of the paper, provided in the supplementary material.
For the critic, the main challenge is calculating the value function when the state space $\mathcal{X}$ is large and dynamic programming cannot be applied due to the "curse of dimensionality". To overcome this, we exploit the fact that $V_\theta$ is equivalent to the value function in a robust MDP [19] and modify a recent algorithm in [31] to estimate it using function approximation.
For the actor, the main challenge is that in order to estimate the gradient using Thm. 5.2, we need to sample from an MDP with $\xi^*$-weighted transitions. Also, $h_\theta(x, a)$ involves an expectation for each $x$ and $a$. Therefore, we propose a two-phase sampling procedure to estimate $\nabla_\theta V_\theta$, in which we first use the critic's estimate of $V_\theta$ to derive $\xi^*$ and sample a trajectory from an MDP with $\xi^*$-weighted transitions. For each state in the trajectory, we then sample several next states to estimate $h_\theta(x, a)$.
The convergence analysis of the actor-critic algorithm and the gradient error incurred from function approximation of $V_\theta$ are reported in the supplementary material. We remark that our actor-critic algorithm requires a simulator for sampling multiple state transitions from each state. Extending our approach to work with a single trajectory roll-out is an interesting direction for future research.
6 Numerical Illustration
In this section, we illustrate our approach with a numerical example. The purpose of this illustration is to emphasize the importance of flexibility in designing risk criteria: an appropriate risk measure $\rho$ should suit both the user's risk preference and the problem-specific properties.
Figure 1: Numerical illustration of selection between 3 assets. A: probability density of asset returns. B, C, D: bar plots of the probability of selecting each asset vs. training iterations, for policies $\pi_1$, $\pi_2$, and $\pi_3$, respectively. At each iteration, 10,000 samples were used for gradient estimation.
We consider a trading agent that can invest in one of three assets (see Figure 1 for their distributions). The returns of the first two assets, A1 and A2, are normally distributed: $A1 \sim N(1, 1)$ and $A2 \sim N(4, 6)$. The return of the third asset A3 has a Pareto distribution, $f(z) = \frac{\alpha}{z^{\alpha+1}},\ \forall z > 1$, with $\alpha = 1.5$. The mean of the return from A3 is 3 and its variance is infinite; such heavy-tailed distributions are widely used in financial modeling [22]. The agent selects an action randomly, with probability $P(A_i) \propto \exp(\theta_i)$, where $\theta \in \mathbb{R}^3$ is the policy parameter. We trained three different policies $\pi_1$, $\pi_2$, and $\pi_3$. Policy $\pi_1$ is risk-neutral, i.e., $\max_\theta \mathbb{E}[Z]$, and it was trained using standard policy gradient [15]. Policy $\pi_2$ is risk-averse and had a mean-semideviation objective $\max_\theta \mathbb{E}[Z] - \mathrm{SD}[Z]$, and was trained using the algorithm in Section 4. Policy $\pi_3$ is also risk-averse, with a mean-standard-deviation objective $\max_\theta \mathbb{E}[Z] - \sqrt{\mathrm{Var}[Z]}$, as proposed in [29, 21], and was trained using the algorithm of [29]. For each of these policies, Figure 1 shows the probability of selecting each asset vs. training iterations. Although A2 has the highest mean return, the risk-averse policy $\pi_2$ chooses A3, since it has a lower downside, as expected. However, because of the heavy upper tail of A3, policy $\pi_3$ opted to choose A1 instead. This is counter-intuitive, as a rational investor should not avert high returns. In fact, in this case A3 stochastically dominates A1 [12].
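To make the setup concrete, here is a minimal sketch of one policy-gradient step for the risk-neutral policy $\pi_1$ (the risk-averse policies $\pi_2$, $\pi_3$ would replace the per-sample weight according to the formulas of Section 4); `sample_return` is an assumed helper drawing one return of asset i, and the step size is arbitrary:

```python
import numpy as np

def reinforce_step(theta, sample_return, n_samples, lr, rng):
    """One ascent step on E[Z] for the softmax policy P(A_i) ∝ exp(theta_i).

    For a softmax policy, grad_theta log P(A_i) = e_i - p, which is all the
    likelihood-ratio estimator needs here.
    """
    p = np.exp(theta - theta.max())
    p /= p.sum()
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        i = rng.choice(len(theta), p=p)
        z = sample_return(i, rng)      # one sampled return of asset i
        score = -p.copy()
        score[i] += 1.0                # grad_theta log P(A_i)
        grad += z * score / n_samples
    return theta + lr * grad
```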
7 Conclusion
We presented algorithms for estimating the gradient of both static and dynamic coherent risk measures, using two new policy-gradient style formulas that combine sampling with convex programming. Our approach thereby extends risk-sensitive RL to the whole class of coherent risk measures, and generalizes several recent studies that focused on specific risk measures.
On the technical side, an important future direction is to improve the convergence rate of gradient
estimates using importance sampling methods. This is especially important for risk criteria that are
sensitive to rare events, such as the CVaR [3].
From a more conceptual point of view, the coherent-risk framework explored in this work provides
the decision maker with flexibility in designing risk preference. As our numerical example shows,
such flexibility is important for selecting appropriate problem-specific risk measures for managing
the cost variability. However, we believe that our approach has much more potential than that.
In almost every real-world application, uncertainty emanates from stochastic dynamics, but also,
and perhaps more importantly, from modeling errors (model uncertainty). A prudent policy should
protect against both types of uncertainty. The representation duality of coherent risk (Theorem 2.1) naturally relates the risk to model uncertainty. In [19], a similar connection was made between model uncertainty in MDPs and dynamic Markov coherent risk. We believe that by carefully shaping the risk criterion, the decision maker may be able to take uncertainty into account in a broad
sense. Designing a principled procedure for such risk shaping is not trivial and is beyond the scope of this paper. However, we believe that there is much potential in risk shaping, as it may be the key to handling model misspecification in dynamic decision making.
Acknowledgments
The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC Grant Agreement n. 306638. Yinlam Chow is partially supported by a Croucher Foundation Doctoral Scholarship.
References
[1] C. Acerbi. Spectral measures of risk: a coherent representation of subjective risk aversion. Journal of Banking & Finance, 26(7):1505–1518, 2002.
[2] P. Artzner, F. Delbaen, J. Eber, and D. Heath. Coherent measures of risk. Mathematical Finance, 9(3):203–228, 1999.
[3] O. Bardou, N. Frikha, and G. Pagès. Computing VaR and CVaR using stochastic approximation and adaptive unconstrained importance sampling. Monte Carlo Methods and Applications, 15(3):173–210, 2009.
[4] N. Bäuerle and J. Ott. Markov decision processes with average-value-at-risk criteria. Mathematical Methods of Operations Research, 74(3):361–379, 2011.
[5] D. Bertsekas. Dynamic programming and optimal control. Athena Scientific, 4th edition, 2012.
[6] V. Borkar. A sensitivity formula for risk-sensitive cost and the actor-critic algorithm. Systems & Control Letters, 44(5):339–346, 2001.
[7] S. Boyd and L. Vandenberghe. Convex optimization. Cambridge University Press, 2009.
[8] Y. Chow and M. Ghavamzadeh. Algorithms for CVaR optimization in MDPs. In NIPS 27, 2014.
[9] Y. Chow and M. Pavone. A unifying framework for time-consistent, risk-averse model predictive control: theory and algorithms. In American Control Conference, 2014.
[10] E. Delage and S. Mannor. Percentile optimization for Markov decision processes with parameter uncertainty. Operations Research, 58(1):203–213, 2010.
[11] M. Fu. Gradient estimation. In Simulation, volume 13 of Handbooks in Operations Research and Management Science, pages 575–616. Elsevier, 2006.
[12] J. Hadar and W. R. Russell. Rules for ordering uncertain prospects. The American Economic Review, pages 25–34, 1969.
[13] D. Iancu, M. Petrik, and D. Subramanian. Tight approximations of dynamic risk measures. arXiv:1106.6102, 2011.
[14] V. Konda and J. Tsitsiklis. Actor-critic algorithms. In NIPS, 2000.
[15] P. Marbach and J. Tsitsiklis. Simulation-based optimization of Markov reward processes. IEEE Transactions on Automatic Control, 46(2):191–209, 1998.
[16] H. Markowitz. Portfolio selection: Efficient diversification of investment. John Wiley and Sons, 1959.
[17] P. Milgrom and I. Segal. Envelope theorems for arbitrary choice sets. Econometrica, 70(2):583–601, 2002.
[18] J. Moody and M. Saffell. Learning to trade via direct reinforcement. Neural Networks, IEEE Transactions on, 12(4):875–889, 2001.
[19] T. Osogami. Robustness and risk-sensitivity in Markov decision processes. In NIPS, 2012.
[20] M. Petrik and D. Subramanian. An approximate solution method for large risk-averse Markov decision processes. In UAI, 2012.
[21] L. Prashanth and M. Ghavamzadeh. Actor-critic algorithms for risk-sensitive MDPs. In NIPS 26, 2013.
[22] S. Rachev and S. Mittnik. Stable Paretian models in finance. John Willey & Sons, New York, 2000.
[23] R. Rockafellar and S. Uryasev. Optimization of conditional value-at-risk. Journal of Risk, 2:21–42, 2000.
[24] A. Ruszczyński. Risk-averse dynamic programming for Markov decision processes. Mathematical Programming, 125(2):235–261, 2010.
[25] A. Ruszczyński and A. Shapiro. Optimization of convex risk functions. Math. OR, 31(3):433–452, 2006.
[26] A. Shapiro, D. Dentcheva, and A. Ruszczyński. Lectures on stochastic programming, chapter 6, pages 253–332. SIAM, 2009.
[27] R. Sutton and A. Barto. Reinforcement learning: An introduction. Cambridge Univ Press, 1998.
[28] R. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In NIPS 13, 2000.
[29] A. Tamar, D. Di Castro, and S. Mannor. Policy gradients with variance related risk criteria. In International Conference on Machine Learning, 2012.
[30] A. Tamar, Y. Glassner, and S. Mannor. Optimizing the CVaR via sampling. In AAAI, 2015.
[31] A. Tamar, S. Mannor, and H. Xu. Scaling up robust MDPs using function approximation. In International Conference on Machine Learning, 2014.
5,440 | 5,924 | A Dual-Augmented Block Minimization Framework
for Learning with Limited Memory
Ian E.H. Yen†  Shan-Wei Lin‡  Shou-De Lin‡
†University of Texas at Austin  ‡National Taiwan University
ianyen@cs.utexas.edu  {r03922067,sdlin}@csie.ntu.edu.tw
Abstract
In the past few years, several techniques have been proposed for training linear Support Vector Machines (SVMs) in the limited-memory setting, where a dual block-coordinate descent (dual-BCD) method was used to balance the cost spent on I/O and computation. In this paper, we consider the more general setting of regularized Empirical Risk Minimization (ERM) when data cannot fit into memory. In particular, we generalize the existing block minimization framework based on strong duality and an Augmented Lagrangian technique to achieve global convergence for general convex ERM. The block minimization framework is flexible in the sense that, given a solver working under sufficient memory, one can integrate it with the framework to obtain a solver that is globally convergent under the limited-memory condition. We conduct experiments on L1-regularized classification and regression problems to corroborate our convergence theory and to compare the proposed framework with algorithms adopted from online and distributed settings, which shows the superiority of the proposed approach on data of size ten times larger than the memory capacity.
1 Introduction
Nowadays, data of huge scale are prevalent in many applications of statistical learning and data mining. It has been argued that model performance can be boosted by increasing both the number of samples and the number of features, and through crowdsourcing technology, annotated samples of terabytes in storage size can be generated [3]. As a result, the performance of a model is no longer limited by the sample size but by the amount of available computational resources. In other words, the data size can easily go beyond the size of the physical memory of available machines. Under this setting, most learning algorithms become slow due to expensive I/O from secondary storage devices [26].
When it comes to huge-scale data, two settings are often considered: online and distributed learning. In the online setting, each sample is processed only once without storage, while in the distributed setting, one has several machines that can jointly fit the data into memory. However, real cases are often not as extreme as these two: there are usually machines that can fit part of the data, but not all of it. In this setting, an algorithm can only process a block of data at a time. Therefore, balancing the time spent on I/O and computation becomes the key issue [26]. Although one can employ an online-fashioned learning algorithm in this setting, it has been observed that online methods require a large number of epochs to achieve performance comparable to batch methods, and at each epoch they spend most of the time on I/O instead of computation [2, 21, 26]. The situation for online methods can become worse for problems with non-smooth, non-strongly convex objective functions, where a qualitatively slower convergence of online methods is exhibited [15, 16] than that proved for strongly convex problems like the SVM [14].
In the past few years, several algorithms have been proposed to solve large-scale linear Support Vector Machines (SVMs) in the limited-memory setting [2, 21, 26]. These approaches are based on a dual Block-Coordinate Descent (dual-BCD) algorithm, which decomposes the original problem into a series of block sub-problems, each of which requires only a block of data loaded into memory. The approach was proved linearly convergent to the global optimum and demonstrated fast convergence empirically. However, the convergence of the algorithm relies on the assumption of a smooth dual problem, which, as we show, does not hold generally for other regularized Empirical Risk Minimization (ERM) problems. As a result, although the dual-BCD approach can be extended to the more general setting, it is not globally convergent except for a class of problems with an L2-regularizer.
In this paper, we first show how to adapt the dual block-coordinate descent method of [2, 26] to the general setting of regularized Empirical Risk Minimization (ERM), which subsumes most supervised learning problems, ranging from classification and regression to ranking and recommendation. Then we discuss the convergence issue that arises when the underlying ERM is not strongly convex. A Primal Proximal Point (or Dual Augmented Lagrangian) method is then proposed to address this issue, which, as we show, results in a block minimization algorithm with global convergence to the optimum for convex regularized ERM problems. The framework is flexible in the sense that, given a solver working under the sufficient-memory condition, it can be integrated into the block minimization framework to obtain a solver globally convergent under the limited-memory condition.
We conduct experiments on L1-regularized classification and regression problems to corroborate our convergence theory, which shows that the proposed simple dual-augmented technique changes the convergence behavior dramatically. We also compare the proposed framework to algorithms adopted from online and distributed settings. In particular, we describe how to adapt a distributed optimization framework, the Alternating Direction Method of Multipliers (ADMM) [1], to the limited-memory setting, and show that, although the adapted algorithm is effective, it is not as efficient as the proposed framework specially designed for the limited-memory setting. Note that our experiments do not include some recently proposed distributed learning algorithms (CoCoA etc.) [7, 10] that apply only to ERM with an L2-regularizer, or other distributed methods designed for specific loss functions [19].
2 Problem Setup
In this work, we consider the regularized Empirical Risk Minimization problem, which, given a data set $D = \{(\Phi_n, y_n)\}_{n=1}^N$, estimates a model through
$$\min_{w\in\mathbb{R}^d,\; z_n\in\mathbb{R}^p} \quad F(w, z) = R(w) + \sum_{n=1}^{N} L_n(z_n) \qquad \text{s.t.} \quad \Phi_n w = z_n,\ n \in [N], \tag{1}$$
where $w \in \mathbb{R}^d$ is the model parameter to be estimated, $\Phi_n$ is a $p$ by $d$ design matrix that encodes features of the $n$-th data sample, $L_n(z_n)$ is a convex loss function that penalizes the discrepancy between the ground truth and the prediction vector $z_n \in \mathbb{R}^p$, and $R(w)$ is a convex regularization term penalizing model complexity.
The formulation (1) subsumes a large class of statistical learning problems, ranging from classification [27], regression [17], and ranking [8] to convex clustering [24]. For example, in a classification problem we have $p = |\mathcal{Y}|$, where $\mathcal{Y}$ is the set of all possible labels, and $L_n(\cdot)$ can be defined as the logistic loss $L_n(z) = \log(\sum_{k\in\mathcal{Y}} \exp(z_k)) - z_{y_n}$ as in logistic regression, or the hinge loss $L_n(z) = \max_{k\in\mathcal{Y}} (1 - \delta_{k,y_n} + z_k - z_{y_n})$ as used in support vector machines; in a (multi-task) regression problem, the target variable consists of $K$ real values, $\mathcal{Y} = \mathbb{R}^K$, the prediction vector has $p = K$ dimensions, and the square loss $L_n(z) = \frac{1}{2}\|z - y_n\|_2^2$ is often used. There is also a variety of regularizers $R(w)$ employed in different applications, which includes the L2-regularizer $R(w) = \frac{\lambda}{2}\|w\|^2$ in ridge regression, the L1-regularizer $R(w) = \lambda\|w\|_1$ in the Lasso, the nuclear norm $R(w) = \lambda\|w\|_*$ in matrix completion, and a family of structured group norms $R(w) = \lambda\|w\|_G$ [11]. Although the specific form of $L_n(\cdot)$, $R(w)$ does not affect the implementation of the limited-memory training procedure, two properties of the functions, strong convexity and smoothness, have key effects on the behavior of the block minimization algorithm.
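As a concrete instance (our own sketch), the multiclass logistic loss above and its gradient take a few lines of NumPy; by the KKT update (9) introduced later, this gradient evaluated at the block solution z* is exactly the dual variable alpha_n:

```python
import numpy as np

def logistic_loss_grad(z, y):
    """Multiclass logistic loss L_n(z) = log(sum_k exp(z_k)) - z_y and its
    gradient softmax(z) - e_y, computed with a stable log-sum-exp."""
    m = z.max()
    lse = m + np.log(np.sum(np.exp(z - m)))
    grad = np.exp(z - lse)   # softmax(z)
    grad[y] -= 1.0
    return lse - z[y], grad
```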
Definition 1 (Strong Convexity). A function $f(x)$ is strongly convex iff it is lower bounded by a simple quadratic function:
$$f(y) \ge f(x) + \nabla f(x)^T(y - x) + \frac{m}{2}\|x - y\|^2 \tag{2}$$
for some constant $m > 0$ and $\forall x, y \in \mathrm{dom}(f)$.
Definition 2 (Smoothness). A function $f(x)$ is smooth iff it is upper bounded by a simple quadratic function:
$$f(y) \le f(x) + \nabla f(x)^T(y - x) + \frac{M}{2}\|x - y\|^2 \tag{3}$$
for some constant $0 \le M < \infty$ and $\forall x, y \in \mathrm{dom}(f)$.
For instance, the square loss and logistic loss are both smooth and strongly convex¹, while the hinge loss satisfies neither of them. On the other hand, most regularizers, such as the L1-norm, structured group norms, and the nuclear norm, are neither smooth nor strongly convex, except for the L2-regularizer, which satisfies both. In the following, we will demonstrate the effects of these properties on Block Minimization algorithms.
¹The logistic loss is strongly convex when its input $z$ is within a bounded range, which is true as long as we have a non-zero regularizer $R(w)$.
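A quick numerical probe of Definitions 1 and 2 (a sanity check we add for illustration, not a proof) tests the two quadratic bounds at random point pairs; for the square loss both constants equal 1:

```python
import numpy as np

def check_bounds(f, grad, m, M, dim=5, trials=100, seed=0):
    """Empirically test the lower bound (2) and upper bound (3) of f."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x, y = rng.normal(size=dim), rng.normal(size=dim)
        base = f(x) + grad(x) @ (y - x)
        half_sq = 0.5 * np.sum((x - y) ** 2)
        assert f(y) >= base + m * half_sq - 1e-9   # strong convexity (2)
        assert f(y) <= base + M * half_sq + 1e-9   # smoothness (3)

y0 = np.zeros(5)
check_bounds(lambda z: 0.5 * np.sum((z - y0) ** 2), lambda z: z - y0, m=1.0, M=1.0)
```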
Throughout this paper, we will assume that a solver for (1) that works under the sufficient-memory condition is given, and our task is to design an algorithmic framework that integrates with the solver to efficiently solve (1) when data cannot fit into memory. We will assume, however, that the $d$-dimensional parameter vector $w$ can be fit into memory.
3 Dual Block Minimization
In this section, we extend the block minimization framework of [26] from linear SVM to the general setting of regularized ERM (1). The dual of (1) can be expressed as
$$\min_{\nu\in\mathbb{R}^d,\; \alpha_n\in\mathbb{R}^p} \quad R^*(-\nu) + \sum_{n=1}^{N} L_n^*(\alpha_n) \qquad \text{s.t.} \quad \sum_{n=1}^{N} \Phi_n^T \alpha_n = \nu, \tag{4}$$
where $R^*(-\nu)$ is the convex conjugate of $R(w)$ and $L_n^*(\alpha_n)$ is the convex conjugate of $L_n(z_n)$.
The block minimization algorithm of [26] basically performs a dual Block-Coordinate Descent (dual-BCD) over (4) by dividing the whole data set $D$ into $K$ blocks $D_{B_1}, \ldots, D_{B_K}$, and optimizing a block of dual variables $(\alpha_{B_k}, \nu)$ at a time, where $D_{B_k} = \{(\Phi_n, y_n)\}_{n\in B_k}$ and $\alpha_{B_k} = \{\alpha_n \mid n \in B_k\}$.
In [26], the dual problem (4) is derived explicitly in order to perform the algorithm. However, for many sparsity-inducing regularizers such as the L1-norm and nuclear norm, it is more efficient and convenient to solve (1) in the primal [6, 28]. Therefore, here, instead of explicitly forming the dual problem, we express it implicitly as
$$G(\alpha) = \min_{w, z}\; L(\alpha, w, z), \tag{5}$$
where $L(\alpha, w, z)$ is the Lagrangian function of (1), and maximize (5) w.r.t. a block of variables $\alpha_{B_k}$ from the primal instead of the dual by strong duality:
$$\max_{\alpha_{B_k}} \min_{w, z} L(\alpha, w, z) = \min_{w, z} \max_{\alpha_{B_k}} L(\alpha, w, z) \tag{6}$$
with the other dual variables $\{\alpha_{B_j} = \alpha^t_{B_j}\}_{j\neq k}$ fixed. The maximization over the dual variables $\alpha_{B_k}$ in (6) then enforces the primal equalities $\Phi_n w = z_n,\ n \in B_k$, which results in the block minimization problem
$$\min_{w\in\mathbb{R}^d,\; z_n\in\mathbb{R}^p} \quad R(w) + \sum_{n\in B_k} L_n(z_n) + (\nu^t_{B_k})^T w \qquad \text{s.t.} \quad \Phi_n w = z_n,\ n \in B_k, \tag{7}$$
where $\nu^t_{B_k} = \sum_{n\notin B_k} \Phi_n^T \alpha_n^t$. Note that, in (7), the variables $\{z_n\}_{n\notin B_k}$ have been dropped since they are not relevant to the block of dual variables $\alpha_{B_k}$; thus, given the $d$-dimensional vector $\nu^t_{B_k}$, one can solve (7) without accessing the data $\{(\Phi_n, y_n)\}_{n\notin B_k}$ outside the block $B_k$. Throughout the dual-BCD algorithm, we maintain the $d$-dimensional vector $\nu^t = \sum_{n=1}^N \Phi_n^T \alpha_n^t$ and compute $\nu^t_{B_k}$ via
$$\nu^t_{B_k} = \nu^t - \sum_{n\in B_k} \Phi_n^T \alpha_n^t \tag{8}$$
at the beginning of solving each block subproblem (7). Since subproblem (7) is of the same form as the original problem (1), except for one additional linear augmented term $(\nu^t_{B_k})^T w$, one can adapt the solver of (1) to solve (7) easily by providing an augmented version of the gradient
$$\nabla_w \tilde{F}(w, z) = \nabla_w F(w, z) + \nu^t_{B_k}$$
to the solver, where $\tilde{F}(\cdot)$ denotes the function with augmented terms and $F(\cdot)$ denotes the function without augmented terms. Note that the augmented term $\nu^t_{B_k}$ is constant and separable w.r.t. coordinates, so it adds little overhead to the solver. After obtaining the solution $(w^*, z^*_{B_k})$ of (7), we can derive the corresponding optimal dual variables $\alpha_{B_k}$ for (6) according to the KKT conditions and maintain $\nu$ subsequently by
$$\alpha_n^{t+1} = \nabla_{z_n} L_n(z_n^*),\ n \in B_k, \tag{9}$$
$$\nu^{t+1} = \nu^t_{B_k} + \sum_{n\in B_k} \Phi_n^T \alpha_n^{t+1}. \tag{10}$$
The procedure is summarized in Algorithm 1, which requires a total memory capacity of $O(d + |D_{B_k}| + p|B_k|)$. The factor $d$ comes from the storage of $\nu^t$ and $w^t$, the factor $|D_{B_k}|$ from the storage of a data block, and the factor $p|B_k|$ from the storage of $\alpha_{B_k}$. Note that this requires the same space complexity as that of the original algorithm proposed for linear SVM [26], where $p = 1$ in the binary classification setting.
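The following out-of-core sketch mirrors the maintenance rules (8)-(10); `blocks[k].load()`/`unload()` and `solve_block` are hypothetical interfaces standing in for disk I/O and the in-memory solver of (7). For clarity, it keeps one d-dimensional aggregate per block in memory, whereas Algorithm 1 instead saves the block's dual variables to disk:

```python
import numpy as np

def dual_bcd(blocks, d, solve_block, epochs, seed=0):
    """Out-of-core dual block minimization in the spirit of Algorithm 1.

    solve_block(data, nu_Bk) solves (7) for one block and returns the new
    aggregate Phi_Bk^T alpha_Bk of that block's dual variables.
    """
    rng = np.random.default_rng(seed)
    nu = np.zeros(d)                     # nu^t = sum_n Phi_n^T alpha_n^t
    agg = [np.zeros(d) for _ in blocks]  # per-block Phi_Bk^T alpha_Bk
    for _ in range(epochs):
        for k in rng.permutation(len(blocks)):   # w/o replacement, Sec. 4.3.2
            data = blocks[k].load()              # only this block in memory
            nu_Bk = nu - agg[k]                  # eq. (8)
            agg[k] = solve_block(data, nu_Bk)    # solve (7), then apply (9)
            nu = nu_Bk + agg[k]                  # eq. (10)
            blocks[k].unload()
    return nu
```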
4 Dual-Augmented Block Minimization
Although the Block Minimization Algorithm 1 can be applied to the general regularized ERM problem (1), it is not guaranteed that the sequence $\{\alpha^t\}_{t=0}^\infty$ produced by Algorithm 1 converges to the global optimum of (1). In fact, the global convergence of Algorithm 1 only happens in some special cases. One sufficient condition for the global convergence of a Block-Coordinate Descent algorithm is that the terms in the objective function that are not separable w.r.t. blocks must be smooth (Definition 2). The dual objective function (4) (expressed using only $\alpha$) comprises two terms, $R^*(-\sum_{n=1}^N \Phi_n^T\alpha_n) + \sum_{n=1}^N L_n^*(\alpha_n)$, where the second term is separable w.r.t. $\{\alpha_n\}_{n=1}^N$, and thus is also separable w.r.t. $\{\alpha_{B_k}\}_{k=1}^K$, while the first term couples the variables $\alpha_{B_1}, \ldots, \alpha_{B_K}$ involving all the blocks. As a result, if $R^*(-\nu)$ is a smooth function according to Definition 2, then Algorithm 1 has global convergence to the optimum. However, the following theorem states that this is true only when $R(w)$ is strongly convex.
Theorem 1 (Strong/Smooth Duality). Assume $f(\cdot)$ is closed and convex. Then $f(\cdot)$ is smooth with parameter $M$ if and only if its convex conjugate $f^*(\cdot)$ is strongly convex with parameter $m = \frac{1}{M}$.
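As a quick worked instance of Theorem 1 (our own illustration), for the L2-regularizer the conjugate can be computed in closed form:
$$f(w) = \frac{\lambda}{2}\|w\|^2 \quad\Longrightarrow\quad f^*(\nu) = \sup_w\; \langle \nu, w\rangle - \frac{\lambda}{2}\|w\|^2 = \frac{1}{2\lambda}\|\nu\|^2,$$
so $f$ is smooth with $M = \lambda$ and $f^*$ is strongly convex with $m = 1/\lambda = 1/M$, matching the theorem.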
A proof of the above theorem can be found in [9]. According to Theorem 1, the Block Minimization Algorithm 1 is not globally convergent if $R(w)$ is not strongly convex, which, however, is the case for most regularizers other than the L2-norm $R(w) = \frac{1}{2}\|w\|^2$, as discussed in Section 2.
In this section, we propose a remedy to this problem which, by a Dual-Augmented Lagrangian method (or equivalently, a Primal Proximal Point method), creates a dual objective function of the desired property that iteratively approaches the original objective (1), and results in fast global convergence of the dual-BCD approach.
Algorithm 1 Dual Block Minimization
1. Split data D into blocks B1, B2, ..., BK.
2. Initialize α^0 = 0.
for t = 0, 1, ... do
  3.1. Draw k uniformly from [K].
  3.2. Load D_{Bk} and α^t_{Bk} into memory.
  3.3. Compute ν^t_{Bk} from (8).
  3.4. Solve (7) to obtain (w*, z*_{Bk}).
  3.5. Compute α^{t+1}_{Bk} by (9).
  3.6. Maintain ν^{t+1} through (10).
  3.7. Save α^{t+1}_{Bk} out of memory.
end for

Algorithm 2 Dual-Aug. Block Minimization
1. Split data D into blocks B1, B2, ..., BK.
2. Initialize w^0 = 0, α^0 = 0.
for t = 0, 1, ... (outer iteration) do
  for s = 0, 1, ..., S do
    3.1.1. Draw k uniformly from [K].
    3.1.2. Load D_{Bk}, α^s_{Bk} into memory.
    3.1.3. Compute ν^s_{Bk} from (15).
    3.1.4. Solve (14) to obtain (w*, z*_{Bk}).
    3.1.5. Compute α^{s+1}_{Bk} by (16).
    3.1.6. Maintain ν^{s+1} through (17).
    3.1.7. Save α^{s+1}_{Bk} out of memory.
  end for
  3.2. w^{t+1} = w*(α^S).
end for

4.1 Algorithm
The Dual Augmented Lagrangian (DAL) method (or equivalently, the Proximal Point method) modifies the original problem by introducing a sequence of proximal maps
$$w^{t+1} = \arg\min_{w}\; F(w) + \frac{1}{2\eta_t}\|w - w^t\|^2, \tag{11}$$
where $F(w)$ denotes the ERM problem (1). Under this simple modification, instead of doing Block-Coordinate Descent in the dual of the original problem (1), we perform dual-BCD on the proximal subproblem (11). As we show in the next section, the dual formulation of (11) has the required property for global convergence of the dual-BCD algorithm: all terms involving more than one block of variables $\alpha_{B_k}$ are smooth. Given the current iterate $w^t$, the Dual-Augmented Block Minimization algorithm optimizes the dual of the proximal-point problem (11) w.r.t. one block of variables $\alpha_{B_k}$ at a time, keeping the others fixed, $\{\alpha_{B_j} = \alpha^{(t,s)}_{B_j}\}_{j\neq k}$:
$$\max_{\alpha_{B_k}} \min_{w, z} L(w, z, \alpha) = \min_{w, z} \max_{\alpha_{B_k}} L(w, z, \alpha), \tag{12}$$
where $L(\cdot)$ is the Lagrangian of (11):
$$L(w, z, \alpha) = F(w, z) + \sum_{n=1}^{N} \alpha_n^T(\Phi_n w - z_n) + \frac{1}{2\eta_t}\|w - w^t\|^2. \tag{13}$$
Once again, the maximization w.r.t. $\alpha_{B_k}$ in (12) enforces the equalities $\Phi_n w = z_n,\ n \in B_k$, and thus leads to a primal sub-problem involving only the data in block $B_k$:
$$\min_{w\in\mathbb{R}^d,\; z_n\in\mathbb{R}^p} \quad R(w) + \sum_{n\in B_k} L_n(z_n) + (\nu^{(t,s)}_{B_k})^T w + \frac{1}{2\eta_t}\|w - w^t\|^2 \qquad \text{s.t.} \quad \Phi_n w = z_n,\ n \in B_k, \tag{14}$$
where $\nu^{(t,s)}_{B_k} = \sum_{n\notin B_k} \Phi_n^T \alpha_n^{(t,s)}$. Note that (14) is almost the same as (7), except that it has a proximal-point augmented term. Therefore, one can follow the same procedure as in Algorithm 1 to maintain the vector $\nu^{(t,s)} = \sum_{n=1}^N \Phi_n^T \alpha_n^{(t,s)}$ and compute
$$\nu^{(t,s)}_{B_k} = \nu^{(t,s)} - \sum_{n\in B_k} \Phi_n^T \alpha_n^{(t,s)} \tag{15}$$
before solving each block subproblem (14). After obtaining the solution $(w^*, z^*_{B_k})$ of (14), we update the dual variables $\alpha_{B_k}$ as
$$\alpha_n^{(t,s+1)} = \nabla_{z_n} L_n(z_n^*),\ n \in B_k, \tag{16}$$
and maintain $\nu$ subsequently as
$$\nu^{(t,s+1)} = \nu^{(t,s)}_{B_k} + \sum_{n\in B_k} \Phi_n^T \alpha_n^{(t,s+1)}. \tag{17}$$
The sub-problem (14) is of similar form to the original ERM problem (1). Since the augmented term is a simple quadratic function separable w.r.t. each coordinate, given a solver for (1) working under the sufficient-memory condition, one can easily adapt it by modifying
$$\nabla_w \tilde{F}(w, z) = \nabla_w F(w, z) + \nu^{(t,s)}_{B_k} + (w - w^t)/\eta_t,$$
$$\nabla^2_w \tilde{F}(w, z) = \nabla^2_w F(w, z) + I/\eta_t,$$
where $\tilde{F}(\cdot)$ denotes the function with augmented terms and $F(\cdot)$ denotes the function without augmented terms. The Block Minimization procedure is repeated until every sub-problem (14) reaches a tolerance $\epsilon_{in}$. Then the proximal-point update $w^{t+1} = w^*(\alpha^{(t,s)})$ is performed, where $w^*(\alpha^{(t,s)})$ is the solution of (14) for the latest dual iterate $\alpha^{(t,s)}$. The resulting algorithm is summarized in Algorithm 2.
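Adapting an existing sufficient-memory solver therefore amounts to wrapping its first- and second-order oracles, e.g. (a sketch with assumed oracle signatures):

```python
import numpy as np

def augment_oracles(grad_F, hess_F, nu_Bk, w_t, eta):
    """Wrap the oracles of F to obtain those of the augmented objective of
    (14): add the linear term nu_Bk and the proximal term (w - w_t)/eta."""
    def grad_aug(w, z):
        return grad_F(w, z) + nu_Bk + (w - w_t) / eta
    def hess_aug(w, z):
        return hess_F(w, z) + np.eye(len(w_t)) / eta
    return grad_aug, hess_aug
```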
4.2 Analysis
In this section, we analyze the convergence rate of Algorithm 2 to the optimum of (1). First, we show that the proximal-point formulation (11) has a dual problem with the desired property for the global convergence of Block-Coordinate Descent. In particular, the dual of (11) takes the form
$$\min_{\alpha_n\in\mathbb{R}^p} \quad \tilde{R}^*\Big(-\sum_{n=1}^{N}\Phi_n^T\alpha_n\Big) + \sum_{n=1}^{N} L_n^*(\alpha_n), \tag{18}$$
where $\tilde{R}^*(\cdot)$ is the convex conjugate of $\tilde{R}(w) = R(w) + \frac{1}{2\eta_t}\|w - w^t\|^2$, and since $\tilde{R}(w)$ is strongly convex with parameter $m = 1/\eta_t$, the convex conjugate $\tilde{R}^*(\cdot)$ is smooth with parameter $M = \eta_t$ according to Theorem 1. Therefore, (18) is in the composite form of a convex, smooth function plus a convex, block-separable function. This type of function has been widely studied in the literature on Block-Coordinate Descent [13]. In particular, one can show that a Block-Coordinate Descent applied to (18) has global convergence to the optimum with a fast rate, by the following theorem.
Theorem 2 (BCD Convergence). Let the sequence $\{\alpha^s\}_{s=1}^\infty$ be the iterates produced by Block Coordinate Descent in the inner loop of Algorithm 2, and let $K$ be the number of blocks. Denote by $\tilde{F}^*(\alpha)$ the dual objective function of (18) and by $\tilde{F}^*_{opt}$ the optimal value of (18). Then, with probability $1 - \delta$,
$$\tilde{F}^*(\alpha^s) - \tilde{F}^*_{opt} \le \epsilon, \quad \text{for } s \ge \tau K \log\Big(\frac{\tilde{F}^*(\alpha^0) - \tilde{F}^*_{opt}}{\delta\epsilon}\Big) \tag{19}$$
for some constant $\tau > 0$ if (i) $L_n(\cdot)$ is smooth, or (ii) $L_n(\cdot)$ is a polyhedral function and $R(\cdot)$ is also polyhedral or smooth. Otherwise, for any convex $L_n(\cdot)$, $R(\cdot)$, we have
$$\tilde{F}^*(\alpha^s) - \tilde{F}^*_{opt} \le \epsilon, \quad \text{for } s \ge \frac{cK}{\epsilon}\log\Big(\frac{\tilde{F}^*(\alpha^0) - \tilde{F}^*_{opt}}{\delta\epsilon}\Big) \tag{20}$$
for some constant $c > 0$.
Note the above analysis (in appendix) does not assume exact solution of each block subproblem.
Instead, it only assumes each block minimization step leads to a dual ascent amount proportional to
that produced by a single (dual) proximal gradient ascent step on the block of dual variables. For
the outer loop of Primal Proximal-Point (or Dual Augmented Lagrangian) iterates (11), we show the
following convergence theorem.
Theorem 3 (Proximal Point Convergence). Let $F(w)$ be the objective of the regularized ERM problem (1), and let $R = \max_v \max_w \{\|v - w\| : F(w) \le F(w^0),\ F(v) \le F(w^0)\}$ be the radius of the initial level set. The sequence $\{w^t\}_{t=1}^\infty$ produced by the Proximal-Point update (11) with $\eta_t = \eta$ has
$$F(w^{t+1}) - F_{opt} \le \epsilon, \quad \text{for } t \ge \beta \log\Big(\frac{\gamma}{\epsilon}\Big), \tag{21}$$
for some constants $\beta, \gamma > 0$ if both $L_n(\cdot)$ and $R(\cdot)$ are (i) strictly convex and smooth or (ii) polyhedral. Otherwise, for any convex $F(w)$ we have
$$F(w^{t+1}) - F_{opt} \le R^2/(2\eta t).$$
The following theorem further shows that solving sub-problem (11) inexactly with tolerance $\epsilon/t$ suffices for convergence to $\epsilon$ overall precision, where $t$ is the number of outer iterations required.
Theorem 4 (Inexact Proximal Map). Suppose, for a given iterate $w^t$, each sub-problem (11) is solved inexactly s.t. the solution $\hat{w}^{t+1}$ has
$$\|\hat{w}^{t+1} - \mathrm{prox}_{\eta_t F}(w^t)\| \le \epsilon_0. \tag{22}$$
Then let $\{\hat{w}^t\}_{t=1}^\infty$ be the sequence of iterates produced by inexact proximal updates and $\{w^t\}_{t=1}^\infty$ that generated by exact updates. After $t$ iterations, we have
$$\|\hat{w}^t - w^t\| \le t\epsilon_0. \tag{23}$$
Note that for $L_n(\cdot)$, $R(\cdot)$ being strictly convex and smooth, or polyhedral, $t$ is of order $O(\log(1/\epsilon))$, and thus it only requires $O(K \log(1/\epsilon)\log(t/\epsilon)) = O(K \log^2(1/\epsilon))$ overall block minimization steps to achieve $\epsilon$ suboptimality. Otherwise, as long as $L_n(\cdot)$ is smooth, for any convex regularizer $R(\cdot)$, $t$ is of order $O(1/\epsilon)$, so it requires $O(K(1/\epsilon)\log(t/\epsilon)) = O\big(\frac{K\log(1/\epsilon)}{\epsilon}\big)$ total block minimization steps.
4.3 Practical Issues
4.3.1 Solving Sub-Problem Inexactly
While the analysis in Section 4.2 assumes exact solutions of the subproblems, in practice the Block Minimization framework does not require solving subproblems (11), (14) exactly. In our experiments, it suffices for the fast convergence of the proximal-point update (11) to solve subproblem (14) with only a single pass over all blocks of variables $\alpha_{B_1}, \ldots, \alpha_{B_K}$, and to limit the number of iterations the designated solver spends on each subproblem (7), (14) to no more than some parameter $T_{max}$.
4.3.2 Random Selection w/o Replacement
In Algorithms 1 and 2, the block to be optimized is chosen uniformly at random, $k \in \{1, \ldots, K\}$, which eases the analysis for proving a better convergence rate [13]. However, in practice, to avoid unbalanced update frequencies among blocks, we do random sampling without replacement for both Algorithms 1 and 2; that is, for every $K$ iterations, we generate a random permutation $\pi_1, \ldots, \pi_K$ of the block indices $1, \ldots, K$ and optimize the block subproblems (7), (14) according to the order $\pi_1, \ldots, \pi_K$. This also eases the checking of the inner-loop stopping condition.
4.3.3 Storage of Dual Variables
Both Algorithms 1 and 2 need to store the dual variables $\alpha_{B_k}$ in memory and load/save them from/to some secondary storage unit, which requires time linear in $p|B_k|$. For some problems, such as multi-label classification with a large number of labels or structured prediction with a large number of factors, this can be very expensive. In this situation, one can instead maintain $\bar{\nu}_{B_k} = \sum_{n\in B_k} \Phi_n^T \alpha_n = \nu - \nu_{B_k}$ directly. Note that $\bar{\nu}_{B_k}$ has I/O and storage cost linear in $d$, which can be much smaller than $p|B_k|$ in a low-dimensional problem.
5 Experiment
In this section, we compare the proposed Dual-Augmented Block Minimization framework (Algorithm 2) to the vanilla Dual Block Coordinate Descent algorithm [26] and to methods adopted from online and distributed learning. The experiments are conducted on the L1-regularized L2-loss SVM problem [27] and the Lasso (L1-regularized regression) problem [17] in the limited-memory setting, with data size 10 times larger than the available memory. For both problems, we use a state-of-the-art randomized coordinate descent method [13, 27] as the solver for the sub-problems (7), (14), (59), (63), and we set the parameters $\eta_t = 1$ and $\lambda = 1$ (of the L1-regularizer) for all experiments. Four public benchmark data sets are used: webspam and rcv1-binary for classification, and year-pred and E2006 for regression, which can be obtained from the LIBSVM data set collections. For year-pred and E2006, the features are generated from Random Fourier Features [12, 23] that approximate the effect of a Gaussian RBF kernel. Table 1 summarizes the data statistics. The algorithms in comparison and their shorthands are listed below; all solvers are implemented in C/C++ and run on a 64-bit machine with a 2.83GHz Intel(R) Xeon(R) CPU. We constrained each process to use no more than 1/10 of the memory required to store the whole data.
- OnlineMD: Stochastic Mirror Descent method specially designed for L1-regularized problems, proposed in [15], with step size chosen from 10^-2 to 10^2 for best performance.
Table 1: Data statistics: summary of data statistics when stored using sparse format. The last two columns specify memory consumption in MB of the whole data and of a block when data is split into K = 10 partitions.

Data       #train   #test   dimension   #non-zeros      Memory   Block
webspam    315,000  31,500  680,714     1,174,704,031   20,679   2,068
rcv1       202,420  20,242  7,951,176   656,977,694     12,009   1,201
year-pred  463,715  51,630  2,000       927,893,715     13,702   1,370
E2006      16,087   3,308   30,000      8,088,636       8,088    809
Figure 1: Relative function value difference to the optimum and testing RMSE (accuracy) on LASSO (top) and L1-regularized L2-SVM (bottom). (Best RMSE for year-pred: 9.1320; for E2006: 0.4430.) (Best error for webspam: 0.4761%; for rcv1: 2.213%.)
[Figure panels: year-pred objective/RMSE, E2006 objective/RMSE, webspam objective/error, and rcv1 objective/error, each plotted against training time for ADMM, BC-ADMM, DA-BCD, D-BCD, and OnlineMD.]
- D-BCD²: Dual Block-Coordinate Descent method (Algorithm 1).
- DA-BCD: Dual-Augmented Block Minimization (Algorithm 2).
- ADMM: ADMM for limited-memory learning (Algorithm 3 in Appendix B).
- BC-ADMM: Block-Coordinate ADMM that updates a randomly chosen block of dual variables at a time for limited-memory learning (Algorithm 4 in Appendix B).
We use wall-clock time, which includes both I/O and computation, as the measure of training time in all experiments. In Figure 1, three measures are plotted versus training time: relative objective function difference to the optimum, testing RMSE, and accuracy. Figure 1 shows the results, where, as expected, the dual Block Coordinate Descent (D-BCD) method without augmentation cannot improve the objective after a certain number of iterations. However, with an extremely simple modification, the Dual-Augmented Block Minimization (DA-BCD) algorithm becomes not only globally convergent but also converges at a rate several times faster than the other approaches. Among all methods, the convergence of Online Mirror Descent (SMIDAS) is significantly slower, which is expected since (i) online Mirror Descent on a non-smooth, non-strongly convex function converges at a rate qualitatively slower than the linear convergence rate of DA-BCD and ADMM [15, 16], and (ii) the online method does not utilize the available memory capacity and thus spends an unbalanced share of time on I/O rather than computation. For the methods adopted from distributed optimization, the experiments show that BC-ADMM consistently, but only slightly, improves over ADMM, and both of them converge much more slowly than the DA-BCD approach, presumably due to the conservative updates on the dual variables.
Acknowledgement. We are grateful for the support of Telecommunication Lab., Chunghwa Telecom Co., Ltd via TL-103-8201, AOARD via No. FA2386-13-1-4045, Ministry of Science and Technology, National Taiwan University, and Intel Co. via MOST 102-2911-I-002-001, NTU103R7501, 102-2923-E-002-007-MY2, 102-2221-E-002-170, and 103-2221-E-002-104-MY2.
²The objective value obtained from D-BCD fluctuates a lot; in the figures we plot the lowest values achieved by D-BCD from the beginning to time t.
References
[1] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 2011.
[2] K. Chang and D. Roth. Selective block minimization for faster convergence of limited memory large-scale linear models. In SIGKDD. ACM, 2011.
[3] J. Deng, W. Dong, R. Socher, L. J. Li, K. Li, and L. F. Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[4] A. Hoffman. On approximate solutions of systems of linear inequalities. Journal of Research of the National Bureau of Standards, 1952.
[5] M. Hong and Z. Luo. On the linear convergence of the alternating direction method of multipliers, 2012.
[6] C. Hsieh, I. Dhillon, P. Ravikumar, S. Becker, and P. Olsen. QUIC & DIRTY: A quadratic approximation approach for dirty statistical models. In NIPS, 2014.
[7] M. Jaggi, V. Smith, M. Takáč, J. Terhorst, S. Krishnan, T. Hofmann, and M. Jordan. Communication-efficient distributed dual coordinate ascent. In NIPS, 2014.
[8] T. Joachims. A support vector method for multivariate performance measures. In ICML, 2005.
[9] S. Kakade, S. Shalev-Shwartz, and A. Tewari. Applications of strong convexity-strong smoothness duality to learning with matrices. CoRR, 2009.
[10] C. Ma, V. Smith, M. Jaggi, M. Jordan, P. Richtárik, and M. Takáč. Adding vs. averaging in distributed primal-dual optimization. ICML, 2015.
[11] G. Obozinski, L. Jacob, and J. Vert. Group lasso with overlaps: the latent group lasso approach. arXiv preprint, 2011.
[12] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, 2007.
[13] P. Richtárik and M. Takáč. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Mathematical Programming, 2014.
[14] S. Shalev-Shwartz, Y. Singer, N. Srebro, and A. Cotter. Pegasos: Primal estimated sub-gradient solver for SVM. Mathematical Programming, 2011.
[15] S. Shalev-Shwartz and A. Tewari. Stochastic methods for l1-regularized loss minimization. JMLR, 2011.
[16] N. Srebro, K. Sridharan, and A. Tewari. On the universality of online mirror descent. In NIPS, 2011.
[17] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, 1996.
[18] R. Tomioka, T. Suzuki, and M. Sugiyama. Super-linear convergence of dual augmented Lagrangian algorithm for sparsity regularized estimation. JMLR, 2011.
[19] I. Trofimov and A. Genkin. Distributed coordinate descent for l1-regularized logistic regression. arXiv preprint, 2014.
[20] P. Wang and C. Lin. Iteration complexity of feasible descent methods for convex optimization. JMLR, 2014.
[21] I. Yen, C. Chang, T. Lin, S. Lin, and S. Lin. Indexed block coordinate descent for large-scale linear classification with limited memory. In SIGKDD. ACM, 2013.
[22] I. Yen, C. Hsieh, P. Ravikumar, and I. Dhillon. Constant nullspace strong convexity and fast convergence of proximal methods under high-dimensional settings. In NIPS, 2014.
[23] I. Yen, T. Lin, S. Lin, P. Ravikumar, and I. Dhillon. Sparse random feature algorithm as coordinate descent in Hilbert space. In NIPS, 2014.
[24] I. Yen, X. Lin, K. Zhong, P. Ravikumar, and I. Dhillon. A convex exemplar-based approach to MAD-Bayes Dirichlet process mixture models. In ICML, 2015.
[25] I. Yen, K. Zhong, C. Hsieh, P. Ravikumar, and I. Dhillon. Sparse linear programming via primal and dual augmented coordinate descent. In NIPS, 2015.
[26] H. Yu, C. Hsieh, K. Chang, and C. Lin. Large linear classification when data cannot fit in memory. SIGKDD, 2010.
[27] G. Yuan, K. Chang, C. Hsieh, and C. Lin. A comparison of optimization methods and software for large-scale L1-regularized linear classification. JMLR, 2010.
[28] K. Zhong, I. Yen, I. Dhillon, and P. Ravikumar. Proximal quasi-Newton for computationally intensive l1-regularized m-estimators. In NIPS, 2014.
5,441 | 5,925 | On the Global Linear Convergence
of Frank-Wolfe Optimization Variants
Simon Lacoste-Julien
INRIA - SIERRA project-team
École Normale Supérieure, Paris, France
Martin Jaggi
Dept. of Computer Science
ETH Zürich, Switzerland
Abstract
The Frank-Wolfe (FW) optimization algorithm has lately re-gained popularity
thanks in particular to its ability to nicely handle the structured constraints appearing in machine learning applications. However, its convergence rate is known
to be slow (sublinear) when the solution lies at the boundary. A simple less-known fix is to add the possibility to take 'away steps' during optimization, an
operation that importantly does not require a feasibility oracle. In this paper, we
highlight and clarify several variants of the Frank-Wolfe optimization algorithm
that have been successfully applied in practice: away-steps FW, pairwise FW,
fully-corrective FW and Wolfe's minimum norm point algorithm, and prove for
the first time that they all enjoy global linear convergence, under a weaker condition than strong convexity of the objective. The constant in the convergence rate
has an elegant interpretation as the product of the (classical) condition number of
the function with a novel geometric quantity that plays the role of a 'condition number' of the constraint set. We provide pointers to where these algorithms have
made a difference in practice, in particular with the flow polytope, the marginal
polytope and the base polytope for submodular optimization.
The Frank-Wolfe algorithm [9] (also known as conditional gradient) is one of the earliest existing
methods for constrained convex optimization, and has seen an impressive revival recently due to
its nice properties compared to projected or proximal gradient methods, in particular for sparse
optimization and machine learning applications.
On the other hand, the classical projected gradient and proximal methods have been known to exhibit
a very nice adaptive acceleration property, namely that the convergence rate becomes linear for strongly convex objectives, i.e. that the optimization error of the same algorithm after t iterations will decrease geometrically as O((1 − ρ)^t) instead of the usual O(1/t) for general convex objective
functions. It has become an active research topic recently whether such an acceleration is also
possible for Frank-Wolfe type methods.
Contributions. We clarify several variants of the Frank-Wolfe algorithm and show that they all
converge linearly for any strongly convex function optimized over a polytope domain, with a constant bounded away from zero that only depends on the geometry of the polytope. Our analysis does
not depend on the location of the true optimum with respect to the domain, which was a disadvantage of earlier existing results such as [34, 12, 5], and the newer work of [28], as well as the line of
work of [1, 19, 26], which rely on Robinson's condition [30]. Our analysis yields a weaker sufficient condition than Robinson's; in particular, we can have linear convergence even in some cases when the function has more than one global minimum and is not globally strongly convex. The
constant also naturally separates as the product of the condition number of the function with a novel
notion of condition number of a polytope, which might have applications in complexity theory.
Related Work. For the classical Frank-Wolfe algorithm, [5] showed a linear rate for the special
case of quadratic objectives when the optimum is in the strict interior of the domain, a result already
subsumed by the more general [12]. The early work of [23] showed linear convergence for strongly
convex constraint sets, under the strong requirement that the gradient norm is not too small (see [11]
for a discussion). The away-steps variant of the Frank-Wolfe algorithm, which can also remove weight from 'bad' atoms in the current active set, was proposed in [34], and later also analyzed in [12]. The precise method is stated below in Algorithm 1. [12] showed a (local) linear convergence rate on polytopes, but the constant unfortunately depends on the distance between the solution and its relative boundary, a quantity that can be arbitrarily small. More recently, [1, 19, 26] have obtained linear convergence results in the case that the optimum solution satisfies Robinson's condition [30].
In a different recent line of work, [10, 22] have studied a variation of FW that repeatedly moves mass
from the worst vertices to the standard FW vertex until a specific condition is satisfied, yielding a
linear rate on strongly convex functions. Their algorithm requires the knowledge of several constants
though, and moreover is not adaptive to the best-case scenario, unlike the Frank-Wolfe algorithm
with away steps and line-search. None of these previous works was shown to be affine invariant,
and most require additional knowledge about problem specific parameters.
Setup. We consider general constrained convex optimization problems of the form:

min_{x∈M} f(x),   M = conv(A),   with only access to:  LMO_A(r) ∈ argmin_{x∈A} ⟨r, x⟩,   (1)

where A ⊆ R^d is a finite set of vectors that we call atoms.¹ We assume that the function f is μ-strongly convex with L-Lipschitz continuous gradient over M. We also consider weaker conditions than strong convexity for f in Section 4. As A is finite, M is a (convex and bounded) polytope. The methods that we consider in this paper only require access to a linear minimization oracle LMO_A(·) associated with the domain M through a generating set of atoms A. This oracle is defined so as to return a minimizer of a linear subproblem over M = conv(A), for any given direction r ∈ R^d.²
Examples. Optimization problems of the form (1) appear widely in machine learning and signal
processing applications. The set of atoms A can represent combinatorial objects of arbitrary type.
Efficient linear minimization oracles often exist in the form of dynamic programs or other combinatorial optimization approaches. As an example from tracking in computer vision, A could be the set
of integer flows on a graph [16, 7], where LMO_A can be efficiently implemented by a minimum cost
network flow algorithm. In this case, M can also be described with a polynomial number of linear
inequalities. But in other examples, M might not have a polynomial description in terms of linear
inequalities, and testing membership in M might be much more expensive than running the linear
oracle. This is the case when optimizing over the base polytope, an object appearing in submodular
function optimization [3]. There, the LMO_A oracle is a simple greedy algorithm. Another example
is when A represents the possible consistent value assignments on cliques of a Markov random field
(MRF); M is the marginal polytope [32], where testing membership is NP-hard in general, though
efficient linear oracles exist for some special cases [17]. Optimization over the marginal polytope
appears for example in structured SVM learning [21] and variational inference [18].
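To make the oracle model concrete, here is a minimal sketch (our illustration, not code from the paper) of LMO_A for two simple atom sets: the scaled ℓ1-ball, whose atoms are the signed scaled coordinate vectors, and the probability simplex, whose atoms are the coordinate vectors.

```python
import numpy as np

def lmo_l1_ball(r, radius=1.0):
    """LMO for M = radius * L1-ball: argmin_{x in M} <r, x>.
    The minimizer is a signed vertex: -radius * sign(r_i) * e_i
    for the coordinate i with the largest |r_i|."""
    i = np.argmax(np.abs(r))
    x = np.zeros_like(r)
    x[i] = -radius * np.sign(r[i])
    return x

def lmo_simplex(r):
    """LMO for the probability simplex: the vertex e_i with the
    smallest entry of r."""
    i = np.argmin(r)
    x = np.zeros_like(r)
    x[i] = 1.0
    return x
```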
The Original Frank-Wolfe Algorithm. The Frank-Wolfe (FW) optimization algorithm [9], also known as conditional gradient [23], is particularly suited for the setup (1) where M is only accessed through the linear minimization oracle. It works as follows: at a current iterate x^(t), the algorithm finds a feasible search atom s_t to move towards by minimizing the linearization of the objective function f over M (line 3 in Algorithm 1); this is where the linear minimization oracle LMO_A is used. The next iterate x^(t+1) is then obtained by doing a line-search on f between x^(t) and s_t (line 11 in Algorithm 1). One reason for the recent increased popularity of Frank-Wolfe-type algorithms is the sparsity of their iterates: in iteration t of the algorithm, the iterate can be represented as a sparse convex combination of at most t + 1 atoms S^(t) ⊆ A of the domain M, which we write as x^(t) = Σ_{v∈S^(t)} α_v^(t) v. We write S^(t) for the active set, containing the previously discovered search atoms s_r for r < t that have non-zero weight α_{s_r}^(t) > 0 in the expansion (potentially also including the starting point x^(0)). While tracking the active set S^(t) is not necessary for the original FW algorithm, the improved variants of FW that we discuss will require that S^(t) is maintained.
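For reference, a minimal sketch of the classical FW loop just described (our illustration; `grad`, `lmo` and `line_search` are problem-specific callbacks, e.g. the oracles above):

```python
import numpy as np

def frank_wolfe(x0, grad, lmo, line_search, T=1000, eps=1e-6):
    """Classical FW: x^{t+1} = x^t + gamma_t * (s_t - x^t)."""
    x = np.asarray(x0, dtype=float).copy()
    for t in range(T):
        g = grad(x)
        s = lmo(g)                 # s_t = argmin_{s in M} <grad f(x), s>
        d = s - x                  # FW direction d_t^FW
        gap = -g.dot(d)            # FW gap g_t^FW, certifies eps-optimality
        if gap <= eps:
            break
        x = x + line_search(x, d, 1.0) * d   # step in [0, 1]
    return x
```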
Zig-Zagging Phenomenon. When the optimal solution lies at the boundary of M, the convergence rate of the iterates is slow, i.e. sublinear: f(x^(t)) − f(x^*) = O(1/t), for x^* being an optimal solution [9, 6, 8, 15]. This is because the iterates of the classical FW algorithm start to zig-zag between the vertices defining the face containing the solution x^* (see left of Figure 1). In fact, the 1/t rate is tight for a large class of functions: Canon and Cullum [6] and Wolfe [34] showed (roughly) that f(x^(t)) − f(x^*) ≥ Ω(1/t^{1+δ}) for any δ > 0 when x^* lies on a face of M, with some additional regularity assumptions. Note that this lower bound is different from the Ω(1/t) one presented in [15, Lemma 3], which holds for all one-atom-per-step algorithms but assumes high dimensionality d ≥ t.

¹ The atoms do not have to be extreme points (vertices) of M.
² All our convergence results can be carefully extended to approximate linear minimization oracles with multiplicative approximation guarantees; we state them for exact oracles in this paper for simplicity.

Figure 1: (left) The FW algorithm zig-zags when the solution x^* lies on the boundary. (middle) Adding the possibility of an away step attenuates this problem. (right) As an alternative, a pairwise FW step.
1 Improved Variants of the Frank-Wolfe Algorithm
Algorithm 1 Away-steps Frank-Wolfe algorithm: AFW(x^(0), A, ε)
1: Let x^(0) ∈ A, and S^(0) := {x^(0)}   (so that α_v^(0) = 1 for v = x^(0) and 0 otherwise)
2: for t = 0 . . . T do
3:    Let s_t := LMO_A(∇f(x^(t))) and d_t^FW := s_t − x^(t)   (the FW direction)
4:    Let v_t ∈ argmax_{v∈S^(t)} ⟨∇f(x^(t)), v⟩ and d_t^A := x^(t) − v_t   (the away direction)
5:    if g_t^FW := ⟨−∇f(x^(t)), d_t^FW⟩ ≤ ε then return x^(t)   (FW gap is small enough, so return)
6:    if ⟨−∇f(x^(t)), d_t^FW⟩ ≥ ⟨−∇f(x^(t)), d_t^A⟩ then
7:        d_t := d_t^FW, and γ_max := 1   (choose the FW direction)
8:    else
9:        d_t := d_t^A, and γ_max := α_{v_t}/(1 − α_{v_t})   (choose away direction; maximum feasible step-size)
10:   end if
11:   Line-search: γ_t ∈ argmin_{γ∈[0,γ_max]} f(x^(t) + γ d_t)
12:   Update x^(t+1) := x^(t) + γ_t d_t   (and accordingly for the weights α^(t+1), see text)
13:   Update S^(t+1) := {v ∈ A s.t. α_v^(t+1) > 0}
14: end for
Algorithm 2 Pairwise Frank-Wolfe algorithm: PFW(x^(0), A, ε)
1: . . . as in Algorithm 1, except replacing lines 6 to 10 by: d_t := d_t^PFW := s_t − v_t, and γ_max := α_{v_t}.
Away-Steps Frank-Wolfe. To address the zig-zagging problem of FW, Wolfe [34] proposed to add the possibility to move away from an active atom in S^(t) (see middle of Figure 1); this simple modification is sufficient to make the algorithm linearly convergent for strongly convex functions. We describe the away-steps variant of Frank-Wolfe in Algorithm 1.³ The away direction d_t^A is defined in line 4 by finding the atom v_t in S^(t) that maximizes the potential of descent given by g_t^A := ⟨−∇f(x^(t)), x^(t) − v_t⟩. Note that this search is over the (typically small) active set S^(t), and is fundamentally easier than the linear oracle LMO_A. The maximum step-size γ_max as defined on line 9 ensures that the new iterate x^(t) + γ d_t^A stays in M. In fact, this guarantees that the convex representation is maintained, and we stay inside conv(S^(t)) ⊆ M. When M is a simplex, then the barycentric coordinates are unique and x^(t) + γ_max d_t^A truly lies on the boundary of M. On the other hand, if |A| > dim(M) + 1 (e.g. for the cube), then it could hypothetically be possible to have a step-size bigger than γ_max which is still feasible. Computing the true maximum feasible step-size would require the ability to know when we cross the boundary of M along a specific line, which is not possible for general M. Using the conservative maximum step-size of line 9 ensures that we do not need this more powerful oracle. This is why Algorithm 1 requires us to maintain S^(t) (unlike standard FW). Finally, as in classical FW, the FW gap g_t^FW is an upper bound on the unknown suboptimality, and can be used as a stopping criterion:

g_t^FW := ⟨−∇f(x^(t)), d_t^FW⟩ ≥ ⟨−∇f(x^(t)), x^* − x^(t)⟩ ≥ f(x^(t)) − f(x^*)   (by convexity).

If γ_t = γ_max, then we call this step a drop step, as it fully removes the atom v_t from the currently active set of atoms S^(t) (by setting its weight to zero). The weight updates for lines 12 and 13 are of the following form. For a FW step, we have S^(t+1) = {s_t} if γ_t = 1; otherwise S^(t+1) = S^(t) ∪ {s_t}. Also, we have α_{s_t}^(t+1) := (1 − γ_t)α_{s_t}^(t) + γ_t and α_v^(t+1) := (1 − γ_t)α_v^(t) for v ∈ S^(t) \ {s_t}. For an away step, we have S^(t+1) = S^(t) \ {v_t} if γ_t = γ_max (a drop step); otherwise S^(t+1) = S^(t). Also, we have α_{v_t}^(t+1) := (1 + γ_t)α_{v_t}^(t) − γ_t and α_v^(t+1) := (1 + γ_t)α_v^(t) for v ∈ S^(t) \ {v_t}.

³ The original algorithm presented in [34] was not convergent; this was corrected by Guélat and Marcotte [12], assuming a tractable representation of M with linear inequalities, who called it the modified Frank-Wolfe (MFW) algorithm. Our description in Algorithm 1 extends it to the more general setup of (1).
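The following sketch puts Algorithm 1 together with the weight bookkeeping just described (our illustration, not the authors' code; it assumes atoms are stored as an explicit list so the active set can be kept as an index-to-weight dictionary, and it finds the FW atom by brute force rather than a problem-specific LMO_A):

```python
import numpy as np

def afw(atoms, grad, line_search, start_idx=0, T=1000, eps=1e-6):
    """Away-steps Frank-Wolfe over conv(atoms); atoms is a list of vectors.
    line_search(x, d, gmax) should return a step size in [0, gmax]."""
    alpha = {start_idx: 1.0}          # active set S^(t) as index -> weight
    x = atoms[start_idx].copy()
    for t in range(T):
        g = grad(x)
        s_idx = min(range(len(atoms)), key=lambda i: g.dot(atoms[i]))  # FW atom
        d_fw = atoms[s_idx] - x
        if -g.dot(d_fw) <= eps:       # FW gap small enough: done
            break
        v_idx = max(alpha, key=lambda i: g.dot(atoms[i]))              # away atom
        d_aw = x - atoms[v_idx]
        if -g.dot(d_fw) >= -g.dot(d_aw):          # FW step
            d, gmax, is_fw = d_fw, 1.0, True
        else:                                     # away step
            a = alpha[v_idx]
            d, gmax, is_fw = d_aw, a / max(1.0 - a, 1e-12), False
        gamma = line_search(x, d, gmax)
        x = x + gamma * d
        if is_fw:                     # weight updates (lines 12-13 of Alg. 1)
            alpha = {i: (1.0 - gamma) * w for i, w in alpha.items()}
            alpha[s_idx] = alpha.get(s_idx, 0.0) + gamma
        else:
            alpha = {i: (1.0 + gamma) * w for i, w in alpha.items()}
            alpha[v_idx] -= gamma
        alpha = {i: w for i, w in alpha.items() if w > 1e-12}  # drop steps
    return x
```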
Pairwise Frank-Wolfe. The next variant that we present is inspired by an early algorithm by Mitchell et al. [25], called the MDM algorithm, originally invented for the polytope distance problem. Here the idea is to only move weight mass between two atoms in each step. More precisely, the generalized method as presented in Algorithm 2 moves weight from the away atom v_t to the FW atom s_t, and keeps all other α weights unchanged. We call such a swap of mass between the two atoms a pairwise FW step, i.e. α_{v_t}^(t+1) = α_{v_t}^(t) − γ and α_{s_t}^(t+1) = α_{s_t}^(t) + γ for some step-size γ ≤ γ_max := α_{v_t}^(t). In contrast, classical FW shrinks all active weights at every iteration. The pairwise FW direction will also be central to our proof technique to provide the first global linear convergence rate for away-steps FW, as well as the fully-corrective variant and Wolfe's min-norm-point algorithm.
As we will see in Section 2.2, the rate guarantee for the pairwise FW variant is looser than for the other variants, because we cannot provide a satisfactory bound on the number of the problematic swap steps (defined just before Theorem 1). Nevertheless, the algorithm seems to perform quite well in practice, often outperforming away-steps FW, especially in the important case of sparse solutions, that is if the optimal solution x^* lies on a low-dimensional face of M (and thus one wants to keep the active set S^(t) small). The pairwise FW step is arguably more efficient at pruning the coordinates in S^(t). In contrast to the away step, which moves the mass back uniformly onto all other active elements of S^(t) (and might require more corrections later), the pairwise FW step only moves the mass onto the (good) FW atom s_t (see the sketch below). A slightly different version than Algorithm 2 was also proposed by Ñanculef et al. [26], though their convergence proofs were incomplete (see Appendix A.3). The algorithm is related to classical working set algorithms, such as the SMO algorithm used to train SVMs [29]. We refer to [26] for an empirical comparison for SVMs, as well as their Section 5 for more related work. See also Appendix A.3 for a link between pairwise FW and [10].
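In code, the pairwise variant only changes the direction choice and the weight transfer; a sketch of the inner step that would replace the FW/away branch of the AFW sketch above (a fragment reusing that sketch's names, for illustration only):

```python
# Pairwise FW: inside the loop of afw(), replace the FW/away branch with
d = atoms[s_idx] - atoms[v_idx]               # d_t^PFW = s_t - v_t
gmax = alpha[v_idx]                           # at most all the mass on v_t
gamma = line_search(x, d, gmax)
x = x + gamma * d
alpha[v_idx] -= gamma                         # move gamma units of mass
alpha[s_idx] = alpha.get(s_idx, 0.0) + gamma  # from v_t onto s_t
```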
Fully-Corrective Frank-Wolfe, and Wolfe's Min-Norm Point Algorithm. When the linear oracle is expensive, it might be worthwhile to do more work to optimize over the active set S^(t) in between each call to the linear oracle, rather than just performing an away or pairwise step. We give in Algorithm 3 the fully-corrective Frank-Wolfe (FCFW) variant, which maintains a correction polytope defined by a set of atoms A^(t) (potentially larger than the active set S^(t)). Rather than obtaining the next iterate by line-search, x^(t+1) is obtained by re-optimizing f over conv(A^(t)). Depending on how the correction is implemented, and how the correction atoms A^(t) are maintained, several variants can be obtained. These variants are known under many names, such as the extended FW method by Holloway [14] or the simplicial decomposition method [31, 13]. Wolfe's min-norm point (MNP) algorithm [35] for polytope distance problems is often confused with FCFW
for quadratic objectives. The major difference is that standard FCFW optimizes f over conv(A(t) ),
whereas MNP implements the correction as a sequence of affine projections that potentially yield
a different update, but can be computed more efficiently in several practical applications [35]. We
describe precisely in Appendix A.1 a generalization of the MNP algorithm as a specific case of the
correction subroutine from step 7 of the generic Algorithm 3.
The original convergence analysis of the FCFW algorithm [14] (and also MNP algorithm [35]) only
showed that they were finitely convergent, with a bound on the number of iterations in terms of the
cardinality of A (unfortunately an exponential number in general). Holloway [14] also argued that
FCFW had an asymptotic linear convergence based on the flawed argument of Wolfe [34]. As far
as we know, our work is the first to provide global linear convergence rates for FCFW and MNP for
Algorithm 3 Fully-corrective Frank-Wolfe with approximate correction: FCFW(x^(0), A, ε)
1: Input: Set of atoms A, active set S^(0), starting point x^(0) = Σ_{v∈S^(0)} α_v^(0) v, stopping criterion ε.
2: Let A^(0) := S^(0)   (optionally, a bigger A^(0) could be passed as argument for a warm start)
3: for t = 0 . . . T do
4:    Let s_t := LMO_A(∇f(x^(t)))   (the FW atom)
5:    Let d_t^FW := s_t − x^(t) and g_t^FW := ⟨−∇f(x^(t)), d_t^FW⟩   (FW gap)
6:    if g_t^FW ≤ ε then return x^(t)
7:    (x^(t+1), A^(t+1)) := Correction(x^(t), A^(t), s_t, ε)   (approximate correction step)
8: end for
Algorithm 4 Approximate correction: Correction(x^(t), A^(t), s_t, ε)
1: Return (x^(t+1), A^(t+1)) with the following properties:
2:    S^(t+1) is the active set for x^(t+1) and A^(t+1) ⊇ S^(t+1).
3:    f(x^(t+1)) ≤ min_{γ∈[0,1]} f(x^(t) + γ(s_t − x^(t)))   (make at least as much progress as a FW step)
4:    g_{t+1}^A := max_{v∈S^(t+1)} ⟨∇f(x^(t+1)), x^(t+1) − v⟩ ≤ ε   (the away gap is small enough)
general strongly convex functions. Moreover, the proof of convergence for FCFW does not require
an exact solution to the correction step; instead, we show that the weaker properties stated for the
approximate correction procedure in Algorithm 4 are sufficient for a global linear convergence rate
(this correction could be implemented using away-steps FW, as done for example in [18]).
2 Global Linear Convergence Analysis
2.1 Intuition for the Convergence Proofs
We first give the general intuition for the linear convergence proof of the different FW variants, starting from the work of Guélat and Marcotte [12]. We assume that the objective function f is smooth over a compact set M, i.e. its gradient is Lipschitz continuous with constant L. Also let M := diam(M). Let d_t be the direction in which the line-search is executed by the algorithm (line 11 in Algorithm 1). By the standard descent lemma [see e.g. (1.2.5) in 27], we have:

f(x^(t+1)) ≤ f(x^(t) + γ d_t) ≤ f(x^(t)) + γ⟨∇f(x^(t)), d_t⟩ + (γ²/2) L‖d_t‖²   ∀γ ∈ [0, γ_max].   (2)

We let r_t := −∇f(x^(t)) and let h_t := f(x^(t)) − f(x^*) be the suboptimality error. Supposing for now that γ_max ≥ γ_t^* := ⟨r_t, d_t⟩/(L‖d_t‖²), we can set γ = γ_t^* to minimize the RHS of (2), subtract f(x^*) on both sides, and re-organize to get a lower bound on the progress:

h_t − h_{t+1} ≥ ⟨r_t, d_t⟩²/(2L‖d_t‖²) = (1/(2L)) ⟨r_t, d̂_t⟩²,   (3)

where we use the 'hat' notation to denote normalized vectors: d̂_t := d_t/‖d_t‖. Let e_t := x^* − x^(t) be the error vector. By μ-strong convexity of f, we have:

f(x^(t) + γ e_t) ≥ f(x^(t)) + γ⟨∇f(x^(t)), e_t⟩ + (γ²/2) μ‖e_t‖²   ∀γ ∈ [0, 1].   (4)

The RHS is lower bounded by its minimum as a function of γ (unconstrained), achieved using γ^* := ⟨r_t, e_t⟩/(μ‖e_t‖²). We are then free to use any value of γ on the LHS and maintain a valid bound. In particular, we use γ = 1 to obtain f(x^*). Again re-arranging, we get:

h_t ≤ ⟨r_t, ê_t⟩²/(2μ),   and combining with (3), we obtain:   h_t − h_{t+1} ≥ (μ/L) · (⟨r_t, d̂_t⟩²/⟨r_t, ê_t⟩²) · h_t.   (5)

The inequality (5) is fairly general and valid for any line-search method in direction d_t. To get a linear convergence rate, we need to lower bound (by a positive constant) the term in front of h_t on the RHS, which depends on the angle between the update direction d_t and the negative gradient r_t. If we assume that the solution x^* lies in the relative interior of M with a distance of at least δ > 0 from the boundary, then ⟨r_t, d_t⟩ ≥ δ‖r_t‖ for the FW direction d_t^FW, and by combining with ‖d_t‖ ≤ M, we get a linear rate with constant 1 − (μ/L)(δ/M)² (this was the result from [12]). On the other hand, if x^* lies on the boundary, then ⟨r̂_t, d̂_t⟩ gets arbitrarily close to zero for standard FW (the zig-zagging phenomenon) and the convergence is sublinear.
Proof Sketch for AFW. The key insight to prove the global linear convergence for AFW is to relate ⟨r_t, d_t⟩ with the pairwise FW direction d_t^PFW := s_t − v_t. By the way the direction d_t is chosen on lines 6 to 10 of Algorithm 1, we have:

2⟨r_t, d_t⟩ ≥ ⟨r_t, d_t^FW⟩ + ⟨r_t, d_t^A⟩ = ⟨r_t, d_t^FW + d_t^A⟩ = ⟨r_t, d_t^PFW⟩.   (6)

We thus have ⟨r_t, d_t⟩ ≥ ⟨r_t, d_t^PFW⟩/2. Now the crucial property of the pairwise FW direction is that for any potential negative gradient direction r_t, the worst case inner product ⟨r̂_t, d_t^PFW⟩ can be lower bounded away from zero by a quantity depending only on the geometry of M (unless we are at the optimum). We call this quantity the pyramidal width of A. The figure on the right shows the six possible pairwise FW directions d_t^PFW for a triangle domain, depending on which colored area the r_t direction falls into. We will see that the pyramidal width is related to the smallest width of pyramids that we can construct from A in a specific way related to the choice of the away and towards atoms v_t and s_t. See (9) and our main Theorem 3 in Section 3.

This gives the main argument for the linear convergence of AFW for steps where γ_t^* ≤ γ_max. When γ_max is too small, AFW will perform a drop step, as the line-search will truncate the step-size to γ_t = γ_max. We cannot guarantee sufficient progress in this case, but the drop step decreases the active set size by one, and thus drop steps cannot happen too often (not more than half the time). These are the main elements for the global linear convergence proof for AFW. The rest is to carefully consider various boundary cases. We can re-use the same techniques to prove the convergence for pairwise FW, though unfortunately the latter also has the possibility of problematic swap steps. While their number can be bounded, so far we only found the extremely loose bound quoted in Theorem 1.
Proof Sketch for FCFW. For FCFW, by line 4 of the correction Algorithm 4, the away gap satisfies g_t^A ≤ ε at the beginning of a new iteration. Supposing that the algorithm does not exit at line 6 of Algorithm 3, we have g_t^FW > ε, and therefore 2⟨r_t, d_t^FW⟩ ≥ ⟨r_t, d_t^PFW⟩ using a similar argument as in (6). Finally, by line 3 of Algorithm 4, the correction is guaranteed to make at least as much progress as a line-search in direction d_t^FW, and so the progress bound (5) applies also to FCFW.
2.2 Convergence Results
We now give the global linear convergence rates for the four variants of the FW algorithm: away-steps FW (AFW, Alg. 1); pairwise FW (PFW, Alg. 2); fully-corrective FW (FCFW, Alg. 3 with approximate correction Alg. 4); and Wolfe's min-norm point algorithm (Alg. 3 with MNP-correction as Alg. 5 in Appendix A.1). For the AFW, MNP and PFW algorithms, we call a step a drop step when the active set shrinks: |S^(t+1)| < |S^(t)|. For the PFW algorithm, we also have the possibility of a swap step, where γ_t = γ_max but |S^(t+1)| = |S^(t)| (i.e. the mass was fully swapped from the away atom to the FW atom). A nice property of FCFW is that it does not have any drop steps (it executes both FW steps and away steps simultaneously while guaranteeing enough progress at every iteration).
Theorem 1. Suppose that f has L-Lipschitz gradient⁴ and is μ-strongly convex over M = conv(A). Let M = diam(M) and δ = PWidth(A) as defined by (9). Then the suboptimality h_t of the iterates of all the four variants of the FW algorithm decreases geometrically at each step that is not a drop step nor a swap step (i.e. when γ_t < γ_max, called a 'good step'), that is

h_{t+1} ≤ (1 − ρ) h_t,   where ρ := (μ/(4L)) (δ/M)².

Let k(t) be the number of 'good steps' up to iteration t. We have k(t) = t for FCFW; k(t) ≥ t/2 for MNP and AFW; and k(t) ≥ t/(3|A|! + 1) for PFW (because of the swap steps). This yields a global linear convergence rate of h_t ≤ h_0 exp(−ρ k(t)) for all variants. If μ = 0 (general convex), then h_t = O(1/k(t)) instead. See Theorem 8 in Appendix D for an affine invariant version and proof.
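As a rough worked reading of the rate (our numbers, using the simplex constants quoted in Section 3.1, not a computation from the paper):

```python
import math

def fw_rate(mu, L, pwidth, diam):
    """rho from Theorem 1: h_{t+1} <= (1 - rho) h_t on good steps."""
    return (mu / (4.0 * L)) * (pwidth / diam) ** 2

# probability simplex in R^d (d even): diam = sqrt(2), PWidth = 2/sqrt(d)
d, mu, L = 100, 1.0, 10.0
rho = fw_rate(mu, L, 2.0 / math.sqrt(d), math.sqrt(2.0))   # = mu/(2*L*d)
print(rho, math.ceil(math.log(1e6) / rho))  # good steps to shrink h by 1e6
```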
Note that to our knowledge, none of the existing linear convergence results showed that the duality gap was also linearly convergent. The result for the gap follows directly from a simple manipulation of (2): putting the FW gap on the LHS and optimizing the RHS for γ ∈ [0, 1].

Theorem 2. Suppose that f has L-Lipschitz gradient over M with M := diam(M). Then the FW gap g_t^FW for any algorithm is upper bounded by the primal error h_t as follows:

g_t^FW ≤ h_t + LM²/2  when h_t > LM²/2;   g_t^FW ≤ M√(2 h_t L)  otherwise.   (7)

⁴ For AFW and PFW, we actually require that ∇f is L-Lipschitz over the larger domain M + M − M.
3 Pyramidal Width

We now describe the claimed lower bound on the angle between the negative gradient and the pairwise FW direction, which depends only on the geometric properties of M. According to our argument about the progress bound (5) and the PFW gap (6), our goal is to find a lower bound on ⟨r_t, d_t^PFW⟩/⟨r_t, ê_t⟩. First note that ⟨r_t, d_t^PFW⟩ = ⟨r_t, s_t − v_t⟩ = max_{s∈M, v∈S^(t)} ⟨r_t, s − v⟩, where S^(t) is a possible active set for x^(t). This looks like the directional width of a pyramid with base S^(t) and summit s_t. To be conservative, we consider the worst case possible active set for x^(t); this is what we will call the pyramid directional width PdirW(A, r_t, x^(t)). We start with the following definitions.

Directional Width. The directional width of a set A with respect to a direction r is defined as dirW(A, r) := max_{s,v∈A} ⟨r/‖r‖, s − v⟩. The width of A is the minimum directional width over all possible directions in its affine hull.

Pyramidal Directional Width. We define the pyramidal directional width of a set A with respect to a direction r and a base point x ∈ M to be

PdirW(A, r, x) := min_{S∈S_x} dirW(S ∪ {s(A, r)}, r) = min_{S∈S_x} max_{s∈A, v∈S} ⟨r/‖r‖, s − v⟩,   (8)

where S_x := {S | S ⊆ A such that x is a proper convex combination of all the elements in S},⁵ and s(A, r) := argmax_{v∈A} ⟨r, v⟩ is the FW atom used as a summit.

Pyramidal Width. To define the pyramidal width of a set, we take the minimum over the cone of possible feasible directions r (in order to avoid the problem of zero width). A direction r is feasible for A from x if it points inwards conv(A), i.e. r ∈ cone(A − x). We define the pyramidal width of a set A to be the smallest pyramidal width of all its faces, i.e.

PWidth(A) := min_{K∈faces(conv(A)), x∈K, r∈cone(K−x)\{0}} PdirW(K ∩ A, r, x).   (9)

Theorem 3. Let x ∈ M = conv(A) be a suboptimal point and S an active set for x. Let x^* be an optimal point with corresponding error direction ê = (x^* − x)/‖x^* − x‖, and negative gradient r := −∇f(x) (so that ⟨r, ê⟩ > 0). Let d = s − v be the pairwise FW direction obtained over A and S with negative gradient r. Then

⟨r, d⟩/⟨r, ê⟩ ≥ PWidth(A).   (10)
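The directional-width ingredient of these definitions is easy to evaluate numerically; a small brute-force sketch (our illustration; computing the full PWidth additionally requires enumerating faces and feasible active sets, which is much harder):

```python
import numpy as np

def dir_width(atoms, r):
    """dirW(A, r) = max_{s,v in A} <r/||r||, s - v>."""
    r = r / np.linalg.norm(r)
    vals = np.array([r.dot(a) for a in atoms])
    return vals.max() - vals.min()

square = [np.array(b, dtype=float) for b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(dir_width(square, np.array([1.0, 0.0])))  # 1.0
print(dir_width(square, np.array([1.0, 1.0])))  # sqrt(2)
```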
3.1 Properties of Pyramidal Width and Consequences

Examples of Values. The pyramidal width of a set A is lower bounded by the minimal width over all subsets of atoms, and thus is strictly greater than zero if the number of atoms is finite. On the other hand, this lower bound is often too loose to be useful; in particular, vertex subsets of the unit cube in dimension d can have exponentially small width O(d^{−d/2}) [see Corollary 27 in 36]. On the other hand, as we show here, the pyramidal width of the unit cube is actually 1/√d, justifying why we kept the tighter but more involved definition (9). See Appendix B.1 for the proof.

Lemma 4. The pyramidal width of the unit cube in R^d is 1/√d.

For the probability simplex with d vertices, the pyramidal width is actually the same as its width, which is 2/√d when d is even, and 2/√(d − 1/d) when d is odd [2] (see Appendix B.1). In contrast, the pyramidal width of an infinite set can be zero. For example, for a curved domain, the set of active atoms S can contain vertices forming a very narrow pyramid, yielding a zero width in the limit.

Condition Number of a Set. The inverse of the rate constant ρ appearing in Theorem 1 is the product of two terms: L/μ is the standard condition number of the objective function appearing in the rates of gradient methods in convex optimization. The second quantity (M/δ)² (diameter over pyramidal width) can be interpreted as a condition number of the domain M, or its eccentricity. The more eccentric the constraint set (large diameter compared to its pyramidal width), the slower the convergence. The best condition number of a function is when its level sets are spherical; the analog in terms of the constraint sets is actually the regular simplex, which has the maximum width-to-diameter ratio amongst all simplices [see Corollary 1 in 2]. Its eccentricity is (at most) d/2. In contrast, the eccentricity of the unit cube is d², which is much worse.

⁵ By proper convex combination, we mean that all coefficients are non-zero in the convex combination.
We conjecture that the pyramidal width of a set of vertices (i.e. extrema of their convex hull) is non-increasing when another vertex is added (assuming that all previous points remain vertices). For example, the unit cube can be obtained by iteratively adding vertices to the regular probability simplex, and the pyramidal width thereby decreases from 2/√d to 1/√d. This property could provide lower bounds for the pyramidal width of more complicated polytopes, such as 1/√d for the d-dimensional marginal polytope, as it can be obtained by removing vertices from the unit cube.

Complexity Lower Bounds. Combining the convergence Theorem 1 and the condition number of the unit simplex, we get a complexity of O(d (L/μ) log(1/ε)) to reach ε-accuracy when optimizing a strongly convex function over the unit simplex. Here the linear dependence on d should not come as a surprise, in view of the known lower bound of 1/t for t ≤ d for Frank-Wolfe type methods [15].

Applications to Submodular Minimization. See Appendix A.2 for a consequence of our linear rate for the popular MNP algorithm for submodular function optimization (over the base polytope).
4 Non-Strongly Convex Generalization

Building on the work of Beck and Shtern [4] and Wang and Lin [33], we can generalize our global linear convergence results for all Frank-Wolfe variants to the more general case where f(x) := g(Ax) + ⟨b, x⟩, for A ∈ R^{p×d}, b ∈ R^d, and where g is μ_g-strongly convex and continuously differentiable over AM. We note that for a general matrix A, f is convex but not necessarily strongly convex. In this case, the linear convergence still holds but with the constant ρ appearing in the rate of Theorem 1 replaced with the generalized constant ρ̃ appearing in Lemma 9 in Appendix F.
5 Illustrative Experiments

We illustrate the performance of the presented algorithm variants in two numerical experiments, shown in Figure 2. The first example is a constrained Lasso problem (ℓ1-regularized least squares regression), that is, min_{x∈M} f(x) = ‖Ax − b‖², with M = 20·L1 a scaled L1-ball. We used a random Gaussian matrix A ∈ R^{200×500}, and a noisy measurement b = Ax^* with x^* being a sparse vector with 50 entries ±1, and 10% of additive noise. For the L1-ball, the linear minimization oracle LMO_A just selects the column of A of best inner product with the residual vector. The second application comes from video co-localization. The approach used by [16] is formulated as a quadratic program (QP) over a flow polytope, the convex hull of paths in a network. In this application, the linear minimization oracle is equivalent to finding a shortest path in the network, which can be done easily by dynamic programming. For the LMO_A, we re-use the code provided by [16] and their included aeroplane dataset, resulting in a QP over 660 variables. In both experiments, we see that the modified FW variants (away-steps and pairwise) outperform the original FW algorithm, and exhibit a linear convergence. In addition, the constant in the convergence rate of Theorem 1 can also be empirically shown to be fairly tight for AFW and PFW by running them on an increasingly obtuse triangle (see Appendix E).

Figure 2: Duality gap g_t^FW vs iterations on the Lasso problem (top), and video co-localization (bottom). Code is available from the authors' website.
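A sketch of the Lasso setup described above, reconstructed from the text (our illustration; the exact noise model is not fully specified, so the 10% additive noise below is our interpretation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, radius = 200, 500, 20.0
A = rng.standard_normal((n, p))
x_true = np.zeros(p)
support = rng.choice(p, 50, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], 50)
b_clean = A @ x_true
b = b_clean + 0.1 * np.std(b_clean) * rng.standard_normal(n)  # ~10% noise

grad = lambda x: 2.0 * A.T @ (A @ x - b)       # f(x) = ||Ax - b||^2

def exact_ls(x, d, gmax):                      # exact line-search for quadratics
    Ad, res = A @ d, A @ x - b
    return float(np.clip(-res.dot(Ad) / (Ad.dot(Ad) + 1e-16), 0.0, gmax))

eye = np.eye(p)
atoms = [s * radius * eye[j] for j in range(p) for s in (1.0, -1.0)]  # L1-ball vertices
x_hat = afw(atoms, grad, exact_ls, T=500)      # or frank_wolfe with lmo_l1_ball
```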
Discussion. Building on a preliminary version of our work [20], Beck and Shtern [4] also proved a linear rate for away-steps FW, but with a simpler lower bound for the LHS of (10) using linear duality arguments. However, their lower bound [see e.g. Lemma 3.1 in 4] is looser: they get a d² constant for the eccentricity of the regular simplex instead of the tighter d that we proved. Finally, the recently proposed generic scheme for accelerating first-order optimization methods in the sense of Nesterov from [24] applies directly to the FW variants given their global linear convergence rate that we proved. This gives for the first time first-order methods that only use linear oracles and obtain the 'near-optimal' O(1/k²) rate for smooth convex functions, or the accelerated Õ(√(L/μ))
constant in the linear rate for strongly convex functions. Given that the constants also depend on the
dimensionality, it remains an open question whether this acceleration is practically useful.
Acknowledgements. We thank J.B. Alayrac, E. Hazan, A. Hubard, A. Osokin and P. Marcotte for helpful
discussions. This work was partially supported by the MSR-Inria Joint Center and a Google Research Award.
References
[1] S. D. Ahipasaoğlu, P. Sun, and M. Todd. Linear convergence of a modified Frank-Wolfe algorithm for computing minimum-volume enclosing ellipsoids. Optimization Methods and Software, 23(1):5–19, 2008.
[2] R. Alexander. The width and diameter of a simplex. Geometriae Dedicata, 6(1):87–94, 1977.
[3] F. Bach. Learning with submodular functions: A convex optimization perspective. Foundations and Trends in Machine Learning, 6(2-3):145–373, 2013.
[4] A. Beck and S. Shtern. Linearly convergent away-step conditional gradient for non-strongly convex functions. arXiv:1504.05002v1, 2015.
[5] A. Beck and M. Teboulle. A conditional gradient method with linear rate of convergence for solving convex linear systems. Mathematical Methods of Operations Research (ZOR), 59(2):235–247, 2004.
[6] M. D. Canon and C. D. Cullum. A tight upper bound on the rate of convergence of Frank-Wolfe algorithm. SIAM Journal on Control, 6(4):509–516, 1968.
[7] V. Chari et al. On pairwise costs for network flow multi-object tracking. In CVPR, 2015.
[8] J. C. Dunn. Rates of convergence for conditional gradient algorithms near singular and nonsingular extremals. SIAM Journal on Control and Optimization, 17(2):187–211, 1979.
[9] M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3:95–110, 1956.
[10] D. Garber and E. Hazan. A linearly convergent conditional gradient algorithm with applications to online and stochastic optimization. arXiv:1301.4666v5, 2013.
[11] D. Garber and E. Hazan. Faster rates for the Frank-Wolfe method over strongly-convex sets. In ICML, 2015.
[12] J. Guélat and P. Marcotte. Some comments on Wolfe's 'away step'. Mathematical Programming, 1986.
[13] D. Hearn, S. Lawphongpanich, and J. Ventura. Restricted simplicial decomposition: Computation and extensions. In Computation Mathematical Programming, volume 31, pages 99–118. Springer, 1987.
[14] C. A. Holloway. An extension of the Frank and Wolfe method of feasible directions. Mathematical Programming, 6(1):14–27, 1974.
[15] M. Jaggi. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In ICML, 2013.
[16] A. Joulin, K. Tang, and L. Fei-Fei. Efficient image and video co-localization with Frank-Wolfe algorithm. In ECCV, 2014.
[17] V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts? IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(2):147–159, 2004.
[18] R. G. Krishnan, S. Lacoste-Julien, and D. Sontag. Barrier Frank-Wolfe for marginal inference. In NIPS, 2015.
[19] P. Kumar and E. A. Yildirim. A linearly convergent linear-time first-order algorithm for support vector classification with a core set result. INFORMS Journal on Computing, 2010.
[20] S. Lacoste-Julien and M. Jaggi. An affine invariant linear convergence analysis for Frank-Wolfe algorithms. arXiv:1312.7864v2, 2013.
[21] S. Lacoste-Julien, M. Jaggi, M. Schmidt, and P. Pletscher. Block-coordinate Frank-Wolfe optimization for structural SVMs. In ICML, 2013.
[22] G. Lan. The complexity of large-scale convex programming under a linear optimization oracle. arXiv:1309.5550v2, 2013.
[23] E. S. Levitin and B. T. Polyak. Constrained minimization methods. USSR Computational Mathematics and Mathematical Physics, 6(5):787–823, Jan. 1966.
[24] H. Lin, J. Mairal, and Z. Harchaoui. A universal catalyst for first-order optimization. In NIPS, 2015.
[25] B. Mitchell, V. F. Demyanov, and V. Malozemov. Finding the point of a polyhedron closest to the origin. SIAM Journal on Control, 12(1), 1974.
[26] R. Ñanculef, E. Frandi, C. Sartori, and H. Allende. A novel Frank-Wolfe algorithm. Analysis and applications to large-scale SVM training. Information Sciences, 2014.
[27] Y. Nesterov. Introductory Lectures on Convex Optimization. Kluwer Academic Publishers, 2004.
[28] J. Peña, D. Rodriguez, and N. Soheili. On the von Neumann and Frank-Wolfe algorithms with away steps. arXiv:1507.04073v2, 2015.
[29] J. C. Platt. Fast training of support vector machines using sequential minimal optimization. In Advances in kernel methods: support vector learning, pages 185–208. 1999.
[30] S. M. Robinson. Generalized Equations and their Solutions, Part II: Applications to Nonlinear Programming. Springer, 1982.
[31] B. Von Hohenbalken. Simplicial decomposition in nonlinear programming algorithms. Mathematical Programming, 13(1):49–68, 1977.
[32] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
[33] P.-W. Wang and C.-J. Lin. Iteration complexity of feasible descent methods for convex optimization. Journal of Machine Learning Research, 15:1523–1548, 2014.
[34] P. Wolfe. Convergence theory in nonlinear programming. In Integer and Nonlinear Programming. 1970.
[35] P. Wolfe. Finding the nearest point in a polytope. Mathematical Programming, 11(1):128–149, 1976.
[36] G. M. Ziegler. Lectures on 0/1-polytopes. arXiv:math/9909177v1, 1999.
5,442 | 5,926 | Quartz: Randomized Dual Coordinate Ascent
with Arbitrary Sampling
Peter Richtárik
School of Mathematics
The University of Edinburgh
EH9 3FD, United Kingdom
peter.richtarik@ed.ac.uk
Zheng Qu
Department of Mathematics
The University of Hong Kong
Hong Kong
zhengqu@maths.hku.hk
Tong Zhang
Department of Statistics
Rutgers University
Piscataway, NJ, 08854
tzhang@stat.rutgers.edu
Abstract
We study the problem of minimizing the average of a large number of smooth
convex functions penalized with a strongly convex regularizer. We propose and
analyze a novel primal-dual method (Quartz) which at every iteration samples and
updates a random subset of the dual variables, chosen according to an arbitrary
distribution. In contrast to typical analysis, we directly bound the decrease of
the primal-dual error (in expectation), without the need to first analyze the dual
error. Depending on the choice of the sampling, we obtain efficient serial and
mini-batch variants of the method. In the serial case, our bounds match the best
known bounds for SDCA (both with uniform and importance sampling). With
standard mini-batching, our bounds predict initial data-independent speedup as
well as additional data-driven speedup which depends on spectral and sparsity
properties of the data.
Keywords: empirical risk minimization, dual coordinate ascent, arbitrary sampling, data-driven
speedup.
1 Introduction
In this paper we consider a primal-dual pair of structured convex optimization problems which has
in several variants of varying degrees of generality attracted a lot of attention in the past few years
in the machine learning and optimization communities [4, 22, 20, 23, 21, 27].
Let A_1, . . . , A_n be a collection of d-by-m real matrices and φ_1, . . . , φ_n be 1/γ-smooth convex functions from R^m to R, where γ > 0. Further, let g : R^d → R ∪ {+∞} be a 1-strongly convex function and λ > 0 a regularization parameter. We are interested in solving the following primal problem:

min_{w=(w_1,...,w_d)∈R^d}  P(w) := (1/n) Σ_{i=1}^n φ_i(A_i^⊤ w) + λ g(w).   (1)

In the machine learning context, matrices {A_i} are interpreted as examples/samples, w is a (linear) predictor, function φ_i is the loss incurred by the predictor on example A_i, g is a regularizer, λ is a regularization parameter and (1) is the regularized empirical risk minimization problem. In
this paper we are especially interested in problems where n is very big (millions, billions), and
much larger than d. This is often the case in big data applications. Stochastic Gradient Descent
(SGD) [18, 11, 25] was designed for solving this type of large-scale optimization problem. In each iteration SGD computes the gradient of one single randomly chosen function φ_i and approximates the full gradient by this unbiased but noisy estimate. Because of the variance of the stochastic estimate, SGD has a slow convergence rate of O(1/ε). Recently, many methods achieving a fast (linear) convergence rate O(log(1/ε)) have been proposed, including SAG [19], SVRG [6], S2GD [8],
SAGA [1], mS2GD [7] and MISO [10], all using different techniques to reduce the variance.
Another approach, such as Stochastic Dual Coordinate Ascent (SDCA) [22], solves (1) by considering its dual problem, defined as follows. For each i, let φ_i^* : R^m → R be the convex conjugate of φ_i, namely, φ_i^*(u) = max_{s∈R^m} s^⊤u − φ_i(s), and similarly let g^* : R^d → R be the convex conjugate of g. The dual problem of (1) is defined as:

max_{α=(α_1,...,α_n)∈R^N=R^{nm}}  D(α) := −f(α) − ψ(α),   (2)

where α = (α_1, . . . , α_n) ∈ R^N = R^{nm} is obtained by stacking dual variables (blocks) α_i ∈ R^m, i = 1, . . . , n, on top of each other, and the functions f and ψ are defined by

f(α) := λ g^*( (1/(λn)) Σ_{i=1}^n A_i α_i );    ψ(α) := (1/n) Σ_{i=1}^n φ_i^*(−α_i).   (3)

SDCA [22] and its proximal extension Prox-SDCA [20] first solve the dual problem (2) by updating uniformly at random one dual variable at each round and then recover the primal solution by setting w = ∇g^*(ᾱ). Let L_i = λ_max(A_i^⊤ A_i). It is known that if we run SDCA for at least

O( (n + max_i L_i/(λγ)) · log( (n + max_i L_i/(λγ)) · (1/ε) ) )

iterations, then SDCA finds a pair (w, α) such that E[P(w) − D(α)] ≤ ε. By applying accelerated randomized coordinate descent on the dual problem, APCG [9] needs at most Õ(n + √(n · max_i L_i/(λγ))) iterations to get ε-accuracy. ASDCA [21] and SPDC [26] are also accelerated and randomized primal-dual methods. Moreover, they can update a mini-batch of dual variables in each round.
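As a concrete instance of these definitions (a standard example, not spelled out in the text): for the squared loss with m = 1, φ_i(a) = ½(a − y_i)², one has φ_i^*(u) = ½u² + u·y_i and ∇φ_i(a) = a − y_i, so γ = 1; and for g(w) = ½‖w‖², g^* = g and ∇g^*(ᾱ) = ᾱ. A sketch:

```python
# squared loss (m = 1): phi_i(a) = 0.5*(a - y_i)^2, which is 1-smooth (gamma = 1)
phi      = lambda a, y: 0.5 * (a - y) ** 2
phi_star = lambda u, y: 0.5 * u ** 2 + u * y   # sup_s { s*u - phi(s, y) }
grad_phi = lambda a, y: a - y

# g(w) = 0.5*||w||^2 is 1-strongly convex; g* = g and grad g*(a) = a
grad_g_star = lambda a: a
```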
We propose a new algorithm (Algorithm 1), which we call Quartz, for simultaneously solving the primal (1) and dual (2) problems. On the dual side, at each iteration our method selects and updates a random subset (sampling) Ŝ ⊆ {1, . . . , n} of the dual variables/blocks. We assume that these sets are i.i.d. throughout the iterations. However, we do not impose any additional assumptions on the distribution of Ŝ apart from the necessary requirement that each block i needs to be chosen with a positive probability: p_i := P(i ∈ Ŝ) > 0. Quartz is the first SDCA-like method analyzed for an arbitrary sampling. The dual updates are then used to perform an update to the primal variable w and the process is repeated. Our primal updates are different (less aggressive) from those used in SDCA [22] and Prox-SDCA [20], thanks to which the decrease in the primal-dual error can be bounded directly without first establishing the dual convergence as in [20], [23] and [9]. Our analysis is novel and directly primal-dual in nature. As a result, our proof is more direct, and the logarithmic term in our bound has a simpler form.

Main result. We prove that starting from an initial pair (w^0, α^0), Quartz finds a pair (w, α) for which P(w) − D(α) ≤ ε (in expectation) in at most

max_i ( 1/p_i + v_i/(p_i λγn) ) · log( (P(w^0) − D(α^0))/ε )   (4)
iterations. The parameters v_1, . . . , v_n are assumed to satisfy the following ESO (expected separable overapproximation) inequality:

E_Ŝ[ ‖ Σ_{i∈Ŝ} A_i h_i ‖² ] ≤ Σ_{i=1}^n p_i v_i ‖h_i‖²,   (5)

where ‖·‖ denotes the standard Euclidean norm. Moreover, the parameters v_1, . . . , v_n are needed to run the method (they determine stepsizes), and hence it is critical that they can be cheaply computed before the method starts. We wish to point out that (5) always holds for some parameters {v_i}. Indeed, the left hand side is a quadratic function of h and hence the inequality holds for large-enough v_i. Having said that, the size of these parameters directly influences the complexity, and hence one would want to obtain as tight bounds as possible. As we will show, for many samplings of interest small enough parameters v can be obtained in time required to read the data {A_i}. In particular, if the data matrix A = (A_1, . . . , A_n) is sufficiently sparse, our iteration complexity result (4) specialized to the case of standard mini-batching can be better than that of accelerated methods such as ASDCA [21] and SPDC [26] even when the condition number max_i L_i/(λγ) is larger than n; see Proposition 4 and Figure 2.
As described above, Quartz uses an arbitrary sampling for picking the dual variables to be updated in each iteration. To the best of our knowledge, only two papers exist in the literature where a stochastic method using an arbitrary sampling was analyzed: NSync [16] for unconstrained minimization of a strongly convex function and ALPHA [15] for composite minimization of non-strongly convex functions. Assumption (5) was for the first time introduced in [16]. However, NSync is not a primal-dual method. Besides NSync, the closest works to ours in terms of the generality of the sampling are PCDM [17], SPCDM [3] and APPROX [2]. All these are randomized coordinate descent methods, and all were analyzed for arbitrary uniform samplings (i.e., samplings satisfying P(i ∈ Ŝ) = P(i' ∈ Ŝ) for all i, i' ∈ {1, . . . , n}). Again, none of these methods were analyzed in a primal-dual framework.
In Section 2 we describe the algorithm, show that it admits a natural interpretation in terms of
Fenchel duality and discuss the flexibility of Quartz. We then proceed to Section 3 where we state the
main result, specialize it to the samplings discussed in Section 2, and give detailed comparison of our
results with existing results for related primal-dual stochastic methods in the literature. In Section 4
we demonstrate how Quartz compares to other related methods through numerical experiments.
2
The Quartz Algorithm
Throughout the paper we consider the standard Euclidean norm, denoted by k ? k. A function ? :
Rm ? R is (1/?)-smooth if it is differentiable and has Lipschitz continuous gradient with Lispchitz
constant 1/?: k??(x) ? ??(y)k ? ?1 kx ? yk, for all x, y ? Rm . A function g : Rd ? R ? {+?}
is 1-strongly convex if g(w) ? g(w0 ) + h?g(w0 ), w ? w0 i + 21 kw ? w0 k2 for all w, w0 ? dom(g),
where dom(g) denotes the domain of g and ?g(w0 ) is a subgradient of g at w0 .
The most important parameter of Quartz is a random sampling Ŝ, which is a random subset of [n] = {1, 2, . . . , n}. The only assumption we make on the sampling Ŝ in this paper is the following:
Assumption 1 (Proper sampling) Ŝ is a proper sampling; that is,
p_i := P(i ∈ Ŝ) > 0,  i ∈ [n].    (6)
This assumption guarantees that each block (dual variable) has a chance to get updated by the
method. Prior to running the algorithm, we compute positive constants v1 , . . . , vn satisfying (5)
to define the stepsize parameter θ used throughout the algorithm:
θ = min_i (p_i λγn)/(v_i + λγn).    (7)
Note from (5) that θ depends on both the data matrix A and the sampling Ŝ. We shall show how to compute, in less than two passes over the data, parameters v satisfying (5) for some examples of sampling in Section 2.2.
2.1 Interpretation of Quartz through Fenchel duality
Algorithm 1 Quartz
Parameters: proper random sampling Ŝ and a positive vector v ∈ R^n
Initialization: α^0 ∈ R^N; w^0 ∈ R^d; p_i = P(i ∈ Ŝ); θ = min_i (p_i λγn)/(v_i + λγn); ᾱ^0 = (1/λn) Σ_{i=1}^n A_i α_i^0
for t ≥ 1 do
  w^t = (1 − θ)w^{t−1} + θ∇g*(ᾱ^{t−1})
  α^t = α^{t−1}
  Generate a random set S_t ⊆ [n], following the distribution of Ŝ
  for i ∈ S_t do
    α_i^t = (1 − θp_i^{−1})α_i^{t−1} − θp_i^{−1}∇φ_i(A_i^⊤w^t)
  end for
  ᾱ^t = ᾱ^{t−1} + (λn)^{−1} Σ_{i∈S_t} A_i(α_i^t − α_i^{t−1})
end for
Output: w^t, α^t
Quartz (Algorithm 1) has a natural interpretation in terms of Fenchel duality. Let (w, α) ∈ R^d × R^N and define ᾱ := (1/λn) Σ_{i=1}^n A_i α_i. The duality gap for the pair (w, α) can be decomposed, using the definitions (1) and (2), as:
P(w) − D(α) = λ(g(w) + g*(ᾱ)) + (1/n) Σ_{i=1}^n [φ_i(A_i^⊤w) + φ_i*(−α_i)]
            = λ(g(w) + g*(ᾱ) − ⟨w, ᾱ⟩) + (1/n) Σ_{i=1}^n [φ_i(A_i^⊤w) + φ_i*(−α_i) + ⟨A_i^⊤w, α_i⟩]
            = GAP_g(w, α) + (1/n) Σ_{i=1}^n GAP_{φ_i}(w, α_i),
where GAP_g(w, α) := λ(g(w) + g*(ᾱ) − ⟨w, ᾱ⟩) and GAP_{φ_i}(w, α_i) := φ_i(A_i^⊤w) + φ_i*(−α_i) + ⟨A_i^⊤w, α_i⟩.
By the Fenchel-Young inequality, GAP_g(w, α) ≥ 0 and GAP_{φ_i}(w, α_i) ≥ 0 for all i, which proves weak duality for the problems (1) and (2), i.e., P(w) ≥ D(α). The pair (w, α) is optimal when both GAP_g and GAP_{φ_i} for all i are zero. It is known that this happens precisely when the following optimality conditions hold:
w = ∇g*(ᾱ),    (8)
α_i = −∇φ_i(A_i^⊤w),  i ∈ [n].    (9)
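For concreteness — this is an illustrative sketch of ours, not part of the original paper — the duality gap above can be evaluated in a few lines of Python for the special case of ridge regression, where φ_i(z) = ½(z − b_i)² and g(w) = ½‖w‖² (so g* = g); the data layout and names below are assumptions.

```python
import numpy as np

def primal_dual_gap(A, b, alpha, w, lam):
    """Duality gap P(w) - D(alpha) for ridge regression:
    phi_i(z) = (z - b_i)^2 / 2 (gamma = 1), g(w) = ||w||^2 / 2."""
    n = A.shape[1]                 # one column A_i per example
    abar = A @ alpha / (lam * n)   # abar = (1/(lam*n)) * sum_i A_i alpha_i
    z = A.T @ w                    # A_i^T w for all i
    P = np.mean((z - b) ** 2) / 2 + lam * (w @ w) / 2
    # phi_i^*(-alpha_i) = alpha_i^2/2 - b_i*alpha_i for this loss
    D = -np.mean(alpha ** 2 / 2 - b * alpha) - lam * (abar @ abar) / 2
    return P - D

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 8)); b = rng.standard_normal(8); lam = 0.1
w, alpha = rng.standard_normal(5), rng.standard_normal(8)
print(primal_dual_gap(A, b, alpha, w, lam))  # nonnegative, by weak duality
```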
We will now interpret the primal and dual steps of Quartz in terms of the above discussion. It is easy to see that Algorithm 1 updates the primal and dual variables as follows:
w^t = (1 − θ)w^{t−1} + θ∇g*(ᾱ^{t−1}),    (10)
α_i^t = (1 − θp_i^{−1})α_i^{t−1} − θp_i^{−1}∇φ_i(A_i^⊤w^t) for i ∈ S_t, and α_i^t = α_i^{t−1} for i ∉ S_t,    (11)
where ᾱ^{t−1} = (1/λn) Σ_{i=1}^n A_i α_i^{t−1}, θ is the constant defined in (7), and S_t ∼ Ŝ is a random subset of [n]. In other words, at iteration t we first set the primal variable w^t to be a convex combination of its current value w^{t−1} and a value reducing GAP_g to zero: see (10). This is followed by adjusting a subset of dual variables corresponding to a randomly chosen set of examples S_t such that for each example i ∈ S_t, the i-th dual variable α_i^t is set to be a convex combination of its current value α_i^{t−1} and a value reducing GAP_{φ_i} to zero, see (11).
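To make the updates (10)-(11) concrete, here is a compact Python sketch of Quartz with uniform serial sampling for the ridge-regression instance used above (g = ½‖·‖², so ∇g* is the identity). This is an illustration written for this presentation under those stated assumptions, not the authors' implementation.

```python
import numpy as np

def quartz_serial(A, b, lam, gamma=1.0, epochs=50, seed=0):
    """Quartz (Algorithm 1) with uniform serial sampling for ridge regression:
    phi_i(z) = (z - b_i)^2/2 (1-smooth, gamma = 1), g(w) = ||w||^2/2."""
    rng = np.random.default_rng(seed)
    d, n = A.shape
    v = np.array([a @ a for a in A.T])  # v_i = L_i = ||A_i||^2, see (12)
    theta = (lam * gamma * n) / (v.max() + lam * gamma * n)  # stepsize (7)
    alpha, abar, w = np.zeros(n), np.zeros(d), np.zeros(d)
    for _ in range(epochs * n):
        w = (1 - theta) * w + theta * abar   # primal step (10); grad g* = id
        i = rng.integers(n)                  # serial sampling: one block
        grad = A[:, i] @ w - b[i]            # phi_i'(A_i^T w)
        delta = (1 - theta * n) * alpha[i] - theta * n * grad - alpha[i]  # (11), p_i = 1/n
        alpha[i] += delta
        abar += A[:, i] * delta / (lam * n)  # maintain abar incrementally
    return w, alpha
```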
2.2 Flexibility of Quartz
Clearly, there are many ways in which the distribution of Ŝ can be chosen, leading to numerous variants of Quartz. The convex combination constant θ used throughout the algorithm should be tuned according to (7), where v_1, . . . , v_n are constants satisfying (5). Note that the best possible v is obtained by computing the maximal eigenvalue of the matrix (A^⊤A) ∘ P, where ∘ denotes the Hadamard (component-wise) product of matrices and P ∈ R^{N×N} is an n-by-n block matrix with all elements in block (i, j) equal to P(i ∈ Ŝ, j ∈ Ŝ), see [14]. However, the worst-case complexity of computing directly the maximal eigenvalue of (A^⊤A) ∘ P amounts to O(N²), which requires unreasonable preprocessing time in the context of machine learning where N is assumed to be very large. We now describe some examples of sampling Ŝ and show how to compute in less than two passes over the data the corresponding constants v_1, . . . , v_n. More examples, including distributed sampling, are presented in the supplementary material.
Serial sampling. The most studied sampling in the literature on stochastic optimization is the serial sampling, which corresponds to the selection of a single block i ∈ [n]. That is, |Ŝ| = 1 with probability 1. The name "serial" is pointing to the fact that a method using such a sampling will typically be a serial (as opposed to parallel) method, updating a single block (dual variable) at a time. A serial sampling is uniquely characterized by the vector of probabilities p = (p_1, . . . , p_n), where p_i is defined by (6). For a serial sampling Ŝ, it is easy to see that (5) is satisfied for
v_i = L_i := λ_max(A_i^⊤A_i),  i ∈ [n],    (12)
where λ_max(·) denotes the maximal eigenvalue.
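For example — an illustrative sketch under our own naming, not from the paper — the constants (12) can be computed in a single pass over the data blocks:

```python
import numpy as np

def serial_eso_params(blocks):
    """v_i = lambda_max(A_i^T A_i) for each data block A_i (see (12)).
    Each A_i is a d x m matrix; for m = 1 this reduces to v_i = ||A_i||^2."""
    return [np.linalg.eigvalsh(Ai.T @ Ai).max() for Ai in blocks]
```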
Standard mini-batching. We now consider Ŝ which selects subsets of [n] of cardinality τ, uniformly at random. In the terminology established in [17], such Ŝ is called τ-nice. This sampling satisfies p_i = p_j for all i, j ∈ [n], and hence it is uniform. This sampling is well suited for parallel computing. Indeed, Quartz could be implemented as follows. If we have τ processors available, then at the beginning of iteration t we can assign each block (dual variable) in S_t to a dedicated processor. The processor assigned to i would then compute Δα_i^t and apply the update. If all processors have fast access to the memory where all the data is stored, as is the case in a shared-memory multicore workstation, then this way of assigning workload to the individual processors does not cause any major problems. For the τ-nice sampling, (5) is satisfied for
v_i = λ_max(M_i),  M_i = Σ_{j=1}^d (1 + (ω_j − 1)(τ − 1)/(n − 1)) A_{ji}^⊤ A_{ji},  i ∈ [n],    (13)
where for each j ∈ [d], ω_j is the number of nonzero blocks in the j-th row of the matrix A, i.e.,
ω_j := |{i ∈ [n] : A_{ji} ≠ 0}|,  j ∈ [d].    (14)
Note that (13) follows from an extension of a formula given in [2] from m = 1 to m ≥ 1.
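As a hedged illustration of (13)-(14) for the simplest case m = 1 (where each A_{ji} is a scalar and λ_max of a 1×1 matrix is the entry itself), the parameters can be accumulated in at most two passes over a d × n data matrix; the function below is our own sketch.

```python
import numpy as np

def tau_nice_eso_params(A, tau):
    """v_i for the tau-nice sampling via (13)-(14), block size m = 1.
    A is a d x n matrix; omega_j counts nonzero entries in row j."""
    d, n = A.shape
    omega = (A != 0).sum(axis=1)                   # (14): row sparsity counts
    scale = 1 + (omega - 1) * (tau - 1) / (n - 1)  # per-row factor in (13)
    return (scale[:, None] * A**2).sum(axis=0)     # v_i = sum_j scale_j * A_ji^2
```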
3 Main Result
The complexity of our method is given by the following theorem. The proof can be found in the
supplementary material.
Theorem 2 (Main Result) Assume that g is 1-strongly convex and that for each i ∈ [n], φ_i is convex and (1/γ)-smooth. Let Ŝ be a proper sampling (Assumption 1) and v_1, . . . , v_n be positive scalars satisfying (5). Then the sequence of primal and dual variables {w^t, α^t}_{t≥0} of Quartz (Algorithm 1) satisfies:
E[P(w^t) − D(α^t)] ≤ (1 − θ)^t (P(w^0) − D(α^0)),    (15)
where θ is defined in (7). In particular, if we fix ε ≤ P(w^0) − D(α^0), then
T ≥ max_i (1/p_i + v_i/(p_i λγn)) log((P(w^0) − D(α^0))/ε)  ⟹  E[P(w^T) − D(α^T)] ≤ ε.    (16)
In order to put the above result into context, in the rest of this section we will specialize the above
result to two special samplings: the serial sampling and the τ-nice sampling.
3.1 Quartz with serial sampling
When Ŝ is a serial sampling, we just need to plug (12) into (16) and derive the bound
T ≥ max_i (1/p_i + L_i/(p_i λγn)) log((P(w^0) − D(α^0))/ε)  ⟹  E[P(w^T) − D(α^T)] ≤ ε.    (17)
If, in addition, Ŝ is uniform, then p_i = 1/n for all i ∈ [n] and we refer to this special case of Quartz as Quartz-U. By replacing p_i = 1/n in (17) we obtain directly the complexity of Quartz-U:
T ≥ (n + max_i L_i/(λγ)) log((P(w^0) − D(α^0))/ε)  ⟹  E[P(w^T) − D(α^T)] ≤ ε.    (18)
Otherwise, we can seek to maximize the right-hand side of the inequality in (17) with respect to the sampling probability p to obtain the best bound. A simple calculation reveals that the optimal probability is given by:
P(Ŝ = {i}) = p_i* := (L_i + λγn) / Σ_{i=1}^n (L_i + λγn).    (19)
We shall call Quartz-IP the algorithm obtained by using the above serial sampling probability. The following complexity result of Quartz-IP can be derived easily by plugging (19) into (17):
T ≥ (n + Σ_{i=1}^n L_i/(nλγ)) log((P(w^0) − D(α^0))/ε)  ⟹  E[P(w^T) − D(α^T)] ≤ ε.    (20)
Note that in contrast with the complexity result of Quartz-U (18), we now have a dependence on the average of the eigenvalues L_i.
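The importance probabilities (19) are cheap to compute once the L_i are known; the snippet below (our own illustration) shows that blocks with larger L_i receive proportionally more sampling mass.

```python
import numpy as np

def quartz_ip_probabilities(L, lam, gamma, n):
    """Optimal serial sampling probabilities p_i* from (19)."""
    weights = L + lam * gamma * n
    return weights / weights.sum()

L = np.array([1.0, 10.0, 100.0])
print(quartz_ip_probabilities(L, lam=1e-3, gamma=1.0, n=3))
```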
Quartz-U vs Prox-SDCA. Quartz-U should be compared to Proximal Stochastic Dual Coordinate Ascent (Prox-SDCA) [22, 20]. Indeed, the dual update of Prox-SDCA takes exactly the same form as that of Quartz-U¹, see (11). The main difference is how the primal variable w^t is updated: while Quartz performs the update (10), Prox-SDCA (see also [24, 5]) performs the more aggressive update w^t = ∇g*(ᾱ^{t−1}), and the complexity result of Prox-SDCA is as follows:
T ≥ (n + max_i L_i/(λγ)) log((n + max_i L_i/(λγ)) · (D(α*) − D(α^0))/ε)  ⟹  E[P(w^T) − D(α^T)] ≤ ε,    (21)
where α* is the dual optimal solution. Notice that the dominant terms in (18) and (21) exactly match, although our logarithmic term is better and simpler. This is due to a direct bound on the decrease of the primal-dual error of Quartz, without the need to first analyze the dual error, in contrast to the typical approach for most of the dual coordinate ascent methods [22, 23, 20, 21, 9].
Quartz-IP vs Iprox-SDCA. The importance sampling (19) was previously used in the algorithm Iprox-SDCA [27], which extends Prox-SDCA to non-uniform serial sampling. The complexity of Quartz-IP (20) should then be compared with the following complexity result of Iprox-SDCA [27]:
T ≥ (n + Σ_{i=1}^n L_i/(nλγ)) log((n + Σ_{i=1}^n L_i/(nλγ)) · (D(α*) − D(α^0))/ε)  ⟹  E[P(w^T) − D(α^T)] ≤ ε.    (22)
Again, the dominant terms in (20) and (22) exactly match, but our logarithmic term is smaller.
3.2 Quartz with τ-nice Sampling (standard mini-batching)
We now specialize Theorem 2 to the case of the τ-nice sampling. We define ω̃ such that:
(1 + (ω̃ − 1)(τ − 1)/(n − 1)) max_i L_i = max_i λ_max( Σ_{j=1}^d (1 + (ω_j − 1)(τ − 1)/(n − 1)) A_{ji}^⊤ A_{ji} ).
It is clear that 1 ≤ ω̃ ≤ max_j ω_j ≤ n, and ω̃ can be considered a measure of the density of the data. By plugging (13) into (16) we obtain directly the following corollary.
Corollary 3 Assume Ŝ is the τ-nice sampling and v is chosen as in (13). If we let ε ≤ P(w^0) − D(α^0) and
T ≥ (n/τ + (1 + (ω̃ − 1)(τ − 1)/(n − 1)) max_i L_i/(τλγ)) log((P(w^0) − D(α^0))/ε),
then E[P(w^T) − D(α^T)] ≤ ε.    (23)
Let us now have a detailed look at the above result, especially in terms of how it compares with the serial uniform case (18); a small numeric illustration follows below. For fully sparse data, we get perfect linear speedup: the bound in (23) is a 1/τ fraction of the bound in (18). For fully dense data, the condition number κ := max_i L_i/(λγ) is unaffected by mini-batching. For general data, the behaviour of Quartz with τ-nice sampling interpolates between these two extreme cases. It is important to note that regardless of the condition number κ, as long as τ ≤ 1 + (n − 1)/(ω̃ − 1), the bound in (23) is at most a 2/τ fraction of the bound in (18). Hence, for sparser problems, Quartz can achieve linear speedup for larger mini-batch sizes.
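The sparsity-dependent speedup can be seen numerically from the leading factor of (23); the following short Python sketch (our own, with arbitrarily chosen constants) compares τ = 1 against τ = 100 for sparse, moderately sparse, and fully dense data.

```python
def quartz_tau_nice_bound(n, tau, max_L, lam, gamma, omega_tilde):
    """Leading factor of (23): n/tau + (1 + (w-1)(tau-1)/(n-1)) * max_L/(tau*lam*gamma)."""
    return n / tau + (1 + (omega_tilde - 1) * (tau - 1) / (n - 1)) \
        * max_L / (tau * lam * gamma)

n, max_L, lam, gamma = 10**5, 1.0, 1e-5, 1.0
for omega in (1, 10, n):  # fully sparse ... fully dense
    speedup = quartz_tau_nice_bound(n, 1, max_L, lam, gamma, omega) / \
              quartz_tau_nice_bound(n, 100, max_L, lam, gamma, omega)
    print(f"omega~ = {omega:>6}: speedup at tau = 100 is {speedup:.1f}x")
```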
¹ In [20] the authors proposed five options for the dual updating rule. Our dual updating formula (11) should be compared with option V in Prox-SDCA. For the same reason as given in the beginning of [20, Appendix A.], Quartz implemented with any of the other four options achieves the same complexity result as Theorem 2.
3.3 Quartz vs existing primal-dual mini-batch methods
We now compare the above result with existing mini-batch stochastic dual coordinate ascent methods. The mini-batch variants of SDCA, to which Quartz with τ-nice sampling can be naturally compared, have been proposed and analyzed previously in [23], [21] and [26]. In [23], the authors proposed to use a so-called safe mini-batching, which is precisely equivalent to finding the stepsize parameter v satisfying (5) (in the special case of τ-nice sampling). However, they only analyzed the case where the functions {φ_i}_i are non-smooth. In [21], the authors studied accelerated mini-batch SDCA (ASDCA), specialized to the case when the regularizer g is the squared L2 norm. They showed that the complexity of ASDCA interpolates between that of SDCA and accelerated gradient descent (AGD) [13] through varying the mini-batch size τ. In [26], the authors proposed a mini-batch extension of their stochastic primal-dual coordinate algorithm (SPDC). Both ASDCA and SPDC reach the same complexity as AGD when the mini-batch size equals n, and thus should be considered accelerated algorithms². The complexity bounds for all these algorithms are summarized in Table 1. In Table 2 we compare the complexities of SDCA, ASDCA, SPDC and Quartz in several regimes.
Algorithm                      | Iteration complexity                                                  | g
SDCA [22]                      | n + 1/(λγ)                                                            | ½‖·‖²
ASDCA [21]                     | 4 · max{ n/τ, √(n/(λγτ)), 1/(λγτ), n^{1/3}/(λγτ)^{2/3} }              | ½‖·‖²
SPDC [26]                      | n/τ + √(n/(λγτ))                                                      | general
Quartz with τ-nice sampling    | n/τ + (1 + (ω̃−1)(τ−1)/(n−1)) · 1/(λγτ)                                | general

Table 1: Comparison of the iteration complexity of several primal-dual algorithms performing stochastic coordinate ascent steps in the dual using a mini-batch of examples of size τ (with the exception of SDCA, which is a serial method using τ = 1).
Algorithm        | λγn = Θ(1/√n)                               | λγn = Θ(1) | λγn = Θ(τ) | λγn = Θ(√n)
SDCA [22]        | n^{3/2}                                     | n          | n          | n
ASDCA [21]       | n^{3/2}/τ + n^{5/4}/√τ + n^{4/3}/τ^{2/3}    | n/√τ       | n/τ        | n/τ + n^{3/4}/√τ
SPDC [26]        | n^{5/4}/√τ                                  | n/√τ       | n/τ        | n/τ + n^{3/4}/√τ
Quartz (τ-nice)  | n^{3/2}/τ + ω̃√n                             | n/τ + ω̃    | n/τ        | n/τ + ω̃/√n

Table 2: Comparison of leading factors in the complexity bounds of several methods in several regimes.
Looking at Table 2, we see that in the λγn = Θ(τ) regime (i.e., if the condition number is κ = Θ(n/τ)), Quartz matches the linear speedup (when compared to SDCA) of ASDCA and SPDC. When the condition number is roughly equal to the sample size (κ = Θ(n)), then Quartz does better than both ASDCA and SPDC as long as n/τ + ω̃ ≤ n/√τ. In particular, this is the case when the data is sparse: ω̃ ≤ n/√τ. If the data is even more sparse (and in many big data applications one has ω̃ = O(1)) and we have ω̃ ≤ n/τ, then Quartz significantly outperforms both ASDCA and SPDC. Note that Quartz can be better than both ASDCA and SPDC even in the domain of accelerated methods, that is, when the condition number is larger than the number of examples: κ = 1/(λγ) ≥ n. Indeed, we have the following result:
Proposition 4 Assume that nλγ ≤ 1 and that max_i L_i = 1. If the data is sufficiently sparse so that
λγτn ≥ (1 + nλγ + (ω̃ − 1)(τ − 1)/(n − 1))²,    (24)
then the iteration complexity (in Õ order) of Quartz is better than that of ASDCA and SPDC.
The result can be interpreted as follows: if n ≥ τω̃ ≥ n/(1 + n/κ)² (that is, κ ≥ λγτω̃n ≥ (1 + nλγ)²), then there are sparse-enough problems for which Quartz is better than both ASDCA and SPDC.
² APCG [9] also reaches an accelerated convergence rate but was not proposed in the mini-batch setting.
4 Experimental Results
In this section we demonstrate how Quartz specialized to different samplings compares with other methods. All of our experiments are performed with m = 1, for smoothed hinge-loss functions {φ_i} with γ = 1 and a squared L2-regularizer g, see [20]. The experiments were performed on the three datasets reported in Table 3, and on three randomly generated large datasets [12] with n = 100,000 examples and d = 100,000 features of different sparsity. In Figure 1 we compare Quartz specialized to serial sampling, for both uniform and optimal sampling, with Prox-SDCA and Iprox-SDCA, previously discussed in Section 3.1, on three datasets. Due to the conservative primal update in Quartz, Quartz-U appears to be slower than Prox-SDCA in practice. Nevertheless, in all the experiments, Quartz-IP shows almost identical convergence behaviour to that of Iprox-SDCA. In Figure 2 we compare Quartz specialized to τ-nice sampling with mini-batch SPDC for different values of τ, in the domain of accelerated methods (κ = 10n). The datasets are randomly generated following [13, Section 6]. When τ = 1, it is clear that SPDC outperforms Quartz, as the condition number is larger than n. However, as τ increases, the amount of data processed by SPDC increases by a factor of √τ, as predicted by its theory, while the amount of data processed by Quartz remains almost the same, by taking advantage of the large sparsity of the data. Hence, Quartz is much better in the large τ regime.
Dataset | # Training size n | # features d | Sparsity (# nnz/(nd))
cov1    | 522,911           | 54           | 22.22%
w8a     | 49,749            | 300          | 3.91%
ijcnn1  | 49,990            | 22           | 59.09%

Table 3: Datasets used in our experiments.
[Figure 1: three plots of the primal-dual gap versus the number of epochs, comparing Prox-SDCA, Quartz-U, Iprox-SDCA and Quartz-IP. Panels: (a) cov1, n = 522911, λ = 1e-06; (b) w8a, n = 49749, λ = 1e-05; (c) ijcnn1, n = 49990, λ = 1e-05.]
Figure 1: Comparison of Quartz-U (uniform sampling), Quartz-IP (optimal importance sampling), Prox-SDCA (uniform sampling) and Iprox-SDCA (optimal importance sampling).
[Figure 2: three plots of the primal-dual gap versus the number of epochs, comparing Quartz and SPDC for several mini-batch sizes τ. Panels: (a) Rand1, n = 10^5, λ = 1e-06; (b) Rand2, n = 10^5, λ = 1e-06; (c) Rand3, n = 10^5, λ = 1e-06.]
Figure 2: Comparison of Quartz with SPDC for different mini-batch sizes τ in the regime κ = 10n. The three random datasets Rand1, Rand2 and Rand3 have respective sparsity 0.01%, 0.1% and 1%.
References
[1] A. Defazio, F. Bach, and S. Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems 27, pages 1646–1654, 2014.
[2] O. Fercoq and P. Richtárik. Accelerated, parallel and proximal coordinate descent. SIAM Journal on Optimization (after minor revision), arXiv:1312.5799, 2013.
[3] O. Fercoq and P. Richtárik. Smooth minimization of nonsmooth functions by parallel coordinate descent. arXiv:1309.5885, 2013.
[4] C.-J. Hsieh, K.-W. Chang, C.-J. Lin, S. S. Keerthi, and S. Sundararajan. A dual coordinate descent method for large-scale linear SVM. In Proc. of the 25th International Conference on Machine Learning, ICML '08, pages 408–415, 2008.
[5] M. Jaggi, V. Smith, M. Takáč, J. Terhorst, S. Krishnan, T. Hofmann, and M. I. Jordan. Communication-efficient distributed dual coordinate ascent. In Advances in Neural Information Processing Systems 27, pages 3068–3076. Curran Associates, Inc., 2014.
[6] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 315–323, 2013.
[7] J. Konečný, J. Lu, P. Richtárik, and M. Takáč. mS2GD: Mini-batch semi-stochastic gradient descent in the proximal setting. arXiv:1410.4744, 2014.
[8] J. Konečný and P. Richtárik. S2GD: Semi-stochastic gradient descent methods. arXiv:1312.1666, 2013.
[9] Q. Lin, Z. Lu, and L. Xiao. An accelerated proximal coordinate gradient method and its application to regularized empirical risk minimization. Technical Report MSR-TR-2014-94, July 2014.
[10] J. Mairal. Incremental majorization-minimization optimization with application to large-scale machine learning. SIAM J. Optim., 25(2):829–855, 2015.
[11] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM J. Optim., 19(4):1574–1609, 2008.
[12] Y. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM J. Optim., 22(2):341–362, 2012.
[13] Y. Nesterov. Gradient methods for minimizing composite functions. Math. Program., 140(1, Ser. B):125–161, 2013.
[14] Z. Qu and P. Richtárik. Coordinate descent methods with arbitrary sampling II: Expected separable overapproximation. arXiv:1412.8063, 2014.
[15] Z. Qu and P. Richtárik. Coordinate descent methods with arbitrary sampling I: Algorithms and complexity. arXiv:1412.8060, 2014.
[16] P. Richtárik and M. Takáč. On optimal probabilities in stochastic coordinate descent methods. Optimization Letters, published online 2015.
[17] P. Richtárik and M. Takáč. Parallel coordinate descent methods for big data optimization. Math. Program., published online 2015.
[18] H. Robbins and S. Monro. A stochastic approximation method. Ann. Math. Statistics, 22:400–407, 1951.
[19] M. Schmidt, N. Le Roux, and F. Bach. Minimizing finite sums with the stochastic average gradient. arXiv:1309.2388, 2013.
[20] S. Shalev-Shwartz and T. Zhang. Proximal stochastic dual coordinate ascent. arXiv:1211.2717, 2012.
[21] S. Shalev-Shwartz and T. Zhang. Accelerated mini-batch stochastic dual coordinate ascent. In Advances in Neural Information Processing Systems 26, pages 378–385, 2013.
[22] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss. J. Mach. Learn. Res., 14(1):567–599, February 2013.
[23] M. Takáč, A. S. Bijral, P. Richtárik, and N. Srebro. Mini-batch primal and dual methods for SVMs. In Proc. of the 30th International Conference on Machine Learning (ICML-13), pages 1022–1030, 2013.
[24] T. Yang. Trading computation for communication: Distributed stochastic dual coordinate ascent. In Advances in Neural Information Processing Systems 26, pages 629–637, 2013.
[25] T. Zhang. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In Proc. of the 21st International Conference on Machine Learning (ICML-04), pages 919–926, 2004.
[26] Y. Zhang and L. Xiao. Stochastic primal-dual coordinate method for regularized empirical risk minimization. In Proc. of the 32nd International Conference on Machine Learning (ICML-15), pages 353–361, 2015.
[27] P. Zhao and T. Zhang. Stochastic optimization with importance sampling. ICML, 2015.
| 5926 |@word kong:2 msr:1 norm:3 nd:2 seek:1 hsieh:1 sgd:3 tr:1 reduction:1 initial:2 united:1 tuned:1 ours:1 past:1 existing:3 outperforms:2 current:2 wd:1 optim:3 assigning:1 attracted:1 numerical:1 hofmann:1 designed:1 update:11 juditsky:1 v:3 beginning:2 smith:1 math:4 simpler:2 zhang:8 five:1 direct:2 prove:1 specialize:3 indeed:4 expected:2 roughly:1 p1:1 decomposed:1 considering:1 cardinality:1 revision:1 bounded:1 moreover:2 interpreted:2 finding:1 nj:1 guarantee:1 every:1 sag:1 exactly:3 rm:6 k2:4 uk:1 ser:1 positive:4 before:1 overapproximation:2 mach:1 establishing:1 initialization:1 studied:2 nemirovski:1 practice:1 block:10 aji:2 sdca:38 nnz:1 empirical:4 significantly:1 composite:3 word:1 get:3 selection:1 nb:6 risk:4 context:3 applying:1 influence:1 put:1 equivalent:1 attention:1 starting:1 regardless:1 convex:16 communicationefficient:1 roux:1 rule:1 coordinate:25 updated:3 programming:1 us:1 curran:1 associate:1 element:1 satisfying:6 updating:4 worst:1 wj:1 richt:10 decrease:3 yk:1 pd:1 complexity:20 nesterov:2 dom:2 solving:4 tight:1 predictive:1 efficiency:1 workload:1 easily:1 regularizer:4 fast:3 describe:2 shalev:3 larger:5 solve:1 supplementary:2 otherwise:1 statistic:2 noisy:1 ip:9 online:2 sequence:1 differentiable:1 eigenvalue:4 advantage:1 propose:2 maximal:3 product:1 hadamard:1 date:1 flexibility:2 achieve:1 billion:1 convergence:5 requirement:1 perfect:1 incremental:2 depending:1 derive:1 ac:5 stat:1 multicore:1 school:1 minor:1 keywords:1 solves:1 implemented:2 predicted:1 trading:1 safe:1 stochastic:25 material:2 behaviour:2 assign:1 fix:1 proposition:2 extension:3 hold:3 sufficiently:2 considered:2 predict:1 pointing:1 major:1 achieves:1 estimation:2 proc:4 miso:1 robbins:1 minimization:8 clearly:1 always:1 arik:10 pn:12 varying:2 stepsizes:1 corollary:2 derived:1 hk:1 contrast:3 i0:3 typically:1 tak:4 interested:2 selects:2 dual:62 ms2gd:2 denoted:1 special:3 equal:3 having:1 sampling:59 identical:1 kw:1 look:1 icml:5 nonsmooth:1 report:1 few:1 randomly:4 simultaneously:1 individual:1 maxj:1 keerthi:1 n1:4 interest:1 fd:1 huge:1 zheng:1 analyzed:6 extreme:1 kone:2 primal:32 necessary:1 minw:1 respective:1 euclidean:2 re:1 fenchel:4 increased:1 bijral:1 stacking:1 subset:6 uniform:9 predictor:2 johnson:1 stored:1 reported:1 proximal:6 s2gd:2 thanks:1 st:10 randomized:4 density:1 siam:4 international:4 picking:1 w1:1 again:2 squared:2 satisfied:2 ear:1 opposed:1 zhao:1 leading:2 li:19 aggressive:2 prox:14 summarized:1 inc:1 satisfy:1 depends:2 vi:6 performed:2 lot:1 analyze:3 start:1 recover:1 option:3 parallel:5 pil:1 monro:1 majorization:1 accuracy:1 variance:3 richtarik:1 weak:1 none:1 lu:2 unaffected:1 processor:5 published:2 reach:2 ed:1 cov1:2 naturally:1 proof:2 mi:2 workstation:1 dataset:2 adjusting:1 knowledge:1 appears:1 hku:1 strongly:7 generality:2 just:1 hand:2 replacing:1 minibatch:1 name:1 unbiased:1 regularization:2 hence:6 assigned:1 read:1 nonzero:1 round:2 uniquely:1 hong:2 demonstrate:2 performs:2 dedicated:1 wise:1 novel:2 recently:1 specialized:5 ji:3 million:1 discussed:2 interpretation:3 approximates:1 interpret:1 sundararajan:1 refer:1 ai:13 rd:6 unconstrained:1 approx:1 mathematics:2 similarly:1 access:1 dominant:2 jaggi:1 closest:1 showed:1 p1i:3 driven:2 apart:1 inequality:4 additional:2 impose:1 determine:1 maximize:1 july:1 semi:2 ii:1 smooth:6 technical:1 match:4 characterized:1 plug:1 calculation:1 long:2 bach:2 lin:2 serial:16 rand1:1 a1:2 plugging:2 prediction:1 variant:4 n5:2 expectation:2 rutgers:2 arxiv:8 iteration:14 
asdca:14 addition:1 want:1 rest:1 ascent:12 pass:2 jordan:1 call:2 yang:1 enough:2 easy:2 krishnan:1 reduce:1 cn:2 defazio:1 accelerating:1 peter:2 interpolates:2 proceed:1 cause:1 detailed:2 clear:2 amount:1 processed:2 svms:1 generate:1 shapiro:1 exist:1 notice:1 shall:2 four:1 terminology:1 rnm:2 nevertheless:1 achieving:1 lan:1 pj:1 lacoste:1 v1:6 subgradient:1 fraction:2 year:1 sum:1 run:2 letter:1 tzhang:1 extends:1 throughout:4 almost:2 vn:6 appendix:1 vpi:1 eh9:1 bound:15 def:10 hi:1 followed:1 quadratic:1 precisely:2 n3:5 u1:1 min:2 optimality:1 fercoq:2 performing:1 separable:2 speedup:6 department:2 structured:1 according:2 piscataway:1 combination:3 conjugate:2 smaller:1 qu:3 n4:1 happens:1 ijcnn1:2 previously:3 remains:1 discus:1 needed:1 end:2 available:1 unreasonable:1 apply:1 spectral:1 batching:6 stepsize:2 batch:16 weinberger:1 schmidt:1 slower:1 top:1 denotes:4 running:1 hinge:1 ghahramani:1 especially:2 prof:1 february:1 objective:1 dependence:1 said:1 hai:1 gradient:13 w0:12 reason:1 besides:1 mini:22 minimizing:3 kingdom:1 proper:4 perform:1 w8a:2 datasets:5 finite:1 descent:16 looking:1 communication:1 rn:6 smoothed:1 arbitrary:9 community:1 introduced:1 pair:6 namely:1 required:1 established:1 regime:5 sparsity:5 program:2 including:2 max:16 memory:2 critical:1 natural:2 regularized:4 numerous:1 julien:1 prior:1 literature:3 nice:12 l2:2 epoch:6 loss:3 fully:2 srebro:1 incurred:1 degree:1 xiao:2 editor:1 pi:13 row:1 penalized:1 svrg:1 side:3 burges:1 taking:1 sparse:6 edinburgh:1 distributed:3 apcg:2 computes:1 author:4 collection:1 preprocessing:1 agd:2 welling:1 alpha:1 reveals:1 mairal:1 assumed:2 shwartz:3 continuous:1 table:7 nature:1 learn:1 robust:1 bottou:1 domain:3 main:5 dense:1 big:4 repeated:1 slow:1 tong:1 saga:2 khi:1 wish:1 hw:1 young:1 formula:2 theorem:4 quartz:74 maxi:7 admits:1 svm:1 importance:5 terhorst:1 kx:1 spdc:26 gap:11 sparser:1 suited:1 logarithmic:3 cheaply:1 scalar:1 chang:1 corresponds:1 chance:1 satisfies:2 ann:1 lipschitz:1 shared:1 typical:2 uniformly:2 reducing:2 wt:18 conservative:1 called:2 duality:5 e:1 experimental:1 takac:1 exception:1 support:1 accelerated:12 |
5,443 | 5,927 | A Generalization of Submodular Cover via the
Diminishing Return Property on the Integer Lattice
Tasuku Soma
The University of Tokyo
tasuku soma@mist.i.u-tokyo.ac.jp
Yuichi Yoshida
National Institute of Informatics, and
Preferred Infrastructure, Inc.
yyoshida@nii.ac.jp
Abstract
We consider a generalization of the submodular cover problem based on the concept of diminishing return property on the integer lattice. We are motivated by
real scenarios in machine learning that cannot be captured by (traditional) submodular set functions. We show that the generalized submodular cover problem
can be applied to various problems and devise a bicriteria approximation algorithm. Our algorithm is guaranteed to output a log-factor approximate solution
that satisfies the constraints with the desired accuracy. The running time of our
algorithm is roughly O(n log(nr) log r), where n is the size of the ground set and
r is the maximum value of a coordinate. The dependency on r is exponentially
better than the naive reduction algorithms. Several experiments on real and artificial datasets demonstrate that the solution quality of our algorithm is comparable
to naive algorithms, while the running time is several orders of magnitude faster.
1 Introduction
A function f : 2^S → R_+ is called submodular if f(X) + f(Y) ≥ f(X ∪ Y) + f(X ∩ Y) for all X, Y ⊆ S, where S is a finite ground set. An equivalent and more intuitive definition is by the diminishing return property: f(X ∪ {s}) − f(X) ≥ f(Y ∪ {s}) − f(Y) for all X ⊆ Y and s ∈ S \ Y. In the last decade, the optimization of a submodular function has attracted particular
interest in the machine learning community. One reason for this is that many real-world models
naturally admit the diminishing return property. For example, document summarization [12, 13],
influence maximization in viral marketing [7], and sensor placement [10] can be described with the
concept of submodularity, and efficient algorithms have been devised by exploiting submodularity
(for further details, refer to [8]).
A variety of proposed models in machine learning [4, 13, 18] boil down to the submodular cover problem [21]: for given monotone and nonnegative submodular functions f, c : 2^S → R_+ and α > 0, we are to
minimize c(X) subject to f(X) ≥ α.    (1)
Intuitively, c(X) and f(X) represent the cost and the quality of a solution, respectively. The objective of this problem is to find an X of minimum cost with the worst quality guarantee α. Although this problem is NP-hard since it generalizes the set cover problem, a simple greedy algorithm achieves a tight log-factor approximation and it performs very well in practice.
The aforementioned submodular models are based on the submodularity of a set function, a function defined on 2^S. However, we often encounter problems that cannot be captured by a set function. Let us give two examples:
Sensor Placement: Let us consider the following sensor placement scenario. Suppose that we
have several types of sensors with various energy levels. We assume a simple trade-off between
information gain and cost. Sensors of a high energy level can collect a considerable amount of
information, but we have to pay a high cost for placing them. Sensors of a low energy level can
be placed at a low cost, but they can only gather limited information. In this scenario, we want to
decide which type of sensor should be placed at each spot, rather than just deciding whether to place
a sensor or not. Such a scenario is beyond the existing models based on submodular set functions.
Optimal Budget Allocation: A similar situation also arises in the optimal budget allocation problem [2]. In this problem, we want to allocate budget among ad sources so that (at least) a certain
number of customers is influenced while minimizing the total budget. Again, we have to decide
how much budget should be set aside for each ad source, and hence set functions cannot capture the
problem.
We note that a function f : 2S ? R+ can be seen as a function defined on a Boolean hypercube
{0, 1}^S. Then, the above real scenarios prompt us to generalize the submodularity and the diminishing return property to functions defined on the integer lattice Z_+^S. The most natural generalization of the diminishing return property to a function f : Z_+^S → R_+ is the following inequality:
f(x + χ_s) − f(x) ≥ f(y + χ_s) − f(y)    (2)
for x ≤ y and s ∈ S, where χ_s is the s-th unit vector. If f satisfies (2), then f also satisfies the following lattice submodular inequality:
f(x) + f(y) ≥ f(x ∨ y) + f(x ∧ y)    (3)
for all x, y ∈ Z_+^S, where ∨ and ∧ are the coordinate-wise max and min operations, respectively.
While the submodularity and the diminishing return property are equivalent for set functions, this
is not the case for functions over the integer lattice; the diminishing return property (2) is stronger
than the lattice submodular inequality (3). We say that f is lattice submodular if f satisfies (3),
and if f further satisfies (2) we say that f is diminishing return submodular (DR-submodular for
short). One might feel that the DR-submodularity (2) is too restrictive. However, considering the
fact that the diminishing return is more crucial in applications, we may regard the DR-submodularity
(2) as the most natural generalization of the submodularity, at least for applications mentioned so
far [17, 6]. For example, under a natural condition, the objective function in the optimal budget allocation satisfies (2) [17]. The DR-submodularity was also considered in the context of submodular
welfare [6].
In this paper, we consider the following generalization of the submodular cover problem for set
functions: given a monotone DR-submodular function f : Z_+^S → R_+, a subadditive function c : Z_+^S → R_+, α > 0, and r ∈ Z_+, we are to
minimize c(x) subject to f(x) ≥ α, 0 ≤ x ≤ r1,    (4)
where we say that c is subadditive if c(x + y) ≤ c(x) + c(y) for all x, y ∈ Z_+^S. We call problem (4)
the DR-submodular cover problem. This problem encompasses problems that boil down to the submodular cover problem for set functions and their generalizations to the integer lattice. Furthermore,
the cost function c is generalized to a subadditive function. In particular, we note that two examples
given above can be rephrased using this problem (see Section 4 for details).
If c is also monotone DR-submodular, one can reduce the problem (4) to the set version (1) (for
technical details, see Section 3.1). The problem of this naive reduction is that it only yields a
pseudo-polynomial time algorithm; the running time depends on r rather than log r. Since r can be
huge in many practical settings (e.g., the maximum energy level of a sensor), even linear dependence
on r could make an algorithm impractical. Furthermore, for a general subadditive function c, this
naive reduction does not work.
1.1 Our Contribution
For the problem (4), we devise a bicriteria approximation algorithm based on the decreasing threshold technique of [3]. More precisely, our algorithm takes additional parameters 0 < ε, δ < 1. The output x ∈ Z_+^S of our algorithm is guaranteed to satisfy that c(x) is at most (1 + 3ε)ρ(1 + log(d/β)) times the optimum and f(x) ≥ (1 − δ)α, where ρ is the curvature of c (see Section 3 for the definition), d = max_s f(χ_s) is the maximum value of f over all standard unit vectors, and β is the minimum value of the positive increments of f in the feasible region.
Running Time (dependency on r): An important feature of our algorithm is that the running time depends on the bit length of r only polynomially, whereas the naive reduction algorithms depend on it exponentially, as mentioned above. More precisely, the running time of our algorithm is O((n/ε) log(nrc_max/(δc_min)) log r), which is polynomial in the input size, whereas the naive algorithm is only a pseudo-polynomial time algorithm. In fact, our experiments using real and synthetic datasets show
Approximation Guarantee: Our approximation guarantee on the cost is almost tight. Note that
the DR submodular cover problem (4) includes the set cover problem, in which we are given a
collection of sets, and we want to find a minimum number of sets that covers all the elements. In
our context, S corresponds to the collection of sets, the cost c is the number of chosen sets, and f
is the number of covered elements. It is known that we cannot obtain an o(log m)-approximation
unless P 6= NP, where m is the number of elements [16]. However, since for the set cover problem
we have ? = 1, d = O(m), and ? = 1, our approximation guarantee is O(log m).
1.2
Related Work
Our result can be compared with several results in the literature for the submodular cover problem
for set functions. It is shown by Wolsey [21] that if c(X) = |X|, a simple greedy algorithm yields
(1 + log ?d )-approximation, which coincides with our approximation ratio except for the (1 + 3)
factor. Note that ? = 1 when c(X) = |X|, or more generally, when c is modular. Recently, Wan
et al. [20] discussed a slightly different setting, in which c is also submodular and both f and c
are integer valued. They proved that the greedy algorithm achieves ?H(d)-approximation, where
H(d) = 1+1/2+? ? ?+1/d is the d-th harmonic number. Again, their ratio asymptotically coincides
with our approximation ratio (Note that ? ? 1 when f is integer valued).
Another common submodular-based model in machine learning is in the form of the submodular
maximization problem: Given a monotone submodular set function f : {0, 1}S ? R+ and a feasible
S
set P ? [0, 1] (e.g., a matroid polytope or a knapsack polytope), we want to maximize f (x) subject
to x ? P ? {0, 1}S . Such models can be widely found in various tasks as already described. We
note that the submodular cover problem and the submodular maximization problem are somewhat
dual to each other. Indeed, Iyer and Bilmes [5] showed that a bicriteria algorithm of one of these
problems yields a bicriteria algorithm for the other. Being parallel to our setting, generalizing the
submodular maximization problem to the integer lattice ZS+ is a natural question. In this direction,
Soma et al. [17] considered the maximization of lattice submodular functions (not necessarily being
DR-submodular) and devised a constant-factor approximation pseudo-polynomial time algorithm.
We note that our result is not implied by [17] via the duality of [5]. In fact, such reduction only
yields a pseudo-polynomial time algorithm.
1.3 Organization of This Paper
The rest of this paper is organized as follows: Section 2 sets the mathematical basics of submodular functions over the integer lattice. Section 3 describes our algorithm and the statement of our
main theorem. In Section 4, we show various experimental results using real and artificial datasets.
Section 5 sketches the proof of the main theorem. Finally, we conclude the paper in Section 6.
2 Preliminaries
Let S be a finite set. For each s ∈ S, we denote the s-th unit vector by χ_s; that is, χ_s(t) = 1 if t = s, and χ_s(t) = 0 otherwise. A function f : Z^S → R is said to be lattice submodular if f(x) + f(y) ≥ f(x ∨ y) + f(x ∧ y) for all x, y ∈ Z^S. A function f is monotone if f(x) ≤ f(y) for all x, y ∈ Z^S with x ≤ y. For x, y ∈ Z^S and a function f : Z^S → R, we denote f(y | x) := f(y + x) − f(x). A function f is diminishing return submodular (or DR-submodular) if f(x + χ_s) − f(x) ≥ f(y + χ_s) − f(y) for each x ≤ y ∈ Z^S and s ∈ S. For a DR-submodular function f, one can immediately check that f(kχ_s | x) ≥ f(kχ_s | y) for arbitrary x ≤ y, s ∈ S, and k ∈ Z_+. A function f is subadditive if f(x + y) ≤ f(x) + f(y) for x, y ∈ Z^S. For each x ∈ Z_+^S, we define {x} to be the multiset in which each s ∈ S is contained x(s) times.
In [17], a lattice submodular function f : Z^S → R is said to have the diminishing return property if f is coordinate-wise concave: f(x + 2χ_s) − f(x + χ_s) ≤ f(x + χ_s) − f(x) for each x ∈ Z^S and s ∈ S. We note that our definition is consistent with [17]. Formally, we have the following lemma, whose proof can be found in the Appendix.
Lemma 2.1. A function f : Z^S → R is DR-submodular if and only if f is lattice submodular and coordinate-wise concave.
The following is fundamental for a monotone DR-submodular function. A proof is placed in the Appendix due to the limitation of space.
Lemma 2.2. For a monotone DR-submodular function f, f(x) − f(y) ≤ Σ_{s∈{x}} f(χ_s | y) for arbitrary x, y ∈ Z^S.
3 Algorithm for the DR-Submodular Cover
Recall the DR-submodular cover problem (4). Let f : Z_+^S → R_+ be a monotone DR-submodular function and let c : Z_+^S → R_+ be a subadditive cost function. The objective is to minimize c(x) subject to f(x) ≥ α and 0 ≤ x ≤ r1, where α > 0 and r ∈ Z_+ are given constants. Without loss of generality, we can assume that max{f(x) : 0 ≤ x ≤ r1} = α (otherwise, we can consider f̂(x) := min{f(x), α} instead of f). Furthermore, we can assume c(x) > 0 for any x ∈ Z_+^S.
A pseudocode description of our algorithm is presented in Algorithm 1. The algorithm can be viewed as a modified version of the greedy algorithm and works as follows: We start with the initial solution x = 0 and increase each coordinate of x gradually. To determine the amount of the increments, the algorithm maintains a threshold θ that is initialized to be sufficiently large. For each s ∈ S, the algorithm finds the largest integer step size 0 < k ≤ r − x(s) such that the marginal cost-gain ratio f(kχ_s | x)/(k c(χ_s)) is above the threshold θ. If such a k exists, the algorithm updates x to x + kχ_s. After repeating this for each s ∈ S, the algorithm decreases the threshold θ by a factor of (1 − ε). If x becomes feasible, the algorithm returns the current x. Even if x does not become feasible, the final x satisfies f(x) ≥ (1 − δ)α if we iterate until θ gets sufficiently small.
Algorithm 1 Decreasing Threshold for the DR-Submodular Cover Problem
Input: f : Z_+^S → R_+, c : Z_+^S → R_+, r ∈ N, α > 0, ε > 0, δ > 0.
Output: 0 ≤ x ≤ r1 such that f(x) ≥ α.
1: x ← 0, d ← max_{s∈S} f(χ_s), c_min ← min_{s∈S} c(χ_s), c_max ← max_{s∈S} c(χ_s)
2: for (θ = d/c_min; θ ≥ (δ/(n c_max r)) d; θ ← θ(1 − ε)) do
3:   for all s ∈ S do
4:     Find the maximum integer 0 < k ≤ r − x(s) such that f(kχ_s | x)/(k c(χ_s)) ≥ θ with binary search.
5:     If such k exists then x ← x + kχ_s.
6:     If f(x) ≥ α then break the outer for loop.
7: return x
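To complement the pseudocode, here is a direct Python sketch of Algorithm 1 (illustrative only, not the authors' C++ implementation), assuming f and c are given as black-box functions on vectors represented as dicts.

```python
def decreasing_threshold(f, c, S, r, alpha, eps, delta):
    """Sketch of Algorithm 1. f and c map a dict x: S -> Z_+ to a number;
    f is monotone DR-submodular, c is subadditive and positive."""
    def unit(s):
        return {t: int(t == s) for t in S}
    x = {s: 0 for s in S}
    d = max(f(unit(s)) for s in S)
    c_unit = {s: c(unit(s)) for s in S}
    theta = d / min(c_unit.values())
    stop = delta * d / (len(S) * max(c_unit.values()) * r)
    while theta >= stop:
        for s in S:
            # Largest 0 < k <= r - x(s) with f(k*chi_s | x)/(k*c(chi_s)) >= theta,
            # via binary search: the ratio is non-increasing in k because
            # f(k*chi_s | x) is concave in k (coordinate-wise concavity).
            lo, hi, best = 1, r - x[s], 0
            fx = f(x)
            while lo <= hi:
                k = (lo + hi) // 2
                if (f({**x, s: x[s] + k}) - fx) / (k * c_unit[s]) >= theta:
                    best, lo = k, k + 1
                else:
                    hi = k - 1
            if best:
                x[s] += best
            if f(x) >= alpha:
                return x
        theta *= 1 - eps
    return x
```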
Before we claim the theorem, we need to define several parameters on f and c. Let β := min{f(χ_s | x) : s ∈ S, x ∈ Z_+^S, f(χ_s | x) > 0} and d := max_s f(χ_s). Let c_max := max_s c(χ_s) and c_min := min_s c(χ_s). Define the curvature of c to be
ρ := min_{x*: optimal solution} Σ_{s∈{x*}} c(χ_s) / c(x*).    (5)
Definition 3.1. For σ ≥ 1 and 0 < δ < 1, a vector x ∈ Z_+^S is a (σ, δ)-bicriteria approximate solution if c(x) ≤ σ · c(x*), f(x) ≥ (1 − δ)α, and 0 ≤ x ≤ r1.
Our main theorem is described below. We sketch the proof in Section 5.
Theorem 3.2. Algorithm 1 outputs a ((1 + 3ε)ρ(1 + log(d/β)), δ)-bicriteria approximate solution in O((n/ε) log(nrc_max/(δc_min)) log r) time.
3.1 Discussion
Integer-valued Case. Let us make a simple remark on the case that f is integer valued. Without loss of generality, we can assume α ∈ Z_+. Then, Algorithm 1 always returns a feasible solution for any 0 < δ < 1/α. Therefore, our algorithm can be easily modified into an approximation algorithm if f is integer valued.
Definition of Curvature. Several authors [5, 19] use a different notion of curvature called the total curvature, whose natural extension to a function over the integer lattice is as follows: the total curvature κ of c : Z_+^S → R_+ is defined as κ := 1 − min_{s∈S} c(χ_s | r1 − χ_s)/c(χ_s). Note that κ = 0 if c is modular, while ρ = 1 if c is modular. For example, Iyer and Bilmes [5] devised a bicriteria approximation algorithm whose approximation guarantee is roughly O((1 − κ)^{−1} log(d/β)).
Let us investigate the relation between ρ and κ for DR-submodular functions. One can show that 1 ≤ ρ ≤ (1 − κ)^{−1} (see Lemma E.1 in the Appendix), which means that our bound in terms of ρ is tighter than one in terms of (1 − κ)^{−1}.
Comparison to Naive Reduction Algorithm. If c is also a monotone DR-submodular function, one can reduce (4) to the set version (1) as follows. For each s ∈ S, create r copies of s and let S̃ be the set of these copies. For X̃ ⊆ S̃, define x_X̃ ∈ Z_+^S to be the integral vector such that x_X̃(s) is the number of copies of s contained in X̃. Then, f̃(X̃) := f(x_X̃) is submodular. Similarly, c̃(X̃) := c(x_X̃) is also submodular if c is a DR-submodular function. Therefore, we may apply a standard greedy algorithm of [20, 21] to the reduced problem, and this is exactly what Greedy does in our experiments (see Section 4). However, this straightforward reduction only yields a pseudo-polynomial time algorithm since |S̃| = nr; even if the original algorithm were linear, the resulting algorithm would require O(nr) time. Indeed, this difference is not negligible since r can be quite large in practical applications, as illustrated by our experimental evaluation.
Lazy Evaluation. We finally note that we can combine the lazy evaluation technique [11, 14], which significantly reduces the runtime in practice, with our algorithm. Specifically, we first push all the elements in S to a max-based priority queue. Here, the key of an element s ∈ S is f(χ_s)/c(χ_s). Then the inner loop of Algorithm 1 is modified as follows: Instead of checking all the elements in S, we pop elements whose keys are at least θ. For each popped element s ∈ S, we find k such that 0 < k ≤ r − x(s) with f(kχ_s | x)/(k c(χ_s)) ≥ θ with binary search. If there is such a k, we update x with x + kχ_s. Finally, we push s again with the key f(χ_s | x)/c(χ_s) if x(s) < r.
The correctness of this technique is obvious because of the DR-submodularity of f. In particular, the key of each element s ∈ S in the queue is always at least f(χ_s | x)/c(χ_s), where x is the current vector.
Hence, we never miss s ∈ S with f(kχ_s | x)/(k c(χ_s)) ≥ θ.
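The lazy inner loop can be realized with a standard binary heap; the sketch below is a minimal illustration of ours (names such as c_unit and f_x are assumptions, not from the paper), using Python's heapq with negated keys to simulate a max-heap.

```python
import heapq

def lazy_inner_loop(f, c_unit, x, r, theta, heap, f_x):
    """One pass of the lazy inner loop: pop elements whose (possibly stale)
    key may still be >= theta, re-evaluate, update x, and push back the
    refreshed key. heap holds (-key, s); keys only decrease over time by
    DR-submodularity, which is what makes lazy evaluation correct."""
    while heap and -heap[0][0] >= theta:
        _, s = heapq.heappop(heap)
        lo, hi, best = 1, r - x[s], 0
        while lo <= hi:  # binary search for the largest valid step size k
            k = (lo + hi) // 2
            if (f({**x, s: x[s] + k}) - f_x) / (k * c_unit[s]) >= theta:
                best, lo = k, k + 1
            else:
                hi = k - 1
        if best:
            x[s] += best
            f_x = f(x)
        if x[s] < r:  # refreshed key: current marginal gain per unit cost
            key = (f({**x, s: x[s] + 1}) - f_x) / c_unit[s]
            heapq.heappush(heap, (-key, s))
    return f_x
```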
4 Experiments
4.1 Experimental Setting
We conducted experiments on a Linux server with an Intel Xeon E5-2690 (2.90 GHz) processor and
256 GB of main memory. The experiments required, at most, 4 GB of memory. All the algorithms
were implemented in C++ and compiled with g++ 4.6.3.
In our experiments, the cost function c : Z_+^S → R_+ is always chosen as c(x) = ‖x‖₁ := Σ_{s∈S} x(s). Let f : Z_+^S → R_+ be a submodular function and α be the worst quality guarantee.
We implemented the following four methods:
• Decreasing-threshold is our method with the lazy evaluation technique. We chose δ = 0.01 unless stated otherwise.
• Greedy is a method in which, starting from x = 0, we iteratively increment x(s) for the s ∈ S that maximizes f(x + χ_s) − f(x) until we get f(x) ≥ α. We also implemented the lazy evaluation technique [11].
• Degree is a method in which we assign x(s) a value proportional to the marginal gain f(χ_s) − f(0), where ‖x‖₁ is determined by binary search so that f(x) ≥ α. Precisely speaking, x(s) is only approximately proportional to the marginal since x(s) must be an integer.
• Uniform is a method that returns k1 for the minimum k ∈ Z_+ such that f(k1) ≥ α.
We use the following real-world and synthetic datasets to confirm the accuracy and efficiency of our
method against other methods. We set r = 100,000 for both problems.
Sensor placement. We used a dataset acquired by running simulations on a 129-vertex sensor network used in the Battle of the Water Sensor Networks (BWSN) [15]. We used the "bwsn-utilities" [1] program to simulate 3000 random injection events in this network for a duration of 96 hours. Let S and E be the set of the 129 sensors in the network and the set of the 3000 events, respectively. For each sensor s ∈ S and event e ∈ E, a value z(s, e) is provided, which denotes the time, in minutes, at which the pollution has reached s after the injection time.¹
We define a function f : Z_+^S → R_+ as follows: Let x ∈ Z_+^S be a vector, where we regard x(s) as the energy level of the sensor s. Suppose that when the pollution reaches a sensor s, the probability that we can detect it is 1 − (1 − p)^{x(s)}, where p = 0.0001. In other words, by spending unit energy, we obtain an extra chance of detecting the pollution with probability p. For each event e ∈ E, let s_e be the first sensor at which the pollution is detected in that injection event. Note that s_e is a random variable. Let z∞ = max_{e∈E, s∈S} z(s, e). Then, we define f as follows:
f(x) = E_{e∈E} E_{s_e}[z∞ − z(s_e, e)],
where z(s_e, e) is defined as z∞ when there is no sensor that managed to detect the pollution. Intuitively speaking, E_{s_e}[z∞ − z(s_e, e)] expresses how much time we managed to save in the event e on average. Then, we take the average over all the events. A similar function was also used in [11] to measure the performance of a sensor allocation, although they only considered the case p = 1. This corresponds to the case that, by spending unit energy at a sensor s, we can always detect the pollution that has reached s. We note that f(x) is DR-submodular (see Lemma F.1 for the proof).
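The expectation over s_e can be evaluated exactly by scanning the sensors of each event in order of arrival time; the sketch below is our own illustration, and the data layout (z as per-event lists of (time, sensor) pairs sorted by time) is an assumption, not the paper's format.

```python
def sensor_objective(x, z, p=1e-4):
    """Sensor-placement objective f(x). z[e] is a time-sorted list of
    (time, sensor) pairs for event e; x[s] is the energy level of sensor s.
    The detection time of an event is the first successful detection."""
    z_inf = max(t for pairs in z for (t, _) in pairs)
    total = 0.0
    for pairs in z:
        expected_saving, prob_undetected = 0.0, 1.0
        for t, s in pairs:  # scan sensors in order of arrival time
            q = 1 - (1 - p) ** x[s]          # detection prob. at energy x[s]
            expected_saving += prob_undetected * q * (z_inf - t)
            prob_undetected *= 1 - q
        total += expected_saving             # an undetected event saves 0
    return total / len(z)
```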
Budget allocation problem. In order to observe the behavior of our algorithm on large-scale instances, we created a synthetic instance of the budget allocation problem [2, 17] as follows: The instance can be represented as a bipartite graph (S, T; E), where S is a set of 5,000 vertices and T is a set of 50,000 vertices. We regard a vertex in S as an ad source, and a vertex in T as a person. Then, we fix the degrees of vertices in S so that their distribution obeys a power law with exponent γ := 2.5; that is, the fraction of ad sources with out-degree d is proportional to d^{−γ}. For a vertex s ∈ S of the supposed degree d, we choose d vertices in T uniformly at random and connect them to s with edges. We define a function f : Z_+^S → R_+ as
f(x) = Σ_{t∈T} (1 − Π_{s∈Γ(t)} (1 − p)^{x(s)}),    (6)
where Γ(t) is the set of vertices connected to t and p = 0.0001. Here, we suppose that, by investing a unit cost in an ad source s ∈ S, we have an extra chance of influencing a person t ∈ T with s ∈ Γ(t) with probability p. Then, f(x) can be seen as the expected number of people influenced by the ad sources. We note that f is known to be a monotone DR-submodular function [17].
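Equation (6) translates directly into code; the few lines below are a sketch of ours, with the adjacency structure `neighbors` as an assumed representation.

```python
def budget_objective(x, neighbors, p=1e-4):
    """Expected number of influenced people, eq. (6).
    neighbors[t] is the set Gamma(t) of ad sources connected to person t;
    x[s] is the budget invested in source s."""
    total = 0.0
    for t, gamma_t in neighbors.items():
        miss = 1.0
        for s in gamma_t:
            miss *= (1 - p) ** x[s]   # prob. that s fails all x[s] chances
        total += 1 - miss             # prob. that t is influenced
    return total
```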
4.2 Experimental Results
Figure 1 illustrates the obtained objective value ‖x‖₁ for various choices of the worst quality guarantee α on each dataset. We chose ε = 0.01 in Decreasing threshold. We can observe that Decreasing threshold attains almost the same objective value as Greedy, and it outperforms Degree and Uniform.
Figure 2 illustrates the runtime for various choices of the worst quality guarantee α on each dataset. We chose ε = 0.01 in Decreasing threshold. We can observe that the runtime growth of Decreasing threshold is significantly slower than that of Greedy.
¹ Although three other values are provided, they showed similar empirical results and we omit them.
[Figure 1: objective value ‖x‖₁ versus the worst quality guarantee α, comparing Greedy, Decreasing threshold, Degree and Uniform. Panels: (a) sensor placement (BWSN); (b) budget allocation (synthetic).]
Figure 1: Objective values.
[Figure 2: runtime in seconds versus α for the same four methods. Panels: (a) sensor placement (BWSN); (b) budget allocation (synthetic).]
Figure 2: Runtime.
[Figure 3: relative increase of the objective value and runtime of our method against Greedy on the BWSN dataset, for ε ∈ {1.0, 0.1, 0.01, 0.001, 0.0001}. Panels: (a) relative cost increase; (b) runtime.]
Figure 3: Effect of ε.
Figures 3(a) and 3(b) show the relative increase of the objective value and the runtime, respectively, of our method against Greedy on the BWSN dataset. We can observe that the relative increase of the objective value gets smaller as α increases. This phenomenon can be well explained by considering the extreme case that α = max f(r1). In this case, we need to choose x = r1 anyway in order to achieve the worst quality guarantee, and the order in which the coordinates of x are increased does not matter. Also, we can see that the empirical runtime grows as a function of 1/ε, which matches our theoretical bound.
5 Proof of Theorem 3.2
In this section, we outline the proof of the main theorem. Proofs of some minor claims can be found
in the Appendix.
First, we introduce some notation. Let us assume that x is updated L times in the algorithm. Let x_i be the variable x after the i-th update (i = 0, . . . , L). Note that x_0 = 0 and x_L is the final output of the algorithm. Let s_i ∈ S and k_i ∈ Z_+ be the pair used in the i-th update for i = 1, . . . , L; that is, x_i = x_{i−1} + k_i χ_{s_i} for i = 1, . . . , L. Let Λ_0 := 0 and Λ_i := k_i c(χ_{s_i})/f(k_i χ_{s_i} | x_{i−1}) for i = 1, . . . , L. Let Λ̃_0 := 0 and Λ̃_i := θ_i^{−1} for i = 1, . . . , L, where θ_i is the threshold value on the i-th update. Note that Λ̃_{i−1} ≤ Λ̃_i for i = 1, . . . , L. Let x* be an optimal solution such that ρ · c(x*) = Σ_{s∈{x*}} c(χ_s).
We regard that in the i-th update, the elements of {x*} are charged by the value of Λ̃_i(f(χ_s | x_{i−1}) − f(χ_s | x_i)). Then, the total charge on {x*} is defined as
T(x, f) := Σ_{s∈{x*}} Σ_{i=1}^L Λ̃_i (f(χ_s | x_{i−1}) − f(χ_s | x_i)).
Claim 5.1. Let us fix 1 ≤ i ≤ L arbitrarily and let θ be the threshold value on the i-th update. Then,
f(k_i χ_{s_i} | x_{i−1})/(k_i c(χ_{s_i})) ≥ θ  and  f(χ_s | x_{i−1})/c(χ_s) ≤ θ/(1 − ε)  (s ∈ S).
Eliminating θ from the inequalities in Claim 5.1, we obtain
k_i c(χ_{s_i})/f(k_i χ_{s_i} | x_{i−1}) ≤ (1/(1 − ε)) · c(χ_s)/f(χ_s | x_{i−1})  (i = 1, . . . , L, s ∈ S).    (7)
Furthermore, we have Λ_i ≤ Λ̃_i ≤ (1/(1 − ε))Λ_i for i = 1, . . . , L.
Claim 5.2. c(x) ≤ (1/(1 − ε)) T(x, f).
Claim 5.3. For each s ∈ {x*}, the total charge on s is at most (1/(1 − ε))(1 + log(d/β)) c(χ_s).
Proof. Let us fix s ∈ {x*} and let l be the minimum i such that f(χ_s | x_i) = 0. By Claim 5.1 (cf. (7)), we have
Λ̃_i = θ_i^{−1} ≤ (1/(1 − ε)) · c(χ_s)/f(χ_s | x_{i−1})  (i = 1, . . . , l).
Then, we have
Σ_{i=1}^L Λ̃_i (f(χ_s | x_{i−1}) − f(χ_s | x_i))
  = Σ_{i=1}^{l−1} Λ̃_i (f(χ_s | x_{i−1}) − f(χ_s | x_i)) + Λ̃_l f(χ_s | x_{l−1})
  ≤ (1/(1 − ε)) c(χ_s) ( Σ_{i=1}^{l−1} (f(χ_s | x_{i−1}) − f(χ_s | x_i))/f(χ_s | x_{i−1}) + f(χ_s | x_{l−1})/f(χ_s | x_{l−1}) )
  = (1/(1 − ε)) c(χ_s) ( 1 + Σ_{i=1}^{l−1} (1 − f(χ_s | x_i)/f(χ_s | x_{i−1})) )
  ≤ (1/(1 − ε)) c(χ_s) ( 1 + Σ_{i=1}^{l−1} log( f(χ_s | x_{i−1})/f(χ_s | x_i) ) )    (since 1 − 1/x ≤ log x for x ≥ 1)
  = (1/(1 − ε)) c(χ_s) ( 1 + log( f(χ_s | x_0)/f(χ_s | x_{l−1}) ) )
  ≤ (1/(1 − ε)) (1 + log(d/β)) c(χ_s).
Proof of Theorem 3.2. Combining these claims, we have
c(x) ≤ (1/(1 − ε)) · T(x, f) ≤ (1/(1 − ε)²)(1 + log(d/β)) Σ_{s∈{x*}} c(χ_s) ≤ (1 + 3ε)(1 + log(d/β)) · ρ c(x*),
where the last step uses Σ_{s∈{x*}} c(χ_s) = ρ · c(x*) and 1/(1 − ε)² ≤ 1 + 3ε. Thus, x is an approximate solution with the desired ratio.
Let us see that x approximately satisfies the constraint; that is, f(x) ≥ (1 − δ)α. We will now consider a slightly modified version of the algorithm; in the modified algorithm, the threshold is updated until f(x) = α. Let x′ be the output of the modified algorithm. Then, we have
f(x′) − f(x) ≤ Σ_{s∈{x′}} f(χ_s | x) ≤ Σ_{s∈{x′}} (δ c(χ_s)/(c_max n r)) d ≤ δd ≤ δα.
The third inequality holds since c(χ_s) ≤ c_max and |{x′}| ≤ nr. Thus f(x) ≥ (1 − δ)α.
6 Conclusions
In this paper, motivated by real scenarios in machine learning, we generalized the submodular cover
problem via the diminishing return property over the integer lattice. We proposed a bicriteria approximation algorithm with the following properties: (i) The approximation ratio to the cost almost
matches the one guaranteed by the greedy algorithm [21] and is almost tight in general. (ii) We can
satisfy the worst solution quality with the desired accuracy. (iii) The running time of our algorithm
is roughly O(n log n log r). The dependency on r is exponentially better than that of the greedy algorithm. We confirmed by experiment that compared with the greedy algorithm, the solution quality
of our algorithm is almost the same and the runtime is several orders of magnitude faster.
Acknowledgments
The first author is supported by JSPS Grant-in-Aid for JSPS Fellows. The second author is supported
by JSPS Grant-in-Aid for Young Scientists (B) (No. 26730009), MEXT Grant-in-Aid for Scientific
Research on Innovative Areas (24106003), and JST, ERATO, Kawarabayashi Large Graph Project.
The authors thank Satoru Iwata and Yuji Nakatsukasa for reading a draft of this paper.
References
[1] http://www.water-simulation.com/wsp/about/bwsn/.
[2] N. Alon, I. Gamzu, and M. Tennenholtz. Optimizing budget allocation among channels and influencers. In Proc. of WWW, pages 381–388, 2012.
[3] A. Badanidiyuru and J. Vondrák. Fast algorithms for maximizing submodular functions. In Proc. of SODA, pages 1497–1514, 2014.
[4] Y. Chen, H. Shioi, C. A. F. Montesinos, L. P. Koh, S. Wich, and A. Krause. Active detection via adaptive submodularity. In Proc. of ICML, pages 55–63, 2014.
[5] R. Iyer and J. Bilmes. Submodular optimization with submodular cover and submodular knapsack constraints. In Proc. of NIPS, pages 2436–2444, 2013.
[6] M. Kapralov, I. Post, and J. Vondrák. Online submodular welfare maximization: Greedy is optimal. In Proc. of SODA, pages 1216–1225, 2012.
[7] D. Kempe, J. Kleinberg, and E. Tardos. Maximizing the spread of influence through a social network. In Proc. of KDD, pages 137–146, 2003.
[8] A. Krause and D. Golovin. Submodular function maximization. In Tractability: Practical Approaches to Hard Problems, pages 71–104. Cambridge University Press, 2014.
[9] A. Krause and J. Leskovec. Efficient sensor placement optimization for securing large water distribution networks. Journal of Water Resources Planning and Management, 134(6):516–526, 2008.
[10] A. Krause, A. Singh, and C. Guestrin. Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. The Journal of Machine Learning Research, 9:235–284, 2008.
[11] J. Leskovec, A. Krause, C. Guestrin, C. Faloutsos, J. VanBriesen, and N. Glance. Cost-effective outbreak detection in networks. In Proc. of KDD, pages 420–429, 2007.
[12] H. Lin and J. Bilmes. Multi-document summarization via budgeted maximization of submodular functions. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 912–920, 2010.
[13] H. Lin and J. Bilmes. A class of submodular functions for document summarization. In Proc. of NAACL, pages 510–520, 2011.
[14] M. Minoux. Accelerated greedy algorithms for maximizing submodular set functions. Optimization Techniques, Lecture Notes in Control and Information Sciences, 7:234–243, 1978.
[15] A. Ostfeld, J. G. Uber, E. Salomons, J. W. Berry, W. E. Hart, C. A. Phillips, J.-P. Watson, G. Dorini, P. Jonkergouw, Z. Kapelan, F. di Pierro, S.-T. Khu, D. Savic, D. Eliades, M. Polycarpou, S. R. Ghimire, B. D. Barkdoll, R. Gueli, J. J. Huang, E. A. McBean, W. James, A. Krause, J. Leskovec, S. Isovitsch, J. Xu, C. Guestrin, J. VanBriesen, M. Small, P. Fischbeck, A. Preis, M. Propato, O. Piller, G. B. Trachtman, Z. Y. Wu, and T. Walski. The battle of the water sensor networks (BWSN): A design challenge for engineers and algorithms. Journal of Water Resources Planning and Management, 134(6):556–568, 2008.
[16] R. Raz and S. Safra. A sub-constant error-probability low-degree test, and a sub-constant error-probability PCP characterization of NP. In Proc. of STOC, pages 475–484, 1997.
[17] T. Soma, N. Kakimura, K. Inaba, and K. Kawarabayashi. Optimal budget allocation: Theoretical guarantee and efficient algorithm. In Proc. of ICML, 2014.
[18] H. O. Song, R. Girshick, S. Jegelka, J. Mairal, Z. Harchaoui, and T. Darrell. On learning to localize objects with minimal supervision. In Proc. of ICML, 2014.
[19] M. Sviridenko, J. Vondrák, and J. Ward. Optimal approximation for submodular and supermodular optimization with bounded curvature. In Proc. of SODA, pages 1134–1148, 2015.
[20] P.-J. Wan, D.-Z. Du, P. Pardalos, and W. Wu. Greedy approximations for minimum submodular cover with submodular cost. Computational Optimization and Applications, 45(2):463–474, 2009.
[21] L. A. Wolsey. An analysis of the greedy algorithm for the submodular set covering problem. Combinatorica, 2(4):385–393, 1982.
5,444 | 5,928 | A Universal Catalyst for First-Order Optimization
Hongzhou Lin¹, Julien Mairal¹ and Zaid Harchaoui¹,²
¹Inria  ²NYU
{hongzhou.lin,julien.mairal}@inria.fr  zaid.harchaoui@nyu.edu
Abstract
We introduce a generic scheme for accelerating first-order optimization methods
in the sense of Nesterov, which builds upon a new analysis of the accelerated proximal point algorithm. Our approach consists of minimizing a convex objective
by approximately solving a sequence of well-chosen auxiliary problems, leading
to faster convergence. This strategy applies to a large class of algorithms, including gradient descent, block coordinate descent, SAG, SAGA, SDCA, SVRG,
Finito/MISO, and their proximal variants. For all of these methods, we provide
acceleration and explicit support for non-strongly convex objectives. In addition
to theoretical speed-up, we also show that acceleration is useful in practice, especially for ill-conditioned problems where we measure significant improvements.
1 Introduction
A large number of machine learning and signal processing problems are formulated as the minimization of a composite objective function F : ℝ^p → ℝ:
$$\min_{x \in \mathbb{R}^p} \Bigl\{ F(x) \triangleq f(x) + \psi(x) \Bigr\},\tag{1}$$
where f is convex and has Lipschitz continuous derivatives with constant L and ψ is convex but may not be differentiable. The variable x represents model parameters and the role of f is to ensure that the estimated parameters fit some observed data. Specifically, f is often a large sum of functions
$$f(x) \triangleq \frac{1}{n}\sum_{i=1}^n f_i(x),\tag{2}$$
and each term f_i(x) measures the fit between x and a data point indexed by i. The function ψ in (1) acts as a regularizer; it is typically chosen to be the squared ℓ₂-norm, which is smooth, or to be a non-differentiable penalty such as the ℓ₁-norm or another sparsity-inducing norm [2]. Composite minimization also encompasses constrained minimization if we consider extended-valued indicator functions ψ that may take the value +∞ outside of a convex set C and 0 inside (see [11]).
Our goal is to accelerate gradient-based or first-order methods that are designed to solve (1), with a particular focus on large sums of functions (2). By "accelerating", we mean generalizing a mechanism invented by Nesterov [17] that improves the convergence rate of the gradient descent algorithm. More precisely, when ψ = 0, gradient descent steps produce iterates (x_k)_{k≥0} such that F(x_k) − F* = O(1/k), where F* denotes the minimum value of F. Furthermore, when the objective F is strongly convex with constant μ, the rate of convergence becomes linear in O((1 − μ/L)^k). These rates were shown by Nesterov [16] to be suboptimal for the class of first-order methods; instead, optimal rates (O(1/k²) for the convex case and O((1 − √(μ/L))^k) for the μ-strongly convex one) could be obtained by taking gradient steps at well-chosen points. Later, this acceleration technique was extended to deal with non-differentiable regularization functions ψ [4, 19].
For modern machine learning problems involving a large sum of n functions, a recent effort has been
devoted to developing fast incremental algorithms [6, 7, 14, 24, 25, 27] that can exploit the particular
structure of (2). Unlike full gradient approaches, which require computing and averaging n gradients ∇f(x) = (1/n) Σ_{i=1}^n ∇f_i(x) at every iteration, incremental techniques have a cost per iteration that is independent of n. The price to pay is the need to store a moderate amount of information regarding past iterates, but the benefit is significant in terms of computational complexity.
Main contributions. Our main achievement is a generic acceleration scheme that applies to a
large class of optimization methods. By analogy with substances that increase chemical reaction
rates, we call our approach a "catalyst". A method may be accelerated if it has a linear convergence rate for strongly convex problems. This is the case for full gradient [4, 19] and block coordinate descent methods [18, 21], which already have well-known accelerated variants. More importantly, it also applies to incremental algorithms such as SAG [24], SAGA [6], Finito/MISO [7, 14],
SDCA [25], and SVRG [27]. Whether or not these methods could be accelerated was an important
open question. It was only known to be the case for dual coordinate ascent approaches such as
SDCA [26] or SPDC [28] for strongly convex objectives. Our work provides a universal positive answer regardless of the strong convexity of the objective, which brings us to our second achievement.
Some approaches such as Finito/MISO, SDCA, or SVRG are only defined for strongly convex objectives. A classical trick to apply them to general convex functions is to add a small regularization ε‖x‖² [25]. The drawback of this strategy is that it requires choosing in advance the parameter ε, which is related to the target accuracy. A consequence of our work is to automatically provide a direct support for non-strongly convex objectives, thus removing the need of selecting ε beforehand.
Other contribution: Proximal MISO. The approach Finito/MISO, which was proposed in [7] and [14], is an incremental technique for solving smooth unconstrained μ-strongly convex problems when n is larger than a constant βL/μ (with β = 2 in [14]). In addition to providing acceleration and support for non-strongly convex objectives, we also make the following specific contributions:
• we extend the method and its convergence proof to deal with the composite problem (1);
• we fix the method to remove the "big data condition" n ≥ βL/μ.
The resulting algorithm can be interpreted as a variant of proximal SDCA [25] with a different step
size and a more practical optimality certificate; that is, checking the optimality condition does not
require evaluating a dual objective. Our construction is indeed purely primal. Neither our proof of
convergence nor the algorithm use duality, while SDCA is originally a dual ascent technique.
Related work. The catalyst acceleration can be interpreted as a variant of the proximal point algorithm [3, 9], which is a central concept in convex optimization, underlying augmented Lagrangian
approaches, and composite minimization schemes [5, 20]. The proximal point algorithm consists
of solving (1) by minimizing a sequence of auxiliary problems involving a quadratic regularization term. In general, these auxiliary problems cannot be solved with perfect accuracy, and several
notions of inexactness were proposed, including [9, 10, 22]. The catalyst approach hinges upon
(i) an acceleration technique for the proximal point algorithm originally introduced in the pioneer
work [9]; (ii) a more practical inexactness criterion than those proposed in the past.1 As a result, we
are able to control the rate of convergence for approximately solving the auxiliary problems with
an optimization method M. In turn, we are also able to obtain the computational complexity of the
global procedure for solving (1), which was not possible with previous analysis [9, 10, 22]. When
instantiated in different first-order optimization settings, our analysis yields systematic acceleration.
Beyond [9], several works have inspired this paper. In particular, accelerated SDCA [26] is an
instance of an inexact accelerated proximal point algorithm, even though this was not explicitly
stated in [26]. Their proof of convergence relies on different tools than ours. Specifically, we use the
concept of estimate sequence from Nesterov [17], whereas the direct proof of [26], in the context
of SDCA, does not extend to non-strongly convex objectives. Nevertheless, part of their analysis
proves to be helpful to obtain our main results. Another useful methodological contribution was the
convergence analysis of inexact proximal gradient methods of [23]. Finally, similar ideas appear in
the independent work [8]. Their results overlap in part with ours, but both papers adopt different
directions. Our analysis is for instance more general and provides support for non-strongly convex
objectives. Another independent work with related results is [13], which introduce an accelerated
method for the minimization of finite sums, which is not based on the proximal point algorithm.
¹Note that our inexact criterion was also studied, among others, in [22], but the analysis of [22] led to the conjecture that this criterion was too weak to warrant acceleration. Our analysis refutes this conjecture.
2 The Catalyst Acceleration
We present here our generic acceleration scheme, which can operate on any first-order or gradient-based optimization algorithm with linear convergence rate for strongly convex objectives.
Linear convergence and acceleration. Consider the problem (1) with a μ-strongly convex function F, where the strong convexity is defined with respect to the ℓ₂-norm. A minimization algorithm M, generating the sequence of iterates (x_k)_{k≥0}, has a linear convergence rate if there exists τ_{M,F} in (0, 1) and a constant C_{M,F} in ℝ such that
$$F(x_k) - F^* \le C_{\mathcal{M},F}\,(1 - \tau_{\mathcal{M},F})^k,\tag{3}$$
where F* denotes the minimum value of F. The quantity τ_{M,F} controls the convergence rate: the larger τ_{M,F}, the faster the convergence to F*. However, for a given algorithm M, the quantity τ_{M,F} usually depends on the ratio L/μ, which is often called the condition number of F.
The catalyst acceleration is a general approach that allows us to wrap algorithm M into an accelerated algorithm A, which enjoys a faster linear convergence rate, with τ_{A,F} ≥ τ_{M,F}. As we will also see, the catalyst acceleration may also be useful when F is not strongly convex, that is, when μ = 0. In that case, we may even consider a method M that requires strong convexity to operate, and obtain an accelerated algorithm A that can minimize F with near-optimal convergence rate Õ(1/k²).²
Our approach can accelerate a wide range of first-order optimization algorithms, starting from classical gradient descent. It also applies to randomized algorithms such as SAG, SAGA, SDCA, SVRG
and Finito/MISO, whose rates of convergence are given in expectation. Such methods should be
contrasted with stochastic gradient methods [15, 12], which minimize a different non-deterministic
function. Acceleration of stochastic gradient methods is beyond the scope of this work.
Catalyst action. We now highlight the mechanics of the catalyst algorithm, which is presented in Algorithm 1. It consists of replacing, at iteration k, the original objective function F by an auxiliary objective G_k, close to F up to a quadratic term:
$$G_k(x) \triangleq F(x) + \frac{\kappa}{2}\|x - y_{k-1}\|^2,\tag{4}$$
where κ will be specified later and y_k is obtained by an extrapolation step described in (6). Then, at iteration k, the accelerated algorithm A minimizes G_k up to accuracy ε_k.
Substituting (4) into (1) has two consequences. On the one hand, minimizing (4) only provides an approximation of the solution of (1), unless κ = 0; on the other hand, the auxiliary objective G_k enjoys a better condition number than the original objective F, which makes it easier to minimize. For instance, when M is the regular gradient descent algorithm with ψ = 0, M has the rate of convergence (3) for minimizing F with τ_{M,F} = μ/L. However, owing to the additional quadratic term, G_k can be minimized by M with the rate (3) where τ_{M,G_k} = (μ + κ)/(L + κ) > τ_{M,F}. In practice, there exists an "optimal" choice for κ, which controls the time required by M for solving the auxiliary problems (4), and the quality of approximation of F by the functions G_k. This choice will be driven by the convergence analysis in Sec. 3.1-3.3; see also Sec. C for special cases.
Acceleration via extrapolation and inexact minimization. Similar to the classical gradient descent scheme of Nesterov [17], Algorithm 1 involves an extrapolation step (6). As a consequence, the solution of the auxiliary problem (5) at iteration k + 1 is driven towards the extrapolated variable y_k. As shown in [9], this step is in fact sufficient to reduce the number of iterations of Algorithm 1 to solve (1) when ε_k = 0, that is, for running the exact accelerated proximal point algorithm.
Nevertheless, to control the total computational complexity of an accelerated algorithm A, it is necessary to take into account the complexity of solving the auxiliary problems (5) using M. This is where our approach differs from the classical proximal point algorithm of [9]. Essentially, both algorithms are the same, but we use the weaker inexactness criterion G_k(x_k) − G_k* ≤ ε_k, where the sequence (ε_k)_{k≥0} is fixed beforehand, and only depends on the initial point. This subtle difference has important consequences: (i) in practice, this condition can often be checked by computing duality gaps; (ii) in theory, the methods M we consider have linear convergence rates, which allows us to control the complexity of step (5), and then to provide the computational complexity of A.
²In this paper, we use the notation O(·) to hide constants. The notation Õ(·) also hides logarithmic factors.
Algorithm 1 Catalyst
input: initial estimate x₀ ∈ ℝ^p, parameters κ and α₀, sequence (ε_k)_{k≥0}, optimization method M;
1: Initialize q = μ/(μ + κ) and y₀ = x₀;
2: while the desired stopping criterion is not satisfied do
3:   Find an approximate solution of the following problem using M:
$$x_k \approx \arg\min_{x \in \mathbb{R}^p} \Bigl\{ G_k(x) \triangleq F(x) + \frac{\kappa}{2}\|x - y_{k-1}\|^2 \Bigr\} \quad \text{such that } G_k(x_k) - G_k^* \le \epsilon_k. \tag{5}$$
4:   Compute α_k ∈ (0, 1) from the equation α_k² = (1 − α_k)α_{k−1}² + qα_k;
5:   Compute
$$y_k = x_k + \beta_k (x_k - x_{k-1}) \quad \text{with} \quad \beta_k = \frac{\alpha_{k-1}(1 - \alpha_{k-1})}{\alpha_{k-1}^2 + \alpha_k}. \tag{6}$$
6: end while
output: x_k (final estimate).
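To make the interplay between the outer loop and the wrapped method M concrete, here is a minimal Python sketch of Algorithm 1. The interface solve_aux(y, eps), assumed to approximately minimize G(x) = F(x) + (κ/2)‖x − y‖² up to accuracy eps (warm-started internally, e.g., at the previous iterate), and the use of F(x₀) in place of F(x₀) − F* to scale the tolerances (valid for non-negative F, cf. Section 3.1) are assumptions of this sketch, not part of the algorithm's specification.

```python
import math

def catalyst(F, solve_aux, x0, mu, kappa, rho, n_iters, eta=0.1):
    # Outer loop of Algorithm 1; x0 is a NumPy array.
    q = mu / (mu + kappa)
    alpha = math.sqrt(q) if mu > 0 else (math.sqrt(5) - 1) / 2
    x, y = x0, x0
    eps0 = (2.0 / 9.0) * F(x0)   # upper bound on (2/9)(F(x0) - F*) when F >= 0
    for k in range(1, n_iters + 1):
        # tolerance schedules of Theorem 3.1 (mu > 0) and Theorem 3.3 (mu = 0)
        eps = eps0 * (1 - rho) ** k if mu > 0 else eps0 / (k + 2) ** (4 + eta)
        x_prev, x = x, solve_aux(y, eps)          # step 3: inexact prox point
        # step 4: alpha_k solves alpha^2 = (1 - alpha)*alpha_prev^2 + q*alpha
        a2 = alpha * alpha
        alpha_new = 0.5 * ((q - a2) + math.sqrt((q - a2) ** 2 + 4 * a2))
        beta = alpha * (1 - alpha) / (a2 + alpha_new)
        y = x + beta * (x - x_prev)               # step 5: extrapolation (6)
        alpha = alpha_new
    return x
```

Following the theory below, rho should be chosen smaller than √q when μ > 0 (e.g., 0.9√q), and the effectiveness of the scheme in practice relies on warm-starting solve_aux across outer iterations.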
3 Convergence Analysis
In this section, we present the theoretical properties of Algorithm 1, for optimization methods M
with deterministic convergence rates of the form (3). When the rate is given as an expectation, a
simple extension of our analysis described in Section 4 is needed. For space limitation reasons, we
shall sketch the proof mechanics here, and defer the full proofs to Appendix B.
3.1 Analysis for μ-Strongly Convex Objective Functions
We first analyze the convergence rate of Algorithm 1 for solving problem (1), regardless of the complexity required to solve the subproblems (5). We start with the μ-strongly convex case.
Theorem 3.1 (Convergence of Algorithm 1, μ-Strongly Convex Case).
Choose α₀ = √q with q = μ/(μ + κ) and
$$\epsilon_k = \frac{2}{9}\,(F(x_0) - F^*)\,(1-\rho)^k \quad \text{with} \quad \rho < \sqrt{q}.$$
Then, Algorithm 1 generates iterates (x_k)_{k≥0} such that
$$F(x_k) - F^* \le C(1-\rho)^{k+1}(F(x_0) - F^*) \quad \text{with} \quad C = \frac{8}{(\sqrt{q}-\rho)^2}.\tag{7}$$
This theorem characterizes the linear convergence rate of Algorithm 1. It is worth noting that the choice of ρ is left to the discretion of the user, but it can safely be set to ρ = 0.9√q in practice. The choice α₀ = √q was made for convenience purposes since it leads to a simplified analysis, but larger values are also acceptable, both from theoretical and practical points of view. Following an advice from Nesterov [17, page 81] originally dedicated to his classical gradient descent algorithm, we may for instance recommend choosing α₀ such that α₀² + (1 − q)α₀ − 1 = 0.
The choice of the sequence (ε_k)_{k≥0} is also subject to discussion since the quantity F(x₀) − F* is unknown beforehand. Nevertheless, an upper bound may be used instead, which will only affect the corresponding constant in (7). Such upper bounds can typically be obtained by computing a duality gap at x₀, or by using additional knowledge about the objective. For instance, when F is non-negative, we may simply choose ε_k = (2/9)F(x₀)(1 − ρ)^k.
The proof of convergence uses the concept of estimate sequence invented by Nesterov [17], and introduces an extension to deal with the errors (ε_k)_{k≥0}. To control the accumulation of errors, we borrow the methodology of [23] for inexact proximal gradient algorithms. Our construction yields a convergence result that encompasses both strongly convex and non-strongly convex cases. Note that estimate sequences were also used in [9], but, as noted by [22], the proof of [9] only applies when using an extrapolation step (6) that involves the true minimizer of (5), which is unknown in practice. To obtain a rigorous convergence result like (7), a different approach was needed.
Theorem 3.1 is important, but it does not yet provide the global computational complexity of the full algorithm, which includes the number of iterations performed by M for approximately solving the auxiliary problems (5). The next proposition characterizes the complexity of this inner loop.
Proposition 3.2 (Inner-Loop Complexity, μ-Strongly Convex Case).
Under the assumptions of Theorem 3.1, let us consider a method M generating iterates (z_t)_{t≥0} for minimizing the function G_k with linear convergence rate of the form
$$G_k(z_t) - G_k^* \le A(1-\tau_{\mathcal{M}})^t\,(G_k(z_0) - G_k^*).\tag{8}$$
When z₀ = x_{k−1}, the precision ε_k is reached with a number of iterations T_M = Õ(1/τ_M), where the notation Õ hides some universal constants and some logarithmic dependencies in μ and κ.
This proposition is generic since the assumption (8) is relatively standard for gradient-based methods [17]. It may now be used to obtain the global rate of convergence of an accelerated algorithm. By calling F_s the objective function value obtained after performing s = kT_M iterations of the method M, the true convergence rate of the accelerated algorithm A is
$$F_s - F^* = F\bigl(x_{s/T_{\mathcal{M}}}\bigr) - F^* \le C(1-\rho)^{s/T_{\mathcal{M}}}(F(x_0) - F^*) \le C\Bigl(1 - \frac{\rho}{T_{\mathcal{M}}}\Bigr)^{s} (F(x_0) - F^*).\tag{9}$$
As a result, algorithm A has a global linear rate of convergence with parameter
$$\tau_{\mathcal{A},F} = \rho/T_{\mathcal{M}} = \tilde{O}(\tau_{\mathcal{M}}\sqrt{\mu}/\sqrt{\mu+\kappa}),$$
where τ_M typically depends on κ (the greater, the faster is M). Consequently, κ will be chosen to maximize the ratio τ_M/√(μ + κ). Note that for other algorithms M that do not satisfy (8), additional analysis and possibly a different initialization z₀ may be necessary (see Appendix D for example).
3.2 Convergence Analysis for Convex but Non-Strongly Convex Objective Functions
We now state the convergence rate when the objective is not strongly convex, that is, when μ = 0.
Theorem 3.3 (Convergence of Algorithm 1, Convex, but Non-Strongly Convex Case).
When μ = 0, choose α₀ = (√5 − 1)/2 and
$$\epsilon_k = \frac{2(F(x_0) - F^*)}{9(k+2)^{4+\eta}} \quad \text{with} \quad \eta > 0.\tag{10}$$
Then, Algorithm 1 generates iterates (x_k)_{k≥0} such that
$$F(x_k) - F^* \le \frac{8}{(k+2)^2}\left( \Bigl(1 + \frac{2}{\eta}\Bigr)^{2} (F(x_0) - F^*) + \frac{\kappa}{2}\|x_0 - x^*\|^2 \right).\tag{11}$$
This theorem is the counter-part of Theorem 3.1 when μ = 0. The choice of η is left to the discretion of the user; it empirically seems to have very low influence on the global convergence speed, as long as it is chosen small enough (e.g., we use η = 0.1 in practice). It shows that Algorithm 1 achieves the optimal rate of convergence of first-order methods, but it does not take into account the complexity of solving the subproblems (5). Therefore, we need the following proposition:
Proposition 3.4 (Inner-Loop Complexity, Non-Strongly Convex Case).
Assume that F has bounded level sets. Under the assumptions of Theorem 3.3, let us consider a method M generating iterates (z_t)_{t≥0} for minimizing the function G_k with linear convergence rate of the form (8). Then, there exists T_M = Õ(1/τ_M), such that for any k ≥ 1, solving G_k with initial point x_{k−1} requires at most T_M log(k + 2) iterations of M.
We can now draw up the global complexity of an accelerated algorithm A when M has a linear convergence rate (8) for κ-strongly convex objectives. To produce x_k, M is called at most kT_M log(k + 2) times. Using the global iteration counter s = kT_M log(k + 2), we get
$$F_s - F^* \le \frac{8T_{\mathcal{M}}^2 \log^2(s)}{s^2}\left( \Bigl(1+\frac{2}{\eta}\Bigr)^{2} (F(x_0) - F^*) + \frac{\kappa}{2}\|x_0 - x^*\|^2 \right).\tag{12}$$
If M is a first-order method, this rate is near-optimal, up to a logarithmic factor, when compared to the optimal rate O(1/s²), which may be the price to pay for using a generic acceleration scheme.
4 Acceleration in Practice
We show here how to accelerate existing algorithms M and compare the convergence rates obtained before and after catalyst acceleration. For all the algorithms we consider, we study rates of convergence in terms of total number of iterations (in expectation, when necessary) to reach accuracy ε.
We first show how to accelerate full gradient and randomized coordinate descent algorithms [21].
Then, we discuss other approaches such as SAG [24], SAGA [6], or SVRG [27]. Finally, we present
a new proximal version of the incremental gradient approaches Finito/MISO [7, 14], along with its
accelerated version. Table 4.1 summarizes the acceleration obtained for the algorithms considered.
Deriving the global rate of convergence. The convergence rate of an accelerated algorithm A is driven by the parameter κ. In the strongly convex case, the best choice is the one that maximizes the ratio τ_{M,G_k}/√(μ + κ). As discussed in Appendix C, this rule also holds when (8) is given in expectation and in many cases where the constant C_{M,G_k} is different than A(G_k(z₀) − G_k*) from (8). When μ = 0, the choice of κ > 0 only affects the complexity by a multiplicative constant. A rule of thumb is to maximize the ratio τ_{M,G_k}/√(L + κ) (see Appendix C for more details).
After choosing κ, the global iteration-complexity is given by Comp ≤ k_in · k_out, where k_in is an upper bound on the number of iterations performed by M per inner loop, and k_out is the upper bound on the number of outer-loop iterations, following from Theorems 3.1-3.3. Note that for simplicity, we always consider that L ≫ μ, such that we may write L − μ simply as "L" in the convergence rates.
4.1 Acceleration of Existing Algorithms
Composite minimization. Most of the algorithms we consider here, namely the proximal gradient method [4, 19], SAGA [6], and (Prox)-SVRG [27], can handle composite objectives with a regularization penalty ψ that admits a proximal operator prox_ψ, defined for any z as
$$\text{prox}_\psi(z) \triangleq \arg\min_{y \in \mathbb{R}^p} \Bigl\{ \psi(y) + \frac{1}{2}\|y - z\|^2 \Bigr\}.$$
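As a concrete instance of such an operator, the sketch below implements the classical closed form for the ℓ₁-penalty ψ(y) = λ‖y‖₁ (soft-thresholding); the choice of ℓ₁ here is ours, for illustration only.

```python
import numpy as np

def prox_l1(z, lam):
    # prox of psi(y) = lam * ||y||_1:
    # argmin_y lam*||y||_1 + 0.5*||y - z||^2, solved coordinate-wise.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)
```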
Table 4.1 presents convergence rates that are valid for proximal and non-proximal settings, since
most methods we consider are able to deal with such non-differentiable penalties. The exception is
SAG [24], for which proximal variants are not analyzed. The incremental method Finito/MISO has
also been limited to non-proximal settings so far. In Section 4.2, we actually introduce the extension
of MISO to composite minimization, and establish its theoretical convergence rates.
Full gradient method. A first illustration is the algorithm obtained when accelerating the regular "full" gradient descent (FG), and how it contrasts with Nesterov's accelerated variant (AFG). Here, the optimal choice for κ is L − 2μ. In the strongly convex case, we get an accelerated rate of convergence in Õ(n√(L/μ) log(1/ε)), which is the same as AFG up to logarithmic terms. A similar result can also be obtained for randomized coordinate descent methods [21].
Randomized incremental gradient. We now consider randomized incremental gradient methods, resp. SAG [24] and SAGA [6]. When μ > 0, we focus on the "ill-conditioned" setting n ≤ L/μ, where these methods have the complexity O((L/μ) log(1/ε)). Otherwise, their complexity becomes O(n log(1/ε)), which is independent of the condition number and seems theoretically optimal [1].
For these methods, the best choice for κ has the form κ = a(L − μ)/(n + b) − μ, with (a, b) = (2, −2) for SAG, (a, b) = (1/2, 1/2) for SAGA. A similar formula, with a constant L′ in place of L, holds for SVRG; we omit it here for brevity. SDCA [26] and Finito/MISO [7, 14] are actually related to incremental gradient methods, and the choice for κ has a similar form with (a, b) = (1, 1).
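For concreteness, these rule-of-thumb choices can be packaged as a small helper; the clamp at zero for degenerate parameter values is our own safeguard for the sketch, not part of the rules quoted above.

```python
def catalyst_kappa(method, L, mu, n):
    # kappa = a*(L - mu)/(n + b) - mu for incremental methods (Sec. 4.1),
    # kappa = L - 2*mu for full gradient descent.
    if method == "fg":
        return L - 2 * mu
    a, b = {"sag": (2.0, -2.0), "saga": (0.5, 0.5),
            "sdca": (1.0, 1.0), "miso": (1.0, 1.0)}[method]
    return max(a * (L - mu) / (n + b) - mu, 0.0)
```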
4.2 Proximal MISO and its Acceleration
Finito/MISO was proposed in [7] and [14] for solving the problem (1) when ψ = 0 and when f is a sum of n μ-strongly convex functions f_i as in (2), which are also differentiable with L-Lipschitz derivatives. The algorithm maintains a list of quadratic lower bounds, say (d_i^k)_{i=1}^n at iteration k, of the functions f_i, and randomly updates one of them at each iteration by using strong-convexity
Method                 Comp. μ > 0            Comp. μ = 0     Catalyst μ > 0           Catalyst μ = 0
FG                     O(n(L/μ) log(1/ε))     O(nL/ε)         Õ(n√(L/μ) log(1/ε))      Õ(n√(L/ε))
SAG [24], SAGA [6],
Finito/MISO-Prox,
SDCA [25], SVRG [27]   O((L/μ) log(1/ε))      not avail.      Õ(√(nL/μ) log(1/ε))      Õ(√(nL/ε))
Acc-FG [19]            O(n√(L/μ) log(1/ε))    O(n√(L/ε))      no acceleration          no acceleration
Acc-SDCA [26]          Õ(√(nL/μ) log(1/ε))    not avail.      no acceleration          no acceleration

Table 1: Comparison of rates of convergence, before and after the catalyst acceleration, resp. in the strongly-convex and non-strongly-convex cases. To simplify, we only present the case where n ≤ L/μ when μ > 0. For all incremental algorithms, there is indeed no acceleration otherwise. The quantity L′ for SVRG is the average Lipschitz constant of the functions f_i (see [27]).
inequalities. The current iterate x_k is then obtained by minimizing the lower bound of the objective
$$x_k = \arg\min_{x \in \mathbb{R}^p} \Bigl\{ D_k(x) = \frac{1}{n}\sum_{i=1}^n d_i^k(x) \Bigr\}.\tag{13}$$
Interestingly, since D_k is a lower bound of F we also have D_k(x_k) ≤ F*, and thus the quantity F(x_k) − D_k(x_k) can be used as an optimality certificate that upper-bounds F(x_k) − F*. Furthermore, this certificate was shown to converge to zero with a rate similar to SAG/SDCA/SVRG/SAGA under the condition n ≥ 2L/μ. In this section, we show how to remove this condition and how to provide support to non-differentiable functions ψ whose proximal operator can be easily computed. We shall briefly sketch the main ideas, and we refer to Appendix D for a thorough presentation.
The first idea to deal with a nonsmooth regularizer ψ is to change the definition of D_k:
$$D_k(x) = \frac{1}{n}\sum_{i=1}^n d_i^k(x) + \psi(x),$$
which was also proposed in [7] without a convergence proof. Then, because the d_i^k's are quadratic functions, the minimizer x_k of D_k can be obtained by computing the proximal operator of ψ at a particular point. The second idea to remove the condition n ≥ 2L/μ is to modify the update of the lower bounds d_i^k. Assume that index i_k is selected among {1, . . . , n} at iteration k; then
$$d_i^k(x) = \begin{cases} (1-\delta)\,d_i^{k-1}(x) + \delta\Bigl(f_i(x_{k-1}) + \langle \nabla f_i(x_{k-1}),\, x - x_{k-1}\rangle + \frac{\mu}{2}\|x - x_{k-1}\|^2\Bigr) & \text{if } i = i_k, \\ d_i^{k-1}(x) & \text{otherwise.} \end{cases}$$
Whereas the original Finito/MISO uses δ = 1, our new variant uses δ = min(1, μn/(2(L − μ))). The resulting algorithm turns out to be very close to variant "5" of proximal SDCA [25], which corresponds to using a different value for δ. The main difference between SDCA and MISO-Prox is that the latter does not use duality. It also provides a different (simpler) optimality certificate F(x_k) − D_k(x_k), which is guaranteed to converge linearly, as stated in the next theorem.
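Because every d_i^k is a quadratic with curvature exactly μ/2, it can be represented by its center alone, which yields a compact implementation. The sketch below follows this parameterization; the interfaces grad(i, x) (returning ∇f_i(x)) and prox_psi(v, gamma) (returning prox_{γψ}(v)), as well as the epoch-based loop, are assumptions of the sketch rather than the paper's reference code.

```python
import numpy as np

def miso_prox(grad, prox_psi, x0, n, L, mu, n_epochs, seed=0):
    # Each d_i(x) = (mu/2)*||x - z_i||^2 + const, so D_k is minimized at
    # prox_{psi/mu} applied to the average center z_bar.
    rng = np.random.default_rng(seed)
    delta = min(1.0, mu * n / (2.0 * (L - mu)))
    z = np.stack([x0 - grad(i, x0) / mu for i in range(n)])
    z_bar = z.mean(axis=0)
    x = prox_psi(z_bar, 1.0 / mu)
    for _ in range(n_epochs * n):
        i = rng.integers(n)
        z_new = (1 - delta) * z[i] + delta * (x - grad(i, x) / mu)
        z_bar += (z_new - z[i]) / n      # maintain the running average
        z[i] = z_new
        x = prox_psi(z_bar, 1.0 / mu)    # x_k = argmin_x D_k(x)
    return x
```

The mixture update on the centers is exactly the convex combination of quadratics displayed above: both terms share the curvature μ/2, so only the centers need to be averaged.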
Theorem 4.1 (Convergence of MISO-Prox).
Let (x_k)_{k≥0} be obtained by MISO-Prox; then
$$\mathbb{E}[F(x_k)] - F^* \le \frac{1}{\tau}(1-\tau)^{k+1}\bigl(F(x_0) - D_0(x_0)\bigr) \quad \text{with} \quad \tau \ge \min\Bigl\{\frac{\mu}{4L}, \frac{1}{2n}\Bigr\}.\tag{14}$$
Furthermore, we also have fast convergence of the certificate
$$\mathbb{E}[F(x_k) - D_k(x_k)] \le \frac{1}{\tau}(1-\tau)^{k}\bigl(F^* - D_0(x_0)\bigr).$$
The proof of convergence is given in Appendix D. Finally, we conclude this section by noting that MISO-Prox enjoys the catalyst acceleration, leading to the iteration-complexity presented in Table 4.1. Since the convergence rate (14) does not have exactly the same form as (8), Propositions 3.2 and 3.4 cannot be used and additional analysis, given in Appendix D, is needed. Practical forms of the algorithm are also presented there, along with discussions on how to initialize it.
5 Experiments
We evaluate the Catalyst acceleration on three methods that have never been accelerated in the past: SAG [24], SAGA [6], and MISO-Prox. We focus on ℓ₂-regularized logistic regression, where the regularization parameter μ yields a lower bound on the strong convexity parameter of the problem. We use three datasets used in [14], namely real-sim, rcv1, and ocr, which are relatively large, with up to n = 2 500 000 points for ocr and p = 47 152 variables for rcv1. We consider three regimes: μ = 0 (no regularization), μ/L = 0.001/n and μ/L = 0.1/n, which leads to significantly larger condition numbers than those used in other studies (μ/L ≈ 1/n in [14, 24]). We compare MISO, SAG, and SAGA with their default parameters, which are recommended by their theoretical analysis (step-sizes 1/L for SAG and 1/3L for SAGA), and study several accelerated variants. The values of κ and ρ and the sequences (ε_k)_{k≥0} are those suggested in the previous sections, with η = 0.1 in (10). Other implementation details are presented in Appendix E.
The restarting strategy for M is key to achieving acceleration in practice. All of the methods we compare store n gradients evaluated at previous iterates of the algorithm. We always use the gradients from the previous run of M to initialize a new one. We detail in Appendix E the initialization for each method. Finally, we evaluated a heuristic that constrains M to always perform at most n iterations (one pass over the data); we call this variant AMISO2 for MISO, whereas AMISO1 refers to the regular "vanilla" accelerated variant, and we also use this heuristic to accelerate SAG.
The results are reported in Figure 1. We always obtain a huge speed-up for MISO, which suffers from numerical stability issues when the condition number is very large (for instance, μ/L = 10⁻³/n ≈ 4·10⁻¹⁰ for ocr). Here, not only does the catalyst algorithm accelerate MISO, but it also stabilizes it. Whereas MISO is slower than SAG and SAGA in this "small μ" regime, AMISO2 is almost systematically the best performer. We are also able to accelerate SAG and SAGA in general, even though the improvement is less significant than for MISO. In particular, SAGA without acceleration proves to be the best method on ocr. One reason may be its ability to adapt to the unknown strong convexity parameter μ* ≥ μ of the objective near the solution. When μ*/L ≥ 1/n, we indeed obtain a regime where acceleration does not occur (see Sec. 4). Therefore, this experiment suggests that adaptivity to unknown strong convexity is of high interest for incremental optimization.
[Figure 1 appears here: a 3×3 grid of plots, one row per dataset (real-sim, rcv1, ocr) and one column per regime (μ = 0, μ/L = 10⁻³/n, μ/L = 10⁻¹/n), showing the objective function value or the relative duality gap (log scale) against the number of passes over the data for MISO, AMISO1, AMISO2, SAG, ASAG, SAGA, and ASAGA.]
Figure 1: Objective function value (or duality gap) for different numbers of passes performed over each dataset. The legend for all curves is on the top right. AMISO, ASAGA, ASAG refer to the accelerated variants of MISO, SAGA, and SAG, respectively.
Acknowledgments
This work was supported by ANR (MACARON ANR-14-CE23-0003-01), MSR-Inria joint centre,
CNRS-Mastodons program (Titan), and NYU Moore-Sloan Data Science Environment.
References
[1] A. Agarwal and L. Bottou. A lower bound for the optimization of finite sums. In Proc. International Conference on Machine Learning (ICML), 2015.
[2] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Optimization with sparsity-inducing penalties. Foundations and Trends in Machine Learning, 4(1):1–106, 2012.
[3] H. H. Bauschke and P. L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 2011.
[4] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[5] D. P. Bertsekas. Convex Optimization Algorithms. Athena Scientific, 2015.
[6] A. J. Defazio, F. Bach, and S. Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Adv. Neural Information Processing Systems (NIPS), 2014.
[7] A. J. Defazio, T. S. Caetano, and J. Domke. Finito: A faster, permutable incremental gradient method for big data problems. In Proc. International Conference on Machine Learning (ICML), 2014.
[8] R. Frostig, R. Ge, S. M. Kakade, and A. Sidford. Un-regularizing: approximate proximal point algorithms for empirical risk minimization. In Proc. International Conference on Machine Learning (ICML), 2015.
[9] O. Güler. New proximal point algorithms for convex minimization. SIAM Journal on Optimization, 2(4):649–664, 1992.
[10] B. He and X. Yuan. An accelerated inexact proximal point algorithm for convex minimization. Journal of Optimization Theory and Applications, 154(2):536–548, 2012.
[11] J.-B. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms I. Springer, 1996.
[12] A. Juditsky and A. Nemirovski. First order methods for nonsmooth convex large-scale optimization. Optimization for Machine Learning, MIT Press, 2012.
[13] G. Lan. An optimal randomized incremental gradient method. arXiv:1507.02000, 2015.
[14] J. Mairal. Incremental majorization-minimization optimization with application to large-scale machine learning. SIAM Journal on Optimization, 25(2):829–855, 2015.
[15] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[16] Y. Nesterov. A method of solving a convex programming problem with convergence rate O(1/k²). Soviet Mathematics Doklady, 27(2):372–376, 1983.
[17] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Springer, 2004.
[18] Y. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341–362, 2012.
[19] Y. Nesterov. Gradient methods for minimizing composite functions. Mathematical Programming, 140(1):125–161, 2013.
[20] N. Parikh and S. P. Boyd. Proximal algorithms. Foundations and Trends in Optimization, 1(3):123–231, 2014.
[21] P. Richtárik and M. Takáč. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Mathematical Programming, 144(1-2):1–38, 2014.
[22] S. Salzo and S. Villa. Inexact and accelerated proximal point algorithms. Journal of Convex Analysis, 19(4):1167–1192, 2012.
[23] M. Schmidt, N. Le Roux, and F. Bach. Convergence rates of inexact proximal-gradient methods for convex optimization. In Adv. Neural Information Processing Systems (NIPS), 2011.
[24] M. Schmidt, N. Le Roux, and F. Bach. Minimizing finite sums with the stochastic average gradient. arXiv:1309.2388, 2013.
[25] S. Shalev-Shwartz and T. Zhang. Proximal stochastic dual coordinate ascent. arXiv:1211.2717, 2012.
[26] S. Shalev-Shwartz and T. Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. Mathematical Programming, 2015.
[27] L. Xiao and T. Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057–2075, 2014.
[28] Y. Zhang and L. Xiao. Stochastic primal-dual coordinate method for regularized empirical risk minimization. In Proc. International Conference on Machine Learning (ICML), 2015.
5,445 | 5,929 | Fast and Memory Optimal Low-Rank Matrix Approximation
Se-Young Yun (MSR, Cambridge), seyoung.yun@inria.fr
Marc Lelarge* (Inria & ENS), marc.lelarge@ens.fr
Alexandre Proutiere† (KTH, EE School / ACL), alepro@kth.se
Abstract
In this paper, we revisit the problem of constructing a near-optimal rank k approximation of a matrix M ∈ [0, 1]^{m×n} under the streaming data model where the columns of M are revealed sequentially. We present SLA (Streaming Low-rank Approximation), an algorithm that is asymptotically accurate when k·s_{k+1}(M) = o(√(mn)), where s_{k+1}(M) is the (k + 1)-th largest singular value of M. This means that its average mean-square error converges to 0 as m and n grow large (i.e., ‖M̂^(k) − M^(k)‖_F² = o(mn) with high probability, where M̂^(k) and M^(k) denote the output of SLA and the optimal rank k approximation of M, respectively). Our algorithm makes one pass on the data if the columns of M are revealed in a random order, and two passes if the columns of M arrive in an arbitrary order. To reduce its memory footprint and complexity, SLA uses random sparsification, and samples each entry of M with a small probability δ. In turn, SLA is memory optimal as its required memory space scales as k(m + n), the dimension of its output. Furthermore, SLA is computationally efficient as it runs in O(δkmn) time (a constant number of operations is made for each observed entry of M), which can be as small as O(k log(m)⁴ n) for an appropriate choice of δ and if n ≥ m.
1 Introduction
We investigate the problem of constructing, in a memory and computationally efficient manner, an accurate estimate of the optimal rank k approximation M^(k) of a large m × n matrix M ∈ [0, 1]^{m×n}. This problem is fundamental in machine learning, and has naturally found numerous applications in computer science. The optimal rank k approximation M^(k) minimizes, over all rank k matrices Z, the Frobenius norm ‖M − Z‖_F (and any norm that is invariant under rotation) and can be computed by Singular Value Decomposition (SVD) of M in O(nm²) time (if we assume that m ≤ n). For massive matrices M (i.e., when m and n are very large), this becomes unacceptably slow. In addition, storing and manipulating M in memory may become difficult. In this paper, we design a memory and computationally efficient algorithm, referred to as Streaming Low-rank Approximation (SLA), that computes a near-optimal rank k approximation M̂^(k). Under mild assumptions on M, the SLA algorithm is asymptotically accurate in the sense that as m and n grow large, its average mean-square error converges to 0, i.e., ‖M̂^(k) − M^(k)‖_F² = o(mn) with high probability (we interpret M^(k) as the signal that we aim to recover from a noisy observation M).
To reduce its memory footprint and running time, the proposed algorithm combines random sparsification and the idea of the streaming data model. More precisely, each entry of M is revealed to the algorithm with probability δ, called the sampling rate. Moreover, SLA observes and treats the
*Work performed as part of the MSR-Inria joint research centre. M.L. acknowledges the support of the French Agence Nationale de la Recherche (ANR) under reference ANR-11-JS02-005-01 (GAP project).
†A. Proutiere's research is supported by the ERC FSA grant, and the SSF ICT-Psi project.
columns of M one after the other in a sequential manner. The sequence of observed columns may be chosen uniformly at random, in which case the algorithm requires one pass on M only, or can be arbitrary, in which case the algorithm needs two passes. SLA first stores ℓ = 1/(δ log(m)) randomly selected columns, and extracts via spectral decomposition an estimator of parts of the k top right singular vectors of M. It then completes the estimator of these vectors by receiving and treating the remaining columns sequentially. SLA finally builds, from the estimated top k right singular vectors, the linear projection onto the subspace generated by these vectors, and deduces an estimator of M^(k).
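This pipeline can be illustrated with the batch-style Python sketch below, which keeps the store-then-stream structure but compresses the paper's bookkeeping into dense NumPy calls: it sparsifies each incoming column at rate δ, runs an SVD on the first ℓ stored columns, and then keeps only k coordinates per remaining column. Note that it tracks the column space directly instead of completing the top right singular vectors, and it omits trimming and rescaling, so it is a structural illustration of the memory profile rather than the SLA algorithm itself.

```python
import numpy as np

def sla_sketch(column_stream, m, n, k, delta, seed=0):
    # One pass over randomly ordered columns of M (each of length m).
    rng = np.random.default_rng(seed)
    ell = max(k, int(1.0 / (delta * np.log(m))))   # columns stored up front

    def sparsify(col):
        # keep each entry with probability delta, rescale to stay unbiased
        return col * (rng.random(m) < delta) / delta

    stream = iter(column_stream)
    A = np.column_stack([sparsify(next(stream)) for _ in range(ell)])
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    Q = U[:, :k]                                   # orthonormal basis, m x k
    coeffs = [Q.T @ A[:, j] for j in range(ell)]   # k numbers per column
    coeffs += [Q.T @ sparsify(c) for c in stream]
    W = np.column_stack(coeffs)                    # k x n coefficient matrix
    return Q, W                                    # rank-k estimate is Q @ W
```

After the first phase, the stored state is only Q (m × k) and one k-vector per column, i.e., O(k(m + n)) numbers, which is the memory profile targeted by the analysis below.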
The analysis of the performance of SLA is presented in Theorems 7 and 8. In summary: when m ≤ n and log(m)⁴/m ≤ δ ≤ m^{−8/9}, with probability 1 − kδ, the output M̂^(k) of SLA satisfies:
$$\frac{\|M^{(k)} - \hat{M}^{(k)}\|_F^2}{mn} = O\!\left( k^2\left( \frac{s_{k+1}(M)^2\log(m)}{mn} + \frac{1}{\sqrt{\delta m}} \right) \right),\tag{1}$$
where s_{k+1}(M) is the (k + 1)-th singular value of M. SLA requires O(kn) memory space, and if δ ≥ log(m)⁴/m and k ≤ log⁶(m), its running time is O(δkmn). To ensure the asymptotic accuracy of SLA, the upper bound in (1) needs to converge to 0, which is true as soon as k·s_{k+1}(M) = o(√(mn)). In the case where M is seen as a noisy version of M^(k), this condition quantifies the maximum amount of noise allowed for our algorithm to be asymptotically accurate.
SLA is memory optimal, since any rank k approximation algorithm needs at least to store its output, i.e., k right and left singular vectors, and hence needs at least O(kn) memory space. Further observe that among the class of algorithms sampling each entry of M at a given rate δ, SLA is computationally optimal, since it runs in O(δkmn) time (it does a constant number of operations per observed entry if k = O(1)). In turn, to the best of our knowledge, SLA is both faster and more memory efficient than existing algorithms. SLA is the first memory-optimal and asymptotically accurate low-rank approximation algorithm.
The approach used to design SLA can be readily extended to devise memory and computationally
efficient matrix completion algorithms. We present this extension in the supplementary material.
Notations. Throughout the paper, we use the following notations. For any m × n matrix A, we denote by A^T its transpose, and by A^{−1} its pseudo-inverse. We denote by s₁(A) ≥ ... ≥ s_{n∧m}(A) ≥ 0 the singular values of A. When matrices A and B have the same number of rows, we write [A, B] to denote the matrix whose first columns are those of A followed by those of B. A_⊥ denotes an orthonormal basis of the subspace perpendicular to the linear span of the columns of A. A_j, A^i, and A_{ij} denote the j-th column of A, the i-th row of A, and the entry of A on the i-th row and j-th column, respectively. For h ≤ l, A_{h:l} (resp. A^{h:l}) is the matrix obtained by extracting the columns (resp. rows) h, ..., l of A. For any ordered set B = {b₁, ..., b_p} ⊂ {1, ..., n}, A_(B) refers to the matrix composed of the ordered set B of columns of A. A^(B) is defined similarly (but for rows). For real numbers a ≤ b, we define |A|_a^b as the matrix with (i, j) entry equal to (|A|_a^b)_{ij} = min(b, max(a, A_{ij})). Finally, for any vector v, ‖v‖ denotes its Euclidean norm, whereas for any matrix A, ‖A‖_F denotes its Frobenius norm, ‖A‖₂ its operator norm, and ‖A‖_∞ its ℓ_∞-norm, i.e., ‖A‖_∞ = max_{i,j} |A_{ij}|.
2 Related Work
Low-rank approximation algorithms have received a lot of attention over the last decade. There are
two types of error estimate for these algorithms: either the error is additive or relative.
To translate our bound (1) into an additive error is easy:

\[ \|M - \hat M^{(k)}\|_F \le \|M - M^{(k)}\|_F + O\!\left(k\left(\frac{s_{k+1}(M)\log^{1/2}(m)}{\sqrt{mn}} + \frac{1}{(\delta m)^{1/4}}\right)\sqrt{mn}\right). \]   (2)
Sparsifying M to speed-up the computation of a low-rank approximation has been proposed in the literature, and the best additive error bounds have been obtained in [AM07]. When the sampling rate δ satisfies δ ≥ log⁴(m)/m, the authors show that with probability 1 − exp(−log⁴ m),

\[ \|M - \hat M^{(k)}\|_F \le \|M - M^{(k)}\|_F + O\!\left(\frac{k^{1/2} n^{1/2}}{\delta^{1/2}} + \frac{k^{1/4} n^{1/4}}{\delta^{1/4}}\,\|M^{(k)}\|_F^{1/2}\right). \]   (3)
This performance guarantee is derived from Lemma 1.1 and Theorem 1.4 in [AM07]. To compare (2) and (3), note that our assumptions on the bounded entries of M ensure that s_{k+1}²(M)/(mn) ≤ 1/k and ‖M^(k)‖_F ≤ ‖M‖_F ≤ √(mn). In particular, we see that the worst case bound for (3) is (k^{1/2}/√(δm) + k^{1/4}/(δm)^{1/4}) √(nm), which is always lower than the worst case bound for (2): (k^{1/2} log^{1/2}(m) + k/(δm)^{1/4}) √(nm). When k = O(1), our bound is only larger by a logarithmic term in m compared to [AM07].
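To see where the worst-case form of (2) comes from, substitute s_{k+1}²(M) ≤ mn/k into the additive term; the following one-line derivation is ours, in the same notation:

\[
k\left(\frac{s_{k+1}(M)\log^{1/2}(m)}{\sqrt{mn}} + \frac{1}{(\delta m)^{1/4}}\right)\sqrt{mn}
\;\le\; k\, s_{k+1}(M)\log^{1/2}(m) + \frac{k\sqrt{mn}}{(\delta m)^{1/4}}
\;\le\; \left(k^{1/2}\log^{1/2}(m) + \frac{k}{(\delta m)^{1/4}}\right)\sqrt{nm}.
\]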
However, the algorithm proposed in [AM07] requires storing O(δmn) entries of M, whereas SLA needs O(n) memory space. Recall that log⁴ m ≤ δm ≤ m^{1/9}, so that our algorithm makes a significant improvement on the memory requirement at a low price in the error guarantee bounds. Although biased sampling algorithms can reduce the error, such algorithms have to compute leverage scores, which requires multiple passes over the data [BJS15]. In a recent work, [CW13] proposes a time efficient algorithm to compute a low-rank approximation of a sparse matrix. Combined with [AM07], we obtain an algorithm running in time O(δmn) + O(nk² + k³), but with an increased additive error term.
We can also compare our result to papers providing an estimate M̂^(k) of the optimal low-rank approximation of M with a relative error ε, i.e., such that ‖M − M̂^(k)‖_F ≤ (1 + ε)‖M − M^(k)‖_F. To the best of our knowledge, [CW09] provides the best result in this setting. Theorem 4.4 in [CW09] shows that, provided the rank of M is at least 2(k + 1), their algorithm outputs with probability 1 − η a rank-k matrix M̂^(k) with relative error ε using memory space O((k/ε) log(1/η)(n + m)) (note that in [CW09], the authors use as unit of memory a bit, whereas we use as unit of memory an entry of the matrix, so we removed a log mn factor in their expression to make fair comparisons). To compare with our result, we can translate our bound (1) into a relative error, and we need to take:

\[ \varepsilon = O\!\left(\sqrt{k}\;\frac{s_{k+1}(M) + \log^{1/2}(m)\,\sqrt{mn}/(\delta m)^{1/4}}{\|M - M^{(k)}\|_F}\right). \]

First note that since M is assumed to be of rank at least 2(k + 1), we have ‖M − M^(k)‖_F ≥ s_{k+1}(M) > 0 and ε is well-defined. Clearly, for our ε to tend to zero, we need ‖M − M^(k)‖_F to be not too small. For the scenario we have in mind, M is a noisy version of the signal M^(k), so that M − M^(k) is the noise matrix. When every entry of M − M^(k) is generated independently at random with a constant variance, ‖M − M^(k)‖_F = Θ(√(mn)) while s_{k+1}(M) = Θ(√n). In such a case, we have ε = o(1), and we improve the memory requirement of [CW09] by a factor ε^{−1} log((kδ)^{−1}).
[CW09] also considers a model where the full columns of M are revealed one after the other in an arbitrary order, and proposes a one-pass algorithm to derive the rank-k approximation of M with the same memory requirement. In this general setting, our algorithm is required to make two passes on the data (and only one pass if the order of arrival of the columns is random instead of arbitrary). The running time of their algorithm scales as O(kmn ε^{−1} log((kδ)^{−1})), to project M onto a k ε^{−1} log((kδ)^{−1})-dimensional random space. Thus, SLA improves the time again by a factor of ε^{−1} log((kδ)^{−1}).
We could also think of using sketching and streaming PCA algorithms to estimate M^(k). When the columns arrive sequentially, these algorithms identify the left singular vectors using one pass on the matrix and then need a second pass on the data to estimate the right singular vectors. For example, [Lib13] proposes a sketching algorithm that updates the p most frequent directions as columns are observed. [GP14] shows that with O(km/ε) memory space (for p = k/ε), this sketching algorithm finds an m × k matrix Û such that ‖M − P_Û M‖_F ≤ (1 + ε)‖M − M^(k)‖_F, where P_Û denotes the projection matrix onto the linear span of the columns of Û. The running time of the algorithm is roughly O(kmn ε^{−1}), which is much greater than that of SLA. Note also that to identify such a matrix Û in one pass on M, it is shown in [Woo14] that we have to use Ω(km/ε) memory space. This result does not contradict the performance analysis of SLA, since the latter needs two passes on M if the columns of M are observed in an arbitrary manner. Finally, note that the streaming PCA algorithm proposed in [MCJ13] does not apply to our problem, as this paper investigates a very specific problem: the spiked covariance model where each column is randomly generated in an i.i.d. manner.
3 Streaming Low-rank Approximation Algorithm

Algorithm 1 Streaming Low-rank Approximation (SLA)
Input: M, k, δ, and ℓ = 1/(δ log(m))
1. A_(B1), A_(B2) ← independently sample entries of [M_1, ..., M_ℓ] at rate δ
2. PCA for the first ℓ columns: Q ← SPCA(A_(B1), k)
3. Trimming the rows and columns of A_(B2):
   A_(B2) ← set the entries of rows of A_(B2) having more than two non-zero entries to 0
   A_(B2) ← set the entries of the columns of A_(B2) having more than 10mδ non-zero entries to 0
4. W ← A_(B2) Q
5. V̂^(B1) ← (A_(B1))^T W
6. Î ← A_(B1) V̂^(B1)
Remove A_(B1), A_(B2), and Q from the memory space
for t = ℓ + 1 to n do
   7. A_t ← sample entries of M_t at rate δ
   8. V̂^t ← (A_t)^T W
   9. Î ← Î + A_t V̂^t
   Remove A_t from the memory space
end for
10. R̂ ← find R̂ using the Gram-Schmidt process such that V̂ R̂ is an orthonormal matrix
11. Û ← (1/δ) Î R̂ R̂^T
Output: M̂^(k) = |Û V̂^T|_0^1

Algorithm 2 Spectral PCA (SPCA)
Input: C ∈ [0, 1]^{m×ℓ}, k
G ← an ℓ × k Gaussian random matrix
Trimming: C̃ ← set the entries of the rows of C with more than 10 non-zero entries to 0
Φ ← C̃^T C̃ − diag(C̃^T C̃)
Power Iteration: QR ← QR decomposition of Φ^{⌈5 log(ℓ)⌉} G
Output: Q
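To make the data flow concrete, the following is a minimal NumPy sketch of Algorithms 1 and 2. It is our own illustrative rendering, not the authors' code: it stores the sampled batch densely for clarity (a faithful implementation would use sparse storage to meet the stated memory bounds), and the function names `sla` and `spca` are ours.

```python
import numpy as np

def spca(C, k, rng):
    """Algorithm 2 (sketch): top-k right singular subspace of the sampled
    batch C, via trimming, diagonal removal, and power iteration."""
    ell = C.shape[1]
    G = rng.standard_normal((ell, k))          # Gaussian start matrix
    C = C.copy()
    C[(C != 0).sum(axis=1) > 10, :] = 0        # trim rows with too many entries
    Phi = C.T @ C
    np.fill_diagonal(Phi, 0)                   # remove the dominant diagonal
    Q = G
    for _ in range(int(np.ceil(5 * np.log(ell)))):
        Q, _ = np.linalg.qr(Phi @ Q)           # power step, re-orthonormalized
    return Q

def sla(columns, m, n, k, delta, seed=0):
    """Algorithm 1 (sketch). `columns` yields the n columns of M as
    length-m arrays; the first batch is assumed uniformly chosen."""
    rng = np.random.default_rng(seed)
    ell = max(k, int(round(1.0 / (delta * np.log(m)))))
    samp = lambda x: np.where(rng.random(x.shape) < delta, x, 0.0)
    cols = iter(columns)
    batch = np.column_stack([next(cols) for _ in range(ell)])
    A1, A2 = samp(batch), samp(batch)          # two independent samplings
    Q = spca(A1, k, rng)
    A2[(A2 != 0).sum(axis=1) > 2, :] = 0       # trim heavy rows of A_(B2)
    A2[:, (A2 != 0).sum(axis=0) > 10 * m * delta] = 0  # and heavy columns
    W = A2 @ Q                                 # the m-by-k sketch kept in memory
    VB = A1.T @ W                              # first ell rows of V-hat
    I = A1 @ VB                                # running estimate of A-bar V-hat
    V_rows = [VB]
    for col in cols:                           # one pass over the other columns
        At = samp(col)
        Vt = W.T @ At                          # one new row of V-hat
        V_rows.append(Vt[None, :])
        I += np.outer(At, Vt)
    V = np.vstack(V_rows)                      # n-by-k estimate of V_{1:k}
    _, Rv = np.linalg.qr(V)                    # V = Qv Rv, so R-hat = inv(Rv)
    Rhat = np.linalg.inv(Rv)                   # assumes V has full column rank
    U = (I @ Rhat @ Rhat.T) / delta
    return np.clip(U @ V.T, 0.0, 1.0)          # |U-hat V-hat^T| clipped to [0, 1]
```

For instance, if M has shape (m, n), then `sla(iter(M.T), m, n, k, delta)` returns an m × n estimate of M^(k) (the rows of M.T are the columns of M).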
In this section, we present the Streaming Low-rank Approximation (SLA) algorithm and analyze its performance. SLA makes one pass on the matrix M, and is provided with the columns of M one after the other in a streaming manner. The SVD of M is M = U Σ V^T, where U and V are (m × m) and (n × n) unitary matrices and Σ is the (m × n) matrix diag(s₁(M), ..., s_{n∧m}(M)). We assume (or impose by design of SLA) that the ℓ (specified below) first observed columns of M are chosen uniformly at random among all columns. An extension of SLA to scenarios where columns are observed in an arbitrary order is presented in §3.5, but this extension requires two passes on M. To be memory efficient, SLA uses sampling. Each observed entry of M is erased (i.e., set equal to 0) with probability 1 − δ, where δ > 0 is referred to as the sampling rate. The algorithm, whose pseudo-code is presented in Algorithm 1, proceeds in three steps:
1. In the first step, we observe ℓ = 1/(δ log(m)) columns of M chosen uniformly at random. These columns form the matrix M_(B) = U Σ (V^(B))^T, where B denotes the ordered set of the indexes of the ℓ first observed columns. M_(B) is sampled at rate δ. More precisely, we apply two independent sampling procedures, where in each of them, every entry of M_(B) is sampled at rate δ. The two resulting independent random matrices A_(B1) and A_(B2) are stored in memory. A_(B1), referred to as A_(B) to simplify the notations, is used in this first step, whereas A_(B2) will be used in subsequent steps. Next, through a spectral decomposition of A_(B), we derive an (ℓ × k) orthonormal matrix Q such that the span of its column vectors approximates that of the column vectors of V^(B)_{1:k}. The first step corresponds to Lines 1 and 2 in the pseudo-code of SLA.
2. In the second step, we complete the construction of our estimator of the top k right singular vectors V_{1:k} of M. Denote by V̂ the (n × k) matrix formed by these estimated vectors. We first compute the components of these vectors corresponding to the set of indexes B as V̂^(B) = (A_(B1))^T W with W = A_(B2) Q. Then for t = ℓ + 1, ..., n, after receiving the t-th column M_t of M, we set V̂^t = (A_t)^T W, where A_t is obtained by sampling the entries of M_t at rate δ. Hence after one pass on M, we get V̂ = Ā^T W, where Ā = [A_(B1), A_{ℓ+1}, ..., A_n]. As it turns out, multiplying W by Ā^T amplifies the useful signal contained in W, and yields an accurate approximation of the span of the top k right singular vectors V_{1:k} of M. The second step is presented in Lines 3, 4, 5, 7 and 8 of the SLA pseudo-code.
3. In the last step, we deduce from V̂ a set of column vectors gathered in a matrix Û such that Û V̂^T provides an accurate approximation of M^(k). First, using the Gram-Schmidt process, we find R̂ such that V̂ R̂ is an orthonormal matrix, and compute Û = (1/δ) Ā V̂ R̂ R̂^T in a streaming manner as in Step 2. Then, Û V̂^T = (1/δ) Ā V̂ R̂ (V̂ R̂)^T, where V̂ R̂ (V̂ R̂)^T approximates the projection matrix onto the linear span of the top k right singular vectors of M. Thus, Û V̂^T is close to M^(k). This last step is described in Lines 6, 9, 10 and 11 of the SLA pseudo-code.
In the next subsections, we present in more detail the rationale behind the three steps of SLA, and provide a performance analysis of the algorithm.

3.1 Step 1. Estimating right-singular vectors of the first batch of columns

The objective of the first step is to estimate V^(B)_{1:k}, those components of the top k right singular vectors of M whose indexes are in the set B (remember that B is the set of indexes of the ℓ first observed columns). This estimator, denoted by Q, is obtained by applying the power method to extract the top k right singular vectors of M_(B), as described in Algorithm 2. In the design of this algorithm and its performance analysis, we face two challenges: (i) we only have access to a sampled version A_(B) of M_(B); and (ii) U Σ (V^(B))^T is not the SVD of M_(B), since the column vectors of V^(B)_{1:k} are not orthonormal in general (we keep the components of these vectors corresponding to the set of indexes B). Hence, the top k right singular vectors of M_(B) that we extract in Algorithm 2 do not necessarily correspond to V^(B)_{1:k}.
To address (i), in Algorithm 2, we do not directly extract the top k right singular vectors of A_(B). We first remove the rows of A_(B) with too many non-zero entries (i.e., too many observed entries from M_(B)), since these rows would perturb the SVD of A_(B). Let us denote by Ã the obtained trimmed matrix. We then form the covariance matrix Ã^T Ã, and remove its diagonal entries to obtain the matrix Φ = Ã^T Ã − diag(Ã^T Ã). Removing the diagonal entries is needed because of the sampling procedure. Indeed, the diagonal entries of Ã^T Ã scale as δ, whereas its off-diagonal entries scale as δ². Hence, when δ is small, the diagonal entries would clearly become dominant in the spectral decomposition. We finally apply the power method to Φ to obtain Q. In the analysis of the performance of Algorithm 2, the following lemma will be instrumental; it provides an upper bound on the gap between Φ and δ²(M_(B))^T M_(B) using the matrix Bernstein inequality (Theorem 6.1 in [Tro12]). All proofs are detailed in the Appendix.

Lemma 1 If δ ≤ m^{−8/9}, with probability 1 − 1/ℓ², ‖Φ − δ²(M_(B))^T M_(B)‖₂ ≤ c₁ δ √(mℓ log(ℓ)), for some constant c₁ > 1.
To address (ii), we first establish in Lemma 2 that for an appropriate choice of ℓ, the column vectors of V^(B)_{1:k} are approximately orthonormal. This lemma is of independent interest, and relates the SVD of a truncated matrix, here M_(B), to that of the initial matrix M. More precisely:

Lemma 2 If δ ≤ m^{−8/9}, there exists an (ℓ × k) matrix V̄^(B) such that its column vectors are orthonormal, and with probability 1 − exp(−m^{1/7}), for all i ≤ k satisfying s_i²(M) ≥ (n/(δℓ)) √(mℓ log(ℓ)),
‖√(n/ℓ) V^(B)_{1:i} − V̄^(B)_{1:i}‖₂ ≤ m^{−1/3}.
Note that, as suggested by the above lemma, it might be impossible to recover V^(B)_i when the corresponding singular value s_i(M) is small (more precisely, when s_i²(M) ≤ (n/(δℓ)) √(mℓ log(ℓ))). However, the singular vectors corresponding to such small singular values generate very little error for low-rank approximation. Thus, we are only interested in singular vectors whose singular values are above the threshold ((n/(δℓ)) √(mℓ log(ℓ)))^{1/2}. Let k′ = max{i : s_i²(M) ≥ (n/(δℓ)) √(mℓ log(ℓ)), i ≤ k}.

Now to analyze the performance of Algorithm 2 when applied to A_(B), we decompose Φ as Φ = (δ²ℓ/n) V̄^(B)_{1:k′} (Σ_{1:k′})² (V̄^(B)_{1:k′})^T + Y, where Y = Φ − (δ²ℓ/n) V̄^(B)_{1:k′} (Σ_{1:k′})² (V̄^(B)_{1:k′})^T is a noise matrix. The following lemma quantifies how noise may affect the performance of the power method, i.e., it provides an upper bound on the gap between Q and V̄^(B)_{1:k′} as a function of the operator norm of the noise matrix Y:
Lemma 3 With probability 1 − 1/ℓ², the output Q of SPCA when applied to A_(B) satisfies, for all i ≤ k′:

\[ \|(\bar V^{(B)}_{1:i})^\top Q_\perp\|_2 \le \frac{2\sqrt{3}\,\|Y\|_2}{\delta^2 \frac{\ell}{n}\, s_i^2(M)}. \]

In the proof, we analyze the power iteration algorithm building on results from [HMT11].
To complete the performance analysis of Algorithm 2, it remains to upper bound ‖Y‖₂. To this aim, we decompose Y into three terms:

\[ Y = \Phi - \delta^2 (M_{(B)})^\top M_{(B)} + \delta^2 (M_{(B)})^\top \big(I - U_{1:k'} U_{1:k'}^\top\big) M_{(B)} + \left(\delta^2 (M_{(B)})^\top U_{1:k'} U_{1:k'}^\top M_{(B)} - \frac{\delta^2 \ell}{n}\, \bar V^{(B)}_{1:k'} (\Sigma_{1:k'})^2 (\bar V^{(B)}_{1:k'})^\top\right). \]

The first term can be controlled using Lemma 1, and the last term is upper bounded using Lemma 2. Finally, the second term corresponds to the error made by ignoring the singular vectors which are not within the top k′. To estimate this term, we use the matrix Chernoff bound (Theorem 2.2 in [Tro11]), and prove that:

Lemma 4 With probability 1 − exp(−m^{1/4}), ‖(I − U_{1:k′} U_{1:k′}^T) M_(B)‖₂² ≤ (2/δ) √(mℓ log(ℓ)) + (ℓ/n) s_{k+1}²(M).
In summary, combining the four above lemmas, we can establish that Q accurately estimates V̄^(B)_{1:k}:

Theorem 5 If δ ≤ m^{−8/9}, with probability 1 − 3/ℓ², the output Q of Algorithm 2 when applied to A_(B) satisfies, for all i ≤ k:

\[ \|(\bar V^{(B)}_{1:i})^\top Q_\perp\|_2 \le \frac{3\delta^2\big(s_{k+1}^2(M) + 2 m^{2/3} n\big) + 3(2 + c_1)\,\delta\,\frac{n}{\ell}\sqrt{m\ell\log(\ell)}}{\delta^2 s_i^2(M)}, \]

where c₁ is the constant from Lemma 1.
3.2 Step 2: Estimating the principal right singular vectors of M

In this step, we aim at estimating the top k right singular vectors V_{1:k}, or at least at producing k vectors whose linear span approximates that of V_{1:k}. Towards this objective, we start from Q derived in the previous step, and define the (m × k) matrix W = A_(B2) Q. W is stored and kept in memory for the remainder of the algorithm.
It is tempting to directly read from W the top k′ left singular vectors U_{1:k′}. Indeed, we know that Q ≈ √(n/ℓ) V^(B)_{1:k} and E[A_(B2)] = δ U Σ (V^(B))^T, and hence E[W] ≈ δ √(ℓ/n) U_{1:k} Σ_{1:k}. However, the level of the noise in W is too high to accurately extract U_{1:k′}. In turn, W can be written as δ U Σ (V^(B))^T Q + Z, where Z = (A_(B2) − δ U Σ (V^(B))^T) Q partly captures the noise in W. It is then easy to see that the level of the noise Z satisfies E[‖Z‖₂] ≥ E[‖Z‖_F / √k] = Ω(√(δm)). Indeed, first observe that Z is of rank k. Then E[‖Z‖²_F] = \sum_{i=1}^m \sum_{j=1}^k E[Z_{ij}^2] = Ω(mkδ): this is due to the facts that (i) Q and A_(B2) − δ U Σ (V^(B))^T are independent (since A_(B1) and A_(B2) are independent), (ii) ‖Q_j‖₂² = 1 for all j ≤ k, and (iii) the entries of A_(B2) are independent with variance Θ(δ(1 − δ)). However, for all j ≤ k′, the j-th singular value of δ U Σ (V^(B))^T Q scales as O(δ √(mℓ)) = O(√(δm / log(m))), since s_j(M) ≤ √(mn) and s_j(M_(B)) ≈ √(ℓ/n) s_j(M) when j ≤ k′ from Lemma 2.
Instead, from W, A_(B1), and the subsequent sampled arriving columns A_t, t > ℓ, we produce an (n × k) matrix V̂ whose linear span approximates that of V_{1:k′}. More precisely, we first let V̂^(B) = (A_(B1))^T W. Then for all t = ℓ + 1, ..., n, we define V̂^t = (A_t)^T W, where A_t is obtained from the t-th observed column of M after sampling each of its entries at rate δ. Multiplying W by Ā = [A_(B1), A_{ℓ+1}, ..., A_n] amplifies the useful signal in W, so that V̂ = Ā^T W constitutes a good approximation of V_{1:k}. To understand why, we can rewrite V̂ as follows:

\[ \hat V = \delta^2 M^\top M_{(B)} Q + \delta M^\top \big(A_{(B_2)} - \delta M_{(B)}\big) Q + (\bar A - \delta M)^\top W. \]
In the above equation, the first term corresponds to the useful signal and the two remaining terms constitute noise matrices. From Theorem 5, the linear span of the columns of Q approximates that of the columns of V^(B)_{1:k}, and thus, for j ≤ k′, s_j(δ² M^T M_(B) Q) ≈ δ² s_j²(M) √(ℓ/n) ≥ δ √(mn log(ℓ)). The spectral norms of the noise matrices are bounded using random matrix arguments, and the fact that (A_(B2) − δ M_(B)) and (Ā − δ M) are zero-mean random matrices with independent entries. We can show (see Lemma 14 given in the supplementary material), using the independence of A_(B1) and A_(B2), that with high probability, ‖δ M^T (A_(B2) − δ M_(B)) Q‖₂ = O(δ √(mn)). We may also establish that with high probability, ‖(Ā − δ M)^T W‖₂ = O(δ √(m(m + n))). This is a consequence of a result derived in [AM07] (quoted in Lemma 13 in the supplementary material) stating that with high probability, ‖Ā − δ M‖₂ = O(√(δ(m + n))), and of the fact that, due to the trimming process presented in Line 3 of Algorithm 1, ‖W‖₂ = O(√(δm)). In summary, as soon as n scales at least as m, the noise level becomes negligible, and the span of V̂_{1:k′} provides an accurate approximation of that of V_{1:k′}. The above arguments are made precise and rigorous in the supplementary material. The following theorem summarizes the accuracy of our estimator of V_{1:k}.
Theorem 6 With log⁴(m)/m ≤ δ ≤ m^{−8/9}, for all i ≤ k, there exists a constant c₂ such that, with probability 1 − kδ,

\[ \|V_i^\top (\hat V_{1:k})_\perp\|_2 \le c_2\, \frac{s_{k+1}^2(M) + n\log(m)\sqrt{m/\delta} + m\sqrt{n\log(m)/\delta}}{s_i^2(M)}. \]
3.3 Step 3: Estimating the principal left singular vectors of M

In the last step, we estimate the principal left singular vectors of M to finally derive an estimator of M^(k), the optimal rank-k approximation of M. The construction of this estimator is based on the observation that M^(k) = U_{1:k} Σ_{1:k} V_{1:k}^T = M P_{V_{1:k}}, where P_{V_{1:k}} = V_{1:k} V_{1:k}^T is the (n × n) matrix representing the projection onto the linear span of the top k right singular vectors V_{1:k} of M. Hence to estimate M^(k), we try to approximate the matrix P_{V_{1:k}}. To this aim, we construct a (k × k) matrix R̂ so that the column vectors of V̂ R̂ form an orthonormal basis whose span corresponds to that of the column vectors of V̂. This construction is achieved using the Gram-Schmidt process. We then approximate P_{V_{1:k}} by P_{V̂} = V̂ R̂ R̂^T V̂^T, and finally our estimator M̂^(k) of M^(k) is (1/δ) Ā P_{V̂}.

The construction of M̂^(k) can be made in a memory efficient way, accommodating our streaming model where the columns of M arrive one after the other, as described in the pseudo-code of SLA. First, after constructing V̂^(B) in Step 2, we build the matrix Î = A_(B1) V̂^(B). Then, for t = ℓ + 1, ..., n, after constructing the t-th line V̂^t of V̂, we update Î by adding to it the matrix A_t V̂^t, so that after all columns of M are observed, Î = Ā V̂. Hence we can build an estimator Û of the principal left singular vectors of M as Û = (1/δ) Î R̂ R̂^T, and finally obtain M̂^(k) = |Û V̂^T|_0^1.
To quantify the estimation error of M̂^(k), we decompose M^(k) − M̂^(k) as:

\[ M^{(k)} - \hat M^{(k)} = M^{(k)}(I - P_{\hat V}) + (M^{(k)} - M) P_{\hat V} + \big(M - \tfrac{1}{\delta}\bar A\big) P_{\hat V}. \]

The first term of the r.h.s. of the above equation can be bounded using Theorem 6: for i ≤ k, we have s_i²(M) ‖V_i^T V̂_⊥‖ ≤ z = c₂ (s_{k+1}²(M) + n log(m) √(m/δ) + m √(n log(m)/δ)), and hence we can conclude that for all i ≤ k, ‖s_i(M) U_i V_i^T (I − P_{V̂})‖_F ≤ z. The second term can be easily bounded by observing that the matrix (M^(k) − M) P_{V̂} is of rank k: ‖(M^(k) − M) P_{V̂}‖²_F ≤ k ‖(M^(k) − M) P_{V̂}‖₂² ≤ k ‖M^(k) − M‖₂² = k s_{k+1}²(M). The last term in the r.h.s. can be controlled as in the performance analysis of Step 2, observing that ((1/δ) Ā − M) P_{V̂} is of rank k: ‖((1/δ) Ā − M) P_{V̂}‖²_F ≤ k ‖(1/δ) Ā − M‖₂² = O(k(m + n)/δ). It is then easy to remark that, for the range of the parameter δ we are interested in, the upper bound z of the first term dominates the upper bounds of the two other terms. Finally, we obtain the following result (see the supplementary material for a complete proof):
Theorem 7 When log⁴(m)/m ≤ δ ≤ m^{−8/9}, with probability 1 − kδ, the output of the SLA algorithm satisfies, for a constant c₃:

\[ \frac{\big\|M^{(k)} - |\hat U \hat V^\top|_0^1\big\|_F^2}{mn} \le c_3\, k^2\left(\frac{s_{k+1}^2(M)}{mn} + \sqrt{\frac{\log(m)}{\delta n}} + \sqrt{\frac{\log(m)}{\delta m}}\right). \]
Note that if log⁴(m)/m ≤ δ ≤ m^{−8/9}, then log(m)/√(δm) = o(1). Hence if n ≥ m, the SLA algorithm provides an asymptotically accurate estimate of M^(k) as soon as s_{k+1}²(M)/(mn) = o(1).

3.4 Required Memory and Running Time
Required memory.
Lines 1-6 in the SLA pseudo-code. A_(B1) and A_(B2) have O(δmℓ) non-zero entries, and we need O(δmℓ log m) bits to store the ids of these entries. Similarly, the memory required to store Φ is O(δ²mℓ² log(ℓ)). Storing Q further requires O(ℓk) memory. Finally, V̂^(B1) and Î computed in Line 6 require O(ℓk) and O(km) memory space, respectively. Thus, when ℓ = 1/(δ log m), this first part of the algorithm requires O(k(m + n)) memory.
Lines 7-9. Before we treat the remaining columns, A_(B1), A_(B2), and Q are removed from the memory. Using this released memory, when the t-th column arrives, we can store it, compute V̂^t and Î, and remove the column to save memory. Therefore, we do not need additional memory to treat the remaining columns.
Lines 10 and 11. From Î and V̂, we compute Û. To this aim, the memory required is O(k(m + n)).
Running time.
Lines 1 to 6. The SPCA algorithm requires O(ℓk(δ²mℓ + k) log(ℓ)) floating-point operations to compute Q. W, V̂, and Î are inner products, and their computations require O(δkmℓ) operations. With ℓ = 1/(δ log(m)), the number of operations to treat the first ℓ columns is O(ℓk(δ²mℓ + k) log(ℓ) + δkmℓ) = O(km) + O(k²/δ).
Lines 7 to 9. To compute V̂^t and Î when the t-th column arrives, we need O(δkm) operations. Since there are n − ℓ remaining columns, the total number of operations is O(δkmn).
Lines 10 and 11. R̂ is computed from V̂ using the Gram-Schmidt process, which requires O(k²m) operations. We then compute Î R̂ R̂^T using O(k²m) operations.
In summary, we have shown that:

Theorem 8 The memory required to run the SLA algorithm is O(k(m + n)). Its running time is O(δkmn + k²/δ + k²m).
Observe that when δ ≥ max(log⁴(m)/m, log⁶(m)/n) and k ≤ log⁶(m), we have δkmn ≥ k²m ≥ k²/δ, and therefore the running time of SLA is O(δkmn).

3.5 General Streaming Model
SLA is a one-pass low-rank approximation algorithm, but the set of the ℓ first observed columns of M needs to be chosen uniformly at random. We can readily extend SLA to deal with scenarios where the columns of M are observed in an arbitrary order. This extension requires two passes on M, but otherwise performs exactly the same operations as SLA. In the first pass, we extract a set of ℓ columns chosen uniformly at random, and in the second pass, we deal with all other columns. To extract ℓ randomly selected columns in the first pass, we proceed as follows. Assume that when the t-th column of M arrives, we have already extracted l columns. Then the t-th column is extracted with probability (ℓ − l)/(n − t + 1). This two-pass version of SLA enjoys the same performance guarantees as those of SLA.
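For concreteness, the first-pass extraction rule can be written in a few lines of Python; the function name and signature are ours. It keeps exactly ℓ of the n streamed columns, every subset of size ℓ being equally likely (the classical sequential sampling scheme).

```python
import random

def extract_first_batch(columns, n, ell, rng=random):
    """First pass: select exactly `ell` of the n streamed columns
    uniformly at random, deciding online as each column arrives."""
    kept, l = [], 0
    for t, col in enumerate(columns, start=1):
        # extract the t-th column with probability (ell - l) / (n - t + 1)
        if l < ell and rng.random() < (ell - l) / (n - t + 1):
            kept.append((t, col))
            l += 1
    return kept
```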
4 Conclusion

This paper revisited the low rank approximation problem. We proposed a streaming algorithm that samples the data and produces a near optimal solution with a vanishing mean square error. The algorithm uses a memory space scaling linearly with the ambient dimension of the matrix, i.e., the memory required to store the output alone. Its running time scales as the number of sampled entries of the input matrix. The algorithm is relatively simple, and in particular does not exploit the elaborate techniques (such as sparse embedding techniques) recently developed to reduce the memory requirement and complexity of algorithms addressing various problems in linear algebra.
References

[AM07] Dimitris Achlioptas and Frank McSherry. Fast computation of low-rank matrix approximations. Journal of the ACM (JACM), 54(2):9, 2007.

[BJS15] Srinadh Bhojanapalli, Prateek Jain, and Sujay Sanghavi. Tighter low-rank approximation via sampling the leveraged element. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 902-920. SIAM, 2015.

[CW09] Kenneth L. Clarkson and David P. Woodruff. Numerical linear algebra in the streaming model. In Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing, pages 205-214. ACM, 2009.

[CW13] Kenneth L. Clarkson and David P. Woodruff. Low rank approximation and regression in input sparsity time. In Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, pages 81-90. ACM, 2013.

[GP14] Mina Ghashami and Jeff M. Phillips. Relative errors for deterministic low-rank matrix approximations. In SODA, pages 707-717. SIAM, 2014.

[HMT11] Nathan Halko, Per-Gunnar Martinsson, and Joel A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217-288, 2011.

[Lib13] Edo Liberty. Simple and deterministic matrix sketching. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 581-588. ACM, 2013.

[MCJ13] Ioannis Mitliagkas, Constantine Caramanis, and Prateek Jain. Memory limited, streaming PCA. In Advances in Neural Information Processing Systems, 2013.

[Tro11] Joel A. Tropp. Improved analysis of the subsampled randomized Hadamard transform. Advances in Adaptive Data Analysis, 3(01n02):115-126, 2011.

[Tro12] Joel A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12(4):389-434, 2012.

[Woo14] David Woodruff. Low rank approximation lower bounds in row-update streams. In Advances in Neural Information Processing Systems, pages 1781-1789, 2014.
5,446 | 593 | Improving Performance in Neural Networks
Using a Boosting Algorithm
Harris Drucker
AT&T Bell Laboratories
Holmdel, NJ 07733
Robert Schapire
AT&T Bell Laboratories
Murray Hill, NJ 07974
Patrice Simard
AT &T Bell Laboratories
Holmdel, NJ 07733
Abstract
A boosting algorithm converts a learning machine with error rate less
than 50% to one with an arbitrarily low error rate. However, the
algorithm discussed here depends on having a large supply of
independent training samples. We show how to circumvent this
problem and generate an ensemble of learning machines whose
performance in optical character recognition problems is dramatically
improved over that of a single network. We report the effect of
boosting on four databases (all handwritten) consisting of 12,000 digits
from segmented ZIP codes from the United States Postal Service
(USPS) and the following from the National Institute of Standards and
Testing (NIST): 220,000 digits, 45,000 upper case alphas, and 45,000
lower case alphas. We use two performance measures: the raw error
rate (no rejects) and the reject rate required to achieve a 1% error rate
on the patterns not rejected. Boosting improved performance in some
cases by a factor of three.
1 INTRODUCTION
In this article we summarize a study on the effects of a boosting algorithm on the
performance of an ensemble of neural networks used in optical character recognition
problems. Full details can be obtained elsewhere (Drucker, Schapire, and Simard, 1993).
The "boosting by filtering" algorithm is based on Schapire's original work (1990) which
showed that it is theoretically possible to convert a learning machine with error rate less
than 50% into an ensemble of learning machines whose error rate is arbitrarily low. The
work detailed here is the first practical implementation of this boosting algorithm.
As applied to an ensemble of neural networks using supervised learning, the algorithm
proceeds as follows: Assume an oracle that generates a large number of independent
training examples. First, generate a set of training examples and train a first network.
After the first network is trained it may be used in combination with the oracle to produce
a second training set in the following manner: Flip a fair coin. If the coin is heads, pass
outputs from the oracle through the first learning machine until the first network
misclassifies a pattern and add this pattern to a second training set. Otherwise, if the coin
is tails pass outputs from the oracle through the first learning machine until the first
network finds a pattern that it classifies correctly and add to the training set. This process
is repeated until enough patterns have been collected. These patterns, half of which the
first machine classifies correctly and half incorrectly, constitute the training set for the
second network. The second network may then be trained.
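As an illustration, this filtering step can be sketched in a few lines of Python. This is our own schematic, not the paper's code: `oracle` (an iterator of (pattern, label) pairs) and `net1` (the trained first network, returning a predicted label) are hypothetical names, and the sketch assumes the first network still makes occasional mistakes so the search terminates.

```python
import random

def filtered_training_set(oracle, net1, size):
    """Build the second training set: half patterns the first network
    misclassifies, half it classifies correctly, chosen by coin flips."""
    data = []
    while len(data) < size:
        want_error = random.random() < 0.5     # heads: look for a mistake
        x, y = next(oracle)
        while (net1(x) != y) != want_error:    # discard until the filter matches
            x, y = next(oracle)
        data.append((x, y))
    return data
```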
The first two networks may then be used to produce a third training set in the following
manner: Pass the outputs from the oracle through the first two networks. If the networks
disagree on the classification, add this pattern to the training set. Otherwise, toss out the
pattern. Continue this until enough patterns are generated to form the third training set.
This third network is then trained.
In the final testing phase (of Schapire's original scheme), the test patterns (never
previously used for training or validation) are passed through the three networks and
labels assigned using the following voting scheme: If the first two networks agree, that is
the label. Otherwise, assign the label as classified by the third network. However, we
have found that if we add together the three sets of outputs from each of the three
networks to obtain one set of ten outputs (for the digits) or one set of twenty-size outputs
(for the alphas) we obtain better results. Typically, the error rate is reduced by .5% over
straight voting.
The rationale for the better performance using addition is as follows: A voting criterion is
a hard-decision rule. Each voter in the ensemble has an equal vote whether in fact the
voter has high confidence (large difference between the two largest outputs in a particular
network) or low confidence (small difference between the two largest outputs). By
summing the outputs (a soft-decision rule) we incorporate the confidence of the networks
into the total output. As will be seen later, this also allows us to build an ensemble with
only two voters rather than three as called for in the original algorithm.
Conceptually, this process could be iterated in a recursive manner to produce an ensemble
of nine networks, twenty-seven networks, etc. However, we have found significant
improvement in going from one network to only three. The penalty paid is potentially an
increase by a factor of three in evaluating the performance (we attribute no penalty to the
increased training time). However it can show how to reduce this to a factor of 1.75
using sieving procedures.
2 A DEFORMATION MODEL
The proof that boosting works depends on the assumption of three independent training
sets. Without a very large training set, this is not possible unless that error rates are large.
After training the first network, unless the network has very poor performance, there are
not enough remaining samples to generate the second training set. For example, suppose
we had 9000 total examples and used the first 3000 to train the first network and that
network achieves a 5% error rate. We would like the next training set to consist of 1500
patterns that the first network classifies incorrectly and 1500 that the first network
classifies correctly. At a 5% error rate, we need approximately 30,000 new images to
pass through the first network to find 1500 patterns that the first network classifies
incorrectly. These many patterns are not available. Instead we will generate additional
patterns by using small deformations around the finite training set based on the
techniques of Simard (Simard, et al., 1992).
The image consists of a square pixel array (we use both 16x16 and 20x20). Let the intensity of the image at coordinate location (i,j) be F_ij(x,y), where the (x,y) denotes that F is a differentiable and hence continuous function of x and y. i and j take on the discrete values 0,1,...,15 for a 16x16 pixel array.

The change in F at location (i,j) due to small x-translation, y-translation, rotation, diagonal deformation, axis deformation, scaling and thickness deformation is given by the following respective matrix inner products with the partial derivatives of F:

ΔF_ij(x,y) = (∂F_ij(x,y)/∂x)(k₁ + k₃y + k₄y + k₅x + k₆x) + (∂F_ij(x,y)/∂y)(k₂ − k₃x + k₄x − k₅y + k₆y) + k₇‖∇F_ij(x,y)‖²,

where the k's are small values and x and y are referenced to the center of the image. This construction depends on obtaining the two partial derivatives.

For example, if all the k's except k₁ are zero, then ΔF_ij(x,y) = k₁ ∂F_ij(x,y)/∂x is the amount by which F_ij(x,y) at coordinate location (i,j) changes due to an x-translation of value k₁.
The diagonal deformation can be conceived of as pulling on two opposite corners of the
image thereby stretching the image along the 45 degree axis (away from the center) while
simultaneously shrinking the image towards the center along a - 45 degree axis. If k4
changes sign, we push towards the center along the 45 degree axis and pull away along
the - 45 degree axis. Axis deformation can be conceived as pulling (or pushing) away
from the center along the x-axis while pushing (or pulling) towards the center along the
y-axis.
If all the k's except k7 are zero, then M'jj(x,y) = k711 VFjj(x,y) I j2 is the norm squared of
the gradient of the intensity. It can be shown that this corresponds to varying the
"thickness" of the image.
Typically the original image is very coarsely quantized and not differentiable. Smoothing of the original image is done by numerically convolving the original image with a 5x5 square kernel whose elements are values from the Gaussian exp(−(x² + y²)/σ²), to give us a 16x16 or 20x20 square matrix of smoothed values.
A matrix of partial derivatives (with respect to x) for each pixel location is obtained by convolving the original image with a kernel whose elements are the derivatives with respect to x of the Gaussian function. We can similarly form a matrix of partial derivatives with respect to y. A new image can then be constructed by adding together the smoothed image and a differential matrix whose elements are given by the above equation.
Using the above equation, we may simulate an oracle by cycling through a finite sized
training set, picking random values (uniformly distributed in some small range) of the
constants k for each new image. The choice of the range of k is somewhat critical: too
small and the new image is too close to the old image for the neural network to consider
it a "new" pattern. Too large and the image is distorted and nonrepresentative of "real"
data. We will discuss the proper choice of k later.
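A compact sketch of such an oracle is given below, assuming SciPy is available. It is our own simplification of the scheme described above: for brevity it takes gradients of the smoothed image with np.gradient rather than convolving with explicit Gaussian-derivative kernels, and the coefficient range in the usage note is a placeholder, not the paper's tuned choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def deformed(image, k, sigma=1.0):
    """Apply the seven-parameter deformation; k holds k1..k7."""
    F = gaussian_filter(image.astype(float), sigma)   # smoothed image
    Fy, Fx = np.gradient(F)                           # dF/dy, dF/dx
    n = F.shape[0]
    y, x = np.mgrid[0:n, 0:n] - (n - 1) / 2.0         # centered coordinates
    dF = (Fx * (k[0] + (k[2] + k[3]) * y + (k[4] + k[5]) * x)
          + Fy * (k[1] + (k[3] - k[2]) * x + (k[5] - k[4]) * y)
          + k[6] * (Fx ** 2 + Fy ** 2))               # thickness term
    return F + dF

# One oracle sample around a training image:
# rng = np.random.default_rng()
# new_image = deformed(old_image, rng.uniform(-0.1, 0.1, size=7))
```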
3 NETWORK ARCHITECTURES
We use as the basic learning machine a neural network with extensive use of shared
weights (LeCun et al., 1989, 1990). Typically the number of weights is much less than
the number of connections. We believe this leads to a better ability to reject images (i.e.,
no decision made) and thereby minimizes the number of rejects needed to obtain a given
error rate on images not rejected. However, there is conflicting evidence (Martin &
Pitman, 1991) that given enough training patterns, fully connected networks give similar
performance to networks using weight sharing. For the digits there is a 16 by 16 input
surrounded by a six pixel border to give a 28 by 28 input layer. The network has 4645
neurons, 2578 different weights, and 98442 connections.
The networks used for the alpha characters use a 20 by 20 input surrounded by a six pixel
border to give a 32 by 32 input layer. There are larger feature maps and more layers, but
essentially the same construction as for the digits.
4 TRAINING ALGORITHM
The training algorithm is described in general terms: Ideally, the data set should be
broken up into a training set, a validation set and a test set. The training set and
validation set are smoothed (no deformations) and the first network trained using a
quasi-Newton procedure. We alternately train on the training data and test on the
validation data until the error rate on the validation data reaches a minimum. Typically,
there is some overtraining in that the error rate on the training data continues to decrease
after the error rate on the validation set reaches a minimum.
Once the first network is trained, the second set of training data is generated by cycling
deformed training data through the first network. After the pseudo-random tossing of a
fair coin, if the coin is heads, deformed images are passed through the first network until
the network makes a mistake. If tails, deformed images are passed through the network
until the network makes a correct labeling. Each deformed image is generated from the
original image by randomly selecting values of the constants k. It may require multiple
passes through the training data to generate enough deformed images to form the second
training set.
Recall that the second training set will consist equally of images that the first network
misclassifies and images that the the first network classifies correctly. The total size of
the training set is that of the first training set. Correctly classified images are not hard to
find if the error rate of the first network is low. However, we only accept these images
with probability 50%. The choice of the range of the random variables k should be such
that the deformed images do not look distorted. The choice of the range of the k' s is
good if the error rate using the first network on the deformed patterns is approximately
the same as the error rate of the first network on the validation set (NOT the first training
set).
A second network is now trained on this new training set in the alternate train/test
procedure using the original validation set (not deformed) as the test set. Since this
training data is much more difficult to learn than the first training data, typically the error
rate on the second training set using the second trained network will be higher
(sometimes much higher) than the error rates of the first network on either the first
training set or the validation set. Also, the error rate on the validation set using the
second network will be higher than that of the first network because the network is trying
to generalize from difficult training data, 50% of which the first network could not
recognize.
The third training set is formed by once again generating deformed images and presenting
the images to both the first and second networks. If the networks disagree (whether both
are wrong or just one is), then that image is added to the third training set. The network
is trained using this new training data and tested on the original validation set.
Typically, the error rate on the validation set using the third network will be much higher
than either of the first two networks on the same validation set.
The three networks are then tested on the third set of data, which is the smoothed test
data. According to the original algorithm we should observe the outputs of the first two
networks. If the networks agree, accept that labeling, otherwise use the labeling assigned
by the third network. However, we are interested in more than a low error rate. We have
a second criterion, namely the percent of the patterns we have to reject (i.e. no
classification decision) in order to achieve a 1% error rate. The rationale for this is that if
an image recognizer is used to sort ZIP codes (or financial statements) it is much less
expensive to hand sort some numbers than to accept all and send mail to the wrong
address or credit the wrong account. From now on we shall call this latter criterion the
reject rate (without appending each time the statement "for a 1% error rate on the patterns
not rejected").
For a single neural network, a reject criterion is to compare the two (of the ten or twenty-six) largest outputs of the network. If the difference is great, there is high confidence that
the maximum output is the correct classification. Therefore, a critical threshold is set
such that if the difference is smaller then that threshold, the image is rejected. The
threshold is set so that the error rate on the patterns not rejected is 1%.
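The soft-decision combination and the reject rule can be sketched as follows; the function names and the threshold search are ours, assuming `scores` is the summed (patterns × classes) output array of the ensemble and `labels` holds the true classes.

```python
import numpy as np

def summed_scores(per_net_outputs):
    """Soft decision: add the output arrays of the ensemble members."""
    return np.sum(per_net_outputs, axis=0)

def reject_threshold(scores, labels, target_error=0.01):
    """Smallest top-two margin to accept so that the error rate on the
    accepted (non-rejected) patterns stays at or below target_error."""
    top2 = np.sort(scores, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]
    correct = scores.argmax(axis=1) == labels
    order = np.argsort(-margin)                # most confident first
    rates = np.cumsum(~correct[order]) / np.arange(1, len(order) + 1)
    ok = np.where(rates <= target_error)[0]
    return margin[order][ok[-1]] if len(ok) else np.inf
```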
5 RESULTS
The boosting algorithm was first used on a database consisting of segmented ZIP codes
from the United States Postal Service (USPS) divided into 9709 training examples and
2007 validation samples.
The samples supplied to us from the USPS were machine segmented from zip codes and
labeled but not size normalized. The validation set consists of approximately 2% badly
segmented characters (incomplete segmentations, decapitated fives, etc.). The training set
was cleaned thus the validation set is significantly more difficult than the training set.
The data was size normalized to fit inside a 16x16 array, centered, and deslanted. There
is no third group of data called the "test set" in the sense described previously even
though the validation error rate has been commonly called the test error rate in prior work
(LeCun. et. al., 1989, 1990).
Within the 9709 training digits are some machine printed digits which have been found to
improve performance on the validation set. This data set has an interesting history having
been around for three years with an approximate 5% error rate and 10% reject rate using
our best neural network. There has been a slight improvement using double
backpropagation (Drucker & LeCun. 1991) bringing down the error rate to 4.7% and the
reject rate to 8.9% but nothing dramatic. This network. which has a 4.7% error rate was
retrained on smoothed data by starting from the best set of weights. The second and third
networks were trained as described previously with the following key numbers:
The retrained first network has a training error rate of less than 1%, a test error rate of
4.9%, and a test reject rate of 11.5%.
We had to pass 153,000 deformed images (recycling the 9709 training set) through the
trained first network to obtain another 9709 training images. Of these 9709 images.
approximately one-half are patterns that the first network misclassifies. This means that
the first network has a 3.2% error rate on the deformed images, far above the error rate on
the original training images.
A second network is trained and gives a 5.8% test error rate.
To generate the last training set we passed 195,000 patterns (again recycling the 9709) to
give another set of 9709 training patterns. Therefore, the first two nets disagreed on 5%
of the deformed patterns.
The third network is trained and gives a test error rate of 16.9%.
Using the original voting scheme for these three networks, we obtained a 4.0% error rate.
a significant improvement over the 4.9% using one network. As suggested before. adding
together the three outputs gives a method of rejecting images with low confidence scores
(when the two highest outputs are too close). For curiosity, we also determined what
would happen if we just added together the first two networks:
Original network: 4.9% test error rate and 11.5% reject rate.
Two networks added: 3.9% test error rate and 7.9% reject rate.
Three networks added: 3.6% test error rate and 6.6% reject rate.
The ensemble of three networks gives a significant improvement, especially in the reject
rate.
In April of 1992, the National Institute of Standards and Technology (NIST) provided a
labeled database of 220,000 digits, 45,000 lower case alphas and 45,000 upper case
alphas. We divided these into training set, validation set, and test set. All data were
resampled and size-normalized to fit into a 16x16 or 20x20 pixel array. For the digits, we
deslanted and smoothed the data before retraining the first 16x16 input neural network
used for the USPS data. After the second training set was generated and the second
network trained the results from adding the two networks together were so good (Table 1)
that we decided not to generate the third training set For the NIST data, the error rates
reported are those of the test data.
TABLE 1. Test error rate and reject rate in percent

DATABASE                      USPS      NIST      NIST upper   NIST lower
                              digits    digits    alphas       alphas
ERROR RATE SINGLE NET          5.0       1.4       4.0           9.8
ERROR RATE USING BOOSTING      3.6        .8       2.4           8.1
REJECT RATE SINGLE NET         9.6       1.0       9.2          29.
REJECT RATE USING BOOSTING     6.6        *        3.1          21.
* Reject rate is not reported if the error rate is below 1%.
6 CONCLUSIONS
In all cases we have been able to boost performance above that of single net. Although
others have used ensembles to improve performance (Srihari, 1990; Benediktsson and
Swain, 1992; Xu, et. al., 1992) the technique used here is particularly straightforward
since the usual multi-classifier system requires a laborious development of each classifier.
There is also a difference in emphasis. In the usual multi-classifier design, each classifier
is trained independently and the problem is how to best combine the classifiers. In
boosting, each network (after the first) has parameters that depend on the prior networks
and we know how to combine the networks (by voting or adding).
7 ACKNOWLEDGEMENTS
We hereby acknowledge the United States Postal Service and the National Institute of
Standards and Technology in supplying the databases.
References

J.A. Benediktsson and P.H. Swain, "Consensus Theoretic Classification Methods", IEEE Trans. on Systems, Man, and Cybernetics, Vol. 22, No. 4, July/August 1992, pp. 688-704.

H. Drucker, R. Schapire, and P. Simard, "Boosting Performance in Neural Networks", International Journal of Pattern Recognition and Artificial Intelligence (to be published, 1993).

H. Drucker and Y. LeCun, "Improving Generalization Performance in Character Recognition", Proceedings of the 1991 IEEE Workshop on Neural Networks for Signal Processing, IEEE Press, pp. 198-207.

Y. LeCun, et al., "Backpropagation Applied to Handwritten Zip Code Recognition", Neural Computation 1, 1989, pp. 541-551.

Y. LeCun, et al., "Handwritten Digit Recognition with a Back-Propagation Network", in D.S. Touretsky (ed.), Advances in Neural Information Processing Systems 2, 1990, pp. 396-404, San Mateo, CA: Morgan Kaufmann Publishers.

G.L. Martin and J.A. Pitman, "Recognizing Hand-Printed Letters and Digits Using Backpropagation Learning", Neural Computation, Vol. 3, 1991, pp. 258-267.

R. Schapire, "The Strength of Weak Learnability", Machine Learning, Vol. 5, No. 2, 1990, pp. 197-227.

P. Simard, "Tangent Prop - A formalism for specifying selected invariances in an adaptive network", in J.E. Moody, S.J. Hanson, and R.P. Lippmann (eds.), Advances in Neural Information Processing Systems 4, 1992, pp. 895-903, San Mateo, CA: Morgan Kaufmann Publishers.

Sargur Srihari, "High-Performance Reading Machines", Proceedings of the IEEE, Vol. 80, No. 7, July 1992, pp. 1120-1132.

C.Y. Suen, et al., "Computer Recognition of Unconstrained Handwritten Numerals", Proceedings of the IEEE, Vol. 80, No. 7, July 1992, pp. 1162-1180.

L. Xu, et al., "Methods of Combining Multiple Classifiers", IEEE Trans. on Systems, Man, and Cybernetics, Vol. 22, No. 3, May/June 1992, pp. 418-435.
5,447 | 5,930 | Stochastic Online Greedy Learning with Semi-bandit Feedbacks
Tian Lin
Tsinghua University
Beijing, China
lintian06@gmail.com
Jian Li
Tsinghua University
Beijing, China
lapordge@gmail.com
Wei Chen
Microsoft Research
Beijing, China
weic@microsoft.com
Abstract
The greedy algorithm has been extensively studied in the field of combinatorial optimization for decades. In this paper, we address the online learning problem when the input to the greedy algorithm is stochastic with unknown parameters that have to be learned over time. We first propose the greedy regret and ε-quasi greedy regret as learning metrics comparing with the performance of the offline greedy algorithm. We then propose two online greedy learning algorithms with semi-bandit feedbacks, which use multi-armed bandit and pure exploration bandit policies at each level of greedy learning, one for each of the regret metrics respectively. Both algorithms achieve an O(log T) problem-dependent regret bound (T being the time horizon) for a general class of combinatorial structures and reward functions that allow greedy solutions. We further show that the bound is tight in T and other problem instance parameters.
1 Introduction
The greedy algorithm is simple and easy-to-implement, and can be applied to solve a wide range of
complex optimization problems, either with exact solutions (e.g. minimum spanning tree [19, 25])
or approximate solutions (e.g. maximum coverage [11] or influence maximization [17]). Moreover,
for many practical problems, the greedy algorithm often serves as the first heuristic of choice and
performs well in practice even when it does not provide a theoretical guarantee.
The classical greedy algorithm assumes that a certain reward function is given, and it constructs the
solution iteratively. In each phase, it searches for a local optimal element to maximize the marginal
gain of reward, and add it to the solution. We refer to this case as the offline greedy algorithm with
a given reward function, and the corresponding problem the offline problems. The phase-by-phase
process of the greedy algorithm naturally forms a decision sequence to illustrate the decision flow in
finding the solution, which is named as the greedy sequence. We characterize the decision class as an
accessible set system, a general combinatorial structure encompassing many interesting problems.
In many real applications, however, the reward function is stochastic and is not known in advance,
and the reward is only instantiated based on the unknown distribution after the greedy sequence is
selected. For example, in the influence maximization problem [17], social influence are propagated
in a social network from the selected seed nodes following a stochastic model with unknown parameters, and one wants to find the optimal seed set of size k that generates the largest influence
spread, which is the expected number of nodes influenced in a cascade. In this case, the reward of
seed selection is only instantiated after the seed selection, and is only one of the random outcomes.
Therefore, when the stochastic reward function is unknown, we aim at maximizing the expected
reward overtime while gradually learning the key parameters of the expected reward functions. This
falls in the domain of online learning, and we refer the online algorithm as the strategy of the player,
who makes sequential decisions, interacts with the environment, obtains feedbacks, and accumulates
her reward. For online greedy algorithms in particular, at each time step the player selects and plays
a candidate decision sequence while the environment instantiates the reward function, and then the
player collects the values of instantiated function at every phase of the decision sequence as the
feedbacks (thus the name of semi-bandit feedbacks [2]), and takes the value of the final phase as the
reward cumulated in this step.
The typical objective for an online algorithm is to make sequential decisions against the optimal
solution in the offline problem where the reward function is known a priori. For online greedy
algorithms, instead, we compare it with the solution of the offline greedy algorithm, and minimize
their gap of the cumulative reward over time, termed as the greedy regret. Furthermore, in some
problems such as influence maximization, the reward function is estimated with error even for the
offline problem [17] and thus the greedily selected element at each phase may contain some error.
We call such a greedy sequence an ε-quasi greedy sequence. To accommodate these cases, we also define the metric of ε-quasi greedy regret, which compares the online solution against the minimum offline solution from all ε-quasi greedy sequences.
In this paper, we propose two online greedy algorithms targeted at two regret metrics respectively.
The first algorithm, OG-UCB, uses the stochastic multi-armed bandit (MAB) [22, 8], in particular the well-known UCB policy [3], as the building block to minimize the greedy regret. We apply the UCB policy to every phase by associating a confidence bound with each arm, and then greedily choose the arm having the highest upper confidence bound in the process of decision. For the second scenario, where we allow tolerating an ε-error in each phase, we propose a first-explore-then-exploit algorithm, OG-LUCB, to minimize the ε-quasi greedy regret. For every phase in the greedy process, OG-LUCB applies the LUCB policy [16, 9], which depends on the upper and lower confidence bounds to eliminate arms. It first explores each arm until the lower bound of one arm is higher than the upper bound of any other arm within an ε-error; then the stage of the current phase is switched to exploiting that best arm, and the algorithm continues to the next phase. Both OG-UCB and OG-LUCB achieve a problem-dependent O(log T) bound in terms of the respective regret metrics, where the coefficient in front of log T depends on direct elements along the greedy sequence (a.k.a., its decision frontier) corresponding to the instance of the learning problem. The two algorithms have complementary advantages: when we really target the greedy regret (setting ε to 0 for OG-LUCB), OG-UCB has a slightly better regret guarantee and does not need an artificial switch between exploration and exploitation; when we are satisfied with the ε-quasi greedy regret, OG-LUCB works but OG-UCB cannot be adapted for this case and may suffer a larger regret. We also show a problem instance in this paper where the upper bound is tight to the lower bound in T and other problem parameters.
We further show our algorithms can be easily extended to the knapsack problem, and applied to
the stochastic online maximization for consistent functions and submodular functions, etc., in the
supplementary material.
To summarize, our contributions include the following: (a) to the best of our knowledge, we are the first to propose a framework using the greedy regret and ε-quasi greedy regret to characterize the online performance of the stochastic greedy algorithm for different scenarios, and it works for a wide class of accessible set systems and general reward functions; (b) we propose Algorithms OG-UCB and OG-LUCB that achieve a problem-dependent O(log T) regret bound; and (c) we also show that the upper bound matches the lower bound (up to a constant factor).
Due to the space constraint, the analysis of algorithms, applications and empirical evaluation of the
lower bound are moved to the supplementary material.
Related Work. The multi-armed bandit (MAB) problem for both stochastic and adversarial settings [22, 4, 6] has been widely studied for decades. Most work focus on minimizing the cumulative
regret over time [3, 14], or identifying the optimal solution in terms of pure exploration bandits
[1, 16, 7]. Among those work, there is one line of research that generalizes MAB to combinatorial
learning problems [8, 13, 2, 10, 21, 23, 9]. Our paper belongs to this line considering stochastic
learning with semi-bandit feedbacks, while we focus on the greedy algorithm, the structure and its
performance measure, which have not been addressed.
The classical greedy algorithms in the offline setting are studied in many applications [19, 25, 11, 5],
and there is a line of work [15, 18] focusing on characterizing the greedy structure for solutions. We
adopt their characterizations of accessible set systems to the online setting of the greedy learning.
There is also a branch of work using the greedy algorithm to solve online learning problem, while
they require the knowledge of the exact form of reward function, restricting to special functions such
as linear [2, 20] and submodular rewards [26, 12]. Our work does not assume the exact form, and it
covers a much larger class of combinatorial structures and reward functions.
2 Preliminaries
The online combinatorial learning problem can be formulated as a repeated game between the environment and the player under the stochastic multi-armed bandit framework.

Let E = {e_1, e_2, . . . , e_n} be a finite ground set of size n, and F be a collection of subsets of E. We consider the accessible set system (E, F) satisfying the following two axioms: (1) ∅ ∈ F; (2) if S ∈ F and S ≠ ∅, then there exists some e in E such that S \ {e} ∈ F. We define any set S ⊆ E as a feasible set if S ∈ F. For any S ∈ F, its accessible set is defined as N(S) := {e ∈ E \ S : S ∪ {e} ∈ F}. We say a feasible set S is maximal if N(S) = ∅. Define the largest length of any feasible set as m := max_{S∈F} |S| (m ≤ n), and the largest width of any feasible set as W := max_{S∈F} |N(S)| (W ≤ n). We say that such an accessible set system (E, F) is the decision class of the player. In the class of combinatorial learning problems, the size of F is usually very large (e.g., exponential in m, W and n).

Beginning with an empty set, the accessible set system (E, F) ensures that any feasible set S can be acquired by adding elements one by one in some order (cf. Lemma A.1 in the supplementary material for more details), which naturally forms the decision process of the player. For convenience, we say the player can choose a decision sequence, defined as an ordered tuple of feasible sets σ := ⟨S_0, S_1, . . . , S_k⟩ ∈ F^{k+1} satisfying ∅ = S_0 ⊂ S_1 ⊂ · · · ⊂ S_k and, for any i = 1, 2, . . . , k, S_i = S_{i−1} ∪ {s_i} where s_i ∈ N(S_{i−1}). Besides, define a decision sequence σ as maximal if and only if S_k is maximal.

Let Ω be an arbitrary set. The environment draws i.i.d. samples from Ω as ω_1, ω_2, . . . , at each time t = 1, 2, . . . , by following a predetermined but unknown distribution. Consider a reward function f : F × Ω → R that is bounded and non-decreasing¹ in the first parameter, while the exact form of the function is agnostic to the player. We use the shorthand f_t(S) := f(S, ω_t) to denote the reward for any given S at time t, and denote the expected reward as f̄(S) := E_{ω_1}[f_1(S)], where the expectation E_{ω_t} is taken over the randomness of the environment at time t. For ease of presentation, we assume that the reward function for any time t is normalized with arbitrary alignment as follows: (1) f_t(∅) = L (for any constant L ≥ 0); (2) for any S ∈ F, e ∈ N(S), f_t(S ∪ {e}) − f_t(S) ∈ [0, 1]. Therefore, the reward function f(·, ·) is implicitly bounded within [L, L + m].

We extend the concept of arms in MAB, and introduce the notation a := e|S to define an arm, representing the selected element e based on the prefix S, where S is a feasible set and e ∈ N(S); and define A := {e|S : ∀S ∈ F, ∀e ∈ N(S)} as the arm space. Then, we can define the marginal reward for function f_t as f_t(e|S) := f_t(S ∪ {e}) − f_t(S), and the expected marginal reward for f̄ as f̄(e|S) := f̄(S ∪ {e}) − f̄(S). Notice that the use of arms characterizes the marginal reward, and also indicates that it is related to the player's previous decision.
2.1 The Offline Problem and the Offline Greedy Algorithm

In the offline problem, we assume that f̄ is provided as a value oracle. Therefore, the objective is to find the optimal solution S* = arg max_{S∈F} f̄(S), which only depends on the player's decision. When the optimal solution is computationally hard to obtain, we are usually interested in finding a feasible set S⁺ ∈ F such that f̄(S⁺) ≥ α f̄(S*) where α ∈ (0, 1]; then S⁺ is called an α-approximation solution. That is a typical case where the greedy algorithm comes into play.
The offline greedy algorithm is a local search algorithm that refines the solution phase by phase. It goes as follows: (a) let G_0 = ∅; (b) for each phase k = 0, 1, . . . , find g_{k+1} = arg max_{e∈N(G_k)} f̄(e|G_k), and let G_{k+1} = G_k ∪ {g_{k+1}}; (c) the process ends when N(G_{k+1}) = ∅ (i.e., G_{k+1} is maximal). We define the maximal decision sequence σ^G := ⟨G_0, G_1, . . . , G_{m_G}⟩ (m_G is its length) found by the offline greedy algorithm as the greedy sequence. For simplicity, we assume that it is unique.

¹ Therefore, the optimal solution is a maximal decision sequence.
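To make the phase-by-phase structure concrete, the following Python sketch implements the offline greedy procedure above. The names `ground_set`, `is_feasible`, and `f_bar` are ours for illustration only: `is_feasible(S)` is an assumed membership oracle for F, and `f_bar(S)` an assumed value oracle for the expected reward.

```python
# A minimal sketch of the offline greedy algorithm over an accessible set
# system; oracles `is_feasible` and `f_bar` are illustrative assumptions.

def accessible(S, ground_set, is_feasible):
    """N(S): elements that can extend the feasible set S."""
    return [e for e in ground_set if e not in S and is_feasible(S | {e})]

def offline_greedy(ground_set, is_feasible, f_bar):
    G = frozenset()            # G_0 = empty set
    sequence = [G]
    while True:
        candidates = accessible(G, ground_set, is_feasible)
        if not candidates:     # N(G) is empty: G is maximal
            return sequence
        # pick the element with the largest expected marginal reward
        g = max(candidates, key=lambda e: f_bar(G | {e}) - f_bar(G))
        G = G | {g}
        sequence.append(G)
```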
One important feature is that the greedy algorithm uses a polynomial number of calls (poly(m, W, n)) to the offline oracle, even though the size of F or A may be exponentially large. In some cases, such as the offline influence maximization problem [17], the value of f̄(·) can only be accessed with some error or estimated approximately. Sometimes, even though f̄(·) can be computed exactly, we may only need an approximate maximizer in each greedy phase in favor of computational efficiency (e.g., efficient submodular maximization [24]). To capture such scenarios, we say a maximal decision sequence σ = ⟨S_0, S_1, . . . , S_{m'}⟩ is an ε-quasi greedy sequence (ε ≥ 0) if the greedy decision can tolerate an ε error in every phase, i.e., for each k = 0, 1, . . . , m'−1 with S_{k+1} = S_k ∪ {s_{k+1}}, f̄(s_{k+1}|S_k) ≥ max_{s∈N(S_k)} f̄(s|S_k) − ε. Notice that there could be many ε-quasi greedy sequences, and we denote σ^Q := ⟨Q_0, Q_1, . . . , Q_{m_Q}⟩ (m_Q is its length) as the one with the minimum reward, that is, f̄(Q_{m_Q}) is minimized over all ε-quasi greedy sequences.
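The per-phase tolerance is easy to verify mechanically. Below is a small checker (reusing the illustrative `accessible`, `is_feasible`, and `f_bar` from the sketch above) for whether a maximal decision sequence is ε-quasi greedy: every phase's marginal gain must be within ε of the best available marginal gain.

```python
# A small sketch: check the eps-quasi greedy property of a decision sequence.

def is_eps_quasi_greedy(sequence, ground_set, is_feasible, f_bar, eps):
    for prev, cur in zip(sequence, sequence[1:]):
        gain = f_bar(cur) - f_bar(prev)
        best = max(f_bar(prev | {e}) - f_bar(prev)
                   for e in accessible(prev, ground_set, is_feasible))
        if gain < best - eps:      # this phase exceeds the eps tolerance
            return False
    return True
```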
2.2 The Online Problem

In the online case, in contrast, f is not provided. The player can only access one of the functions f_1, f_2, . . . , generated by the environment, for each time step during a repeated game.

For each time t, the game proceeds in the following three steps: (1) the environment draws an i.i.d. sample ω_t ∈ Ω from its predetermined distribution without revealing it; (2) the player may, based on her previous knowledge, select a decision sequence σ^t = ⟨S_0, S_1, . . . , S_{m_t}⟩, which reflects the process of her decision phase by phase; (3) then, the player plays σ^t and gains the reward f_t(S_{m_t}), while observing the intermediate feedbacks f_t(S_0), f_t(S_1), . . . , f_t(S_{m_t}) to update her knowledge. We refer to such feedbacks as semi-bandit feedbacks in the decision order.

For any time t = 1, 2, . . . , denote σ^t = ⟨S_0^t, S_1^t, . . . , S_{m_t}^t⟩ and S^t := S_{m_t}^t. The player is to make sequential decisions, and the classical objective is to minimize the cumulative gap of rewards against the optimal solution [3] or the approximation solution [10]. For example, when the optimal solution S* = arg max_{S∈F} E[f_1(S)] can be solved in the offline problem, we minimize the expected cumulative regret R(T) := T · E[f_1(S*)] − Σ_{t=1}^{T} E[f_t(S^t)] over the time horizon T, where the expectation is taken over the randomness of the environment and the possibly random algorithm of the player. In this paper, we are interested in online algorithms that are comparable to the solution of the offline greedy algorithm, namely the greedy sequence σ^G = ⟨G_0, G_1, . . . , G_{m_G}⟩. Thus, the objective is to minimize the greedy regret, defined as

R^G(T) := T · E[f_1(G_{m_G})] − Σ_{t=1}^{T} E[f_t(S^t)].    (1)

Given ε ≥ 0, we define the ε-quasi greedy regret as

R^Q_ε(T) := T · E[f_1(Q_{m_Q})] − Σ_{t=1}^{T} E[f_t(S^t)],    (2)

where σ^Q = ⟨Q_0, Q_1, . . . , Q_{m_Q}⟩ is the minimum ε-quasi greedy sequence.

We remark that if the offline greedy algorithm provides an α-approximation solution (with 0 < α ≤ 1), then the greedy regret (or ε-quasi greedy regret) also provides an α-approximation regret, which is the regret compared to the α fraction of the optimal solution, as defined in [10].

In the rest of the paper, our goal is to design the player's policy so that it is comparable to the offline greedy algorithm; in other words, R^G(T)/T = f̄(G_{m_G}) − (1/T) Σ_{t=1}^{T} E[f_t(S^t)] = o(1). Thus, achieving sublinear greedy regret R^G(T) = o(T) is our main focus.
3 The Online Greedy and Algorithm OG-UCB
In this section, we propose our Online Greedy (OG) algorithm with the UCB policy to minimize the greedy regret (defined in (1)).

For any arm a = e|S ∈ A, playing a at each time t yields the marginal reward as a random variable X_t(a) = f_t(a), in which the random event ω_t ∈ Ω is i.i.d., and we denote μ(a) as its true mean (i.e., μ(a) := E[X_1(a)]).
Algorithm 1 OG
Require: MaxOracle
1: for t = 1, 2, . . . do                                            ▷ online greedy procedure
2:   S_0 ← ∅; k ← 0; h_0 ← true
3:   repeat
4:     A ← {e|S_k : ∀e ∈ N(S_k)};  t′ ← Σ_{a∈A} N(a) + 1
5:     (s_{k+1}|S_k, h_k) ← MaxOracle(A, X̄(·), N(·), t′)            ▷ find the current maximal
6:     S_{k+1} ← S_k ∪ {s_{k+1}};  k ← k + 1
7:   until N(S_k) = ∅                                                ▷ until a maximal sequence is found
8:   Play sequence σ^t ← ⟨S_0, . . . , S_k⟩, observe {f_t(S_0), . . . , f_t(S_k)}, and gain f_t(S_k).
9:   for all i = 1, 2, . . . , k do                                   ▷ update according to signals from MaxOracle
10:    if h_0, h_1, . . . , h_{i−1} are all true then
11:      Update X̄(s_i|S_{i−1}) and N(s_i|S_{i−1}) according to (3).
Subroutine 2 UCB(A, X̄(·), N(·), t) to implement MaxOracle
Setup: confidence radius rad_t(a) := √(3 ln t / (2N(a))), for each a ∈ A
1: if ∃a ∈ A, X̄(a) is not initialized then
2:   return (a, true)                                                 ▷ to initialize arms
3: else
4:   I_t^+ ← arg max_{a∈A} { X̄(a) + rad_t(a) }, and return (I_t^+, true)   ▷ apply UCB's rule; break ties arbitrarily
Let X̄(a) be the empirical mean for the marginal reward of a, and N(a) be the counter of its plays. More specifically, denote X̄_t(a) and N_t(a) for the particular X̄(a) and N(a) at the beginning of time step t; they are evaluated as follows:

X̄_t(a) = ( Σ_{i=1}^{t−1} f_i(a) I_i(a) ) / ( Σ_{i=1}^{t−1} I_i(a) ),    N_t(a) = Σ_{i=1}^{t−1} I_i(a),    (3)

where I_i(a) ∈ {0, 1} indicates whether a is updated at time i. In particular, assume that our algorithm is lazy-initialized so that each X̄(a) and N(a) is 0 by default, until a is played.
The Online Greedy algorithm (OG) proposed in Algorithm 1 serves as a meta-algorithm allowing different implementations of Subroutine MaxOracle. For every time t, OG calls MaxOracle (Line 5, to be specified later) to find the local maximal phase by phase, until the decision sequence σ^t is made. Then, it plays sequence σ^t, observes feedbacks and gains the reward (Line 8). Meanwhile, OG collects the Boolean signals (h_k) from MaxOracle during the greedy process (Line 5), and updates the estimators X̄(·) and N(·) according to those signals (Line 10). On the other hand, MaxOracle takes accessible arms A, estimators X̄(·), N(·), and counted time t′, and returns an arm from A and a signal h_k ∈ {true, false} to instruct OG whether to update estimators for the following phase.

The classical UCB [3] can be used to implement MaxOracle, as described in Subroutine 2. We term our algorithm OG, in which MaxOracle is implemented by Subroutine 2 (UCB), as Algorithm OG-UCB. A few remarks are in order. First, Algorithm OG-UCB chooses an arm with the highest upper confidence bound for each phase. Second, the signal h_k is always true, meaning that OG-UCB always updates the empirical means of arms along the decision sequence. Third, because we use lazy-initialized X̄(·) and N(·), memory is allocated only when it is needed.
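For concreteness, here is a compact, illustrative Python sketch of OG-UCB under the same assumed oracles as before; `play` is an assumed environment callback returning the semi-bandit feedbacks f_t(S_0), ..., f_t(S_k), and the `inf` shortcut simply forces each uninitialized arm to be tried once, mirroring Lines 1-2 of Subroutine 2.

```python
import math
from collections import defaultdict

# A sketch of OG-UCB (Algorithm 1 with Subroutine 2); arms e|S are keyed by
# (e, frozenset(S)).  `ground_set`, `is_feasible`, `play` are assumptions.

def og_ucb(ground_set, is_feasible, play, horizon):
    mean = defaultdict(float)    # empirical means X_bar(a), lazy-initialized
    count = defaultdict(int)     # play counters N(a)
    for t in range(1, horizon + 1):
        S, sequence = frozenset(), [frozenset()]
        while True:
            arms = [(e, S) for e in ground_set
                    if e not in S and is_feasible(S | {e})]
            if not arms:         # S is maximal
                break
            t_prime = sum(count[a] for a in arms) + 1
            def ucb(a):          # uninitialized arms are forced first
                if count[a] == 0:
                    return float("inf")
                return mean[a] + math.sqrt(3 * math.log(t_prime) / (2 * count[a]))
            e, _ = max(arms, key=ucb)
            S = S | {e}
            sequence.append(S)
        rewards = play(sequence)            # f_t(S_0), ..., f_t(S_k)
        for i in range(1, len(sequence)):   # update marginal-reward estimates
            a = (next(iter(sequence[i] - sequence[i - 1])), sequence[i - 1])
            x = rewards[i] - rewards[i - 1]
            count[a] += 1
            mean[a] += (x - mean[a]) / count[a]
```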
3.1 Regret Bound of OG-UCB

For any feasible set S, define the greedy element for S as g*_S := arg max_{e∈N(S)} f̄(e|S), and we use N_−(S) := N(S) \ {g*_S} for convenience. Denote F* := {S ∈ F : S is maximal} as the collection of all maximal feasible sets in F. We use the following gaps to measure the performance of the algorithm.
Definition 3.1 (Gaps). The gap between the maximal greedy feasible set G_{m_G} and any S ∈ F is defined as Δ(S) := f̄(G_{m_G}) − f̄(S) if it is positive, and 0 otherwise. We define the maximum gap as Δ_max := f̄(G_{m_G}) − min_{S∈F*} f̄(S), which is the worst penalty for any maximal feasible set. For any arm a = e|S ∈ A, we define the unit gap of a (i.e., the gap for one phase) as

Δ(a) = Δ(e|S) := { f̄(g*_S|S) − f̄(e|S),  if e ≠ g*_S;   f̄(g*_S|S) − max_{e′∈N_−(S)} f̄(e′|S),  if e = g*_S. }    (4)

For any arm a = e|S ∈ A, we define the sunk-cost gap (irreversible once selected) as

Δ*(a) = Δ*(e|S) := max{ f̄(G_{m_G}) − min_{V : V∈F*, S∪{e} ⊑ V} f̄(V), 0 },    (5)

where, for two feasible sets A and B, A ⊑ B means that A is a prefix of B in some decision sequence, that is, there exists a decision sequence σ = ⟨S_0 = ∅, S_1, . . . , S_k⟩ such that S_k = B and S_j = A for some j < k. Thus, Δ*(e|S) is the largest gap we may incur after we have fixed our prefix selection to be S ∪ {e}, and it is upper bounded by Δ_max.

Definition 3.2 (Decision frontier). For any decision sequence σ = ⟨S_0, S_1, . . . , S_k⟩, define the decision frontier Γ(σ) := ∪_{i=1}^{k} {e|S_{i−1} : e ∈ N(S_{i−1})} ⊆ A as the arms that need to be explored in the decision sequence σ, and Γ_−(σ) := ∪_{i=1}^{k} {e|S_{i−1} : ∀e ∈ N_−(S_{i−1})} similarly.
Theorem 3.1 (Greedy regret bound). For any time T, Algorithm OG-UCB (Algorithm 1 with Subroutine 2) achieves the greedy regret

R^G(T) ≤ Σ_{a∈Γ_−(σ^G)} [ 6 Δ*(a) ln T / Δ(a)² + ( π²/3 + 1 ) Δ*(a) ],    (6)

where σ^G is the greedy decision sequence.

When m = 1, the above theorem immediately recovers the regret bound of the classical UCB [3] (with Δ*(a) = Δ(a)). The greedy regret is bounded by O( mW Δ_max log T / Δ² ), where Δ is the minimum unit gap (Δ = min_{a∈A} Δ(a)), and the memory cost is at most proportional to the regret. For a special class of linear bandits, a simple extension in which we treat arms e|S and e|S′ as the same can make OG-UCB essentially the same as OMM in [20], while the regret is O( (n/Δ) log T ) and the memory cost is O(n) (cf. Appendix F.1 of the supplementary material).
4 Relaxing the Greedy Sequence with ε-Error Tolerance

In this section, we propose an online algorithm called OG-LUCB, which learns an ε-quasi greedy sequence, with the goal of minimizing the ε-quasi greedy regret (in (2)). We learn ε-quasi greedy sequences by a first-explore-then-exploit policy, which utilizes results from PAC learning with a fixed confidence setting. In Section 4.1, we implement MaxOracle via the LUCB policy and derive its exploration time; we then assume knowledge of the time horizon T in Section 4.2 and analyze the ε-quasi greedy regret; and in Section 4.3, we show that the assumption of knowing T can be removed.
4.1 OG with a First-Explore-then-Exploit Policy

Given ε ≥ 0 and failure probability δ ∈ (0, 1), we use Subroutine 3 (LUCB_{ε,δ}) to implement the subroutine MaxOracle in Algorithm OG. We call the resulting algorithm OG-LUCB_{ε,δ}. Specifically, Subroutine 3 is adapted from CLUCB-PAC in [9], and specialized to explore the top-one element in the support of [0, 1] (i.e., set R = 1/2, width(M) = 2 and Oracle = arg max in [9]). Assume that I^exploit(·) is lazy-initialized. For each greedy phase, the algorithm first explores each arm in A in the exploration stage, during which the returned flag (the second return field) is always false; once the optimal arm is found (initializing I^exploit(A) with Î_t), it sticks to I^exploit(A) in the exploitation stage for the subsequent time steps, and the returned flag for this phase becomes true. The main algorithm OG then uses these flags in such a way that it updates arm estimates for phase i if and only if all phases
Subroutine 3 LUCB_{ε,δ}(A, X̄(·), N(·), t) to implement MaxOracle
Setup: rad_t(a) := √( ln(4W t³/δ) / (2N(a)) ) for each a ∈ A; I^exploit(·) caches arms for exploitation
1: if I^exploit(A) is initialized then return (I^exploit(A), true)        ▷ in the exploitation stage
2: if ∃a ∈ A, X̄(a) is not initialized then
3:   return (a, false)                                                    ▷ to initialize arms
4: else
5:   Î_t ← arg max_{a∈A} X̄(a)                                            ▷ break ties arbitrarily
6:   ∀a ∈ A, X′(a) ← X̄(a) + rad_t(a) if a ≠ Î_t;  X′(a) ← X̄(a) − rad_t(a) if a = Î_t   ▷ perturb arms
7:   I′_t ← arg max_{a∈A} X′(a)
8:   if X′(I′_t) − X′(Î_t) > ε then                                       ▷ not separated
9:     I″_t ← arg max_{i∈{Î_t, I′_t}} rad_t(i), and return (I″_t, false)  ▷ in the exploration stage
10:  else                                                                 ▷ separated
11:    I^exploit(A) ← Î_t                                                 ▷ initialize I^exploit(A) with Î_t
12:    return (I^exploit(A), true)                                        ▷ in the exploitation stage
for j < i are already in the exploitation stage. This avoids maintaining useless arm estimates and is a major memory saving compared to OG-UCB.
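The following Python sketch shows one LUCB_{ε,δ} call (Subroutine 3) for a single greedy phase. The tuple-keyed `exploit_cache` and the dictionary-based estimators are our bookkeeping choices for illustration; the (arm, flag) return convention mirrors MaxOracle, with flag=True signalling the exploitation stage.

```python
import math

# A sketch of one LUCB step; `arms` is a tuple of hashable arm ids, `means`
# and `counts` map each arm to its empirical mean and play count, and W
# bounds the number of arms per phase.

def lucb_step(arms, means, counts, t, eps, delta, W, exploit_cache):
    if arms in exploit_cache:                    # already exploiting this phase
        return exploit_cache[arms], True
    for a in arms:
        if counts[a] == 0:                       # initialize each arm once
            return a, False
    def rad(a):
        return math.sqrt(math.log(4 * W * t**3 / delta) / (2 * counts[a]))
    best = max(arms, key=lambda a: means[a])
    def perturbed(a):                            # pessimistic for the leader,
        return means[a] - rad(a) if a == best else means[a] + rad(a)
    challenger = max(arms, key=perturbed)
    if perturbed(challenger) - perturbed(best) > eps:   # not yet separated
        target = max((best, challenger), key=rad)       # keep exploring
        return target, False
    exploit_cache[arms] = best                   # separated within eps
    return best, True
```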
In Algorithm OG-LUCB_{ε,δ}, we define the total exploration time T^E = T^E(δ) such that, for any time t ≥ T^E, OG-LUCB_{ε,δ} is in the exploitation stage for all greedy phases encountered in the algorithm. This also means that after time T^E, in every step we play the same maximal decision sequence σ = ⟨S_0, S_1, · · · , S_k⟩ ∈ F^{k+1}, which we call a stable sequence. Following a common practice, we define the hardness coefficient with prefix S ∈ F as

H_{S,ε} := Σ_{e∈N(S)} 1 / max{Δ(e|S)², ε²},  where Δ(e|S) is defined in (4).    (7)

Rewriting definitions with respect to the ε-quasi regret. Recall that σ^Q = ⟨Q_0, Q_1, . . . , Q_{m_Q}⟩ is the minimum ε-quasi greedy sequence. In this section, we rewrite the gap Δ(S) := max{f̄(Q_{m_Q}) − f̄(S), 0} for any S ∈ F, the maximum gap Δ_max := f̄(Q_{m_Q}) − min_{S∈F*} f̄(S), and Δ*(a) = Δ*(e|S) := max{ f̄(Q_{m_Q}) − min_{V : V∈F*, S∪{e} ⊑ V} f̄(V), 0 } for any arm a = e|S ∈ A.

The following theorem shows that, with high probability, we can find a stable ε-quasi greedy sequence, and the total exploration time is bounded.

Theorem 4.1 (High probability exploration time). Given any ε ≥ 0 and δ ∈ (0, 1), suppose that after the total exploration time T^E = T^E(δ), Algorithm OG-LUCB_{ε,δ} (Algorithm 1 with Subroutine 3) sticks to a stable sequence σ = ⟨S_0, S_1, · · · , S_{m′}⟩ where m′ is its length. With probability at least 1 − mδ, the following claims hold: (1) σ is an ε-quasi greedy sequence; (2) the total exploration time satisfies T^E ≤ 127 Σ_{k=0}^{m′−1} H_{S_k,ε} ln(1996 W H_{S_k,ε} / δ).
4.2 Time Horizon T is Known

Knowing the time horizon T, we may set δ = 1/T in OG-LUCB_{ε,δ} to derive the ε-quasi greedy regret as follows.

Theorem 4.2. Given any ε ≥ 0, when the total time T is known, let Algorithm OG-LUCB_{ε,δ} run with δ = 1/T. Suppose σ = ⟨S_0, S_1, · · · , S_{m′}⟩ is the sequence selected at time T. Define the function

R^{Q,*}_ε(T) := Σ_{e|S∈Γ(σ)} Δ*(e|S) min{ 127/Δ(e|S)², 113/ε² } ln(1996 W H_{S,ε} T) + Δ_max m,

where m is the largest length of a feasible set and H_{S,ε} is defined in (7). Then the ε-quasi greedy regret satisfies R^Q_ε(T) ≤ R^{Q,*}_ε(T) = O( W m Δ_max log T / max{Δ², ε²} ), where Δ is the minimum unit gap.

In general, the two bounds (Theorem 3.1 and Theorem 4.2) are for different regret metrics, and thus cannot be directly compared. When ε = 0, OG-UCB is slightly better, but only in the constant before log T. On the other hand, when we are satisfied with the ε-quasi greedy regret, OG-LUCB_{ε,δ} may work better for
Algorithm 4 OG-LUCB-R (i.e., OG-LUCB with Restart)
Require: ε
1: for epoch ℓ = 1, 2, · · · do
2:   Clean X̄(·) and N(·) for all arms, and restart OG-LUCB_{ε,δ} with δ = 1/τ_ℓ (defined in (8)).
3:   Run OG-LUCB_{ε,δ} for τ_ℓ time steps (exit halfway if the time is over).
some large ε, since the bound takes the maximum (in the denominator) of the problem-dependent term Δ(e|S)² and the fixed constant term ε², and the memory cost is only O(mW).
4.3 Time Horizon T is not Known

When the time horizon T is not known, we can apply the "squaring trick" and restart the algorithm in epochs as follows. Define the duration of epoch ℓ as τ_ℓ, and its accumulated time as Λ_ℓ, where

τ_ℓ := e^{2^ℓ};    Λ_ℓ := { 0 if ℓ = 0;  Σ_{s=1}^{ℓ} τ_s if ℓ ≥ 1 }.    (8)

For any time horizon T, define the final epoch K = K(T) as the epoch that T lies in, that is, Λ_{K−1} < T ≤ Λ_K. Then, our algorithm OG-LUCB-R is given in Algorithm 4. The following theorem shows that the O(log T) ε-quasi greedy regret still holds, with a slight blowup of the constant hidden in the big-O notation (for completeness, the explicit constant before log T can be found in Theorem D.7 of the supplementary material).

Theorem 4.3. Given any ε ≥ 0, use τ_ℓ and Λ_ℓ defined in (8), and the function R^{Q,*}_ε(T) defined in Theorem 4.2. In Algorithm OG-LUCB-R, suppose σ^{(ℓ)} = ⟨S_0^{(ℓ)}, S_1^{(ℓ)}, · · · , S_{m^{(ℓ)}}^{(ℓ)}⟩ is the sequence selected by the end of the ℓ-th epoch of OG-LUCB_{ε,δ}, where m^{(ℓ)} is its length. For any time T, denote the final epoch as K = K(T) such that Λ_{K−1} < T ≤ Λ_K; then the ε-quasi greedy regret satisfies

R^Q_ε(T) ≤ Σ_{ℓ=1}^{K} R^{Q,*(ℓ)}_ε(τ_ℓ) = O( W m Δ_max log T / max{Δ², ε²} ),

where Δ is the minimum unit gap.
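A tiny sketch of the schedule in (8) is given below; `run_og_lucb(steps, delta)` is an assumed callable wrapping the inner loop of Algorithm 4. Note that τ_ℓ = e^{2^ℓ} grows doubly exponentially, so only a handful of epochs ever execute for any realistic horizon.

```python
import math

# The "squaring trick" restart schedule: epoch lengths tau_l = e^(2^l),
# restarting with delta = 1/tau_l each epoch.

def og_lucb_r(horizon, run_og_lucb):
    elapsed, l = 0, 1
    while elapsed < horizon:
        tau = math.ceil(math.e ** (2 ** l))     # epoch duration tau_l
        steps = min(tau, horizon - elapsed)     # exit halfway if time is over
        run_og_lucb(steps, delta=1.0 / tau)     # fresh estimators each epoch
        elapsed += steps
        l += 1
```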
5 Lower Bound on the Greedy Regret

Consider the problem of selecting one element from each of m bandit instances, where the player sequentially collects a prize at every phase. For simplicity, we call it the prize-collecting problem, defined as follows. For each bandit instance i = 1, 2, . . . , m, denote the set E_i = {e_{i,1}, e_{i,2}, . . . , e_{i,W}} of size W. The accessible set system is defined as (E, F), where E = ∪_{i=1}^{m} E_i, F = ∪_{i=1}^{m} F_i ∪ {∅}, and F_i = {S ⊆ E : |S| = i, ∀k : 1 ≤ k ≤ i, |S ∩ E_k| = 1}. The reward function f : F × Ω → [0, m] is non-decreasing in the first parameter, and the form of f is unknown to the player. Let the minimum unit gap be Δ := min{ f̄(g*_S|S) − f̄(e|S) : ∀S ∈ F, ∀e ∈ N_−(S) } > 0, where its value is also unknown to the player. The objective of the player is to minimize the greedy regret.

Denote the greedy sequence as σ^G = ⟨G_0, G_1, · · · , G_m⟩, and the greedy arms as A^G = {g*_{G_{i−1}}|G_{i−1} : ∀i = 1, 2, · · · , m}. We say an algorithm is consistent if the total number of plays of all arms a ∈ A \ A^G is in o(T^α) for any α > 0, i.e., E[Σ_{a∈A\A^G} N_T(a)] = o(T^α).

Theorem 5.1. For any consistent algorithm, there exists a problem instance of the prize-collecting problem such that, as time T tends to ∞, for any minimum unit gap Δ ∈ (0, 1/4) with Δ² ≥ 3W β^{m−1} for some constant β ∈ (0, 1), the greedy regret satisfies R^G(T) = Ω( mW ln T / Δ² ).

We remark that the detailed problem instance and the greedy regret can be found in Theorem E.2 of the supplementary material. Furthermore, we may also restrict the maximum gap Δ_max to Θ(1), in which case the lower bound becomes R^G(T) = Ω( mW Δ_max ln T / Δ² ) for any sufficiently large T. For the upper bound, OG-UCB (Theorem 3.1) gives R^G(T) = O( mW Δ_max log T / Δ² ). Thus, our upper bound for OG-UCB matches the lower bound within a constant factor.
Acknowledgments. Jian Li was supported in part by the National Basic Research Program of China grants 2015CB358700, 2011CBA00300, 2011CBA00301, and the National NSFC grants 61202009, 61033001, 61361136003.
References
[1] J.-Y. Audibert and S. Bubeck. Best arm identification in multi-armed bandits. In COLT, 2010.
[2] J.-Y. Audibert, S. Bubeck, and G. Lugosi. Minimax policies for combinatorial prediction games. arXiv preprint arXiv:1105.4871, 2011.
[3] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235-256, 2002.
[4] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48-77, 2002.
[5] A. Björner and G. M. Ziegler. Introduction to greedoids. Matroid Applications, 40:284-357, 1992.
[6] S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. arXiv preprint arXiv:1204.5721, 2012.
[7] S. Bubeck, R. Munos, and G. Stoltz. Pure exploration in finitely-armed and continuous-armed bandits. Theoretical Computer Science, 412(19):1832-1852, 2011.
[8] N. Cesa-Bianchi and G. Lugosi. Combinatorial bandits. Journal of Computer and System Sciences, 78(5):1404-1422, 2012.
[9] S. Chen, T. Lin, I. King, M. R. Lyu, and W. Chen. Combinatorial pure exploration of multi-armed bandits. In NIPS, 2014.
[10] W. Chen, Y. Wang, and Y. Yuan. Combinatorial multi-armed bandit: General framework and applications. In ICML, 2013.
[11] V. Chvatal. A greedy heuristic for the set-covering problem. Mathematics of Operations Research, 4(3):233-235, 1979.
[12] V. Gabillon, B. Kveton, Z. Wen, B. Eriksson, and S. Muthukrishnan. Adaptive submodular maximization in bandit setting. In NIPS, 2013.
[13] Y. Gai, B. Krishnamachari, and R. Jain. Learning multiuser channel allocations in cognitive radio networks: A combinatorial multi-armed bandit formulation. In DySPAN. IEEE, 2010.
[14] A. Garivier and O. Cappé. The KL-UCB algorithm for bounded stochastic bandits and beyond. arXiv preprint arXiv:1102.2490, 2011.
[15] P. Helman, B. M. Moret, and H. D. Shapiro. An exact characterization of greedy structures. SIAM Journal on Discrete Mathematics, 6(2):274-283, 1993.
[16] S. Kalyanakrishnan, A. Tewari, P. Auer, and P. Stone. PAC subset selection in stochastic multi-armed bandits. In ICML, 2012.
[17] D. Kempe, J. Kleinberg, and É. Tardos. Maximizing the spread of influence through a social network. In SIGKDD, 2003.
[18] B. Korte and L. Lovász. Greedoids and linear objective functions. SIAM Journal on Algebraic Discrete Methods, 5(2):229-238, 1984.
[19] J. B. Kruskal. On the shortest spanning subtree of a graph and the traveling salesman problem. Proceedings of the American Mathematical Society, 7(1):48-50, 1956.
[20] B. Kveton, Z. Wen, A. Ashkan, H. Eydgahi, and B. Eriksson. Matroid bandits: Fast combinatorial optimization with learning. arXiv preprint arXiv:1403.5045, 2014.
[21] B. Kveton, Z. Wen, A. Ashkan, and C. Szepesvari. Tight regret bounds for stochastic combinatorial semi-bandits. arXiv preprint arXiv:1410.0949, 2014.
[22] T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4-22, 1985.
[23] T. Lin, B. Abrahao, R. Kleinberg, J. Lui, and W. Chen. Combinatorial partial monitoring game with linear feedback and its applications. In ICML, 2014.
[24] B. Mirzasoleiman, A. Badanidiyuru, A. Karbasi, J. Vondrak, and A. Krause. Lazier than lazy greedy. In AAAI, 2015.
[25] R. C. Prim. Shortest connection networks and some generalizations. Bell System Technical Journal, 36(6):1389-1401, 1957.
[26] M. Streeter and D. Golovin. An online algorithm for maximizing submodular functions. In NIPS, 2009.
5,448 | 5,931 | Linear Multi-Resource Allocation with Semi-Bandit Feedback
Koby Crammer
Department of Electrical Engineering
The Technion, Israel
koby@ee.technion.ac.il
Tor Lattimore
Department of Computing Science
University of Alberta, Canada
tor.lattimore@gmail.com
Csaba Szepesvári
Department of Computing Science
University of Alberta, Canada
szepesva@ualberta.ca
Abstract
We study an idealised sequential resource allocation problem. In each time step
the learner chooses an allocation of several resource types between a number of
tasks. Assigning more resources to a task increases the probability that it is completed. The problem is challenging because the alignment of the tasks to the resource types is unknown and the feedback is noisy. Our main contribution is the
new setting and an algorithm with nearly-optimal regret analysis. Along the way
we draw connections to the problem of minimising regret for stochastic linear
bandits with heteroscedastic noise. We also present some new results for stochastic linear bandits on the hypercube that significantly improve on existing work,
especially in the sparse case.
1 Introduction

Economist Thomas Sowell remarked that "The first lesson of economics is scarcity: There is never enough of anything to fully satisfy all those who want it."¹ The optimal allocation of resources is
an enduring problem in economics, operations research and daily life. The problem is challenging
not only because you are compelled to make difficult trade-offs, but also because the (expected)
outcome of a particular allocation may be unknown and the feedback noisy.
We focus on an idealised resource allocation problem where the economist plays a repeated resource
allocation game with multiple resource types and multiple tasks to which these resources can be
assigned. Specifically, we consider a (nearly) linear model with D resources and K tasks. In each time step t the economist chooses an allocation of resources M_t ∈ R^{D×K}, where M_{tk} ∈ R^D is the kth column and represents the amount of each resource type assigned to the kth task. We assume that the kth task is completed successfully with probability min{1, ⟨M_{tk}, θ_k⟩}, where θ_k ∈ R^D is an unknown non-negative vector that determines how the success rate of a given task depends on the quantity and type of resources assigned to it. Naturally we will limit the availability of resources by demanding that M_t satisfies Σ_{k=1}^{K} M_{tdk} ≤ 1 for all resource types d. At the end of each time
step the economist observes which tasks were successful. The objective is to maximise the number
of successful tasks up to some time horizon n that is known in advance. This model is a natural
generalisation of the one used by Lattimore et al. [2014], where it was assumed that there was a
single resource type only.
¹ He went on to add that "The first lesson of politics is to disregard the first lesson of economics." Sowell [1993]
An example application might be the problem of allocating computing resources on a server between
a number of Virtual Private Servers (VPS). In each time step (some fixed interval) the controller
chooses how much memory/cpu/bandwidth to allocate to each VPS. A VPS is said to fail in a given
round if it fails to respond to requests in a timely fashion. The requirements of each VPS are
unknown in advance, but do not change greatly with time. The controller should learn which VPS
benefit the most from which resource types and allocate accordingly.
The main contribution of this paper besides the new setting is an algorithm designed for this problem
along with theoretical guarantees on its performance in terms of the regret. Along the way we present
some additional results for the related problem of minimising regret for stochastic linear bandits on
the hypercube. We also prove new concentration results for weighted least squares estimation, which
may be independently interesting.
The generalisation of the work of Lattimore et al. [2014] to multiple resources turns out to be fairly
non-trivial. Those with knowledge of the theory of stochastic linear bandits will recognise some
similarity. In particular, once the nonlinearity of the objective is removed, the problem is equivalent
to playing K linear bandits in parallel, but where the limited resources constrain the actions of the
learner and correspondingly the returns for each task. Stochastic linear bandits have recently been
generating a significant body of research (e.g., Auer [2003], Dani et al. [2008], Rusmevichientong
and Tsitsiklis [2010], Abbasi-Yadkori et al. [2011, 2012], Agrawal and Goyal [2012] and many others). A related problem is that of online combinatorial optimisation. This has an extensive literature,
but most results are only applicable for discrete action sets, are in the adversarial setting, and cannot exploit the additional structure of our problem. Nevertheless, we refer the interested reader to
(say) the recent work by Kveton et al. [2014] and references there-in. Also worth mentioning is that
the resource allocation problem at hand is quite different to the ?linear semi-bandit? proposed and
analysed by Krishnamurthy et al. [2015] where the action set is also finite (the setting is different in
many other ways besides).
Given its similarity, it is tempting to apply the techniques of linear bandits to our problem. When
doing so, two main difficulties arise. The first is that our payoffs are non-linear: the expected
reward is a linear function only up to a point after which it is clipped. In the resource allocation
problem this has a natural interpretation, which is that over-allocating resources beyond a certain
point is fruitless. Fortunately, one can avoid this difficulty rather easily by ensuring that with high
probability resources are never over-allocated. The second problem concerns achieving good regret
regardless of the task specifics. In particular, when the number of tasks K is large and resources are
at a premium the allocation problem behaves more like a K-armed bandit where the economist must
choose the few tasks that
? can be completed successfully. For this kind of problem regret should scale
in the worst case with K only [Auer et al., 2002, Bubeck and Cesa-Bianchi, 2012]. The standard
linear bandits approach, on the other hand, would lead to a bound on the regret that depends linearly
on K. To remedy this situation, we will exploit that if K is large and resources are scarce, then
many tasks will necessarily be under-resourced and will fail with high probability. Since the noise
model is Bernoulli, the variance of the noise for these tasks is extremely low. By using weighted
least-squares estimators we are able to exploit this and thereby obtain an improved regret. An added
benefit is that when resources are plentiful, then all tasks will succeed with high probability under
the optimal allocation, and in this case the variance is also low. This leads to a poly-logarithmic
regret for the resource-laden case where the optimal allocation fully allocates every task.
2 Preliminaries
If F is some event, then ¬F is its complement (i.e., the event that F does not occur). If A is positive definite and x is a vector, then ‖x‖²_A = xᵀAx stands for the weighted 2-norm. We write |x| for the vector of element-wise absolute values of x. We let Θ ∈ R^{D×K} be a matrix with columns θ_1, . . . , θ_K. All entries in Θ are non-negative, but otherwise we make no global assumptions on Θ. At each time step t the learner chooses an allocation matrix M_t ∈ M, where

M = { M ∈ [0, 1]^{D×K} : Σ_{k=1}^{K} M_{dk} ≤ 1 for all d }.

The assumption that each resource type has a bound of 1 is non-restrictive, since the units of any resource can be changed to accommodate this assumption. We write M_{tk} ∈ [0, 1]^D for the kth column of M_t. The reward at time step t is ‖Y_t‖₁, where Y_{tk} ∈ {0, 1} is sampled from a Bernoulli distribution with parameter ν(⟨M_{tk}, θ_k⟩) = min{1, ⟨M_{tk}, θ_k⟩}. The economist observes all of the Y_{tk}, however, not just the sum. The optimal allocation is denoted by M* and defined by

M* = arg max_{M∈M} Σ_{k=1}^{K} ν(⟨M_k, θ_k⟩).

We are primarily concerned with designing an allocation algorithm that minimises the expected (pseudo) regret of this problem, which is defined by

R_n = n Σ_{k=1}^{K} ν(⟨M*_k, θ_k⟩) − E[ Σ_{t=1}^{n} Σ_{k=1}^{K} ν(⟨M_{tk}, θ_k⟩) ],

where the expectation is taken over both the actions of the algorithm and the observed reward.
Optimal Allocations

If Θ is known, then the optimal allocation can be computed by constructing an appropriate linear program. Somewhat surprisingly it may also be computed exactly in O(K log K + D log D) time using Algorithm 1 below. The optimal allocation is not so straight-forward as, e.g., simply allocating resources to the incomplete task for which the corresponding θ is largest in some dimension. For example, for K = 2 tasks and D = 2 resource types:

Θ = (θ_1 θ_2) = [ 0  1/2 ; 1/2  1 ]   =⇒   M* = (M*_1 M*_2) = [ 0  1 ; 1/2  1/2 ].

We see that even though θ_{22} is the largest parameter, the optimal allocation assigns only half of the second resource (d = 2) to this task. The right approach is to allocate resources to incomplete tasks using the ratios as prescribed by Algorithm 1. The intuition for allocating in this way is that resources should be allocated as efficiently as possible, and efficiency is determined by the ratio of the expected success due to the allocation of a resource and the amount of resources allocated.

Algorithm 1
Input: Θ
M = 0 ∈ R^{D×K} and B = 1 ∈ R^D
while ∃ k, d s.t. ⟨M_k, θ_k⟩ < 1 and B_d > 0 do
  A = {k : ⟨M_k, θ_k⟩ < 1} and B = {d : B_d > 0}
  (k, d) = arg max_{(k,d)∈A×B} min_{i∈A\{k}} θ_{dk} / θ_{di}
  M_{dk} = min{ B_d, (1 − ⟨M_k, θ_k⟩) / θ_{dk} }
end while
return M

Theorem 1. Algorithm 1 returns M*.
The proof of Theorem 1 and an implementation of Algorithm 1 may be found in the supplementary
material.
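A runnable Python sketch of Algorithm 1 is given below. The explicit budget decrement is our bookkeeping addition for clarity (it makes the B_d > 0 condition concrete), and candidate pairs with θ_{dk} = 0 are skipped; both are assumptions about the full implementation in the supplementary material.

```python
import numpy as np

# A sketch of Algorithm 1: repeatedly pick the (task, resource) pair with the
# best worst-case efficiency ratio and allocate until the resource runs out
# or the task is fully resourced.

def optimal_allocation(theta):                      # theta: D x K, non-negative
    D, K = theta.shape
    M = np.zeros((D, K))
    budget = np.ones(D)                             # one unit of each resource type
    while True:
        active = [k for k in range(K) if M[:, k] @ theta[:, k] < 1 - 1e-12]
        pairs = [(k, d) for k in active for d in range(D)
                 if budget[d] > 1e-12 and theta[d, k] > 0]
        if not pairs:
            return M
        def ratio(kd):                              # worst-case efficiency of (k, d)
            k, d = kd
            others = [i for i in active if i != k]
            if not others:
                return np.inf
            return min(theta[d, k] / theta[d, i] if theta[d, i] > 0 else np.inf
                       for i in others)
        k, d = max(pairs, key=ratio)
        amount = min(budget[d], (1 - M[:, k] @ theta[:, k]) / theta[d, k])
        M[d, k] += amount                           # each step either exhausts d
        budget[d] -= amount                         # or fully resources task k
```

On the 2x2 example above, `optimal_allocation(np.array([[0, 0.5], [0.5, 1]]))` returns the allocation M* = [[0, 1], [1/2, 1/2]].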
We are interested primarily in the case when Θ is unknown, so Algorithm 1 will not be directly applicable. Nevertheless, the algorithm is useful as a module in the implementation of a subsequent algorithm that estimates Θ from data.
3 Optimistic Allocation Algorithm
We follow the optimism in the face of uncertainty principle. In each time step t, the algorithm constructs an estimator θ̂_{tk} for each θ_k and a corresponding confidence set C_{tk} for which θ_k ∈ C_{tk} holds with high probability. The algorithm then takes the optimistic action subject to the assumption that θ_k does indeed lie in C_{tk} for all k. The main difficulty is the construction of the confidence sets. Like other authors [Dani et al., 2008, Rusmevichientong and Tsitsiklis, 2010, Abbasi-Yadkori et al., 2011] we define our confidence sets to be ellipses, but the use of a weighted least-squares estimator means that our ellipses may be significantly smaller than the sets that would be available by using these previous works in a straightforward way. The algorithm accepts as input the number of tasks and resource types, the horizon, and constants λ > 0 and β, where the constant β is defined by

δ = 1/(nK),    N = 4n⁴D²,    B ≥ max_k ‖θ_k‖₂²,    so that

β = ( 1 + √λ B + √( 2 log(3nN/δ) log(6nN/δ) ) )².    (1)
Note that B must be a known bound on max_k ‖θ_k‖₂², which might seem like a serious restriction, until one realizes that it is easy to add an initialisation phase where estimates are quickly made while incurring minimal additional regret, as was also done by Lattimore et al. [2014]. The value of λ determines the level of regularisation in the least squares estimation and will be tuned later to optimise the regret.
Algorithm 2 Optimistic Allocation Algorithm
1: Input K, D, n, λ, β
2: for t ∈ 1, . . . , n do
3:   // Compute confidence sets for all tasks k:
4:   G_{tk} = λI + Σ_{τ<t} ω_{τk} M_{τk} M_{τk}ᵀ
5:   θ̂_{tk} = G_{tk}⁻¹ Σ_{τ<t} ω_{τk} M_{τk} Y_{τk}
6:   C_{tk} = { θ̃_k : ‖θ̃_k − θ̂_{tk}‖²_{G_{tk}} ≤ β }  and  C′_{tk} = { θ̃_k : ‖θ̃_k − θ̂_{tk}‖²_{G_{tk}} ≤ 4β }
7:   // Compute optimistic allocation:
8:   M_t = arg max_{M∈M} Σ_k max_{θ̃_k∈C_{tk}} ν(⟨M_k, θ̃_k⟩)
9:   // Observe success indicators Y_{tk} for all tasks k:
10:  Y_{tk} ∼ Bernoulli(ν(⟨M_{tk}, θ_k⟩))
11:  // Compute weights for all tasks k:
12:  ω_{tk}⁻¹ = max_{θ̃_k∈C′_{tk}} ⟨M_{tk}, θ̃_k⟩ (1 − ⟨M_{tk}, θ̃_k⟩)
13: end for
Computational Efficiency

We could not find an efficient implementation of Algorithm 2 because solving the bilinear optimisation problem in Line 8 is likely to be NP-hard (Bennett and Mangasarian [1993] and also Petrik and Zilberstein [2011]). In our experiments we used a simple algorithm based on optimising for M and Θ in alternating steps combined with random restarts, but for large D and K this would likely not be efficient. In the supplementary material we present an alternative algorithm that is efficient, but relies on the assumption that ‖θ_k‖₁ ≤ 1 for all k. In this regime it is impossible to over-allocate resources, and this fact can be exploited to obtain an efficient and practical algorithm with strong guarantees. Along the way, we are able to construct an elegant algorithm for linear bandits on the hypercube that enjoys optimal regret and adapts to sparsity.
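A rough sketch of the alternating heuristic mentioned above is given below. For fixed M, each per-task maximisation over the confidence ellipse has the closed form θ̂ + √β G⁻¹m / ‖m‖_{G⁻¹}; for fixed parameters, maximising Σ_k min{1, ⟨M_k, θ̃_k⟩} over M is a linear program. The helper `solve_allocation_lp` is an assumed LP solver, not from the paper.

```python
import numpy as np

# A sketch of one alternating optimisation pass for Line 8 of Algorithm 2.
# `G_inv_list[k]` is the inverse weighted gram matrix for task k.

def optimistic_step(theta_hat, G_inv_list, beta, solve_allocation_lp, iters=20):
    D, K = theta_hat.shape
    M = np.full((D, K), 1.0 / K)                  # feasible starting allocation
    for _ in range(iters):
        theta_opt = np.empty_like(theta_hat)
        for k in range(K):
            m = M[:, k]
            Gm = G_inv_list[k] @ m
            norm = np.sqrt(max(m @ Gm, 1e-12))
            # ellipse maximiser of <m, theta> over ||theta - theta_hat||_G^2 <= beta
            theta_opt[:, k] = theta_hat[:, k] + np.sqrt(beta) * Gm / norm
        M = solve_allocation_lp(theta_opt)        # best allocation for fixed thetas
    return M, theta_opt
```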
Computing the weights ω_{tk} (Line 12) is (somewhat surprisingly) straight-forward. Define

p̄_{tk} = ⟨M_{tk}, θ̂_{tk}⟩ + 2√β ‖M_{tk}‖_{G_{tk}⁻¹}   and   p_{tk} = ⟨M_{tk}, θ̂_{tk}⟩ − 2√β ‖M_{tk}‖_{G_{tk}⁻¹}.

Then the weights can be computed by

ω_{tk}⁻¹ = { p̄_{tk}(1 − p̄_{tk})  if p̄_{tk} ≤ 1/2;   p_{tk}(1 − p_{tk})  if p_{tk} ≥ 1/2;   1/4  otherwise. }    (2)
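In code, the weight rule (2) amounts to maximising the Bernoulli variance p(1 − p) over the confidence interval for ⟨M_{tk}, θ_k⟩, with the three cases depending on where the interval sits relative to 1/2. A minimal sketch (names ours):

```python
import numpy as np

# The inverse weight is the largest value of p(1 - p) over [p_lo, p_hi].

def inverse_weight(M_tk, theta_hat, G_inv, beta):
    p_mid = M_tk @ theta_hat
    rad = 2 * np.sqrt(beta) * np.sqrt(M_tk @ G_inv @ M_tk)  # 2 sqrt(beta) ||M||_{G^-1}
    p_hi, p_lo = p_mid + rad, p_mid - rad
    if p_hi <= 0.5:                 # variance maximised at the upper end
        return p_hi * (1 - p_hi)
    if p_lo >= 0.5:                 # variance maximised at the lower end
        return p_lo * (1 - p_lo)
    return 0.25                     # interval straddles 1/2
```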
A curious reader might wonder why the weights are computed by optimising within the confidence set C′_{tk}, which has double the radius of C_{tk}. The reason is rather technical, but essentially if the true parameter θ_k were to lie on the boundary of the confidence set, then the corresponding weight could become infinite. For the analysis to work we rely on controlling the size of the weights. It is not clear whether or not this trick is really necessary.
4 Worst-case Regret for Algorithm 2

We now analyse the regret of Algorithm 2. First we offer a worst-case bound on the regret that depends on the time horizon like O(√n). We then turn our attention to the resource-laden case where the optimal allocation satisfies ⟨M*_k, θ_k⟩ = 1 for all k. In this instance we show that the dependence on the horizon is only poly-logarithmic, which would normally be unexpected when the action-space is continuous. The improvement comes from the weighted estimation, which exploits the fact that the variance of the noise under the optimal allocation vanishes.
Theorem 2. Suppose Algorithm 2 is run with bound B ≥ max_k ‖θ_k‖₂². Then

R_n ≤ 1 + 4D √( 2βnK ( max_k ‖θ_k‖_∞ + 4√(β/λ) ) log(1 + 4n²) ).

Choosing λ = B⁻¹ log(3nN/δ) log(6nN/δ) and assuming that B ∈ O(max_k ‖θ_k‖₂²), then

R_n ∈ O( D^{3/2} √( nK max_k ‖θ_k‖₂ ) log n ).
The proof of Theorem 2 will follow by carefully analysing the width of the confidence sets as the
algorithm makes allocations. We start by proving the validity of the confidence sets, and then prove
the theorem.
Weighted Least Squares Estimation

For this sub-section we focus on the problem of estimating a single unknown θ = θ_k. Let M_1, . . . , M_n be a sequence of allocations to task k with M_t ∈ ℝ^D. Let {F_t}_{t=0}^n be a filtration with F_t containing information available at the end of round t, which means that M_t is F_{t−1}-measurable. Let ω_1, . . . , ω_n be the sequence of weights chosen by Algorithm 2. The sequence of outcomes is Y_1, . . . , Y_n ∈ {0, 1}, for which E[Y_t | F_{t−1}] = ν(⟨M_t, θ⟩). The weighted regularised gram matrix is G_t = λI + Σ_{τ<t} ω_τ M_τ M_τ^⊤ and the corresponding weighted least squares estimator is

θ̂_t = G_t^{-1} Σ_{τ<t} ω_τ M_τ Y_τ.

Theorem 3. If ‖θ‖₂² ≤ B and β is chosen as in Eq. (1), then ‖θ̂_t − θ‖²_{G_t} ≤ β for all t ≤ n with probability at least 1 − δ, where δ = 1/(nK).
Similar results exist in the literature for unweighted least-squares estimators (for example, Dani
et al. [2008], Rusmevichientong and Tsitsiklis [2010], Abbasi-Yadkori et al. [2011]). In our case,
however, Gt is the weighted gram matrix, which may be significantly larger than an unweighted
version when the weights become large. The proof of Theorem 3 is unfortunately too long to include
in the main text, but it may be found in the supplementary material.
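Theorem 3 is easy to probe numerically. The sketch below is entirely ours (the parameter values, the uniform weights, and the allocation distribution are arbitrary choices); it estimates how often the weighted confidence ellipsoid captures the true parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
D, n, lam, beta = 3, 2000, 1.0, 8.0
theta = np.array([0.3, 0.2, 0.1])
hits, trials = 0, 200
for _ in range(trials):
    G, b = lam * np.eye(D), np.zeros(D)
    for _ in range(n):
        M = rng.random(D) / D                  # a feasible allocation
        p = min(1.0, float(M @ theta))         # nu(<M, theta>)
        y = rng.binomial(1, p)
        w = 4.0                                # constant weights, for simplicity
        G += w * np.outer(M, M)
        b += w * M * y
    err = np.linalg.solve(G, b) - theta
    hits += float(err @ G @ err) <= beta
print("empirical coverage:", hits / trials)
```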
Analysing the Regret

We start with some technical lemmas. Let F be the failure event that ‖θ̂_tk − θ_k‖²_{G_tk} > β for some t ≤ n and 1 ≤ k ≤ K.

Lemma 4 (Abbasi-Yadkori et al. [2012]). Let x_1, . . . , x_n be an arbitrary sequence of vectors with ‖x_t‖₂² ≤ c and let G_t = I + Σ_{s=1}^{t−1} x_s x_s^⊤. Then

Σ_{t=1}^n min{ 1, ‖x_t‖²_{G_t^{-1}} } ≤ 2D log(1 + cn/D).
Corollary 5. If F does not hold, then

Σ_{t=1}^n ω_tk min{ 1, ‖M_tk‖²_{G_tk^{-1}} } ≤ 8D log(1 + 4n²).
The proof is omitted, but follows rather easily by showing that ω_tk can be moved inside the minimum at a price of increasing the loss at most by a factor of four, and then applying Lemma 4. See the supplementary material for the formal proof.
Lemma 6. Suppose F does not hold; then

Σ_{k=1}^K ω_tk^{-1} ≤ D ( max_k ‖θ_k‖_∞ + 4 √(β/λ) ).
Proof. We exploit the fact that ω_tk^{-1} is an estimate of the variance, which is small whenever ‖M_tk‖₁ is small:

ω_tk^{-1} = max_{θ̃_k ∈ C′_tk} ⟨M_tk, θ̃_k⟩ (1 − ⟨M_tk, θ̃_k⟩) ≤ max_{θ̃_k ∈ C′_tk} ⟨M_tk, θ̃_k⟩
          = ⟨M_tk, θ_k⟩ + max_{θ̃_k ∈ C′_tk} ⟨M_tk, θ̃_k − θ_k⟩
      (a) ≤ ‖M_tk‖₁ ‖θ_k‖_∞ + 4 √β ‖M_tk‖_{G_tk^{-1}}
      (b) ≤ ‖M_tk‖₁ ‖θ_k‖_∞ + 4 √β ‖M_tk‖_{I/λ}
      (c) ≤ ‖M_tk‖₁ ( ‖θ_k‖_∞ + 4 √(β/λ) ),

where (a) follows from Cauchy-Schwarz and the fact that θ_k ∈ C′_tk, (b) since G_tk^{-1} ⪯ I/λ and basic linear algebra, and (c) since √β ‖M_tk‖_{I/λ} = √(β/λ) ‖M_tk‖₂ ≤ √(β/λ) ‖M_tk‖₁. The result is completed since the resource constraints imply that Σ_{k=1}^K ‖M_tk‖₁ ≤ D.
Proof of Theorem 2. By Theorem 3 we have that F holds with probability at most δ = 1/(nK). If F does not hold, then by the definition of the confidence set we have θ_k ∈ C_tk for all t and k. Therefore

R_n = E[ Σ_{t=1}^n Σ_{k=1}^K ( ⟨M*_k, θ_k⟩ − ν(⟨M_tk, θ_k⟩) ) ] ≤ 1 + E[ 1{¬F} Σ_{t=1}^n Σ_{k=1}^K ⟨M*_k − M_tk, θ_k⟩ ].

Note that we were able to replace ν(⟨M_tk, θ_k⟩) = ⟨M_tk, θ_k⟩, since if F does not hold, then M_tk will never be chosen in such a way that resources are over-allocated. We will now assume that F does not hold and bound the argument in the expectation. By the optimism principle we have:

Σ_{t=1}^n Σ_{k=1}^K ⟨M*_k − M_tk, θ_k⟩
  (a) ≤ Σ_{t=1}^n Σ_{k=1}^K min{ 1, ⟨M_tk, θ̃_tk − θ_k⟩ }
  (b) ≤ Σ_{t=1}^n Σ_{k=1}^K min{ 1, ‖M_tk‖_{G_tk^{-1}} ‖θ̃_tk − θ_k‖_{G_tk} }
  (c) ≤ 2 Σ_{t=1}^n Σ_{k=1}^K min{ 1, √β ‖M_tk‖_{G_tk^{-1}} }
  (d) ≤ 2 √( n Σ_{t=1}^n ( Σ_{k=1}^K min{ 1, √β ‖M_tk‖_{G_tk^{-1}} } )² )
  (e) ≤ 2 √( βn Σ_{t=1}^n ( Σ_{k=1}^K ω_tk^{-1} ) ( Σ_{k=1}^K ω_tk min{ 1, ‖M_tk‖²_{G_tk^{-1}} } ) )
  (f) ≤ 2 √( βnD ( max_k ‖θ_k‖_∞ + 4 √(β/λ) ) Σ_{t=1}^n Σ_{k=1}^K ω_tk min{ 1, ‖M_tk‖²_{G_tk^{-1}} } )
  (g) ≤ 4D √( 2βnK ( max_k ‖θ_k‖_∞ + 4 √(β/λ) ) log(1 + 4n²) ),

where (a) follows from the assumption that θ_k ∈ C_tk for all t and k and since M_t is chosen optimistically, (b) by the Cauchy-Schwarz inequality, (c) by the definition of θ̃_tk, which lies inside C_tk, (d) by Jensen's inequality, (e) by Cauchy-Schwarz again, and (f) follows from Lemma 6. Finally (g) follows from Corollary 5.
5 Regret in Resource-Laden Case

We now show that if there are enough resources such that the optimal strategy can complete every task with certainty, then the regret of Algorithm 2 is poly-logarithmic (in contrast to O(√n) otherwise). As before we exploit the low variance, but now the variance is small because ⟨M_tk, θ_k⟩ is close to 1, while in the previous section we argued that this could not happen too often (there is no contradiction, as the quantity max_k ‖θ_k‖_∞ appeared in the previous bound).

Theorem 7. If Σ_{k=1}^K ⟨M*_k, θ_k⟩ = K, then R_n ≤ 1 + 8βKD log(1 + 4n²).

Proof. We start by showing that the weights are large:

ω_tk^{-1} = max_{θ ∈ C′_tk} ⟨M_tk, θ⟩ (1 − ⟨M_tk, θ⟩) ≤ max_{θ ∈ C′_tk} (1 − ⟨M_tk, θ⟩)
          ≤ max_{θ̃, θ ∈ C′_tk} ⟨M_tk, θ̃ − θ⟩ ≤ ‖M_tk‖_{G_tk^{-1}} max_{θ̃, θ ∈ C′_tk} ‖θ̃ − θ‖_{G_tk} ≤ 4 √β ‖M_tk‖_{G_tk^{-1}}.

Applying the optimism principle and using the bound above combined with Corollary 5 gives the result:

E R_n ≤ 1 + E[ 1{¬F} Σ_{t=1}^n Σ_{k=1}^K min{ 1, ⟨M_tk, θ̃_tk − θ_k⟩ } ]
      ≤ 1 + 2 E[ 1{¬F} Σ_{t=1}^n Σ_{k=1}^K min{ 1, √β ‖M_tk‖_{G_tk^{-1}} } ]
      = 1 + 2 E[ 1{¬F} Σ_{t=1}^n Σ_{k=1}^K min{ 1, √(ω_tk^{-1}) √(ω_tk) √β ‖M_tk‖_{G_tk^{-1}} } ]
      ≤ 1 + 8β E[ 1{¬F} Σ_{t=1}^n Σ_{k=1}^K min{ 1, ω_tk ‖M_tk‖²_{G_tk^{-1}} } ]
      ≤ 1 + 8βKD log(1 + 4n²).
6 Experiments

We present two experiments to demonstrate the behaviour of Algorithm 2. All code and data is available in the supplementary material. Error bars indicate 95% confidence intervals, but sometimes they are too small to see (the algorithm is quite conservative, so the variance is very low). We used B = 10 for all experiments. The first experiment demonstrates the improvements obtained by using a weighted estimator over an unweighted one, and also serves to give some idea of the rate of learning. For this experiment we used D = K = 2 and n = 10⁶ and

θ = (θ₁ θ₂) = [8/10, 2/10; 4/10, 2]   ⟹   M* = [1, 0; 1/2, 1/2]   and   Σ_{k=1}^K ⟨M*_k, θ_k⟩ = 2,

where the kth column is the parameter/allocation for the kth task. We ran two versions of the algorithm. The first, exactly as given in Algorithm 2, and the second identical except that the weights were fixed to ω_tk = 4 for all t and k (this value is chosen because it corresponds to the minimum inverse variance for a Bernoulli variable). The data was produced by taking the average regret over 8 runs. The results are given in Fig. 1. In Fig. 2 we plot ω_tk. The results show that ω_tk is increasing linearly with t. This is congruent with what we might expect, because in this regime the estimation error should drop with O(1/t) and the estimated variance is proportional to the estimation error. Note that the estimation error for the algorithm with ω_tk = 4 will be O(√(1/t)).
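A stripped-down re-implementation of this comparison is below. It is ours, not the released code: the allocations are drawn at random rather than optimistically, and the weighted variant uses the true inverse variance where the algorithm would use Eq. (2), so it only illustrates the estimation effect.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.array([0.8, 0.4])       # theta_1 from the experiment above
n, lam = 100_000, 1.0

def estimation_error(weighted):
    G, b = lam * np.eye(2), np.zeros(2)
    for _ in range(n):
        M = rng.dirichlet([1.0, 1.0])            # a random feasible allocation
        p = min(1.0, float(M @ theta))
        y = rng.binomial(1, p)
        w = 1.0 / max(p * (1.0 - p), 1e-3) if weighted else 4.0
        G += w * np.outer(M, M)
        b += w * M * y
    return np.linalg.norm(np.linalg.solve(G, b) - theta)

print("weighted:", estimation_error(True), "unweighted:", estimation_error(False))
```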
For the second experiment we show the algorithm adapting to the environment. We fix n = 5 × 10⁵ and D = K = 2. For Δ ∈ (0, 1) we define

θ_Δ = [1/2, Δ/2; 1/2, Δ/2]   ⟹   M*_Δ = [1, 0; 1, 0]   and   Σ_{k=1}^K ⟨M*_k, θ_k⟩ = 1.
The unusual profile of the regret as Δ varies can be attributed to two factors. First, if Δ is small then the algorithm quickly identifies that resources should be allocated first to the first task. However, in the early stages of learning the algorithm is conservative in allocating to the first task to avoid over-allocation. Since the remaining resources are given to the second task, the regret is larger for small Δ because the gain from allocating to the second task is small. On the other hand, if Δ is close to 1, then the algorithm suffers the opposite problem. Namely, it cannot identify which task the resources should be assigned to. Of course, if Δ = 1, then the algorithm must simply learn that all resources can be allocated safely and so the regret is smallest here. An important point is that the algorithm never allocates all its resources at the start of the process because this risks over-allocation, so even in "easy" problems the regret will not vanish.
Figure 1: Weighted vs. unweighted estimation (average regret against t, for t up to 1,000,000).

Figure 2: Weights (ω_t1 and ω_t2 against t).

Figure 3: "Gap" dependence (regret against Δ ∈ [0.0, 1.0]).
7 Conclusions and Summary
We introduced the stochastic multi-resource allocation problem and developed a new algorithm that
enjoys near-optimal worst-case regret. The main drawback of the new algorithm is that its computation time is exponential in the dimension parameters, which makes practical implementations
challenging unless both K and D are relatively small. Despite this challenge we were able to implement the algorithm using a relatively brutish approach to solving the optimisation problem, and
this was sufficient to present experimental results on synthetic data showing that the algorithm is
behaving as the theory predicts, and that the use of the weighted least-squares estimation is leading
to a real improvement.
Despite the computational issues, we think this is a reasonable first step towards a more practical algorithm as well as a solid theoretical understanding of the structure of the problem. As a consolation
(and on their own merits) we include some other results:
• An efficient (both in terms of regret and computation) algorithm for the case where over-allocation is impossible.
• An algorithm for linear bandits on the hypercube that enjoys optimal regret bounds and adapts to sparsity.
• Theoretical analysis of weighted least-squares estimators, which may have other applications (e.g., linear bandits with heteroscedastic noise).
There are many directions for future research. The most natural is to improve the practicality of the
algorithm. We envisage such an algorithm might be obtained by following the program below:
• Generalise the Thompson sampling analysis for linear bandits by Agrawal and Goyal [2012]. This is a highly non-trivial step, since it is no longer straightforward to show that such an algorithm is optimistic with high probability. Instead it will be necessary to make do with some kind of local optimism for each task.
• The method of estimation depends heavily on the algorithm over-allocating its resources only with extremely low probability, but this significantly slows learning in the initial phases when the confidence sets are large and the algorithm is acting conservatively. Ideally we would use a method of estimation that depended on the real structure of the problem, but existing techniques that might lead to theoretical guarantees (e.g., empirical process theory) do not seem promising if small constants are expected.
It is not hard to think up extensions or modifications to the setting. For example, it would be
interesting to look at an adversarial setting (even defining it is not so easy), or move towards a
non-parametric model for the likelihood of success given an allocation.
References
Yasin Abbasi-Yadkori, Dávid Pál, and Csaba Szepesvári. Improved algorithms for linear stochastic bandits. In Advances in Neural Information Processing Systems, pages 2312–2320, 2011.
Yasin Abbasi-Yadkori, Dávid Pál, and Csaba Szepesvári. Online-to-confidence-set conversions and application to sparse stochastic bandits. In AISTATS, volume 22, pages 1–9, 2012.
Shipra Agrawal and Navin Goyal. Thompson sampling for contextual bandits with linear payoffs. arXiv preprint arXiv:1209.3352, 2012.
Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. The Journal of Machine Learning Research, 3:397–422, 2003.
Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47:235–256, 2002.
Kristin P. Bennett and Olvi L. Mangasarian. Bilinear separation of two sets in n-space. Computational Optimization and Applications, 2(3):207–227, 1993.
Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems. Foundations and Trends in Machine Learning. Now Publishers Incorporated, 2012. ISBN 9781601986269.
Varsha Dani, Thomas P. Hayes, and Sham M. Kakade. Stochastic linear optimization under bandit feedback. In COLT, pages 355–366, 2008.
Akshay Krishnamurthy, Alekh Agarwal, and Miroslav Dudik. Efficient contextual semi-bandit learning. arXiv preprint arXiv:1502.05890, 2015.
Branislav Kveton, Zheng Wen, Azin Ashkan, and Csaba Szepesvári. Tight regret bounds for stochastic combinatorial semi-bandits. arXiv preprint arXiv:1410.0949, 2014.
Tor Lattimore, Koby Crammer, and Csaba Szepesvári. Optimal resource allocation with semi-bandit feedback. In Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence (UAI), 2014.
Marek Petrik and Shlomo Zilberstein. Robust approximate bilinear programming for value function approximation. The Journal of Machine Learning Research, 12:3027–3063, 2011.
Paat Rusmevichientong and John N. Tsitsiklis. Linearly parameterized bandits. Mathematics of Operations Research, 35(2):395–411, 2010.
Thomas Sowell. Is Reality Optional?: And Other Essays. Hoover Institution Press, 1993.
Exactness of Approximate MAP Inference in
Continuous MRFs
Nicholas Ruozzi
Department of Computer Science
University of Texas at Dallas
Richardson, TX 75080
Abstract
Computing the MAP assignment in graphical models is generally intractable. As a
result, for discrete graphical models, the MAP problem is often approximated using linear programming relaxations. Much research has focused on characterizing
when these LP relaxations are tight, and while they are relatively well-understood
in the discrete case, only a few results are known for their continuous analog.
In this work, we use graph covers to provide necessary and sufficient conditions
for continuous MAP relaxations to be tight. We use this characterization to give
simple proofs that the relaxation is tight for log-concave decomposable and log-supermodular decomposable models. We conclude by exploring the relationship
between these two seemingly distinct classes of functions and providing specific
conditions under which the MAP relaxation can and cannot be tight.
1 Introduction
Graphical models are a popular modeling tool for both discrete and continuous distributions. We are
commonly interested in one of two inference tasks in graphical models: finding the most probable
assignment (a.k.a., MAP inference) and computing marginal distributions. These problems are NP-hard in general, and a variety of approximate inference schemes are used in practice.
In this work, we will focus on approximate MAP inference. For discrete state spaces, linear programming relaxations of the MAP problem (specifically, the MAP LP) are quite common [1; 2; 3].
These relaxations replace global marginalization constraints with a collection of local marginalization constraints. Wald and Globerson [4] refer to these as local consistency relaxations (LCRs). The
advantage of LCRs is that they are often much easier to specify and to optimize over (e.g., by using
a message-passing algorithm such as loopy belief propagation (LBP)). However, the analogous relaxations for continuous state spaces may not be compactly specified and can lead to an unbounded
number of constraints (except in certain special cases). To overcome this problem, further relaxations have been proposed [5; 4]. By construction, each of these further relaxations can only be tight
if the initial LCR was tight. As a result, there are compelling theoretical and algorithmic reasons to
investigate when LCRs are tight.
Among the most well-studied continuous models are the Gaussian graphical models. For this class
of models, it is known that the continuous MAP relaxation is tight when the corresponding inverse
covariance matrix is positive definite and scaled diagonally dominant (a special case of the so-called
log-concave decomposable models)[4; 6; 7]. In addition, LBP is known to converge to the correct
solution for Gaussian graphical models and log-concave decomposable models that satisfy a scaled
diagonal dominance condition [8; 9]. While much of the prior work in this domain has focused on
log-concave graphical models, in this work, we provide a general necessary and sufficient condition
for the continuous MAP relaxation to be tight. This condition mirrors the known results for the
discrete case and is based on the notion of graph covers: the MAP LP is tight if and only if the
1
optimal solution to the MAP problem is an upper bound on the MAP solution over any graph cover,
appropriately scaled. This characterization will allow us to understand when the MAP relaxation is
tight for more general models.
Apart from this characterization theorem, the primary goal of this work is to move towards a uniform treatment of the discrete and continuous cases; they are not as different as they may initially
appear. To this end, we explore the relationship between log-concave decomposable models and log-supermodular decomposable models (introduced here in the continuous case). Log-supermodular
models provide an example of continuous graphical models for which the MAP relaxation is tight,
but the objective function is not necessarily log-concave. These two concepts have analogs in discrete state spaces. In particular, log-concave decomposability is related to log-concave closures of
discrete functions and log-supermodular decomposability is a known condition which guarantees
that the MAP LP is exact in the discrete setting. We prove a number of results that highlight the
similarities and differences between these two concepts as well as a general condition under which
the MAP relaxation corresponding to a pairwise twice continuously differentiable model cannot be
tight.
2 Prerequisites

Let f : 𝒳ⁿ → ℝ≥0 be a non-negative function, where 𝒳 is the set of possible assignments of each variable. A function f factors with respect to a hypergraph G = (V, A) if there exist potential functions f_i : 𝒳 → ℝ≥0 for each i ∈ V and f_α : 𝒳^{|α|} → ℝ≥0 for each α ∈ A such that

f(x₁, . . . , xₙ) = Π_{i∈V} f_i(x_i) Π_{α∈A} f_α(x_α).
The hypergraph G together with the potential functions f_{i∈V} and f_{α∈A} define a graphical model.
We are interested in computing sup_{x∈𝒳ⁿ} f^G(x). In general, this MAP inference task is NP-hard,
but in practice, local message-passing algorithms based on approximations from statistical physics,
such as LBP, produce reasonable estimates in many settings. Much effort has been invested into
understanding when LBP solves the MAP problem. In this section, we briefly review approximate
MAP inference in the discrete setting (i.e., when X is a finite set). For simplicity and consistency, we
will focus on log-linear models as in [4]. Given a vector of sufficient statistics φ_i(x_i) ∈ ℝ^k for each i ∈ V and x_i ∈ 𝒳 and a parameter vector θ_i ∈ ℝ^k, we will assume that f_i(x_i) = exp(⟨θ_i, φ_i(x_i)⟩). Similarly, given a vector of sufficient statistics φ_α(x_α) for each α ∈ A and x_α ∈ 𝒳^{|α|} and a parameter vector θ_α, we will assume that f_α(x_α) = exp(⟨θ_α, φ_α(x_α)⟩). We will write φ(x) to represent the concatenation of the individual sufficient statistics and θ to represent the concatenation of the parameters. The objective function can then be expressed as f^G(x) = exp(⟨θ, φ(x)⟩).
2.1 The MAP LP relaxation

The MAP problem can be formulated in terms of mean parameters [10]:

sup_{x∈𝒳ⁿ} log f(x) = sup_{μ∈M} ⟨θ, μ⟩,   M = { μ ∈ ℝ^m : ∃ν ∈ Δ s.t. E_ν[φ(x)] = μ },

where Δ is the space of all densities over 𝒳ⁿ and M is the set of all realizable mean parameters. In general, M is a difficult object to compactly describe and to optimize over. As a result, one
typically constructs convex outer bounds on M that are more manageable. In the case that 𝒳 is finite, one such outer bound is given by the MAP LP. For each i ∈ V and k ∈ 𝒳, define φ_i(x_i)_k ≜ 1_{x_i=k}. Similarly, for each α ∈ A and k ∈ 𝒳^{|α|}, define φ_α(x_α)_k ≜ 1_{x_α=k}. With this choice of sufficient statistics, M is equivalent to the set of all marginal distributions over the individual variables and elements of A that arise from some joint probability distribution. The MAP LP is obtained by replacing M with a relaxation that only enforces local consistency constraints:

M_L = { μ ≥ 0 :  Σ_{x_{α\{i}}} μ_α(x_α) = μ_i(x_i)  for all α ∈ A, i ∈ α, x_i ∈ 𝒳;   Σ_{x_i} μ_i(x_i) = 1  for all i ∈ V }.

The set of constraints, M_L, is known as the local marginal polytope. The approximate MAP problem is then to compute max_{μ∈M_L} ⟨θ, μ⟩.
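To make M_L concrete, the local consistency LP for a single binary edge can be written out explicitly and handed to an off-the-shelf solver. The sketch below is our own illustration with arbitrary parameter values; the five equality rows are the normalization and marginalization constraints above (the remaining marginalization constraint is implied by the others).

```python
import numpy as np
from scipy.optimize import linprog

# Variables: mu_i(0), mu_i(1), mu_j(0), mu_j(1), mu_ij(00), mu_ij(01), mu_ij(10), mu_ij(11).
theta_i = np.array([0.0, 0.5])                   # node parameters (example values)
theta_j = np.array([0.2, 0.0])
theta_ij = np.array([1.0, -1.0, -1.0, 1.0])      # an "attractive" edge potential
c = -np.concatenate([theta_i, theta_j, theta_ij])  # linprog minimises, so negate

A_eq = np.array([
    [1, 1, 0, 0, 0, 0, 0, 0],     # sum_x mu_i(x) = 1
    [0, 0, 1, 1, 0, 0, 0, 0],     # sum_x mu_j(x) = 1
    [-1, 0, 0, 0, 1, 1, 0, 0],    # sum_{x_j} mu_ij(0, x_j) = mu_i(0)
    [0, -1, 0, 0, 0, 0, 1, 1],    # sum_{x_j} mu_ij(1, x_j) = mu_i(1)
    [0, 0, -1, 0, 1, 0, 1, 0],    # sum_{x_i} mu_ij(x_i, 0) = mu_j(0)
])
b_eq = np.array([1, 1, 0, 0, 0])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 8)
print("MAP LP value:", -res.fun, "pseudomarginals:", np.round(res.x, 3))
```

For this attractive (log-supermodular) edge the LP returns the integral optimum, consistent with the tightness results discussed later in the paper.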
Figure 1: An example of a graph cover of a factor graph. (a) A hypergraph G; (b) one possible 2-cover of G. The nodes in the cover are labeled by the node that they copy in the base graph.
2.2 Graph covers

In this work, we are interested in understanding when this relaxation is tight (i.e., when does sup_{μ∈M_L} ⟨θ, μ⟩ = sup_{x∈𝒳ⁿ} log f(x)). For discrete MRFs, the MAP LP is known to be tight in
a variety of different settings [11; 12; 13; 14]. Two different theoretical tools are often used to investigate the tightness of the MAP LP: duality and graph covers. Duality has been particularly useful in
the design of convergent and correct message-passing schemes that solve the MAP LP [1; 15; 2; 16].
Graph covers provide a theoretical framework for understanding when and why message-passing algorithms such as belief propagation fail to solve the MAP problem [17; 18; 3].
Definition 2.1. A graph H covers a graph G = (V, E) if there exists a graph homomorphism h : H → G such that for all vertices i ∈ G and all j ∈ h^{-1}(i), h maps the neighborhood ∂j of j in H bijectively to the neighborhood ∂i of i in G.
If a graph H covers a graph G, then H looks locally the same as G. In particular, local message-passing algorithms such as LBP have difficulty distinguishing a graph and its covers. If h(j) = i,
then we say that j ? H is a copy of i ? G. Further, H is said to be an M -cover of G if every vertex
of G has exactly M copies in H.
This definition can be easily extended to hypergraphs. Each hypergraph G can be represented in
factor graph form: create a node in the factor graph for each vertex (called variable nodes) and each
hyperedge (called factor nodes) of G. Each factor node is connected via an edge in the factor graph
to the variable nodes on which the corresponding hyperedge depends. For an example of a 2-cover,
see Figure 1.
To any M-cover H = (V^H, A^H) of G given by the homomorphism h, we can associate a collection of potentials: the potential at node i ∈ V^H is equal to f_{h(i)}, the potential at node h(i) ∈ G, and for each α ∈ A^H, we associate the potential f_{h(α)}. In this way, we can construct a function f^H : 𝒳^{M|V|} → ℝ≥0 such that f^H factorizes over H. We will say that the graphical model H is an M-cover of the graphical model G whenever H is an M-cover of G and f^H is chosen as described above. It will be convenient in the sequel to write f^H(x^H) = f^H(x¹, . . . , x^M), where x^m_i is the m-th copy of variable i ∈ V.
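Covers are easy to enumerate programmatically. The sketch below (our own construction, not from the paper) lifts a factor graph, given as a list of factor scopes, to a 2-cover: each factor is duplicated, and for each "crossed" factor-variable connection the two copies swap which copy of the variable they attach to. With nothing crossed it returns two disjoint copies of G.

```python
def two_cover(scopes, crossed):
    """Build a 2-cover of a factor graph. `scopes[a]` lists the variables of
    factor a; `crossed` is a set of (a, i) pairs whose connections are swapped
    between the two copies. Returns the lifted factor scopes over variables
    (i, m), where (i, m) is the m-th copy of variable i."""
    lifted = []
    for a, scope in enumerate(scopes):
        for m in (0, 1):
            lifted.append([(i, 1 - m if (a, i) in crossed else m) for i in scope])
    return lifted

scopes = [(0, 1), (1, 2), (2, 0)]                 # a 3-cycle
print(two_cover(scopes, crossed=set()))           # two disjoint triangles
print(two_cover(scopes, crossed={(2, 0)}))        # a single 6-cycle covering the triangle
```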
There is a direct correspondence between μ ∈ M_L and assignments on graph covers. This correspondence is the basis of the following theorem.
Theorem 2.2 (Ruozzi and Tatikonda [3]).

sup_{μ∈M_L} ⟨θ, μ⟩ = sup_M sup_{H∈C^M(G)} sup_{x^H} (1/M) log f^H(x^H),

where C^M(G) is the set of all M-covers of G.
Theorem 2.2 claims that the optimal value of the MAP LP is equal to the supremum over all MAP assignments over all graph covers, appropriately scaled. In particular, the proof of this result shows that, under mild conditions, there exists an M-cover H of G and an assignment x^H such that (1/M) log f^H(x^H) = sup_{μ∈M_L} ⟨θ, μ⟩.
3 Continuous MRFs

In this section, we will describe how to extend the previous results from discrete to continuous MRFs (i.e., 𝒳 = ℝ) using graph covers. The relaxation that we consider here is the appropriate extension of the MAP LP where each of the sums is replaced by an integral [4]:
M_L = { μ : there exist densities ν_i, ν_α s.t.  ∫ ν_α(x_α) dx_{α\i} = ν_i(x_i)  for all α ∈ A, i ∈ α, x_i ∈ 𝒳;
        μ_i = E_{ν_i}[φ_i]  for all i ∈ V;   μ_α = E_{ν_α}[φ_α]  for all α ∈ A }.
Our goal is to understand under what conditions this continuous relaxation is tight. Wald and Globerson [4] have approached this problem by introducing a further relaxation of M_L which they call the
weak local consistency relaxation (weak LCR). They provide conditions under which the weak LCR
(and hence the above relaxation) is tight. In particular, they show that weak LCR is tight for the class
of log-concave decomposable models. In this work, we take a different approach. We first prove
the analog of Theorem 2.2 in the continuous case and then we show that the known conditions that
guarantee tightness of the continuous relaxation are simple consequences of this general theorem.
Theorem 3.1.

sup_{μ∈M_L} ⟨θ, μ⟩ = sup_M sup_{H∈C^M(G)} sup_{x^H} (1/M) log f^H(x^H),

where C^M(G) is the set of all M-covers of G.
The proof of Theorem 3.1 is conceptually straightforward, albeit technical, and can be found in
Appendix A. The proof approximates the expectations in M_L as expectations with respect to simple functions, applies the known results for finite spaces, and takes the appropriate limit. Like its
discrete counterpart, Theorem 3.1 provides necessary and sufficient conditions for the continuous
relaxation to be tight. In particular, for the relaxation to be tight, the optimal solution on any M cover, appropriately scaled, cannot exceed the value of the optimal solution of the MAP problem
over G.
3.1 Tightness of the MAP relaxation

Theorem 3.1 provides necessary and sufficient conditions for the tightness of the continuous relaxation. However, checking that the maximum value attained on any M-cover is bounded by the maximum value over the base graph raised to the M-th power, in and of itself, appears to be a daunting task. In
this section, we describe two families of graphical models for which this condition is easy to verify: the log-concave decomposable functions and the log-supermodular decomposable functions.
Log-concave decomposability has been studied before, particularly in the case of Gaussian graphical models. Log-supermodularity with respect to graphical models, however, appears to have been
primarily studied in the discrete case.
3.1.1 Log-concave decomposability

A function f : ℝⁿ → ℝ≥0 is log-concave if f(x)^λ f(y)^{1−λ} ≤ f(λx + (1 − λ)y) for all x, y ∈ ℝⁿ and all λ ∈ [0, 1]. If f can be written as a product of log-concave potentials over a hypergraph G, we say that f is log-concave decomposable over G.

Theorem 3.2. If f is log-concave decomposable, then sup_x log f(x) = sup_{μ∈M_L} ⟨θ, μ⟩.
Proof. By log-concave decomposability, for any M-cover H of G,

f^H(x¹, . . . , x^M) ≤ f^G( (x¹ + · · · + x^M)/M )^M,

which we obtain by applying the definition of log-concavity separately to each of the M copies of the potential functions for each node and factor of G. As a result, sup_{x¹,...,x^M} f^H(x¹, . . . , x^M) ≤ sup_x f^G(x)^M. The proof of the theorem then follows by applying Theorem 3.1.
Wald and Globerson [4] provide a different proof of Theorem 3.2 by exploiting duality and the weak LCR.
3.1.2 Log-supermodular decomposability

Log-supermodular functions have played an important role in the study of discrete graphical models, and log-supermodularity arises in a number of classical correlation inequalities (e.g., the FKG
inequality). For log-supermodular decomposable models, the MAP LP is tight and the MAP problem
can be solved exactly in polynomial time [19; 20]. In the continuous case, log-supermodularity is
defined analogously to the discrete case. That is, f : ℝⁿ → ℝ≥0 is log-supermodular if f(x) f(y) ≤ f(x ∨ y) f(x ∧ y) for all x, y ∈ ℝⁿ, where x ∨ y is the componentwise maximum of the vectors x and y and x ∧ y is the componentwise minimum. Continuous log-supermodular functions are sometimes said to be multivariate totally positive of order two [21]. We will say that a graphical model is log-supermodular decomposable if f can be factorized as a product of log-supermodular potentials.

For any collection of vectors x¹, . . . , x^k ∈ ℝⁿ, let z^i(x¹, . . . , x^k) be the vector whose j-th component is the i-th largest element of x¹_j, . . . , x^k_j for each j ∈ {1, . . . , n}.

Theorem 3.3. If f is log-supermodular decomposable, then sup_x log f(x) = sup_{μ∈M_L} ⟨θ, μ⟩.
Proof. By log-supermodular decomposability, for any M-cover H of G,

f^H(x¹, . . . , x^M) ≤ Π_{m=1}^M f^G( z^m(x¹, . . . , x^M) ).

Again, this follows by repeatedly applying the definition of log-supermodularity separately to each of the M copies of the potential functions for each node and factor of G. As a result, sup_{x¹,...,x^M} f^H(x¹, . . . , x^M) ≤ sup_{x¹,...,x^M} Π_{m=1}^M f^G(x^m). The proof of the theorem then follows by applying Theorem 3.1.
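Both the order-statistics vectors z^i and the lattice inequality behind this proof are easy to check numerically. The sketch below is ours (the quadratic f is an arbitrary log-supermodular example with non-positive off-diagonal entries); it verifies f(x) f(y) ≤ f(x ∨ y) f(x ∧ y) on random pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

def z(i, xs):
    """Componentwise i-th largest entry (i = 1 gives the max) of the vectors xs."""
    return np.sort(np.stack(xs), axis=0)[::-1][i - 1]

A = np.array([[1.0, -0.3], [-0.3, 1.0]])   # non-positive off-diagonal entries
logf = lambda x: -x @ A @ x                # so log f is supermodular
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    assert logf(x) + logf(y) <= logf(np.maximum(x, y)) + logf(np.minimum(x, y)) + 1e-9
print("lattice inequality held; z^1 =", z(1, [x, y]), " z^2 =", z(2, [x, y]))
```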
4 Log-supermodular decomposability vs. log-concave decomposability

As discussed above, log-concave decomposable and log-supermodular decomposable models are both examples of continuous graphical models for which the MAP relaxation is tight. These two classes are not equivalent: twice continuously differentiable functions are supermodular if and only if all off-diagonal elements of the Hessian matrix are non-negative. Contrast this with twice continuously differentiable concave functions, where the Hessian matrix must be negative semidefinite. In particular, this means that log-supermodular functions can be multimodal. In this section, we
explore the relationship between log-supermodularity and log-concavity.
4.1 Gaussian MRFs

We begin with the case of Gaussian graphical models, i.e., pairwise graphical models given by

f(x) ∝ exp( −(1/2) xᵀAx + bᵀx ) = Π_{i∈V} exp( −(1/2) A_ii x_i² + b_i x_i ) Π_{(i,j)∈E} exp( −A_ij x_i x_j )

for some symmetric positive definite matrix A ∈ ℝ^{n×n} and vector b ∈ ℝⁿ. Here, f factors over the graph G corresponding to the non-zero entries of the matrix A.
Gaussian graphical models are a relatively well-studied class of continuous graphical models.
In fact, sufficient conditions for the convergence and correctness of Gaussian belief propagation
(GaBP) are known for these models. Specifically, GaBP converges to the optimal solution if the
positive definite matrix A is walk-summable, scaled diagonally dominant, or log-concave decomposable [22; 7; 8; 9]. These conditions are known to be equivalent [23; 6].
Definition 4.1. Γ ∈ ℝ^{n×n} is scaled diagonally dominant if there exists w ∈ ℝⁿ, w > 0, such that |Γ_ii| w_i > Σ_{j≠i} |Γ_ij| w_j for all i.
In addition, the following theorem provides a characterization of scaled diagonal dominance (and
hence log-concave decomposability) in terms of graph covers for these models.
Theorem 4.2 (Ruozzi and Tatikonda [6]). Let A be a symmetric positive definite matrix. The following
are equivalent.
1. A is scaled diagonally dominant.
2. All covers of A are positive definite.
3. All 2-covers of A are positive definite.
The proof of this theorem constructs a specific 2-cover whose covariance matrix has negative eigenvalues whenever the matrix A is positive definite but not scaled diagonally dominant. The joint
distribution corresponding to this 2-cover is not bounded from above, so the optimal value of the
MAP relaxation is +∞ as per Theorem 3.1.
For Gaussian graphical models, log-concave decomposability and log-supermodular decomposability are related: every positive definite, log-supermodular decomposable model is log-concave decomposable, and every positive definite, log-concave decomposable model is a signed version of
some positive definite, log-supermodular decomposable Gaussian graphical model. This follows
from the following simple lemma.
Lemma 4.3. A symmetric positive definite matrix A is scaled diagonally dominant if and only if the
matrix B such that Bii = Aii for all i and Bij = ?|Aij | for all i 6= j is positive definite.
If A is positive definite and scaled diagonally dominant, then the model is log-concave decomposable. In contrast, the model would be log-supermodular decomposable if all of the off-diagonal elements of A were negative, independent of the diagonal. In particular, the diagonal could have both
positive and negative elements, meaning that f (x) could be either log-concave, log-convex, or neither. As log-convex quadratic forms do not correspond to normalizable Gaussian graphical models,
the log-convex case appears to be less interesting as the MAP problem is unbounded from above.
However, the situation is entirely different for constrained (over some convex set) log-quadratic
maximization. As an example, consider a box constrained log-quadratic maximization problem
where the matrix A has all negative off-diagonal entries. Such a model is always log-supermodular
decomposable. Hence, the MAP relaxation is tight, but the model is not necessarily log-concave.
4.2 Pairwise twice differentiable MRFs

All of the results from the previous section can be extended to general twice continuously differentiable functions over pairwise graphical models (i.e., |α| = 2 for all α ∈ A). In this section, unless otherwise specified, assume that all models are pairwise.

Theorem 4.4. If log f(x) is strictly concave and twice continuously differentiable, the following are equivalent.

1. ∇²(log f)(x) is scaled diagonally dominant for all x.
2. ∇²(log f^H)(x^H) is negative definite for every graph cover H of G and every x^H.
3. ∇²(log f^H)(x^H) is negative definite for every 2-cover H of G and every x^H.
The equivalence of 1-3 in Theorem 4.4 follows from Theorem 4.2.

Corollary 4.5. If ∇²(log f)(x) is scaled diagonally dominant for all x, then the continuous MAP relaxation is tight.

Corollary 4.6. If f is log-concave decomposable over a pairwise graphical model and strictly log-concave, then ∇²(log f)(x) is scaled diagonally dominant for all x.
Whether or not log-concave decomposability is equivalent to the other conditions listed in the statement of Theorem 4.4 remains an open question (though we conjecture that this is the case). Similar
ideas can be extended to general twice continuously differentiable functions.
Theorem 4.7. Suppose log f(x) is twice continuously differentiable with a maximum at x*. Let B_ij = |∇²(log f)(x*)_ij| for all i ≠ j and B_ii = ∇²(log f)(x*)_ii. If f admits a pairwise factorization over G and B has both positive and negative eigenvalues, then the continuous MAP relaxation is not tight.
Proof. If B has both positive and negative eigenvalues, then there exists a 2-cover H of G such that ∇²(log f^H)(x*, x*) has both positive and negative eigenvalues. As a result, the lift of x* to the 2-cover f^H is a saddle point. Consequently, f^H(x*, x*) < sup_{x^H} f^H(x^H). By Theorem 3.1, the continuous MAP relaxation cannot be tight.
This negative result is quite general. If −∇²(log f) is positive definite but not scaled diagonally dominant at any global optimum, then the MAP relaxation is not tight. In particular, this means that all log-supermodular decomposable functions that meet the conditions of the theorem must be s.d.d. at their optima.
Algorithmically, Moallemi and Van Roy [9] argued that belief propagation converges for models
that are log-concave decomposable and scaled diagonally dominant. It is unknown whether or not a
similar convergence argument applies to log-supermodular decomposable functions.
4.3 Concave closures

Many of the tightness results in the discrete case can be seen as a specific case of the continuous results described above. Again, suppose that 𝒳 ⊂ ℝ is a finite set.

Definition 4.8. The concave closure of a function g : 𝒳ⁿ → ℝ ∪ {−∞} at x ∈ ℝⁿ is given by

ḡ(x) = sup { Σ_{y∈𝒳ⁿ} λ(y) g(y) : Σ_y λ(y) = 1, Σ_y λ(y) y = x, λ(y) ≥ 0 }.

Equivalently, the concave closure of a function is the smallest concave function such that ḡ(x) ≥ g(x) for all x. A function and its concave closure must necessarily have the same maximum. Computing the concave (or convex) closure of a function is NP-hard in general, but it can be efficiently
computed for certain special classes of discrete functions. In particular, when 𝒳 = {0, 1} and log f is supermodular, then its concave closure can be computed in polynomial time, as it is equal to the Lovász extension of log f. The Lovász extension has a number of interesting properties. Most notably, it is linear (the Lovász extension of a sum of functions is equal to the sum of the Lovász extensions). Define the log-concave closure of f to be f̄(x) = exp(ḡ(x)), where g = log f. As a result, if f is log-supermodular decomposable, then f̄ is log-concave decomposable.
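For a finite 𝒳, Definition 4.8 is itself a small linear program, so the concave closure can be evaluated pointwise with an off-the-shelf LP solver. A sketch (our own construction, with an arbitrary example g):

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def concave_closure(g, X, n, x):
    """Evaluate the concave closure of g : X^n -> R at x via the LP in Definition 4.8."""
    ys = np.array(list(itertools.product(X, repeat=n)), dtype=float)
    vals = np.array([g(y) for y in ys])
    # maximise sum_y lambda(y) g(y)  s.t.  sum_y lambda(y) = 1, sum_y lambda(y) y = x
    A_eq = np.vstack([np.ones(len(ys)), ys.T])
    b_eq = np.concatenate([[1.0], np.asarray(x, dtype=float)])
    res = linprog(-vals, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * len(ys))
    return -res.fun if res.success else float("-inf")

g = lambda y: min(1.0, y.sum())                     # an example function on {0, 1}^2
print(concave_closure(g, [0, 1], 2, [0.5, 0.5]))    # 1.0
```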
Theorem 4.9. If f̄ = Π_{i∈V} f̄_i Π_{α∈A} f̄_α, then sup_{x∈𝒳ⁿ} f(x) = sup_{μ∈M_L} ⟨θ, μ⟩.

This theorem is a direct consequence of Theorem 3.2. For example, the tightness results of Bayati et al. [11] and Sanghavi et al. [14] (and indeed many others) can be seen as a special case of this theorem. Even when |𝒳| is not finite, the concave closure can be similarly defined, and the theorem holds in this case as well. Given the characterization in the discrete case, this suggests that there could be a, possibly deep, connection between log-concave closures and log-supermodular decomposability.
5 Discussion
We have demonstrated that the same necessary and sufficient condition based on graph covers for
the tightness of the MAP LP in the discrete case translates seamlessly to the continuous case. This
characterization allowed us to provide simple proofs of the tightness of the MAP relaxation for log-concave decomposable and log-supermodular decomposable models. While the proof of Theorem
3.1 is nontrivial, it provides a powerful tool to reason about the tightness of MAP relaxations. We
also explored the intricate relationship between log-concave and log-supermodular decomposability in both the discrete and continuous cases, which provided intuition about when the MAP relaxation
can or cannot be tight for pairwise graphical models.
A Proof of Theorem 3.1

The proof of this theorem proceeds in two parts. First, we will argue that

sup_{μ∈M_L} ⟨θ, μ⟩ ≥ sup_M sup_{H∈C^M(G)} sup_{x^H} (1/M) log f^H(x^H).
To see this, fix an M-cover, H, of G via the homomorphism h and consider any assignment x^H. Construct the mean parameters μ′ ∈ M_L as follows:
ν_i(x_i) = (1/M) Σ_{j∈V(H):h(j)=i} δ(x^H_j − x_i),       μ′_i = ∫ φ_i(x_i) ν_i(x_i) dx_i,
ν_α(x_α) = (1/M) Σ_{β∈A(H):h(β)=α} δ(x^H_β − x_α),       μ′_α = ∫ φ_α(x_α) ν_α(x_α) dx_α.
Here, δ(·) is the Dirac delta function¹. This implies that

(1/M) log f^H(x^H) = ⟨θ, μ′⟩ ≤ sup_{μ∈M_L} ⟨θ, μ⟩.
For the other direction, fix some μ′ ∈ M_L such that μ′ is generated by the vector of densities ν. We will prove the result for locally consistent probability distributions with bounded support. The result for arbitrary ν will then follow by constructing sequences of these distributions that converge (in measure) to ν. For simplicity, we will assume that each potential function is strictly positive². Consider the space [−t, t]^{|V|} for some positive integer t. We will consider local probability distributions that are supported on subsets of this space. That is, supp(ν_i) ⊆ [−t, t] for each i and supp(ν_α) ⊆ [−t, t]^{|α|} for each α. For a fixed positive integer s, divide the interval [−t, t] into 2^{s+1}t intervals of size 1/2^s and let S_k denote the k-th interval. This partitioning divides [−t, t]^{|V|} into disjoint cubes of volume 1/2^{s|V|}. The distribution ν can be approximated by a sequence of distributions ν¹, ν², . . . as follows. Define a vector of approximate densities ν^s by setting
ν^s_i(x′_i) ≜ 2^s ∫_{S_k} ν_i(x_i) dx_i  if x′_i ∈ S_k, and 0 otherwise;
ν^s_α(x′_α) ≜ 2^{|α|s} ∫_{Π_{j∈α} S_{k_j}} ν_α(x_α) dx_α  if x′_α ∈ Π_{j∈α} S_{k_j}, and 0 otherwise.

We have ν^s → ν, ∫_{[−t,t]} φ_i(x_i) ν^s_i(x_i) dx_i → μ′_i for each i ∈ V(G), and ∫_{[−t,t]^{|α|}} φ_α(x_α) ν^s_α(x_α) dx_α → μ′_α for each α ∈ A(G).
The continuous MAP relaxation for local probability distributions of this form can be expressed in terms of discrete variables over 𝒳 = {1, . . . , 2^{s+1}t}. To see this, define τ^s_i(z_i) = ∫_{S_{z_i}} ν^s_i(x_i) dx_i for each z_i ∈ {1, . . . , 2^{s+1}t} and τ^s_α(z_α) = ∫_{S_{z_α}} ν^s_α(x_α) dx_α for each z_α ∈ {1, . . . , 2^{s+1}t}^{|α|}. The corresponding MAP LP objective, evaluated at τ^s, is then

Σ_{i∈V} Σ_{z_i} τ^s_i(z_i) ∫_{S_{z_i}} 2^s log f_i(x_i) dx_i + Σ_{α∈A} Σ_{z_α} τ^s_α(z_α) ∫_{S_{z_α}} 2^{|α|s} log f_α(x_α) dx_α.     (1)
This MAP LP objective corresponds to a discrete graphical model that factors over the hypergraph G, with potential functions corresponding to the above integrals over the partitions indexed by the vector z:

g^s(z) ≜ Π_{i∈V(G)} exp( ∫_{S_{z_i}} 2^s log f_i(x_i) dx_i ) Π_{α∈A(G)} exp( ∫_{S_{z_α}} 2^{|α|s} log f_α(x_α) dx_α )
       = Π_{i∈V(G)} exp( ∫_{S_z} 2^{|V(G)|s} log f_i(x_i) dx ) Π_{α∈A(G)} exp( ∫_{S_z} 2^{|V(G)|s} log f_α(x_α) dx ).
Every assignment selects a single cube indexed by z. The value of the objective is calculated by averaging log f over the cube indexed by z. As a result, max_z g^s(z) ≤ sup_x f(x), and for any M-cover H of G, max_{z¹,...,z^M} g^{H,s}(z¹, . . . , z^M) ≤ sup_{x¹,...,x^M} f^H(x¹, . . . , x^M). As this upper bound holds for any fixed s, it must also hold for any vector of distributions that can be written as a limit of such distributions. Now, by applying Theorem 2.2 for the discrete case, ⟨θ, μ′⟩ = lim_{s→∞} ⟨θ, μ^s⟩ ≤ sup_M sup_{H∈C^M(G)} sup_{x^H} (1/M) log f^H(x^H), as desired. To finish the proof, observe that any Riemann integrable density can be arbitrarily well approximated by densities of this form as t → ∞.
¹In order to make this precise, we would need to use Lebesgue integration or take a sequence of probability distributions over the space ℝ^{M|V|} that arbitrarily well-approximate the desired assignment x^H.
²The same argument will apply in the general case, but each of the local distributions must be contained in the support of the corresponding potential function (i.e., supp(ν_i) ⊆ supp(f_i)) for the integrals to exist.
References
[1] A. Globerson and T. S. Jaakkola. Fixing max-product: Convergent message passing algorithms for MAP LP-relaxations. In Proc. 21st Neural Information Processing Systems (NIPS), Vancouver, B.C., Canada, 2007.
[2] T. Werner. A linear programming approach to max-sum problem: A review. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 29(7):1165–1179, 2007.
[3] N. Ruozzi and S. Tatikonda. Message-passing algorithms: Reparameterizations and splittings. IEEE Transactions on Information Theory, 59(9):5860–5881, Sept. 2013.
[4] Y. Wald and A. Globerson. Tightness results for local consistency relaxations in continuous MRFs. In Proc. 30th Uncertainty in Artificial Intelligence (UAI), Quebec City, Quebec, Canada, 2014.
[5] T. P. Minka. Expectation propagation for approximate Bayesian inference. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence (UAI), pages 362–369, 2001.
[6] N. Ruozzi and S. Tatikonda. Message-passing algorithms for quadratic minimization. Journal of Machine Learning Research, 14:2287–2314, 2013.
[7] D. M. Malioutov, J. K. Johnson, and A. S. Willsky. Walk-sums and belief propagation in Gaussian graphical models. Journal of Machine Learning Research, 7:2031–2064, 2006.
[8] C. C. Moallemi and B. Van Roy. Convergence of min-sum message passing for quadratic optimization. Information Theory, IEEE Transactions on, 55(5):2413–2423, May 2009.
[9] C. C. Moallemi and B. Van Roy. Convergence of min-sum message-passing for convex optimization. Information Theory, IEEE Transactions on, 56(4):2041–2050, April 2010.
[10] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305, 2008.
[11] M. Bayati, C. Borgs, J. Chayes, and R. Zecchina. Belief propagation for weighted b-matchings on arbitrary graphs and its relation to linear programs with integer solutions. SIAM Journal on Discrete Mathematics, 25(2):989–1011, 2011.
[12] V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts? In Computer Vision – ECCV 2002, pages 65–81. Springer, 2002.
[13] S. Sanghavi, D. M. Malioutov, and A. S. Willsky. Belief propagation and LP relaxation for weighted matching in general graphs. Information Theory, IEEE Transactions on, 57(4):2203–2212, April 2011.
[14] S. Sanghavi, D. Shah, and A. S. Willsky. Message passing for maximum weight independent set. Information Theory, IEEE Transactions on, 55(11):4822–4834, Nov. 2009.
[15] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. MAP estimation via agreement on (hyper)trees: Message-passing and linear programming. Information Theory, IEEE Transactions on, 51(11):3697–3717, Nov. 2005.
[16] David Sontag, Talya Meltzer, Amir Globerson, Yair Weiss, and Tommi Jaakkola. Tightening LP relaxations for MAP using message-passing. In 24th Conference in Uncertainty in Artificial Intelligence, pages 503–510. AUAI Press, 2008.
[17] P. O. Vontobel. Counting in graph covers: A combinatorial characterization of the Bethe entropy function. Information Theory, IEEE Transactions on, Jan. 2013.
[18] P. O. Vontobel and R. Koetter. Graph-cover decoding and finite-length analysis of message-passing iterative decoding of LDPC codes. CoRR, abs/cs/0512078, 2005.
[19] S. Iwata, L. Fleischer, and S. Fujishige. A strongly polynomial-time algorithm for minimizing submodular functions. Journal of the ACM, 1999.
[20] A. Schrijver. A combinatorial algorithm minimizing submodular functions in strongly polynomial time. Journal of Combinatorial Theory, Series B, 80(2):346–355, 2000.
[21] S. Karlin and Y. Rinott. Classes of orderings of measures and related correlation inequalities. I. Multivariate totally positive distributions. Journal of Multivariate Analysis, 10(4):467–498, 1980.
[22] Y. Weiss and W. T. Freeman. Correctness of belief propagation in Gaussian graphical models of arbitrary topology. Neural Comput., 13(10):2173–2200, Oct. 2001.
[23] D. M. Malioutov. Approximate inference in Gaussian graphical models. Ph.D. thesis, EECS, MIT, 2008.
| 5932 |@word mild:1 version:1 briefly:1 manageable:1 polynomial:4 open:1 closure:10 covariance:2 homomorphism:3 initial:1 series:1 si:2 dx:9 written:2 must:5 partition:1 koetter:1 v:1 intelligence:4 amir:1 xk:2 ith:1 characterization:7 provides:4 node:11 unbounded:2 direct:2 prove:3 x0:1 pairwise:8 lov:4 intricate:1 notably:1 indeed:1 freeman:1 riemann:1 talya:1 totally:2 begin:1 provided:1 bounded:3 xx:2 factorized:1 what:2 finding:1 guarantee:2 zecchina:1 every:8 auai:1 concave:44 exactly:2 scaled:17 lcrs:3 rm:2 qm:1 partitioning:1 appear:1 positive:22 before:1 understood:1 local:11 dallas:1 limit:2 consequence:2 meet:1 signed:1 twice:8 studied:4 equivalence:1 suggests:1 factorization:1 bi:1 seventeenth:1 globerson:6 enforces:1 practice:2 definite:16 jan:1 convenient:1 matching:1 cannot:5 fkg:1 applying:5 optimize:2 equivalent:6 map:56 demonstrated:1 maxz:1 straightforward:1 convex:7 focused:2 decomposable:32 simplicity:2 notion:1 analogous:1 construction:1 suppose:2 exact:1 programming:4 distinguishing:1 agreement:1 associate:2 element:5 roy:3 approximated:3 particularly:2 trend:1 skj:1 cut:1 labeled:1 role:1 solved:1 connected:1 ordering:1 intuition:1 hypergraph:6 reparameterizations:1 tight:28 basis:1 matchings:1 compactly:2 easily:1 joint:2 aii:2 represented:1 tx:1 kolmogorov:1 distinct:1 describe:3 artificial:2 approached:1 lift:1 hyper:1 neighborhood:2 quite:2 whose:2 solve:2 say:4 tightness:10 supermodularity:5 otherwise:3 statistic:4 richardson:1 invested:1 itself:1 seemingly:1 chayes:1 advantage:1 differentiable:8 eigenvalue:4 sequence:3 karlin:1 product:3 dirac:1 exploiting:1 convergence:4 optimum:2 produce:1 converges:2 object:1 fixing:1 x0i:2 ij:2 solves:1 c:1 implies:1 tommi:1 direction:1 correct:2 argued:1 fix:2 probable:1 exploring:1 extension:5 strictly:3 hold:3 exp:10 algorithmic:1 claim:1 smallest:1 fh:2 estimation:1 proc:2 combinatorial:3 tatikonda:4 largest:1 correctness:2 create:1 city:1 tool:3 weighted:2 minimization:1 exactness:1 mit:1 gaussian:13 always:1 factorizes:1 jaakkola:3 corollary:2 ax:1 focus:2 seamlessly:1 contrast:2 realizable:1 inference:10 mrfs:7 typically:1 initially:1 szi:2 relation:1 interested:3 selects:1 among:1 constrained:2 special:4 integration:1 marginal:3 equal:4 lcr:5 construct:4 cube:3 look:1 gabp:2 minimized:1 np:2 sanghavi:3 others:1 few:1 primarily:1 individual:2 replaced:1 lebesgue:1 ab:1 message:13 investigate:2 semidefinite:1 lims:1 edge:1 integral:3 moallemi:3 necessary:5 unless:1 indexed:3 tree:1 divide:2 walk:2 desired:2 vontobel:2 theoretical:3 modeling:1 compelling:1 cover:41 assignment:9 maximization:2 loopy:1 werner:1 introducing:1 vertex:3 decomposability:14 entry:2 subset:1 uniform:1 johnson:1 supx:7 eec:1 st:1 density:6 siam:1 sequel:1 physic:1 off:3 decoding:2 together:1 continuously:7 analogously:1 again:2 thesis:1 summable:1 possibly:1 supp:4 potential:13 satisfy:1 depends:1 sup:20 maxz1:1 supm:1 efficiently:1 correspond:1 rinott:1 conceptually:1 weak:5 bayesian:1 j6:1 malioutov:3 ah:2 whenever:2 definition:6 energy:1 minka:1 proof:16 dxi:6 treatment:1 popular:1 x1j:1 appears:3 attained:1 supermodular:28 follow:1 specify:1 wei:2 daunting:1 april:2 evaluated:1 box:1 though:1 strongly:2 supx1:4 correlation:2 replacing:1 propagation:9 concept:2 verify:1 counterpart:1 hence:3 symmetric:3 meaning:1 variational:1 fi:8 xkj:1 common:1 function1:1 volume:1 analog:3 hypergraphs:1 extend:1 approximates:1 discussed:1 refer:1 consistency:5 mathematics:1 similarly:3 logsupermodular:2 submodular:2 similarity:1 base:2 dominant:12 
multivariate:3 apart:1 certain:2 inequality:3 hyperedge:2 arbitrarily:2 integrable:1 seen:2 minimum:1 converge:2 ii:2 technical:1 wald:4 expectation:3 represent:2 sometimes:1 lbp:5 addition:2 separately:2 interval:3 appropriately:3 sr:1 asz:4 logconcave:2 fujishige:1 quebec:2 jordan:1 call:1 integer:3 counting:1 exceed:1 easy:1 meltzer:1 variety:2 marginalization:2 xj:1 zi:4 finish:1 topology:1 idea:1 translates:1 texas:1 fleischer:1 whether:2 effort:1 sontag:1 splittings:1 passing:13 hessian:2 repeatedly:1 deep:1 generally:1 useful:1 listed:1 locally:2 ph:1 zabih:1 exist:2 delta:1 algorithmically:1 per:1 disjoint:1 ruozzi:5 discrete:25 write:2 dominance:2 neither:1 graph:32 relaxation:44 sum:7 inverse:1 powerful:1 uncertainty:3 family:2 reasonable:1 appendix:1 entirely:1 bound:2 played:1 convergent:2 correspondence:2 quadratic:5 nontrivial:1 constraint:5 normalizable:1 argument:2 min:2 relatively:2 conjecture:1 department:1 wi:1 lp:19 remains:1 fail:1 end:1 prerequisite:1 apply:1 observe:1 bijectively:1 appropriate:2 nicholas:1 bii:2 yair:1 shah:1 graphical:32 classical:1 move:1 objective:5 question:1 primary:1 diagonal:7 said:2 concatenation:2 polytope:1 argue:1 reason:2 willsky:4 length:1 code:1 ldpc:1 relationship:4 providing:1 minimizing:2 equivalently:1 difficult:1 statement:1 negative:13 tightening:1 design:1 unknown:1 upper:2 finite:6 situation:1 extended:3 precise:1 rn:9 arbitrary:3 canada:2 introduced:1 david:1 specified:2 componentwise:2 connection:1 nip:1 proceeds:1 pattern:1 xm:11 program:1 max:3 belief:8 wainwright:2 difficulty:1 scheme:2 sept:1 kj:2 prior:1 understanding:3 review:2 checking:1 vancouver:1 highlight:1 interesting:2 suph:1 bayati:2 foundation:1 sufficient:10 consistent:1 diagonally:12 supported:1 copy:6 aij:2 allow:1 understand:2 characterizing:1 van:3 overcome:1 calculated:1 xn:1 concavity:2 commonly:1 collection:3 transaction:8 approximate:9 nov:2 supremum:1 sz:6 ml:18 global:2 uai:2 conclude:1 xi:27 continuous:30 iterative:1 sk:3 why:1 bethe:1 messagepassing:1 necessarily:3 constructing:1 domain:1 arise:1 allowed:1 x1:9 nphard:1 xh:19 exponential:1 comput:1 bij:2 theorem:37 rk:2 specific:3 borgs:1 explored:1 admits:1 intractable:1 exists:3 albeit:1 corr:1 mirror:1 easier:1 entropy:1 explore:2 saddle:1 expressed:2 contained:1 applies:2 springer:1 corresponds:1 iwata:1 acm:1 oct:1 goal:2 formulated:1 consequently:1 towards:1 replace:1 hard:2 specifically:2 except:1 averaging:1 lemma:2 called:3 duality:3 schrijver:1 support:2 arises:1 artifical:1 |
5,450 | 5,933 | On the consistency theory of high dimensional variable screening
Xiangyu Wang
Dept. of Statistical Science
Duke University, USA
xw56@stat.duke.edu
Chenlei Leng
Dept. of Statistics
University of Warwick, UK
C.Leng@warwick.ac.uk
David B. Dunson
Dept. of Statistical Science
Duke University, USA
dunson@stat.duke.edu
Abstract
Variable screening is a fast dimension reduction technique for assisting high dimensional feature selection. As a preselection method, it selects a moderate size
subset of candidate variables for further refining via feature selection to produce
the final model. The performance of variable screening depends on both computational efficiency and the ability to dramatically reduce the number of variables
without discarding the important ones. When the data dimension p is substantially
larger than the sample size n, variable screening becomes crucial as 1) Faster feature selection algorithms are needed; 2) Conditions guaranteeing selection consistency might fail to hold. This article studies a class of linear screening methods
and establishes consistency theory for this special class. In particular, we prove
the restricted diagonally dominant (RDD) condition is a necessary and sufficient
condition for strong screening consistency. As concrete examples, we show two
screening methods SIS and HOLP are both strong screening consistent (subject
to additional constraints) with large probability if n > O((τs + σ/ν)² log p) under
random designs. In addition, we relate the RDD condition to the irrepresentable
condition, and highlight limitations of SIS.
1 Introduction
The rapidly growing data dimension has brought new challenges to statistical variable selection, a
crucial technique for identifying important variables to facilitate interpretation and improve prediction accuracy. Recent decades have witnessed an explosion of research in variable selection and
related fields such as compressed sensing [1, 2], with a core focus on regularized methods [3–7].
Regularized methods can consistently recover the support of coefficients, i.e., the non-zero signals,
via optimizing regularized loss functions under certain conditions [8–10]. However, in the big data
era when p far exceeds n, such regularized methods might fail due to two reasons. First, the conditions that guarantee variable selection consistency for convex regularized methods such as lasso
might fail to hold when p >> n; Second, the computational expense of both convex and non-convex
regularized methods increases dramatically with large p.
Bearing these concerns in mind, [11] propose the concept of "variable screening", a fast technique
that reduces data dimensionality from p to a size comparable to n, with all predictors having nonzero coefficients preserved. They propose a marginal correlation based fast screening technique,
"Sure Independence Screening" (SIS), that can preserve signals with large probability. However,
this method relies on a strong assumption that the marginal correlations between the response and
the important predictors are high [11], which is easily violated in practice. [12] extends the
marginal correlation to Spearman's rank correlation, which is shown to gain certain robustness
but is still limited by the same strong assumption. [13] and [14] take a different approach to attack
the screening problem. They both adopt variants of a forward selection type algorithm that includes
one variable at a time for constructing a candidate variable set for further refining. These methods
eliminate the strong marginal assumption in [11] and have been shown to achieve better empirical
performance. However, such improvement is limited by the extra computational burden caused
by their iterative framework, which is reported to be high when p is large [15]. To ameliorate
concerns in both screening performance and computational efficiency, [15] develop a new type of
screening method termed "High-dimensional ordinary least-square projection" (HOLP). This new
screener relaxes the strong marginal assumption required by SIS and can be computed efficiently
(complexity is O(n²p)), thus scalable to ultra-high dimensionality.
This article focuses on linear models for tractability. As computation is one vital concern for designing a good screening method, we primarily focus on a class of linear screeners that can be efficiently
computed, and study their theoretical properties. The main contributions of this article lie in three
aspects.
1. We define the notion of strong screening consistency to provide a unified framework for
analyzing screening methods. In particular, we show a necessary and sufficient condition for a screening method to be strong screening consistent is that the screening matrix
is restricted diagonally dominant (RDD). This condition gives insights into the design of
screening matrices, while providing a framework to assess the effectiveness of screening
methods.
2. We relate RDD to other existing conditions. The irrepresentable condition (IC) [8] is necessary and sufficient for sign consistency of lasso [3]. In contrast to IC that is specific to the
design matrix, RDD involves another ancillary matrix that can be chosen arbitrarily. Such
flexibility allows RDD to hold even when IC fails if the ancillary matrix is carefully chosen
(as in HOLP ). When the ancillary matrix is chosen as the design matrix, certain equivalence is shown between RDD and IC, revealing the difficulty for SIS to achieve screening
consistency. We also comment on the relationship between RDD and the restricted eigenvalue condition (REC) [6] which is commonly seen in the high dimensional literature. We
illustrate via a simple example that RDD might not be necessarily stronger than REC.
3. We study the behavior of SIS and HOLP under random designs, and prove that a sample
size of n = O((τs + σ/ν)² log p) is sufficient for SIS and HOLP to be screening consistent, where s is the sparsity, τ measures the diversity of signals and ν/σ evaluates the
signal-to-noise ratio. This is to be compared to the sign consistency results in [9] where the
design matrix is fixed and assumed to follow the IC.
The article is organized as follows. In Section 1, we set up the basic problem and describe the
framework of variable screening. In Section 2, we provide a deterministic necessary and sufficient
condition for consistent screening. Its relationship with the irrepresentable condition is discussed
in Section 3. In Section 4, we prove the consistency of SIS and HOLP under random designs by
showing the RDD condition is satisfied with large probability, although the requirement on SIS is
much more restrictive.
2 Linear screening
Consider the usual linear regression
    Y = Xβ + ε,
where Y is the n × 1 response vector, X is the n × p design matrix and ε is the noise. The regression
task is to learn the coefficient vector β. In the high dimensional setting where p >> n, a sparsity
assumption is often imposed on β so that only a small portion of the coordinates are non-zero. Such
an assumption splits the task of learning β into two phases. The first is to recover the support of
β, i.e., the location of the non-zero coefficients; the second is to estimate the value of these non-zero
signals. This article mainly focuses on the first phase.
As pointed out in the introduction, when the dimensionality is too high, using regularization methods
raises concerns both computationally and theoretically. To reduce the dimensionality, [11]
suggest a variable screening framework by finding a submodel
    M_d = {i : |β̂_i| is among the largest d coordinates of |β̂|}
or
    M_γ = {i : |β̂_i| > γ}.
Let Q = {1, 2, ..., p} and define S as the true model with s = |S| being its cardinality. The
hope is that the submodel size |M_d| or |M_γ| will be smaller than or comparable to n, while S ⊂ M_d
or S ⊂ M_γ. To achieve this goal two steps are usually involved in the screening analysis. The
first is to show there exists some γ such that min_{i∈S} |β̂_i| > γ, and the second step is to bound the
size of |M_γ| such that |M_γ| = O(n). To unify these steps for a more comprehensive theoretical
framework, we put forward a slightly stronger definition of screening consistency in this article.
Definition 2.1. (Strong screening consistency) An estimator β̂ (of β) is strong screening consistent
if it satisfies
    min_{i∈S} |β̂_i| > max_{i∉S} |β̂_i|        (1)
and
    sign(β̂_i) = sign(β_i),  ∀i ∈ S.        (2)
Remark 2.1. This definition does not differ much from the usual screening property studied in the
literature, which requires min_{i∈S} |β̂_i| > max^{(n−s)}_{i∉S} |β̂_i|, where max^{(k)} denotes the k-th largest item.
The key of strong screening consistency is the property (1) that requires the estimator to preserve
a consistent ordering of the zero and non-zero coefficients. It is weaker than variable selection consistency in [8].
The requirement in (2) can be seen as a relaxation of the sign consistency defined in [8],
as no requirement on β̂_i, i ∉ S is needed. As shown later, such relaxation tremendously reduces the
restriction on the design matrix, and allows screening methods to work for a broader choice of X.
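Definition 2.1 is mechanical to check on synthetic examples. The following minimal Python sketch (our own hypothetical helper, not from the paper) verifies properties (1) and (2), assuming the true support S is a strict subset of {1, ..., p}:

```python
import numpy as np

def is_strong_screening_consistent(beta_hat, beta, S):
    """Check Definition 2.1: (1) every true signal outranks every null
    coordinate in magnitude, and (2) signs are recovered on the support S."""
    S = np.asarray(S)
    mask = np.zeros(len(beta_hat), dtype=bool)
    mask[S] = True
    # (1): smallest estimated signal beats the largest estimated null coordinate
    ordering_ok = np.min(np.abs(beta_hat[mask])) > np.max(np.abs(beta_hat[~mask]))
    # (2): signs recovered on the support
    signs_ok = np.all(np.sign(beta_hat[S]) == np.sign(beta[S]))
    return bool(ordering_ok and signs_ok)
```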
The focus of this article is to study the theoretical properties of a special class of screeners that take
the linear form
    β̂ = AY
for some p × n ancillary matrix A. Examples include sure independence screening (SIS), where A =
X^T/n, and high-dimensional ordinary least-square projection (HOLP), where A = X^T(XX^T)^{-1}.
We choose to study the class of linear estimators because linear screening is computationally efficient and theoretically tractable. We note that the usual ordinary least-squares estimator is also a
special case of linear estimators although it is not well defined for p > n.
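As a rough illustration (a sketch of the two examples above, not the authors' code), both screeners reduce to a few lines of linear algebra; note that HOLP only requires solving an n × n system, in line with the O(n²p) complexity mentioned in the introduction:

```python
import numpy as np

def screen_sis(X, Y):
    """SIS: beta_hat = X^T Y / n (marginal correlations for standardized X)."""
    return X.T @ Y / X.shape[0]

def screen_holp(X, Y):
    """HOLP: beta_hat = X^T (X X^T)^{-1} Y; well defined when p > n."""
    return X.T @ np.linalg.solve(X @ X.T, Y)

def submodel(beta_hat, d):
    """M_d: indices of the d largest coordinates of |beta_hat|."""
    return np.argsort(-np.abs(beta_hat))[:d]
```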
3 Deterministic guarantees
In this section, we derive the necessary and sufficient condition that guarantees β̂ = AY to be strong
screening consistent. The design matrix X and the error ε are treated as fixed in this section and we
will investigate random designs later. We consider the set of sparse coefficient vectors defined by
    B(s, τ) = { β ∈ R^p : |supp(β)| ≤ s,  max_{i∈supp(β)} |β_i| / min_{i∈supp(β)} |β_i| ≤ τ }.
The set B(s, τ) contains vectors having at most s non-zero coordinates with the ratio of the largest
and smallest coordinates bounded by τ. Before proceeding to the main result of this section, we
introduce some terminology that helps to establish the theory.
Definition 3.1. (Restricted diagonally dominant matrix) A p × p symmetric matrix Σ is restricted
diagonally dominant with sparsity s if for any I ⊂ Q with |I| ≤ s − 1 and any i ∈ Q \ I,
    Σ_ii > C₀ · max( ∑_{j∈I} |Σ_ij + Σ_kj| , ∑_{j∈I} |Σ_ij − Σ_kj| ) + |Σ_ik|,  ∀k ≠ i, k ∈ Q \ I,
where C₀ ≥ 1 is a constant.
Notice this definition implies that for i ∈ Q \ I,
    Σ_ii ≥ C₀ ( ∑_{j∈I} |Σ_ij + Σ_kj| + ∑_{j∈I} |Σ_ij − Σ_kj| ) / 2 ≥ C₀ ∑_{j∈I} |Σ_ij|,        (3)
which is related to the usual diagonally dominant matrix. The restricted diagonally dominant matrix provides a necessary and sufficient condition for any linear estimator β̂ = AY to be strong
screening consistent. More precisely, we have the following result.
Theorem 1. For the noiseless case where ε = 0, a linear estimator β̂ = AY is strong screening
consistent for every β ∈ B(s, τ), if and only if the screening matrix Σ = AX is restricted diagonally
dominant with sparsity s and C₀ ≥ τ.
Proof. Assume Σ is restricted diagonally dominant with sparsity s and C₀ ≥ τ. Recall β̂ = Σβ.
Suppose S is the index set of non-zero predictors. For any i ∈ S, k ∉ S, if we let I = S \ {i}, then
we have
    |β̂_i| = |β_i| · |Σ_ii + ∑_{j∈I} (β_j/β_i) Σ_ij|
          = |β_i| · |Σ_ii + ∑_{j∈I} (β_j/β_i)(Σ_ij + Σ_kj) + Σ_ki − ∑_{j∈I} (β_j/β_i) Σ_kj − Σ_ki|
          > −|β_i| ( ∑_{j∈I} (β_j/β_i) Σ_kj + Σ_ki ) = −sign(β_i) ( ∑_{j∈I} β_j Σ_kj + β_i Σ_ki ) = −sign(β_i) · β̂_k,
and
    |β̂_i| = |β_i| · |Σ_ii + ∑_{j∈I} (β_j/β_i)(Σ_ij − Σ_kj) − Σ_ki + ∑_{j∈I} (β_j/β_i) Σ_kj + Σ_ki|
          > |β_i| ( ∑_{j∈I} (β_j/β_i) Σ_kj + Σ_ki ) = sign(β_i) · β̂_k.
Therefore, whatever value sign(β_i) takes, it always holds that |β̂_i| > |β̂_k|, and thus
min_{i∈S} |β̂_i| > max_{k∉S} |β̂_k|.
To prove the sign consistency for the non-zero coefficients, we notice that for i ∈ S,
    β̂_i β_i = Σ_ii β_i² + ∑_{j∈I} Σ_ij β_j β_i = β_i² ( Σ_ii + ∑_{j∈I} (β_j/β_i) Σ_ij ) > 0.
The proof of necessity is left to the supplementary materials.
The noiseless case is a good starting point to analyze β̂. Intuitively, in order to preserve the correct
order of the coefficients in β̂ = AXβ, one needs AX to be close to a diagonally dominant matrix,
so that β̂_i, i ∈ M_S, will take advantage of the large diagonal terms in AX to dominate β̂_i, i ∉ M_S,
which are just linear combinations of off-diagonal terms.
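For building intuition on toy problems, the RDD condition can be checked by brute force. The sketch below follows the display in Definition 3.1 as reconstructed above; it enumerates all index sets I with |I| ≤ s − 1 and is therefore only usable for very small p and s:

```python
import numpy as np
from itertools import combinations

def is_rdd(Sigma, s, C0):
    """Brute-force check of Definition 3.1 for a symmetric matrix Sigma."""
    p = Sigma.shape[0]
    for m in range(s):  # all index sets I with |I| <= s - 1
        for I in combinations(range(p), m):
            rest = [i for i in range(p) if i not in I]
            for i in rest:
                for k in rest:
                    if k == i:
                        continue
                    plus = sum(abs(Sigma[i, j] + Sigma[k, j]) for j in I)
                    minus = sum(abs(Sigma[i, j] - Sigma[k, j]) for j in I)
                    if Sigma[i, i] <= C0 * max(plus, minus) + abs(Sigma[i, k]):
                        return False
    return True
```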
When noise is considered, the condition in Theorem 1 needs to be changed slightly to accommodate
extra discrepancies. In addition, the smallest non-zero coefficient has to be lower bounded to ensure
a certain level of signal-to-noise ratio. Thus, we augment our previous definition of B(s, τ) to have
a signal strength control,
    B_ν(s, τ) = { β ∈ B(s, τ) : min_{i∈supp(β)} |β_i| ≥ ν }.
Then we can obtain the following modified theorem.
Theorem 2. With noise, the linear estimator β̂ = AY is strong screening consistent for every
β ∈ B_ν(s, τ) if Σ = AX − 2ν⁻¹‖Aε‖_∞ I_p is restricted diagonally dominant with sparsity s and
C₀ ≥ τ.
The proof of Theorem 2 is essentially the same as Theorem 1 and is thus left to the supplementary
materials. The condition in Theorem 2 can be further tailored to a necessary and sufficient version
with extra manipulation on the noise term. Nevertheless, this might not be useful in practice due to
the randomness in noise. In addition, the current version of Theorem 2 is already tight in the sense
that there exists some noise vector such that the condition in Theorem 2 is also necessary for strong
screening consistency.
Theorems 1 and 2 establish ground rules for verifying consistency of a given screener and provide
practical guidance for screening design. In Section 4, we consider some concrete examples of ancillary matrix A and prove that conditions in Theorems 1 and 2 are satisfied by the corresponding
screeners with large probability under random designs.
4 Relationship with other conditions
For some special cases such as sure independence screening (SIS), the restricted diagonally dominant
(RDD) condition is related to the strong irrepresentable condition (IC) proposed in [8]. Assume each
column of X is standardized to have mean zero. Letting C = X^T X/n and β be a given coefficient
vector, the IC is expressed as
    ‖C_{S^c,S} C_{S,S}^{-1} · sign(β_S)‖_∞ ≤ 1 − η        (4)
for some η > 0, where C_{A,B} represents the sub-matrix of C with row indices in A and column
indices in B. The authors enumerate several scenarios of C such that the IC is satisfied. We verify some
of these scenarios for the screening matrix Σ.
Corollary 1. If Σ_ii = 1, ∀i, and |Σ_ij| < c/(2s), ∀i ≠ j, for some 0 ≤ c < 1 as defined in Corollaries
1 and 2 in [8], then Σ is a restricted diagonally dominant matrix with sparsity s and C₀ ≤ 1/c.
If |Σ_ij| < r^{|i−j|}, ∀i, j, for some 0 < r < 1 as defined in Corollary 3 in [8], then Σ is a restricted
diagonally dominant matrix with sparsity s and C₀ ≤ (1 − r)²/(4r).
A more explicit but nontrivial relationship between IC and RDD is illustrated below when |S| = 2.
Theorem 3. Assume Σ_ii = 1, ∀i, and |Σ_ij| < r, ∀i ≠ j. If Σ is restricted diagonally dominant with
sparsity 2 and C₀ ≥ τ, then Σ satisfies
    ‖Σ_{S^c,S} Σ_{S,S}^{-1} · sign(β_S)‖_∞ ≤ τ^{-1}/(1 − r)
for all β ∈ B(2, τ). On the other hand, if Σ satisfies the IC for all β ∈ B(2, τ) for some η, then Σ is
a restricted diagonally dominant matrix with sparsity 2 and
    C₀ ≤ (1/(1 − η)) · (1 − r)/(1 + r).
Theorem 3 demonstrates a certain equivalence between IC and RDD. However, it does not mean
that RDD is also a strong requirement. Notice that IC is directly imposed on the covariance matrix
X^T X/n. This makes IC a strong assumption that is easily violated, for example, when the predictors
are highly correlated. In contrast to IC, RDD is imposed on the matrix AX, where there is flexibility in
choosing A. Only when A is chosen to be X^T/n is RDD as strong as IC, as shown in the next
theorem. For other choices of A, such as HOLP defined in the next section, the estimator satisfies RDD
even when the predictors are highly correlated. Therefore, RDD is considered a weak requirement.
For SIS, the screening matrix Σ = X^T X/n coincides with the covariance matrix, making RDD
and IC effectively equivalent. The following theorem formalizes this.
Theorem 4. Let A = X^T/n and standardize the columns of X to have sample variance one. Assume
X satisfies the sparse Riesz condition [16], i.e.,
    min_{π⊂Q, |π|≤s} λ_min(X_π^T X_π / n) ≥ γ,
for some γ > 0. Now if AX is restricted diagonally dominant with sparsity s + 1 and C₀ ≥ τ with
τ > √s/γ, then X satisfies the IC for any β ∈ B(s, τ).
In other words, under the condition τ > √s/γ, the strong screening consistency of SIS for B(s + 1, τ)
implies the model selection consistency of lasso for B(s, τ).
Theorem 4 illustrates the difficulty of SIS. The necessary condition that guarantees good screening
performance of SIS also guarantees the model selection consistency of lasso. However, such a
strong necessary condition does not mean that SIS should be avoided in practice given its substantial
advantages in terms of simplicity and computational efficiency. The strong screening consistency
defined in this article is stronger than conditions commonly used in justifying screening procedures
as in [11].
Another common assumption in the high dimensional literature is the restricted eigenvalue condition
(REC). Compared to REC, RDD is not necessarily stronger due to its flexibility in choosing the
ancillary matrix A. [17, 18] prove that the REC is satisfied when the design matrix is sub-Gaussian.
However, REC might not be guaranteed when the rows of X follow a heavy-tailed distribution. In
contrast, as the example shown in the next section and in [15] illustrates, by choosing A = X^T(XX^T)^{-1}, the
resulting estimator satisfies RDD even when the rows of X follow heavy-tailed distributions.
5 Screening under random designs
In this section, we consider linear screening under random designs when X and ε are Gaussian.
The theory developed in this section can be easily extended to a broader family of distributions, for
example, where ε follows a sub-Gaussian distribution [19] and X follows an elliptical distribution
[11, 15]. We focus on the Gaussian case for conciseness. Let ε ~ N(0, σ²) and X ~ N(0, Σ), and
let κ denote the condition number of Σ (the ratio of its largest and smallest eigenvalues).
We prove the screening consistency of SIS and HOLP by verifying the condition in Theorem 2.
Recall the ancillary matrices for SIS and HOLP are defined respectively as
    A_SIS = X^T/n,    A_HOLP = X^T(XX^T)^{-1}.
For simplicity, we assume Σ_ii = 1 for i = 1, 2, ..., p. To verify the RDD condition, it is essential
to quantify the magnitude of the entries of AX and Aε.
Lemma 1. Let Σ̂ = A_SIS X. Then for any t > 0 and i ≠ j ∈ Q, we have
    P( |Σ̂_ii − Σ_ii| ≥ t ) ≤ 2 exp( −min( t²n/(8e²K), tn/(2eK) ) ),
and
    P( |Σ̂_ij − Σ_ij| ≥ t ) ≤ 6 exp( −min( t²n/(72e²K), tn/(6eK) ) ),
where K = ‖χ²(1) − 1‖_{ψ₁} is a constant, χ²(1) is a chi-square random variable with one degree
of freedom and the norm ‖·‖_{ψ₁} is defined in [19].
Lemma 1 states that the screening matrix Σ̂ = A_SIS X for SIS will eventually converge to the covariance matrix Σ in ℓ_∞ when n tends to infinity and log p = o(n). Thus, the screening performance
of SIS strongly relies on the structure of Σ. In particular, the (asymptotically) necessary and sufficient condition for SIS being strong screening consistent is Σ satisfying the RDD condition. For
the noise term, we have the following lemma.
Lemma 2. Let ρ = A_SIS ε. For any t > 0 and i ∈ Q, we have
    P( |ρ_i| ≥ σt ) ≤ 6 exp( −min( t²n/(72e²K), tn/(6eK) ) ),
where K is defined the same as in Lemma 1.
The proof of Lemma 2 is essentially the same as the proof of off-diagonal terms in Lemma 1 and
is thus omitted. As indicated before, the necessary and sufficient condition for SIS to be strong
screening consistent is that Σ follows RDD. As RDD is usually hard to verify, we consider a stronger
sufficient condition inspired by Corollary 1.
Theorem 5. Let r = max_{i≠j} |Σ_ij|. If r < 1/(2τs), then for any δ > 0, if the sample size satisfies
    n > 144K ( (1 + 2τs + 2σ/ν) / (1 − 2τsr) )² log(3p/δ),        (5)
where K is defined in Lemma 1, then with probability at least 1 − δ, Σ̂ = A_SIS X −
2ν⁻¹‖A_SIS ε‖_∞ I_p is restricted diagonally dominant with sparsity s and C₀ ≥ τ. In other words,
SIS is screening consistent for any β ∈ B_ν(s, τ).
Proof. Taking a union bound over the results of Lemmas 1 and 2, we have for any t > 0 and p > 2,
    P( min_i Σ̂_ii ≤ 1 − t or max_{i≠j} |Σ̂_ij| ≥ r + t or ‖ρ‖_∞ ≥ σt )
        ≤ 7p² exp( −(n/K) min( t²/(72e²), t/(6e) ) ).
In other words, for any δ > 0, when n ≥ K log(7p²/δ), with probability at least 1 − δ we have
    min_i Σ̂_ii ≥ 1 − 6√(2e) √(K log(7p²/δ)/n),    max_{i≠j} |Σ̂_ij| ≤ r + 6√(2e) √(K log(7p²/δ)/n),
    max_i |ρ_i| ≤ 6√(2e) σ √(K log(7p²/δ)/n).
A sufficient condition for Σ̂ to be restricted diagonally dominant is that
    min_i Σ̂_ii > 2τs max_{i≠j} |Σ̂_ij| + 2ν⁻¹ max_i |ρ_i|.
Plugging in the values above, we need
    1 − 6√(2e) √(K log(7p²/δ)/n) > 2τs ( r + 6√(2e) √(K log(7p²/δ)/n) ) + 12√(2e) σν⁻¹ √(K log(7p²/δ)/n).
Solving the above inequality (notice that 7p²/δ < 9p²/δ² and τ ≥ 1) completes the proof.
The requirement that max_{i≠j} |Σ_ij| < 1/(2τs), or the necessary and sufficient condition that Σ is
RDD, strictly constrains the correlation structure of X, causing the difficulty for SIS to be strong
screening consistent. For HOLP we instead have the following result.
Lemma 3. Let Σ̂ = A_HOLP X. Assume p > c₀n for some c₀ > 1. Then for any C > 0 there exist
some 0 < c₁ < 1 < c₂ and c₃ > 0 such that for any t > 0 and any i ∈ Q, j ≠ i, we have
    P( |Σ̂_ii| < c₁κ⁻¹ n/p ) ≤ 2e^{−Cn},    P( |Σ̂_ii| > c₂κ n/p ) ≤ 2e^{−Cn},
and
    P( |Σ̂_ij| > c₄κ t √n/p ) ≤ 5e^{−Cn} + 2e^{−t²/2},
where c₄ = √(c₂(c₀ − c₁)) / (c₃(c₀ − 1)).
Proof. The proof of Lemma 3 relies heavily on previous results for the Stiefel manifold provided in
the supplementary materials. We only sketch the basic idea here and leave the complete proof to the
supplementary materials. Defining H = X^T(XX^T)^{-1/2}, we have Σ̂ = HH^T, and H follows
the matrix angular central Gaussian (MACG) distribution with covariance Σ. The diagonal terms of HH^T
can be bounded via the Johnson–Lindenstrauss lemma, using the fact that HH^T =
Σ^{1/2} U (U^T Σ U)^{-1} U^T Σ^{1/2}, where U is a p × n random projection matrix. For the off-diagonal
terms, we decompose the Stiefel manifold as H = (G(H₂)H₁, H₂), where H₁ is a (p − n + 1) × 1 vector,
H₂ is a p × (n − 1) matrix, and G(H₂) is chosen so that (G(H₂), H₂) ∈ O(p), and show that H₁
follows the angular central Gaussian (ACG) distribution with covariance G(H₂)^T Σ G(H₂) conditional
on H₂. It can be shown that e₂^T HH^T e₁ has the distribution of e₂^T G(H₂)H₁ given e₁^T H₂ = 0. Let t₁² = e₁^T HH^T e₁; then
e₁^T H₂ = 0 is equivalent to e₁^T G(H₂)H₁ = t₁, and we obtain the desired coupling distribution:
e₂^T HH^T e₁ is distributed as e₂^T G(H₂)H₁ given e₁^T G(H₂)H₁ = t₁. Using the normal representation of ACG(Σ), i.e.,
if x = (x₁, ..., x_p) ~ N(0, Σ) then x/‖x‖ ~ ACG(Σ), we can write G(H₂)H₁ in terms of
normal variables and then bound all terms using concentration inequalities.
Lemma 3 quantifies the entries of the screening matrix for HOLP. As illustrated in the lemma,
regardless of the covariance Σ, the diagonal terms of Σ̂ are always of order n/p and the off-diagonal
terms are of order √n/p. Thus, with n ≥ O(s²), Σ̂ is likely to satisfy the RDD condition with large
probability. For the noise vector we have the following result.
Lemma 4. Let ρ = A_HOLP ε. Assume p > c₀n for some c₀ > 1. Then for any C > 0 there exist the
same c₁, c₂, c₃ as in Lemma 3 such that for any t > 0 and i ∈ Q,
    P( |ρ_i| ≥ 2σ√(c₂) κ t √n / ((1 − c₀⁻¹) p) ) < 4e^{−Cn} + 2e^{−t²/2},
if n ≥ 8C/(c₀ − 1)².
The proof is almost identical to Lemma 2 and is provided in the supplementary materials. The
following theorem results after combining Lemmas 3 and 4.
Theorem 6. Assume p > c₀n for some c₀ > 1. For any δ > 0, if the sample size satisfies
    n > max( 2C′κ⁴(τs + σ/ν)² log(3p/δ) , 8C/(c₀ − 1)² ),        (6)
where C′ = max( 4c₄²/c₁² , 4c₂/(c₁²(1 − c₀⁻¹)²) ) and c₁, c₂, c₃, c₄, C are the same constants defined in Lemma 3,
then with probability at least 1 − δ, Σ̂ = A_HOLP X − 2ν⁻¹‖A_HOLP ε‖_∞ I_p is restricted diagonally
dominant with sparsity s and C₀ ≥ τ. This implies HOLP is screening consistent for any β ∈ B_ν(s, τ).
Proof. Notice that if
    min_i |Σ̂_ii| > 2sτ max_{i≠j} |Σ̂_ij| + 2ν⁻¹ ‖X^T(XX^T)^{-1} ε‖_∞,        (7)
then the proof is complete, because Σ̂ − 2ν⁻¹‖X^T(XX^T)^{-1}ε‖_∞ I_p is then a restricted diagonally
dominant matrix. Let t = √(Cn)/κ. The above display then requires
    c₁κ⁻¹ (n/p) − 2c₄√C κτs (√n/p) − 2σ√(c₂C) κ √n / ((1 − c₀⁻¹) ν p) > 0,
which implies that it suffices to have
    √n > C₁κ²τs + C₂κ²ν⁻¹σ,
where C₁ = 2c₄√C/c₁ and C₂ = 2√(c₂C)/(c₁(1 − c₀⁻¹)). Therefore, taking union bounds on all matrix
entries, we have
    P( (7) does not hold ) < (p + 5p²)e^{−Cn} + 2p²e^{−Cn/κ²} < (7 + 1/n) p² e^{−Cn/κ²},
where the second inequality is due to the fact that p > n and κ ≥ 1. Now for any δ > 0, (7) holds
with probability at least 1 − δ if
    n ≥ (κ²/C) ( log(7 + 1/n) + 2 log p − log δ ),
which is satisfied provided (noticing √8 < 3) n ≥ (2κ²/C) log(3p/δ). Now pushing δ to the limit
gives (6), the precise condition we need.
There are several interesting observations on conditions (5) and (6). First, (τs + σ/ν)² appears in
both expressions. We note that τs evaluates the sparsity and the diversity of the signal β, while ν/σ
is closely related to the signal-to-noise ratio. Furthermore, HOLP relaxes the correlation constraint
r < 1/(2τs), or the covariance constraint (Σ is RDD), to a condition-number constraint. Thus,
for any Σ, as long as the sample size is large enough, strong screening consistency is assured.
Finally, HOLP provides an example satisfying the RDD condition, in answer to the question raised
in Section 4.
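The contrast between the two theorems is visible in a small simulation. In the equi-correlated design below (illustrative parameters, our own example), Σ = ρ11^T + (1 − ρ)I with ρ = 0.5, τ = 3 and s = 5, so r < 1/(2τs) = 1/30 fails badly; moreover the population marginal covariances are (Σβ)_j = 7.0 for the first four signals, 5.0 for the fifth, and 5.5 for every null variable, so SIS misranks the fifth signal below all nulls no matter how large n is, while HOLP is not tied to Σβ:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, s = 100, 1000, 5
rho = 0.5  # equi-correlated Gaussian design: Sigma = rho*11^T + (1-rho)*I
common = rng.normal(size=(n, 1))
X = np.sqrt(rho) * common + np.sqrt(1 - rho) * rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:4] = 3.0   # four strong signals
beta[4] = -1.0   # one weak signal, invisible to marginal correlations here
Y = X @ beta + rng.normal(scale=0.5, size=n)

for name, bh in [("SIS ", X.T @ Y / n),
                 ("HOLP", X.T @ np.linalg.solve(X @ X.T, Y))]:
    separated = np.min(np.abs(bh[:s])) > np.max(np.abs(bh[s:]))
    print(name, "ranks all signals above all nulls:", separated)
```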
6 Concluding remarks
This article studies and establishes a necessary and sufficient condition in the form of restricted
diagonally dominant screening matrices for strong screening consistency of a linear screener. We
verify the condition for both SIS and HOLP under random designs. In addition, we show a
close relationship between RDD and the IC, highlighting the difficulty of using SIS in screening for
arbitrarily correlated predictors. For future work, it is of interest to see how linear screening can be
adapted to compressed sensing [20] and how techniques such as preconditioning [21] can improve
the performance of marginal screening and variable selection.
Acknowledgments. This research was partly supported by grant NIH R01-ES017436 from the National Institute of Environmental Health Sciences.
References
[1] David L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, 2006.
[2] Richard Baraniuk. Compressive sensing. IEEE Signal Processing Magazine, 24(4), 2007.
[3] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Statistical Methodology), 58(1):267–288, 1996.
[4] Jianqing Fan and Runze Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348–1360, 2001.
[5] Emmanuel Candes and Terence Tao. The Dantzig selector: statistical estimation when p is much larger than n. The Annals of Statistics, 35(6):2313–2351, 2007.
[6] Peter J. Bickel, Ya'acov Ritov, and Alexandre B. Tsybakov. Simultaneous analysis of lasso and Dantzig selector. The Annals of Statistics, 37(4):1705–1732, 2009.
[7] Cun-Hui Zhang. Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics, 38(2):894–942, 2010.
[8] Peng Zhao and Bin Yu. On model selection consistency of lasso. The Journal of Machine Learning Research, 7:2541–2563, 2006.
[9] Martin J. Wainwright. Sharp thresholds for high-dimensional and noisy recovery of sparsity using l1-constrained quadratic programming. IEEE Transactions on Information Theory, 2009.
[10] Jason D. Lee, Yuekai Sun, and Jonathan E. Taylor. On model selection consistency of M-estimators with geometrically decomposable penalties. Advances in Neural Information Processing Systems, 2013.
[11] Jianqing Fan and Jinchi Lv. Sure independence screening for ultrahigh dimensional feature space. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(5):849–911, 2008.
[12] Gaorong Li, Heng Peng, Jun Zhang, and Lixing Zhu. Robust rank correlation based screening. The Annals of Statistics, 40(3):1846–1877, 2012.
[13] Hansheng Wang. Forward regression for ultra-high dimensional variable screening. Journal of the American Statistical Association, 104(488):1512–1524, 2009.
[14] Haeran Cho and Piotr Fryzlewicz. High dimensional variable selection via tilting. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74(3):593–622, 2012.
[15] Xiangyu Wang and Chenlei Leng. High-dimensional ordinary least-squares projection for screening variables. https://stat.duke.edu/~xw56/holp-paper.pdf, 2015.
[16] Cun-Hui Zhang and Jian Huang. The sparsity and bias of the lasso selection in high-dimensional linear regression. The Annals of Statistics, 36(4):1567–1594, 2008.
[17] Garvesh Raskutti, Martin J. Wainwright, and Bin Yu. Restricted eigenvalue properties for correlated Gaussian designs. The Journal of Machine Learning Research, 11:2241–2259, 2010.
[18] Shuheng Zhou. Restricted eigenvalue conditions on subgaussian random matrices. arXiv preprint arXiv:0912.4045, 2009.
[19] Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010.
[20] Lingzhou Xue and Hui Zou. Sure independence screening and compressed random sensing. Biometrika, 98(2):371–380, 2011.
[21] Jinzhu Jia and Karl Rohe. Preconditioning to comply with the irrepresentable condition. arXiv preprint arXiv:1208.5584, 2012.
5,451 | 5,934 | Finite-Time Analysis of Projected Langevin Monte Carlo
Ronen Eldan
Weizmann Institute
roneneldan@gmail.com
Sébastien Bubeck
Microsoft Research
sebubeck@microsoft.com
Joseph Lehec
Université Paris-Dauphine
lehec@ceremade.dauphine.fr
Abstract
We analyze the projected Langevin Monte Carlo (LMC) algorithm, a close cousin
of projected Stochastic Gradient Descent (SGD). We show that LMC allows to
sample in polynomial time from a posterior distribution restricted to a convex
body and with concave log-likelihood. This gives the first Markov chain to sample
from a log-concave distribution with a first-order oracle, as the existing chains
with provable guarantees (lattice walk, ball walk and hit-and-run) require a zeroth-order oracle. Our proof uses elementary concepts from stochastic calculus which
could be useful more generally to understand SGD and its variants.
1 Introduction
A fundamental primitive in Bayesian learning is the ability to sample from the posterior distribution.
Similarly to the situation in optimization, convexity is a key property to obtain algorithms with
provable guarantees for this task. Indeed several Markov Chain Monte Carlo methods have been
analyzed for the case where the posterior distribution is supported on a convex set, and the negative
log-likelihood is convex. This is usually referred to as the problem of sampling from a log-concave
distribution. In this paper we propose and analyze a new Markov chain for this problem which
could have several advantages over existing chains for machine learning applications. We describe
formally our contribution in Section 1.1. Then in Section 1.2 we explain how this contribution relates
to various line of work in different fields such as theoretical computer science, statistics, stochastic
approximation, and machine learning.
1.1 Main result
Let K ⊂ R^n be a convex set such that 0 ∈ K, K contains a Euclidean ball of radius r > 0,
and K is contained in a Euclidean ball of radius R. Denote by P_K the Euclidean projection on K (i.e.,
P_K(x) = argmin_{y∈K} |x − y|, where |·| denotes the Euclidean norm in R^n), and by ‖·‖_K the gauge
of K defined by
    ‖x‖_K = inf{t ≥ 0 : x ∈ tK},  x ∈ R^n.
Let f : K → R be an L-Lipschitz and β-smooth convex function, that is, f is differentiable and
satisfies ∀x, y ∈ K, |∇f(x) − ∇f(y)| ≤ β|x − y| and |∇f(x)| ≤ L. We are interested in the problem of sampling from the probability measure μ on R^n whose density with respect to the Lebesgue
measure is given by
    dμ/dx = (1/Z) exp(−f(x)) 1{x ∈ K},  where  Z = ∫_{y∈K} exp(−f(y)) dy.
We denote m = E_μ|X| and M = E[‖θ‖_K], where θ is uniform on the sphere S^{n−1} = {x ∈ R^n : |x| = 1}.
In this paper we study the following Markov chain, which depends on a parameter η > 0, and where
ξ₁, ξ₂, ... is an i.i.d. sequence of standard Gaussian random variables in R^n, and X̄₀ = 0,
    X̄_{k+1} = P_K( X̄_k − (η/2) ∇f(X̄_k) + √η ξ_k ).        (1)
We call the chain (1) projected Langevin Monte Carlo (LMC).
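For concreteness, here is a minimal Python sketch of the chain (1) (our own illustration; the projection P_K and the step-size η are the only problem-specific ingredients), instantiated for uniform sampling on the Euclidean unit ball, where ∇f = 0:

```python
import numpy as np

def projected_lmc(grad_f, proj_K, eta, N, n, rng):
    """Iterate X_{k+1} = P_K( X_k - (eta/2) * grad_f(X_k) + sqrt(eta) * xi_k )."""
    X = np.zeros(n)  # the chain starts at 0, as in the text
    for _ in range(N):
        xi = rng.standard_normal(n)
        X = proj_K(X - 0.5 * eta * grad_f(X) + np.sqrt(eta) * xi)
    return X

# Example: uniform sampling (f constant, grad f = 0) on the unit Euclidean ball.
proj_ball = lambda x: x if np.linalg.norm(x) <= 1.0 else x / np.linalg.norm(x)
rng = np.random.default_rng(0)
sample = projected_lmc(lambda x: np.zeros_like(x), proj_ball, 1e-3, 5000, 10, rng)
```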
Recall that the total variation distance between two measures μ, ν is defined as TV(μ, ν) =
sup_A |μ(A) − ν(A)|, where the supremum is over all measurable sets A. With a slight abuse of
notation we sometimes write TV(X, ν) where X is a random variable distributed according to
μ. The notation v_n = Õ(u_n) (respectively Ω̃) means that there exists c ∈ R, C > 0 such that
v_n ≤ C u_n log^c(u_n) (respectively ≥).
Our main result shows that for an appropriately chosen step-size and number of iterations, one has
convergence in total variation distance of the iterates (X̄_k) to the target distribution μ.
Theorem 1. Let ε > 0. One has TV(X̄_N, μ) ≤ ε provided that η = m²/N and
    N = Ω̃( (n + RL)² (M + L/r)² n m² · max( (n⁶/ε⁸)(1/r + RL)⁶ , (2²²/ε¹⁶) β m (L + R)² ) ).
Note that by viewing β, L, r as numerical constants, using M ≤ 1/r, and assuming R ≤ √n and
m ≤ n^{3/4}, the bound reads
    N = Ω̃( n⁹ m⁶ / ε²² ).
Observe also that if f is constant, that is, μ is the uniform measure on K, then L = 0, m ≤ √n, and
one can show that M = Õ(1/√n), which yields the bound
    N = Ω̃( n¹¹ / ε² ).
1.2 Context and related works
There is a long line of works in theoretical computer science proving results similar to Theorem 1,
starting with the breakthrough result of Dyer et al. [1991], who showed that the lattice walk mixes in
Õ(n²³) steps. The current record for the mixing time is obtained by Lovász and Vempala [2007],
who show a bound of Õ(n⁴) for the hit-and-run walk. These chains (as well as other popular chains
such as the ball walk or the Dikin walk, see e.g. Kannan and Narayanan [2012] and references
therein) all require a zeroth-order oracle for the potential f, that is, given x one can calculate the
value f(x). On the other hand our proposed chain (1) works with a first-order oracle, that is, given
x one can calculate the value of ∇f(x). The difference between zeroth-order and first-order oracles has been extensively studied in the optimization literature (e.g., Nemirovski and Yudin
[1983]), but it has been largely ignored in the literature on polynomial-time sampling algorithms.
We also note that hit-and-run and LMC are the only chains which are rapidly mixing from any
starting point (see Lovász and Vempala [2006]), though they have this property for seemingly very
different reasons. When initialized in a corner of the convex body, hit-and-run might take a long
time to take a step, but once it moves it escapes very far (while a chain such as the ball walk would
only do a small step). On the other hand LMC keeps moving at every step, even when initialized in
a corner, thanks for the projection part of (1).
Our main motivation to study the chain (1) stems from its connection with the ubiquitous
stochastic gradient descent (SGD) algorithm. In general this algorithm takes the form x_{k+1} =
P_K(x_k − η∇f(x_k) + ε_k), where ε₁, ε₂, ... is a centered i.i.d. sequence. Standard results in approximation theory, such as Robbins and Monro [1951], show that if the variance of the noise Var(ε₁) is
of smaller order than the step-size η then the iterates (x_k) converge to the minimum of f on K (for a
step-size decreasing sufficiently fast as a function of the number of iterations). For the specific noise
sequence that we study in (1), the variance is exactly equal to the step-size, which is why the chain
deviates from its standard and well-understood behavior. We also note that other regimes where
SGD does not converge to the minimum of f have been studied in the optimization literature, such
as the constant step-size case investigated in Pflug [1986], Bach and Moulines [2013].
The chain (1) is also closely related to a line of works in Bayesian statistics on Langevin Monte
Carlo algorithms, starting essentially with Tweedie and Roberts [1996]. The focus there is on the
unconstrained case, that is K = Rn . In this simpler situation, a variant of Theorem 1 was proven in
the recent paper Dalalyan [2014]. The latter result is the starting point of our work. A straightforward way to extend the analysis of Dalalyan to the constrained case is to run the unconstrained chain
with an additional potential that diverges quickly as the distance from x to K increases. However
it seems much more natural to study directly the chain (1). Unfortunately the techniques used in
Dalalyan [2014] cannot deal with the singularities in the diffusion process which are introduced by
the projection. As we explain in Section 1.3 our main contribution is to develop the appropriate
machinery to study (1).
In the machine learning literature it was recently observed that Langevin Monte Carlo algorithms
are particularly well-suited for large-scale applications because of the close connection to SGD. For
instance Welling and Teh [2011] suggest to use mini-batch to compute approximate gradients instead
of exact gradients in (1), and they call the resulting algorithm SGLD (Stochastic Gradient Langevin
Dynamics). It is conceivable that the techniques developed in this paper could be used to analyze
SGLD and its refinements introduced in Ahn et al. [2012]. We leave this as an open problem for
future work. Another interesting direction for future work is to improve the polynomial dependency
on the dimension and the inverse accuracy in Theorem 1 (our main goal here was to provide the
simplest polynomial-time analysis).
1.3 Contribution and paper organization
As we pointed out above, Dalalyan [2014] proves the equivalent of Theorem 1 in the unconstrained
case. His elegant approach is based on viewing LMC as a discretization of the diffusion process
dX_t = dW_t − ½∇f(X_t)dt, where (W_t) is a Brownian motion. The analysis then proceeds in two
steps, by deriving first the mixing time of the diffusion process, and then showing that the discretized
process is "close" to its continuous version. In Dalalyan [2014] the first step is particularly transparent, as he assumes α-strong convexity for the potential f, which in turn directly gives a mixing
time of order 1/α. The second step is also simple once one realizes that LMC (without projection)
can be viewed as the diffusion process dX̄_t = dW_t − ½∇f(X̄_{η⌊t/η⌋})dt. Using Pinsker's inequality and
Girsanov's formula it is then a short calculation to show that the total variation distance between X̄_t
and X_t is small.
The constrained case presents several challenges, arising from the reflection of the diffusion process on the boundary of K, and from the lack of curvature in the potential (indeed the constant
potential case is particularly important for us as it corresponds to μ being the uniform distribution on K). Rather than a simple Brownian motion with drift, LMC with projection can be
viewed as the discretization of reflected Brownian motion with drift, which is a process of the form
dX_t = dW_t − ½∇f(X_t)dt − ν_t L(dt), where X_t ∈ K for all t ≥ 0, L is a measure supported on
{t ≥ 0 : X_t ∈ ∂K}, and ν_t is an outer normal unit vector of K at X_t. The term ν_t L(dt) is
referred to as the Tanaka drift. Following Dalalyan [2014] the analysis is again decomposed in two
steps. We study the mixing time of the continuous process via a simple coupling argument, which
crucially uses the convexity of K and of the potential f. The main difficulty is in showing that the
discretized process (X̄_t) is close to the continuous version (X_t), as the Tanaka drift prevents us
from a straightforward application of Girsanov's formula. Our approach around this issue is to first
use a geometric argument to prove that the two processes are close in Wasserstein distance, and then
to show that in fact for a reflected Brownian motion with drift one can deduce a total variation bound
from a Wasserstein bound.
In this extended abstract we focus on the special case where f is a constant function, that is, μ is
uniform on the convex body K. The generalization to an arbitrary smooth potential can be found
in the supplementary material. The rest of the paper is organized as follows. Section 2 contains
the main technical arguments. We first remind the reader of Tanaka's construction (Tanaka [1979])
of reflected Brownian motion in Section 2.1. We present our geometric argument to bound the
Wasserstein distance between (X_t) and (X̄_t) in Section 2.2, and we use our coupling argument to
bound the mixing time of (X_t) in Section 2.3. The derivation of a total variation bound from the
Wasserstein bound is discussed in Section 2.4. Finally we conclude the paper in Section 3 with some
preliminary experimental comparison between LMC and hit-and-run.
2 The constant potential case
In this section we derive the main arguments to prove Theorem 1 when f is a constant function, that
is, ∇f = 0. For a point x ∈ ∂K we say that ν is an outer unit normal vector at x if |ν| = 1 and
    ⟨x − x′, ν⟩ ≥ 0,  ∀x′ ∈ K.
For x ∉ ∂K we say that 0 is an outer unit normal at x. We define the support function h_K of K by
    h_K(y) = sup{ ⟨x, y⟩ : x ∈ K },  y ∈ R^n.
Note that h_K is also the gauge function of the polar body of K.
2.1 The Skorokhod problem
Let T ∈ R₊ ∪ {+∞} and w : [0, T) → R^n be a piecewise continuous path with w(0) ∈ K.
We say that x : [0, T) → R^n and φ : [0, T) → R^n solve the Skorokhod problem for w if one has
    x(t) ∈ K, ∀t ∈ [0, T),
    x(t) = w(t) + φ(t), ∀t ∈ [0, T),
and furthermore φ is of the form
    φ(t) = −∫₀ᵗ ν_s L(ds),  ∀t ∈ [0, T),
where ν_s is an outer unit normal at x(s), and L is a measure on [0, T] supported on the set
{t ∈ [0, T) : x(t) ∈ ∂K}.
The path x is called the reflection of w at the boundary of K, and the measure L is called the local
time of x at the boundary of K. Skorokhod showed the existence of such a pair (x, φ) in dimension
1 in Skorokhod [1961], and Tanaka extended this result to convex sets in higher dimensions in
Tanaka [1979]. Furthermore Tanaka also showed that the solution is unique, and if w is continuous
then so are x and φ. In particular the reflected Brownian motion in K, denoted (X_t), is defined as
the reflection of the standard Brownian motion (W_t) at the boundary of K (existence follows by
continuity of W_t). Observe that by Itô's formula, for any smooth function g on R^n,
    g(X_t) − g(X_0) = ∫₀ᵗ ⟨∇g(X_s), dW_s⟩ + ½ ∫₀ᵗ Δg(X_s) ds − ∫₀ᵗ ⟨∇g(X_s), ν_s⟩ L(ds).        (2)
To get a sense of what a solution typically looks like, let us work out the case where w is piecewise
constant (this will also be useful to realize that LMC can be viewed as the solution to a Skorokhod
problem). For a sequence g₁, ..., g_N ∈ R^n, and for η > 0, we consider the path
    w(t) = ∑_{k=1}^N g_k 1{t ≥ kη},  t ∈ [0, (N + 1)η).
Define (x_k)_{k=0,...,N} inductively by x₀ = 0 and
    x_{k+1} = P_K(x_k + g_k).
It is easy to verify that the solution to the Skorokhod problem for w is given by x(t) = x_{⌊t/η⌋} and
φ(t) = −∫₀ᵗ ν_s L(ds), where the measure L is defined by (denoting δ_s for a Dirac at s)
    L = ∑_{k=1}^N |x_k + g_k − P_K(x_k + g_k)| δ_{kη},
and for s = kη,
    ν_s = (x_k + g_k − P_K(x_k + g_k)) / |x_k + g_k − P_K(x_k + g_k)|.
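The piecewise-constant case is also easy to simulate. The sketch below (a hypothetical helper, not from the paper) computes the reflected path together with the local-time masses and normal directions appearing in the display above:

```python
import numpy as np

def reflect_piecewise_constant(gs, proj_K):
    """Reflect w(t) = sum_k g_k 1{t >= k*eta} at the boundary of K.
    Each jump contributes a Dirac mass |x_k + g_k - P_K(x_k + g_k)| to the
    local time L, in the direction of the outer unit normal nu."""
    x = np.zeros_like(gs[0])
    path, masses, normals = [x], [], []
    for g in gs:
        y = x + g
        x = proj_K(y)
        gap = y - x                      # zero unless the step left K
        m = np.linalg.norm(gap)
        masses.append(m)
        normals.append(gap / m if m > 0 else np.zeros_like(gap))
        path.append(x)
    return path, masses, normals
```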
2.2 Discretization of reflected Brownian motion
Given the discussion above, it is clear that when f is a constant function, the chain (1) can be viewed
as the reflection (X̄_t) of a discretized Brownian motion W̄_t := W_{η⌊t/η⌋} at the boundary of K (more
precisely the value of X̄_{kη} coincides with the value of X̄_k as defined by (1)). It is rather clear that
the discretized Brownian motion (W̄_t) is "close" to the path (W_t), and we would like to carry this
over to the reflected paths (X̄_t) and (X_t). The following lemma, extracted from Tanaka [1979], allows us to
do exactly that.
Lemma 1. Let w and w̄ be piecewise continuous paths and assume that (x, φ) and (x̄, φ̄) solve the
Skorokhod problems for w and w̄, respectively. Then for all times t we have
    |x(t) − x̄(t)|² ≤ |w(t) − w̄(t)|² + 2 ∫₀ᵗ ⟨w(t) − w̄(t) − w(s) + w̄(s), φ(ds) − φ̄(ds)⟩.
Applying the above lemma to the processes (W_t) and (W̄_t) at time T = Nη yields (note that
W_T = W̄_T)
    |X_T − X̄_T|² ≤ −2 ∫₀ᵀ ⟨W_t − W̄_t, ν_t⟩ L(dt) + 2 ∫₀ᵀ ⟨W_t − W̄_t, ν̄_t⟩ L̄(dt).
We claim that the second integral is equal to 0. Indeed, since the discretized process is constant on
the intervals [kη, (k + 1)η), the local time L̄ is a positive combination of Dirac point masses at
η, 2η, ..., Nη. On the other hand W_{kη} = W̄_{kη} for all integers k, hence the claim. Therefore
    |X_T − X̄_T|² ≤ −2 ∫₀ᵀ ⟨W_t − W̄_t, ν_t⟩ L(dt).
Using the inequality ⟨x, y⟩ ≤ ‖x‖_K h_K(y) we get
    |X_T − X̄_T|² ≤ 2 sup_{[0,T]} ‖W_t − W̄_t‖_K ∫₀ᵀ h_K(ν_t) L(dt).
Taking the square root, expectation, and using Cauchy–Schwarz we get
    E|X_T − X̄_T| ≤ ( 2 E[ sup_{[0,T]} ‖W_t − W̄_t‖_K ] · E[ ∫₀ᵀ h_K(ν_t) L(dt) ] )^{1/2}.        (3)
The next two lemmas deal with each term on the right-hand side of the above inequality, and they will
show that there exists a universal constant C such that
    E|X_T − X̄_T| ≤ C (η log(T/η))^{1/4} n^{3/4} T^{1/2} M^{1/2}.        (4)
We discuss why the above bound implies a total variation bound in Section 2.4.
Lemma 2. We have, for all t > 0,
    E ∫₀ᵗ h_K(ν_s) L(ds) ≤ nt/2.
Proof. By Itô's formula,
    d|X_t|² = 2⟨X_t, dW_t⟩ + n dt − 2⟨X_t, ν_t⟩ L(dt).
Now observe that by definition of the reflection, if t is in the support of L then
    ⟨X_t, ν_t⟩ ≥ ⟨x, ν_t⟩,  ∀x ∈ K.
In other words ⟨X_t, ν_t⟩ ≥ h_K(ν_t). Therefore
    2 ∫₀ᵗ h_K(ν_s) L(ds) ≤ 2 ∫₀ᵗ ⟨X_s, dW_s⟩ + nt + |X_0|² − |X_t|².
The first term of the right-hand side is a martingale, so using that X_0 = 0 and taking expectation
we get the result.
Lemma 3. There exists a universal constant C such that
    E sup_{[0,T]} ‖W_t − W̄_t‖_K ≤ C M √(nη log(T/η)).
Proof. Note that
    E sup_{[0,T]} ‖W_t − W̄_t‖_K = E max_{0≤i≤N−1} Y_i,  where  Y_i = sup_{t∈[iη,(i+1)η)} ‖W_t − W_{iη}‖_K.
Observe that the variables (Y_i) are identically distributed. Let p ≥ 1 and write
    E max_{i≤N−1} Y_i ≤ E ( ∑_{i=0}^{N−1} |Y_i|^p )^{1/p} ≤ N^{1/p} ‖Y_0‖_p.
We claim that
    ‖Y_0‖_p ≤ C √(p n η) M        (5)
for some constant C, and for all p ≥ 2. Taking this for granted and choosing p = log(N) in the
previous inequality yields the result (recall that N = T/η). So it is enough to prove (5). Observe
that since (W_t) is a martingale, the process M_t = ‖W_t‖_K is a sub-martingale. By Doob's maximal
inequality
    ‖Y_0‖_p = ‖ sup_{[0,η]} M_t ‖_p ≤ 2 ‖M_η‖_p,
for every p ≥ 2. Letting γ_n be the standard Gaussian measure on R^n and using Khintchine's inequality we get
    ‖M_η‖_p = √η ( ∫_{R^n} ‖x‖_K^p γ_n(dx) )^{1/p} ≤ C √(pη) ∫_{R^n} ‖x‖_K γ_n(dx).
Lastly, integrating in polar coordinates, it is easily seen that
    ∫_{R^n} ‖x‖_K γ_n(dx) ≤ C √n M.
2.3 A mixing time estimate for the reflected Brownian motion
Given a probability measure ρ supported on K, we let ρP_t be the law of X_t when X_0 has law ρ.
The following lemma is the key result to estimate the mixing time of the process (X_t).
Lemma 4. Let x, x′ ∈ K. Then
    TV(δ_x P_t, δ_{x′} P_t) ≤ |x − x′| / √(2πt).
The above result clearly implies that for a probability measure ρ on K,
    TV(δ₀ P_t, ρP_t) ≤ ( ∫_K |x| ρ(dx) ) / √(2πt).
Since μ (the uniform measure on K) is stationary for reflected Brownian motion, we obtain
    TV(δ₀ P_t, μ) ≤ m / √(2πt).        (6)
In other words, starting from 0, the mixing time of (X_t) is of order m². We now turn to the proof of
the above lemma.
Proof. The proof is based on a coupling argument. Let (W_t) be a Brownian motion starting from 0
and let (X_t) be a reflected Brownian motion starting from x:
    X_0 = x,    dX_t = dW_t − ν_t L(dt),
where (ν_t) and L satisfy the appropriate conditions. We construct a reflected Brownian motion (X′_t)
starting from x′ as follows. Let τ = inf{t ≥ 0 : X_t = X′_t}, and for t < τ let S_t be the orthogonal
reflection with respect to the hyperplane (X_t − X′_t)^⊥. Then up to time τ, the process (X′_t) is defined
by
    X′_0 = x′,    dX′_t = dW′_t − ν′_t L′(dt),    dW′_t = S_t(dW_t),
where L′ is a measure supported on {t ≤ τ : X′_t ∈ ∂K}, and ν′_t is an outer unit normal at X′_t for
all such t. After time τ we just set X′_t = X_t. Since S_t is an orthogonal map, (W′_t) is a Brownian
motion and thus (X′_t) is a reflected Brownian motion starting from x′. Therefore
    TV(δ_x P_t, δ_{x′} P_t) ≤ P(X_t ≠ X′_t) = P(τ > t).
Observe that on [0, τ),
    dW_t − dW′_t = (I − S_t)(dW_t) = 2⟨V_t, dW_t⟩ V_t,
where V_t = (X_t − X′_t)/|X_t − X′_t|. So
    d(X_t − X′_t) = 2⟨V_t, dW_t⟩ V_t − ν_t L(dt) + ν′_t L′(dt) = 2(dB_t) V_t − ν_t L(dt) + ν′_t L′(dt),
where
    B_t = ∫₀ᵗ ⟨V_s, dW_s⟩,  on [0, τ).
Observe that (B_t) is a one-dimensional Brownian motion. Itô's formula then gives
    dg(X_t − X′_t) = 2⟨∇g(X_t − X′_t), V_t⟩ dB_t − ⟨∇g(X_t − X′_t), ν_t⟩ L(dt)
                   + ⟨∇g(X_t − X′_t), ν′_t⟩ L′(dt) + 2∇²g(X_t − X′_t)(V_t, V_t) dt,
for every smooth function g on R^n. Now if g(x) = |x| then
    ∇g(X_t − X′_t) = V_t,
so ⟨∇g(X_t − X′_t), V_t⟩ = 1, ⟨∇g(X_t − X′_t), ν_t⟩ ≥ 0 on the support of L, and ⟨∇g(X_t − X′_t), ν′_t⟩ ≤ 0
on the support of L′. Moreover ∇²g(X_t − X′_t) = (1/|X_t − X′_t|) P_{(X_t − X′_t)^⊥}, where P_{x^⊥} denotes the
orthogonal projection on x^⊥. In particular ∇²g(X_t − X′_t)(V_t, V_t) = 0. We obtain |X_t − X′_t| ≤
|x − x′| + 2B_t on [0, τ). Therefore P(τ > t) ≤ P(τ′ > t), where τ′ is the first time the Brownian
motion (B_t) hits the value −|x − x′|/2. Now by the reflection principle,
    P(τ′ > t) = 2 P(0 ≤ 2B_t < |x − x′|) ≤ |x − x′| / √(2πt).
2.4 From Wasserstein distance to total variation
To conclude it remains to derive a total variation bound between XT and X T using (4). The details
of this step are deferred to the supplementary material where we consider the case of a general logconcave distribution. The intuition goes as follows: the processes (XT +s )s?0 and (X T +s )s?0 both
evolve according to a Brownian motion until the first time s that one process undergoes a reflection.
But if T is large enough and ? is small enough then one can easily get from (4) (and the fact that
the uniform measure does not put too much mass close to the boundary) that XT and X T are much
closer to each other than they are to the boundary of K. This implies that one can couple them (just
as in Section 2.3) so that they meet before one of them hits the boundary.
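To make the mirror coupling of Section 2.3 concrete, the following is a minimal numerical sketch (our own illustration, not part of the analysis): it runs two Euler-discretized reflected Brownian motions in the Euclidean ball, the second driven by the reflected increment S_t(dW), and checks that the fraction of uncoupled pairs at time t is dominated by the |x − x'|/√(2πt) bound of Lemma 4. The Euclidean projection stands in for the Skorokhod reflection over one small step, and the merge threshold is a discretization artifact.

import numpy as np

def project_ball(z, radius=1.0):
    # Euclidean projection onto the ball; over one small Euler step this
    # approximates the Skorokhod reflection at the boundary.
    nrm = np.linalg.norm(z)
    return z if nrm <= radius else z * (radius / nrm)

def coupling_time(x, xp, t_max=2.0, dt=1e-3, radius=1.0, rng=None):
    # Mirror coupling: the second chain receives S_t(dW), the reflection of
    # the Brownian increment across the hyperplane orthogonal to X_t - X'_t.
    rng = rng or np.random.default_rng()
    X, Xp, t = x.copy(), xp.copy(), 0.0
    while t < t_max:
        gap = X - Xp
        if np.linalg.norm(gap) < np.sqrt(dt):   # merged, up to discretization
            return t
        V = gap / np.linalg.norm(gap)
        dW = np.sqrt(dt) * rng.standard_normal(X.size)
        dWp = dW - 2.0 * np.dot(V, dW) * V      # S_t(dW)
        X, Xp = project_ball(X + dW, radius), project_ball(Xp + dWp, radius)
        t += dt
    return np.inf                               # not coupled by t_max

x, xp = np.zeros(5), np.zeros(5)
xp[0] = 0.5
taus = [coupling_time(x, xp) for _ in range(200)]
# P(tau > 1) should be dominated by |x - x'| / sqrt(2*pi*1), as in Lemma 4.
print(np.mean([tau > 1.0 for tau in taus]), 0.5 / np.sqrt(2 * np.pi))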
3  Experiments

Comparing different Markov Chain Monte Carlo algorithms is a challenging problem in and of itself. Here we choose the following simple comparison procedure, based on the volume algorithm developed in Cousins and Vempala [2014]. This algorithm, whose objective is to compute the volume of a given convex set K, proceeds in phases. In each phase ℓ it estimates the mean of a certain function under a multivariate Gaussian restricted to K with (unrestricted) covariance σ_ℓ² I_n. Cousins and Vempala provide a Matlab implementation of the entire algorithm, where in each phase the target mean is estimated by sampling from the truncated Gaussian using the hit-and-run (H&R) chain. We implemented the same procedure with LMC instead of H&R, and we choose the step-size η = 1/(βn²), where β is the smoothness parameter of the underlying log-concave distribution (in particular here β = 1/σ_ℓ²). The intuition for the choice of the step-size is as follows: the scaling in inverse smoothness comes from the optimization literature, while the scaling in inverse dimension squared comes from the analysis in the unconstrained case in Dalalyan [2014].
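As an illustration of the sampler being benchmarked, here is a minimal sketch of one possible projected LMC step for the truncated Gaussian used in phase ℓ, with the step-size rule η = 1/(βn²) described above. This is an assumption-laden toy version for the box K = [−1, 1]^n, where the projection is a coordinate-wise clip; it is not the Matlab implementation of Cousins and Vempala.

import numpy as np

def lmc_truncated_gaussian(n, sigma, n_steps=5000, rng=None):
    # Samples (approximately) from N(0, sigma^2 I_n) restricted to the box
    # K = [-1, 1]^n via projected Langevin Monte Carlo. The potential is
    # f(x) = |x|^2 / (2 sigma^2), so grad f(x) = x / sigma^2 and the
    # smoothness parameter is beta = 1 / sigma^2.
    rng = rng or np.random.default_rng(0)
    beta = 1.0 / sigma**2
    eta = 1.0 / (beta * n**2)          # step-size eta = 1 / (beta n^2)
    x = np.zeros(n)
    for _ in range(n_steps):
        grad = x / sigma**2
        x = x - eta * grad + np.sqrt(2.0 * eta) * rng.standard_normal(n)
        x = np.clip(x, -1.0, 1.0)      # projection onto the box K
    return x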
[Figure: two panels, "Estimated normalized volume" (left) and clock time in seconds (right), each plotted against k = 1, …, 10, with four curves: Box H&R, Box LMC, Box and Ball H&R, Box and Ball LMC.]
We ran the volume algorithm with both H&R and LMC on the following set of convex bodies: K = [−1, 1]^n (referred to as the "Box") and K = [−1, 1]^n ∩ √(2n) B_n (referred to as the "Box and Ball"), where n = 10·k, k = 1, …, 10. The computed volume (normalized by 2^n for the "Box" and by 0.2 · 2^n for the "Box and Ball") as well as the clock time (in seconds) to terminate are reported in the figure above. From these experiments it seems that LMC and H&R compute roughly similar values for the volume (with H&R being slightly more accurate), and LMC is almost always a bit faster. These results are encouraging, but much more extensive experiments are needed to decide if LMC is indeed a competitor to H&R in practice.
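For reference, a correspondingly minimal sketch of the competing hit-and-run step for the same truncated Gaussian over the box (again our own illustration, not the benchmarked Matlab implementation): pick a uniformly random direction, intersect the chord with K, and sample the one-dimensional restricted Gaussian along it by rejection.

import numpy as np

def hit_and_run_step(x, sigma, rng):
    # One hit-and-run step for N(0, sigma^2 I_n) restricted to [-1, 1]^n.
    d = rng.standard_normal(x.size)
    d /= np.linalg.norm(d)
    # The chord {x + t d : t in [lo, hi]} is the part of the line in the box.
    with np.errstate(divide="ignore"):
        b = np.stack([(-1.0 - x) / d, (1.0 - x) / d])
    lo, hi = b.min(axis=0).max(), b.max(axis=0).min()
    # Along the chord the target is a 1-d Gaussian centered at t* = -<x, d>
    # with variance sigma^2, truncated to [lo, hi]; sample it by rejection.
    t_star = -np.dot(x, d)
    while True:
        t = t_star + sigma * rng.standard_normal()
        if lo <= t <= hi:
            return x + t * d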
References

S. Ahn, A. Korattikara, and M. Welling. Bayesian posterior sampling via stochastic gradient Fisher scoring. In ICML 2012, 2012.
F. Bach and E. Moulines. Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n). In Advances in Neural Information Processing Systems 26 (NIPS), pages 773–781, 2013.
B. Cousins and S. Vempala. Bypassing KLS: Gaussian cooling and an O*(n³) volume algorithm. arXiv preprint arXiv:1409.6011, 2014.
A. Dalalyan. Theoretical guarantees for approximate sampling from smooth and log-concave densities. arXiv preprint arXiv:1412.7392, 2014.
M. Dyer, A. Frieze, and R. Kannan. A random polynomial-time algorithm for approximating the volume of convex bodies. Journal of the ACM (JACM), 38(1):1–17, 1991.
R. Kannan and H. Narayanan. Random walks on polytopes and an affine interior point method for linear programming. Mathematics of Operations Research, 37:1–20, 2012.
L. Lovász and S. Vempala. Hit-and-run from a corner. SIAM J. Comput., 35(4):985–1005, 2006.
L. Lovász and S. Vempala. The geometry of logconcave functions and sampling algorithms. Random Structures & Algorithms, 30(3):307–358, 2007.
A. Nemirovski and D. Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley Interscience, 1983.
G. Pflug. Stochastic minimization with constant step-size: asymptotic laws. SIAM J. Control and Optimization, 24(4):655–666, 1986.
H. Robbins and S. Monro. A stochastic approximation method. Annals of Mathematical Statistics, 22:400–407, 1951.
A. Skorokhod. Stochastic equations for diffusion processes in a bounded region. Theory of Probability & Its Applications, 6(3):264–274, 1961.
H. Tanaka. Stochastic differential equations with reflecting boundary condition in convex regions. Hiroshima Mathematical Journal, 9(1):163–177, 1979.
L. Tweedie and G. Roberts. Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli, 2(4):341–363, 1996.
M. Welling and Y. W. Teh. Bayesian learning via stochastic gradient Langevin dynamics. In ICML 2011, 2011.
| 5934 |@word version:2 polynomial:5 norm:1 seems:2 open:1 calculus:1 km:2 crucially:1 bn:1 covariance:1 sgd:5 kxkk:5 carry:1 contains:2 denoting:1 existing:2 current:1 com:2 discretization:3 dikin:1 nt:2 comparing:1 gmail:1 dx:6 realize:1 numerical:1 hvs:1 stationary:1 xk:13 short:1 record:1 iterates:2 simpler:1 mathematical:2 differential:1 prove:3 interscience:1 x0:20 lov:4 indeed:4 roughly:1 behavior:1 discretized:5 moulines:2 decreasing:1 decomposed:1 encouraging:1 provided:1 notation:2 moreover:1 underlying:1 mass:2 bounded:1 what:1 developed:2 guarantee:3 every:3 firstorder:1 concave:5 ti:1 exactly:2 universit:1 hit:9 control:1 unit:5 positive:1 before:1 understood:1 local:2 meet:1 path:6 abuse:1 might:1 zeroth:2 therein:1 studied:2 challenging:1 nemirovski:2 weizmann:1 unique:1 practice:1 procedure:2 universal:2 projection:6 word:2 integrating:1 suggest:1 dbt:2 get:6 cannot:1 close:7 interior:1 put:1 context:1 applying:1 measurable:1 equivalent:1 map:1 yt:2 primitive:1 dalalyan:8 starting:9 straightforward:2 convex:13 go:1 m2:1 deriving:1 his:1 proving:1 variation:8 coordinate:1 annals:1 target:2 construction:1 pt:8 exact:1 programming:1 us:2 particularly:3 cooling:1 observed:1 preprint:2 calculate:2 region:2 ran:1 intuition:2 convexity:3 complexity:1 inductively:1 pinsker:1 dynamic:2 efficiency:1 easily:2 various:1 derivation:1 hiroshima:1 fast:1 describe:1 monte:7 kp:6 choosing:1 whose:2 supplementary:2 solve:2 say:3 ability:1 statistic:3 g1:1 itself:1 seemingly:1 advantage:1 differentiable:1 sequence:4 propose:1 maximal:1 fr:1 korattikara:1 rapidly:1 mixing:9 dirac:2 convergence:3 diverges:1 leave:1 tk:1 coupling:3 develop:1 derive:2 strong:1 implemented:1 implies:3 come:2 direction:1 radius:2 closely:1 stochastic:12 centered:1 viewing:2 material:2 require:2 hx:4 transparent:1 generalization:1 preliminary:1 elementary:1 singularity:1 bypassing:1 sufficiently:1 around:1 normal:5 exp:2 sgld:2 claim:3 polar:2 hwt:3 realizes:1 schwarz:1 robbins:2 gauge:2 minimization:1 clearly:1 gaussian:5 always:1 rather:2 l0:6 focus:2 bernoulli:1 likelihood:2 ceremade:1 hk:9 sense:1 hvt:2 typically:1 bt:5 entire:1 doob:1 interested:1 issue:1 denoted:1 constrained:2 special:1 logc:1 field:1 once:2 equal:2 construct:1 sampling:7 look:1 icml:2 future:2 piecewise:3 escape:1 dg:1 frieze:1 kwt:6 phase:3 geometry:1 lebesgue:1 microsoft:2 n1:1 organization:1 deferred:1 analyzed:1 chain:20 accurate:1 integral:1 closer:1 pflug:2 tweedie:2 machinery:1 orthogonal:3 euclidean:4 walk:8 initialized:2 theoretical:3 instance:1 gn:1 lattice:2 uniform:6 too:1 reported:1 dependency:1 thanks:1 density:2 fundamental:1 st:4 siam:2 quickly:1 again:1 squared:1 nm:1 choose:2 corner:3 potential:8 wk:1 skorokhod:8 satisfy:1 depends:1 root:1 analyze:3 sup:7 monro:2 contribution:4 il:2 square:1 accuracy:1 variance:2 who:2 largely:1 yield:3 ronen:1 bayesian:4 carlo:7 explain:2 girsanov:2 definition:1 competitor:1 proof:6 couple:1 popular:1 recall:2 ubiquitous:1 organized:1 reflecting:1 higher:1 dt:19 reflected:11 though:1 box:12 strongly:1 furthermore:2 just:2 lastly:1 until:1 d:8 hand:5 clock:1 lack:1 continuity:1 undergoes:1 concept:1 verify:1 normalized:2 hence:1 read:1 deal:2 lmc:19 coincides:1 motion:20 reflection:8 recently:1 argminy:1 mt:2 rl:2 volume:8 extend:1 slight:1 he:1 discussed:1 smoothness:2 unconstrained:4 mathematics:1 similarly:1 pointed:1 moving:1 ahn:2 deduce:1 curvature:1 posterior:4 brownian:20 showed:3 recent:1 multivariate:1 inf:2 certain:1 inequality:5 vt:8 dauphine:2 yi:7 scoring:1 seen:1 minimum:2 
additional:1 wasserstein:5 unrestricted:1 converge:2 relates:1 mix:1 stem:1 smooth:6 faster:1 calculation:1 bach:2 sphere:1 long:2 variant:2 essentially:1 expectation:2 arxiv:4 iteration:2 sometimes:1 interval:1 appropriately:1 rest:1 asz:4 logconcave:2 elegant:1 call:2 integer:1 easy:1 identically:1 enough:3 cousin:4 t0:5 granted:1 matlab:1 ignored:1 useful:2 generally:1 clear:2 dws:3 extensively:1 narayanan:2 simplest:1 estimated:2 arising:1 write:2 discrete:1 ivt:2 key:2 diffusion:6 run:8 inverse:3 almost:1 reader:1 decide:1 vn:2 dy:1 scaling:2 bit:1 bound:12 oracle:6 precisely:1 n3:3 argument:7 vempala:7 px:1 tv:7 according:2 ball:11 combination:1 smaller:1 slightly:1 wi:1 joseph:1 cun:1 restricted:2 equation:3 remains:1 turn:2 discus:1 needed:1 letting:1 dyer:2 operation:1 observe:7 appropriate:2 dwt:10 batch:1 existence:2 denotes:2 assumes:1 prof:1 approximating:1 move:1 objective:1 rt:1 gradient:7 conceivable:1 distance:7 outer:5 cauchy:1 reason:1 provable:2 kannan:3 assuming:1 remind:1 kk:8 mini:1 unfortunately:1 robert:2 gk:8 negative:1 implementation:1 ebastien:1 teh:2 markov:5 finite:1 descent:2 truncated:1 langevin:8 situation:2 extended:2 rn:18 supa:1 lehec:2 arbitrary:1 drift:5 introduced:2 pair:1 paris:1 extensive:1 connection:2 polytopes:1 tanaka:9 nip:1 proceeds:1 usually:1 regime:1 challenge:1 max:3 natural:1 difficulty:1 improve:1 sn:1 deviate:1 literature:5 geometric:2 evolve:1 asymptotic:1 law:3 dxt:3 interesting:1 proven:1 var:1 affine:1 principle:1 eldan:1 supported:5 side:2 understand:1 institute:1 taking:3 distributed:2 boundary:9 dimension:4 yudin:2 hxs:1 refinement:1 projected:4 far:1 welling:3 approximate:2 supremum:1 keep:1 conclude:2 un:1 continuous:6 wt0:1 why:2 terminate:1 investigated:1 pk:8 main:8 motivation:1 noise:2 n2:1 body:6 referred:4 martingale:3 wiley:1 sub:1 exponential:1 comput:1 hw:1 theorem:6 formula:5 specific:1 xt:47 showing:2 x:3 exists:3 hxt:4 suited:1 jacm:1 bubeck:1 xt0:23 prevents:1 contained:1 corresponds:1 satisfies:1 extracted:1 kls:1 acm:1 goal:1 viewed:4 lipschitz:1 fisher:1 wt:8 hyperplane:1 lemma:9 total:8 called:2 experimental:1 formally:1 support:4 latter:1 |
5,452 | 5,935 | Beyond Sub-Gaussian Measurements:
High-Dimensional Structured Estimation with
Sub-Exponential Designs
Vidyashankar Sivakumar
Arindam Banerjee
Department of Computer Science & Engineering
University of Minnesota, Twin Cities
{sivakuma,banerjee}@cs.umn.edu
Pradeep Ravikumar
Department of Computer Science
University of Texas, Austin
pradeepr@cs.utexas.edu
Abstract
We consider the problem of high-dimensional structured estimation with norm-regularized estimators, such as Lasso, when the design matrix and noise are drawn from sub-exponential distributions. Existing results only consider sub-Gaussian designs and noise, and both the sample complexity and non-asymptotic estimation error have been shown to depend on the Gaussian width of suitable sets. In contrast, for the sub-exponential setting, we show that the sample complexity and the estimation error will depend on the exponential width of the corresponding sets, and the analysis holds for any norm. Further, using generic chaining, we show that the exponential width of any set will be at most √(log p) times the Gaussian width of the set, yielding Gaussian width based results even for the sub-exponential case. Further, for certain popular estimators, viz. Lasso and Group Lasso, using a VC-dimension based analysis, we show that the sample complexity will in fact be of the same order as for Gaussian designs. Our general analysis and results are the first in the sub-exponential setting, and are readily applicable to special sub-exponential families such as log-concave and extreme-value distributions.
1  Introduction

We consider the following problem of high-dimensional linear regression:
$y = X\theta^* + \omega, \qquad (1)$
where y ∈ R^n is the response vector, X ∈ R^{n×p} has independent isotropic sub-exponential random rows, ω ∈ R^n has i.i.d. sub-exponential entries, and the number of covariates p is much larger than the number of samples n. Given y, X, and assuming that θ* is "structured", usually characterized as having a small value according to some norm R(θ), the problem is to recover a θ̂ close to θ*. Considerable progress has been made over the past decade on high-dimensional structured estimation using suitable M-estimators or norm-regularized regression [16, 2] of the form:
$\hat\theta_{\lambda_n} = \mathop{\mathrm{argmin}}_{\theta \in \mathbb{R}^p}\ \frac{1}{2n}\|y - X\theta\|_2^2 + \lambda_n R(\theta), \qquad (2)$
where R(θ) is a suitable norm and λ_n > 0 is the regularization parameter. Early work focused on high-dimensional estimation of sparse vectors using the Lasso and related estimators, where R(θ) = ‖θ‖₁ [13, 22, 23]. The sample complexity of such estimators has been rigorously established based on the RIP (restricted isometry property) [4, 5] and the more general RE (restricted eigenvalue) conditions [3, 16, 2]. Several subsequent advances have considered structures beyond ℓ₁, using more general norms such as (overlapping) group sparse norms, the k-support norm, the nuclear norm, and so on [16, 8, 7]. In recent years, much of the literature has been unified, and non-asymptotic estimation error bound analysis techniques have been developed for regularized estimation with any norm [2].
In spite of such advances, most of the existing literature relies on the assumption that the entries of the design matrix X ∈ R^{n×p} are sub-Gaussian. In particular, recent unified treatments based on decomposable norms, atomic norms, or general norms all rely on concentration properties of sub-Gaussian distributions [16, 7, 2]. Certain estimators, such as the Dantzig selector and variants, consider a constrained problem rather than a regularized problem as in (2), but the analysis again relies on the entries of X being sub-Gaussian [6, 8]. For the setting of constrained estimation, building on prior work by [10], [20] outlines a possible strategy for such analysis which can work for any distribution, but works out details only for the sub-Gaussian case. In recent work, [9] considered sub-Gaussian design matrices but with heavy-tailed noise, and suggested modifying the estimator in (1) via a median-of-means type estimator based on multiple estimates of θ* from sub-samples.

In this paper, we establish results for the norm-regularized estimation problem as in (2) for any norm R(θ) under the assumption that the elements X_ij of the design matrix X ∈ R^{n×p} follow a sub-exponential distribution, whose tails are dominated by scaled versions of the (symmetric) exponential distribution, i.e., P(|X_ij| > t) ≤ c₁ exp(−t/c₂) for all t ≥ 0 and for suitable constants c₁, c₂ [12, 21]. To understand the motivation of our work, note that in most of machine learning and statistics, unlike in compressed sensing, the design matrix cannot be chosen but gets determined by the problem. In many application domains like finance, climate science, ecology, social network analysis, etc., variables with heavier tails than sub-Gaussians are frequently encountered. For example, in climate science, variables from extreme-value distributions are used to understand the relationship between extreme-value phenomena like heavy precipitation. While high-dimensional statistical techniques have been used in practice for such applications, currently lacking are theoretical guarantees on their performance. Note that the class of sub-exponential distributions has heavier tails than sub-Gaussians but has all moments. To the best of our knowledge, this is the first paper to analyze regularized high-dimensional estimation problems of the form (2) with sub-exponential design matrices and noise.
In our main result, we obtain bounds on the estimation error ‖Δ̂_n‖₂ = ‖θ̂_{λ_n} − θ*‖₂, where θ* is the optimal structured parameter. The sample complexity bounds are log p worse compared to the sub-Gaussian case. For example, for the ℓ₁ norm we obtain an n = O(s log² p) sample complexity bound instead of the O(s log p) bound for the sub-Gaussian case. The analysis depends on two key ingredients which have been discussed in previous work [16, 2]: 1. the satisfaction of the RE condition on a set A which is the error set associated with the norm, and 2. the design matrix-noise interaction, manifested in the form of lower bounds on the regularization parameter. Specifically, the RE condition depends on the properties of the design matrix. We outline two different approaches for obtaining the sample complexity needed to satisfy the RE condition: one based on the "exponential width" of A and another based on the VC-dimension of linear predictors drawn from A [10, 20, 11]. For two widely used cases, Lasso and group-lasso, we show that the VC-dimension based analysis leads to a sharp bound on the sample complexity, which is exactly the same order as that for sub-Gaussian design matrices! In particular, for Lasso with s-sparsity, O(s log p) samples are sufficient to satisfy the RE condition for sub-exponential designs. Further, we show that the bound on the regularization parameter depends on the "exponential width" w_e(Ω_R) of the unit norm ball Ω_R = {u ∈ R^p | R(u) ≤ 1}. Through a careful argument based on generic chaining [19], we show that for any set T ⊂ R^p, the exponential width satisfies w_e(T) ≤ c·w_g(T)·√(log p), where w_g(T) is the Gaussian width of the set T and c is an absolute constant. Recent advances on computing or bounding w_g(T) for various structured sets can then be used to bound w_e(T). Again, for the case of Lasso, w_e(Ω_R) ≤ c·log p.
The rest of the paper is organized as follows. In Section 2 we describe various aspects of the problem and highlight our contributions. In Section 3 we establish a key result on the relationship between Gaussian and exponential widths of sets, which will be used for our subsequent analysis. In Section 4 we establish results on the regularization parameter λ_n, the RE constant κ, and the non-asymptotic estimation error ‖Δ̂_n‖₂. We show some experimental results before concluding in Section 6.

2  Background and Preliminaries

In this section, we describe various aspects of the problem, introducing notations along the way, and highlight our contributions. Throughout the paper the values of constants may change from line to line.

2.1  Problem setup
We consider the problem defined in (2). The goal of this paper is to establish conditions for consistent estimation and derive bounds on ‖Δ̂_n‖₂ = ‖θ̂ − θ*‖₂.

Error set: Under the assumption λ_n ≥ β R*((1/n) Xᵀ(y − Xθ*)), β > 1, the error vector Δ̂_n = θ̂_{λ_n} − θ* lies in a cone A ∩ S^{p−1} [3, 16, 2].

Regularization parameter: For β > 1, λ_n ≥ β R*((1/n) Xᵀ(y − Xθ)), following the analysis in [16, 2].

Restricted Eigenvalue (RE) conditions: For consistent estimation, the design matrix X should satisfy the RE condition inf_{u∈A} (1/√n)‖Xu‖₂ ≥ κ on the error set A for some constant κ > 0 [3, 16, 2, 20, 18]. The RE sample complexity is the number of samples n required to satisfy the RE condition and has been shown to be related to the Gaussian width of the error set [7, 2, 20].

Deterministic recovery bounds: If X satisfies the RE condition on the error set A and λ_n satisfies the assumptions stated earlier, [2] show the error bound ‖Δ̂_n‖₂ ≤ c Ψ(A) λ_n/κ with high probability (w.h.p.), for some constant c, where Ψ(A) = sup_{u∈A} R(u)/‖u‖₂ is the norm compatibility constant.

ℓ₁ norm regularization: One example of R(θ) we will consider throughout the paper is ℓ₁ norm regularization. In particular, we will always consider ‖θ*‖₀ = s.

Group-sparse norms: Another popular example we consider is the group-sparse norm. Let G = {G₁, G₂, …, G_{N_G}} denote a collection of groups, which are blocks of any vector θ ∈ R^p. For any vector θ ∈ R^p, let θ^{G_i} denote the vector with coordinates θ_j^{G_i} = θ_j if j ∈ G_i, else θ_j^{G_i} = 0. Let m = max_{i∈{1,…,N_G}} |G_i| be the maximum size of any group. In the group-sparse setting, for any subset S_G ⊂ {1, 2, …, N_G} with cardinality |S_G| = s_G, we assume that the parameter vector θ* ∈ R^p satisfies θ*^{G_i} = 0 for all i ∉ S_G. Such a vector is called S_G-group sparse. We will focus on the case where R(θ) = Σ_{i=1}^{N_G} ‖θ^{G_i}‖₂.
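For concreteness, the two regularizers and the dual norms that appear in the λ_n condition above can be written down directly. The following minimal NumPy sketch is our own illustration, with an assumed fixed contiguous group partition; recall that the dual of ℓ₁ is ℓ_∞ and the dual of the group-sparse norm is the maximum group-wise ℓ₂ norm.

import numpy as np

# Assumed partition: contiguous groups of size m over R^p (illustrative only).
def groups(p, m):
    return [np.arange(i, i + m) for i in range(0, p, m)]

def l1_norm(theta):            # R(theta) = ||theta||_1
    return np.abs(theta).sum()

def l1_dual(v):                # R*(v) = ||v||_inf
    return np.abs(v).max()

def group_norm(theta, G):      # R(theta) = sum_i ||theta_{G_i}||_2
    return sum(np.linalg.norm(theta[g]) for g in G)

def group_dual(v, G):          # R*(v) = max_i ||v_{G_i}||_2
    return max(np.linalg.norm(v[g]) for g in G)

# Example: the lower bound on the regularization parameter reads
# lambda_n >= beta * R_dual( X.T @ (y - X @ theta_star) / n ).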
2.2  Contributions

One of our major results is the relationship between the Gaussian and exponential widths of sets, using arguments from generic chaining [19]. Existing analysis frameworks for our problem for sub-Gaussian X and ω obtain results in terms of Gaussian widths of suitable sets associated with the norm [2, 20]. For sub-exponential X and ω this dependency, in some cases, is replaced by the exponential width of the set. By establishing a precise relationship between the two quantities, we leverage existing results on the computation of Gaussian widths for our scenario. Another contribution is obtaining the same order of RE sample complexity bound as for the sub-Gaussian case for ℓ₁ and group-sparse norms. While this strong result has already been explored in [11] for ℓ₁, we adapt it to our analysis framework and also extend it to the group-sparse setting. As for the applicability of our work, the results apply to all log-concave distributions, which by definition are distributions admitting a log-concave density f, i.e., a density of the form f = e^φ with φ any concave function. This covers many practically used distributions, including extreme-value distributions.

3  Relationship between Gaussian and Exponential Widths

In this section we introduce a complexity parameter of a set, w_e(·), which we call the exponential width of the set, and establish a sharp upper bound for it in terms of the Gaussian width of the set, w_g(·). In particular, we prove the inequality w_e(A) ≤ c · w_g(A) · √(log p) for some fixed constant c. To see the connection with the rest of the paper, remember that our subsequent results for λ_n and κ are expressed in terms of the Gaussian width and exponential width of specific sets associated with the norm. With this result, we establish precise sample complexity bounds by leveraging a body of literature on the computation of Gaussian widths for various structured sets [7, 20]. We note that, while the exponential width has been defined and used earlier (see e.g. [19, 15]), to the best of our knowledge this is the first result establishing the relation between the Gaussian and exponential widths of sets. Our result relies on generic chaining [19].
3.1  Generic Chaining, Gaussian Width and Exponential Widths

Consider a process {X_t}_{t∈T} = ⟨h, t⟩ indexed by a set T ⊂ R^p, where each element h_i has mean 0. It follows from the definition that the process is centered, i.e., E(X_t) = 0 for all t ∈ T. We will also assume for convenience, w.l.o.g., that the set T is finite. Also, for any s, t ∈ T, consider a canonical distance metric d(s, t). We are interested in computing the quantity E sup_{t∈T} X_t. Now, for reasons detailed in the supplement, consider that we split T into a sequence of subsets T₀ ⊂ T₁ ⊂ … ⊂ T, with T₀ = {t₀}, |T_n| ≤ 2^{2^n} for n ≥ 1, and T_m = T for some large m. Let the function π_n : T → T_n, defined as π_n(t) = {s : d(s, t) ≤ d(s₁, t), ∀s, s₁ ∈ T_n}, map each point t ∈ T to some closest point s ∈ T_n according to d. The set T_n and the associated function π_n define a partition A_n of the set T: each element of the partition A_n consists of some element s ∈ T_n and all t ∈ T closest to it according to the map π_n. Also, the size of the partition satisfies |A_n| ≤ 2^{2^n}. The A_n are called admissible sequences in generic chaining. Note that there are multiple admissible sequences, corresponding to multiple ways of defining the sets T₀, T₁, …, T_m. We will denote by Δ(A_n(t)) the diameter of the element A_n(t) w.r.t. the distance metric d, defined as Δ(A_n(t)) = sup_{s,t∈A_n(t)} d(s, t).

Definition 1 (γ-functionals) [19]: Given α > 0 and a metric space (T, d), we define
$\gamma_\alpha(T, d) = \inf \sup_{t} \sum_{n \geq 0} 2^{n/\alpha} \Delta(A_n(t)), \qquad (3)$
where the inf is taken over all possible admissible sequences of the set T.

Gaussian width: Let {X_t}_{t∈T} = ⟨g, t⟩, where the elements g_i are i.i.d. N(0, 1). The quantity w_g(T) = E sup_{t∈T} X_t is called the Gaussian width of the set T. Define the distance metric d₂(s, t) = ‖s − t‖₂. The relation between the Gaussian width and the γ-functionals is seen from the following result, [Theorem 2.1.1] of [19]:
$\frac{1}{L}\,\gamma_2(T, d_2) \leq w_g(T) \leq L\,\gamma_2(T, d_2). \qquad (4)$
Note that, following [Theorem 2.1.5] in [19], any process which satisfies the concentration bound P(|X_s − X_t| ≥ u) ≤ 2 exp(−u²/d₂(s, t)²) satisfies the upper bound in (4).

Exponential width: Let {X_t}_{t∈T} = ⟨e, t⟩, where each element e_i is a centered i.i.d. exponential random variable satisfying P(|e_i| ≥ u) = exp(−u). Define the distance metrics d₂(s, t) = ‖s − t‖₂ and d_∞(s, t) = ‖s − t‖_∞. The quantity w_e(T) = E sup_{t∈T} X_t is called the exponential width of the set T. By [Theorem 1.2.7] and [Theorem 5.2.7] in [19], for some universal constant L, w_e(T) satisfies:
$\frac{1}{L}\big(\gamma_2(T, d_2) + \gamma_1(T, d_\infty)\big) \leq w_e(T) \leq L\big(\gamma_2(T, d_2) + \gamma_1(T, d_\infty)\big). \qquad (5)$
Note that any process which satisfies the sub-exponential concentration bound P(|X_s − X_t| ≥ u) ≤ 2 exp(−K min(u²/d₂(s, t)², u/d_∞(s, t))) satisfies the upper bound in the above inequality [15, 19].
3.2  An Upper Bound for the Exponential Width

In this section we prove the following relationship between the exponential and Gaussian widths:

Theorem 1  For any set T ⊂ R^p, for some constant c the following holds:
$w_e(T) \leq c \cdot w_g(T) \cdot \sqrt{\log p}. \qquad (6)$

Proof:  The result depends on the geometric results [Lemma 2.6.1] and [Theorem 2.6.2] in [19].

Theorem 2  [19] Consider a countable set T ⊂ R^p, and a number u > 0. Assume that the Gaussian width is bounded, i.e. S = γ₂(T, d₂) < ∞. Then there is a decomposition T ⊂ T₁ + T₂, where T₁ + T₂ = {t₁ + t₂ : t₁ ∈ T₁, t₂ ∈ T₂}, such that
$\gamma_2(T_1, d_2) \leq LS, \qquad \gamma_1(T_1, d_\infty) \leq LSu, \qquad (7)$
$\gamma_2(T_2, d_2) \leq LS, \qquad T_2 \subset \frac{LS}{u}\, B_1, \qquad (8)$
where L is some universal constant and B₁ is the unit ℓ₁ norm ball in R^p.

We first examine the exponential widths of the sets T₁ and T₂. For the set T₁:
$w_e(T_1) \leq L\big[\gamma_2(T_1, d_2) + \gamma_1(T_1, d_\infty)\big] \leq L[S + Su] \leq L\big(w_g(T) + w_g(T)\,u\big), \qquad (9)$
where the first inequality follows from (5) and the second from (7) (and S ≤ L w_g(T) by (4)). We will need the following result on bounding the exponential width of the unit ℓ₁-norm ball in p dimensions in order to compute the exponential width of T₂. The proof, given in the supplement, is based on the fact that sup_{t∈B₁} ⟨e, t⟩ = ‖e‖_∞, followed by a simple union bound argument to bound ‖e‖_∞.

Lemma 1  Consider the set B₁ = {t ∈ R^p : ‖t‖₁ ≤ 1}. Then for some universal constant L:
$w_e(B_1) = E \sup_{t \in B_1} \langle e, t \rangle \leq L \log p. \qquad (10)$

The exponential width of T₂ is:
$w_e(T_2) \leq w_e\big((LS/u) B_1\big) = (LS/u)\, w_e(B_1) \leq (L/u)\, w_g(T)\, w_e(B_1) \leq (L/u)\, w_g(T) \log p. \qquad (11)$
The first inequality follows from (8), as T₂ is a subset of an (LS/u)-scaled ℓ₁ norm ball; the intermediate steps follow from elementary properties of widths of sets; and the last inequality follows from Lemma 1. Now, as stated in Theorem 2, u in (9) and (11) is any number greater than 0. Choosing u = √(log p) and noting that (1 + √(log p)) ≤ L√(log p) for some constant L yields:
$w_e(T_1) \leq L\, w_g(T)\sqrt{\log p}, \qquad w_e(T_2) \leq L\, w_g(T)\sqrt{\log p}. \qquad (12)$
The final step, following arguments as in [Theorem 2.1.6] of [19], is to bound the exponential width of the set T:
$w_e(T) = E\big[\sup_{t \in T} \langle h, t \rangle\big] \leq E\big[\sup_{t_1 \in T_1} \langle h, t_1 \rangle\big] + E\big[\sup_{t_2 \in T_2} \langle h, t_2 \rangle\big] \leq w_e(T_1) + w_e(T_2) \leq L\, w_g(T)\sqrt{\log p}.$
This proves Theorem 1.
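Theorem 1 can be sanity-checked numerically on the unit ℓ₁ ball, where sup_{t∈B₁}⟨h, t⟩ = ‖h‖_∞: the Gaussian width of B₁ scales as √(2 log p) while the exponential width scales as log p, so their ratio grows like √(log p), matching (6). A small Monte Carlo sketch (our own illustration, not part of the proof):

import numpy as np

def widths_l1_ball(p, n_mc=500, rng=None):
    # w(B_1) = E sup_{t in B_1} <h, t> = E ||h||_inf for a symmetric h.
    rng = rng or np.random.default_rng(0)
    g = rng.standard_normal((n_mc, p))                       # Gaussian h
    e = rng.exponential(1.0, (n_mc, p)) * rng.choice([-1, 1], (n_mc, p))
    return np.abs(g).max(axis=1).mean(), np.abs(e).max(axis=1).mean()

for p in (10, 100, 1000, 10000):
    wg, we = widths_l1_ball(p)
    print(p, round(wg, 2), round(we, 2), round(we / wg, 2))
# The ratio we/wg grows roughly like sqrt(log p), consistent with Theorem 1.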
4  Recovery Bounds

We obtain bounds on the error vector Δ̂_n = θ̂ − θ*. If the regularization parameter satisfies λ_n ≥ β R*((1/n) Xᵀ(y − Xθ*)), β > 1, and the RE condition is satisfied on the error set A with RE constant κ, then [2, 16] obtain the following error bound w.h.p. for some constant c:
$\|\hat\Delta_n\|_2 \leq c\,\Psi(A)\,\frac{\lambda_n}{\kappa}, \qquad (13)$
where Ψ(A) is the norm compatibility constant given by sup_{u∈A} (R(u)/‖u‖₂).

4.1  Regularization Parameter

As discussed earlier, for our analysis the regularization parameter should satisfy λ_n ≥ β R*((1/n) Xᵀ(y − Xθ*)), β > 1. Observe that for the linear model (1), ω = y − Xθ* is the noise, implying that λ_n ≥ β R*((1/n) Xᵀω). With e denoting a sub-exponential random vector with i.i.d. entries,
$E\Big[R^*\Big(\frac{1}{n} X^\top \omega\Big)\Big] = E\Big[\sup_{u \in \Omega_R} \Big\langle \frac{1}{n} \|\omega\|_2\, X^\top \frac{\omega}{\|\omega\|_2},\, u \Big\rangle\Big] = \frac{1}{n}\, E[\|\omega\|_2]\; E\Big[\sup_{u \in \Omega_R} \langle e, u \rangle\Big]. \qquad (14)$
The first equality follows from the definition of the dual norm. The second equality follows from the fact that X and ω are independent of each other. Also, by elementary arguments [21], e = Xᵀ(ω/‖ω‖₂) has i.i.d. sub-exponential entries with sub-exponential norm bounded by sup_{ω∈R^n} ‖⟨X_i, ω/‖ω‖₂⟩‖_{ψ₁}. The above argument was first proposed for the sub-Gaussian case in [2]. For sub-exponential design and noise, the difference compared to the sub-Gaussian case is the dependence on the exponential width instead of the Gaussian width of the unit norm ball. Using known results on the Gaussian widths of unit ℓ₁ and group-sparse norm balls, the corollaries below are derived using the relationship between Gaussian and exponential widths derived in Section 3:
Corollary 1  If R(θ) is the ℓ₁ norm, then for a sub-exponential design matrix X and noise ω,
$E\Big[R^*\Big(\frac{1}{n} X^\top (y - X\theta^*)\Big)\Big] \leq \frac{\eta_0}{\sqrt{n}}\, \log p. \qquad (15)$

Corollary 2  If R(θ) is the group-sparse norm, then for a sub-exponential design matrix X and noise ω,
$E\Big[R^*\Big(\frac{1}{n} X^\top (y - X\theta^*)\Big)\Big] \leq \frac{\eta_0}{\sqrt{n}}\, \sqrt{(m + \log N_G)\,\log p}. \qquad (16)$
4.2  The RE condition

For Gaussian and sub-Gaussian X, previous work has established RIP bounds of the form γ₁ ≤ inf_{u∈A} (1/√n)‖Xu‖₂ ≤ sup_{u∈A} (1/√n)‖Xu‖₂ ≤ γ₂. In particular, RIP is satisfied w.h.p. if the number of samples is of the order of the square of the Gaussian width of the error set, i.e., O(w_g²(A)), which we will call the sub-Gaussian RE sample complexity bound. As we move to heavier tails, establishing such two-sided bounds requires assumptions on the boundedness of the Euclidean norm of the rows of X [15, 17, 10]. On the other hand, analysis of only the lower bound requires very few assumptions on X. In particular, ‖Xu‖₂² being a sum of random non-negative quantities, the lower bound should be satisfied even with very weak moment assumptions on X. Making these observations, [10, 17] develop arguments obtaining sub-Gaussian RE sample complexity bounds when the set A is the unit sphere S^{p−1}, even for design matrices having only bounded fourth moments. Note that with such weak moment assumptions, a non-trivial non-asymptotic upper bound cannot be established. Our analysis for the RE condition essentially follows this premise and arguments from [10].

4.2.1  A Bound Based on Exponential Width

We obtain a sample complexity bound which depends on the exponential width of the error set A. The result we state below follows along similar arguments made in [20], which in turn are based on arguments from [10, 14].

Theorem 3  Let X ∈ R^{n×p} have independent isotropic sub-exponential rows. Let A ⊂ S^{p−1}, 0 < ξ < 1, and let c be a constant that depends on the sub-exponential norm K = sup_{u∈A} ‖|⟨X, u⟩|‖_{ψ₁}. Let w_e(A) denote the exponential width of the set. Then for some θ > 0, with probability at least (1 − exp(−θ²/2)),
$\inf_{u \in A} \|Xu\|_2 \geq c\,\xi(1 - \xi^2)^2 \sqrt{n} - 4\,w_e(A) - \xi\theta. \qquad (17)$

Contrasting the result (17) with previous results for the sub-Gaussian case [2, 20], the dependence on w_g(A) on the r.h.s. is replaced by w_e(A), thus leading to a log p worse sample complexity bound. The corollary below applies the result to the ℓ₁ norm. Note that results from [1] for the ℓ₁ norm show RIP bounds w.h.p. for the same number of samples.

Corollary 3  For an s-sparse θ* and ℓ₁ norm regularization, if n ≥ c · s log² p, then with probability at least (1 − exp(−θ²/2)), with constants c, κ depending on ξ and θ,
$\inf_{u \in A} \frac{1}{\sqrt{n}} \|Xu\|_2 \geq \kappa. \qquad (18)$

4.2.2  A Bound Based on VC-Dimensions
In this section, we show a stronger sub-Gaussian RE sample complexity result for sub-exponential X under ℓ₁ and group-sparse regularization. The arguments follow along similar lines to [11, 10].

Theorem 4  Let X ∈ R^{n×p} be a random matrix with isotropic random sub-exponential rows X_i ∈ R^p. Let A ⊂ S^{p−1}, 0 < ξ < 1, let c be a constant that depends on the sub-exponential norm K = sup_{u∈A} ‖|⟨X, u⟩|‖_{ψ₁}, and define κ = c(1 − ξ²)². Let w_e(A) denote the exponential width of the set A. Let C_ξ = {I[|⟨X_i, u⟩| > ξ], u ∈ A} be a VC-class with VC-dimension VC(C_ξ) ≤ d. For some suitable constant c₁, if n ≥ c₁(d/κ²), then with probability at least 1 − exp(−c₀κ²n):
$\inf_{u \in A} \frac{1}{\sqrt{n}} \|Xu\|_2 \geq \frac{c\,\xi(1 - \xi^2)^2}{2}. \qquad (19)$

Consider the case of the ℓ₁ norm. A consequence of the above result is that the RE condition is satisfied on the set B = {u : ‖u‖₀ = s₁} ∩ S^{p−1}, for some s₁ ≤ c · s where c is a constant that will depend on the RE constant κ, when n is O(s₁ log p). The argument follows from the fact that B ∩ S^{p−1} is a union of $\binom{p}{s_1}$ spheres. Thus the result is obtained by applying Theorem 4 to each sphere and using a union bound argument. The final step involves showing that the RE condition is satisfied on the error set A if it is satisfied on B, using Maurey's empirical approximation argument [17, 18, 11].

Corollary 4  For the set A ⊂ S^{p−1}, which is the error set for the ℓ₁ norm, if n ≥ c₂ s log(ep/s)/κ² for some suitable constant c₂, then with probability at least 1 − exp(−c₀nκ²) − w^{−1}p^{1−ζ₁}, where c₀, ζ₁, w > 1 are constants, the following result holds for κ depending on the constant ξ:
$\inf_{u \in A} \frac{1}{\sqrt{n}} \|Xu\|_2 \geq \kappa. \qquad (20)$

Essentially the same arguments for the group-sparse norm lead to the following result:

Corollary 5  For the set A ⊂ S^{p−1}, which is the error set for the group-sparse norm, if n ≥ c(m s_G + s_G log(e N_G/s_G))/κ², then with probability at least 1 − exp(−c₀nκ²) − w^{−1}N_G^{1−ζ₁} − w^{−1}m^{1−ζ₁}, where c₀, ζ₁, w > 1 are constants, and for κ depending on the constant ξ,
$\inf_{u \in A} \frac{1}{\sqrt{n}} \|Xu\|_2 \geq \kappa. \qquad (21)$
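As an illustrative (and purely heuristic) way to probe these RE results numerically, one can take the minimum of (1/√n)‖Xu‖₂ over a random sample of s-sparse unit directions from the set B of the argument above. This is our own sketch, not part of the analysis: the minimum over a random probe of B only upper-bounds the true infimum over B, and it is not a certificate for the cone-constrained RE constant.

import numpy as np

def probe_re_constant(X, s, n_probe=2000, rng=None):
    # Heuristic probe: min over random s-sparse unit vectors u of
    # ||X u||_2 / sqrt(n). An upper bound on inf over B, not a certificate.
    rng = rng or np.random.default_rng(0)
    n, p = X.shape
    best = np.inf
    for _ in range(n_probe):
        u = np.zeros(p)
        idx = rng.choice(p, size=s, replace=False)
        u[idx] = rng.standard_normal(s)
        u /= np.linalg.norm(u)
        best = min(best, np.linalg.norm(X @ u) / np.sqrt(n))
    return best

# Example: sub-exponential design via standardized Gumbel (extreme-value) rows.
rng = np.random.default_rng(0)
X = rng.gumbel(size=(200, 300)) - np.euler_gamma   # mean-centered
X /= np.sqrt(np.pi**2 / 6)                         # unit variance
print(probe_re_constant(X, s=10))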
4.3  Recovery Bounds for ℓ₁ and Group-Sparse Norms

We combine result (13) with the results obtained for λ_n and κ previously for ℓ₁ and group-sparse norms.

Corollary 6  For the ℓ₁ norm, when n ≥ c·s log p for some constant c, with high probability:
$\|\hat\Delta_n\|_2 \leq O\big(\sqrt{s}\,\log p / \sqrt{n}\big). \qquad (22)$

Corollary 7  For the group-sparse norm, when n ≥ c(m s_G + s_G log N_G) for some constant c, with high probability:
$\|\hat\Delta_n\|_2 \leq O\Big(\sqrt{\frac{s_G\, \log p\,(m + \log N_G)}{n}}\Big). \qquad (23)$

Both bounds are √(log p) worse compared to the corresponding bounds for the sub-Gaussian case. In terms of sample complexity, to get an error bound of constant order, n should scale as O(s log² p), instead of O(s log p) for the sub-Gaussian case, for the ℓ₁ norm, and as O(s_G log p (m + log N_G)), instead of O(s_G(m + log N_G)) for the sub-Gaussian case, for the group-sparse lasso.
5  Experiments

We perform experiments on synthetic data to compare estimation errors for Gaussian and sub-exponential design matrices and noise, for both ℓ₁ and group-sparse norms. For ℓ₁ we run experiments with dimensionality p = 300 and sparsity level s = 10. For group-sparse norms we run experiments with dimensionality p = 300, maximum group size m = 6, N_G = 50 groups each of size 6, and 4 non-zero groups. For the design matrix X, in the Gaussian case we sample rows randomly from an isotropic Gaussian distribution, while for sub-exponential design
[Figure 1: Probability of recovery (probability of success vs. number of samples, 0–200) in the noiseless case, with increasing sample size, for basis pursuit and group-sparse recovery under Gaussian and sub-exponential designs. There is a sharp phase transition, and the curves overlap for Gaussian and sub-exponential designs.]
[Figure 2: Estimation error ‖Δ̂_n‖₂ vs. sample size for ℓ₁ (left: Lasso with Gaussian vs. sub-exponential design and noise) and group-sparse norms (right: group-sparse lasso with Gaussian vs. sub-exponential design and noise). The curves for sub-exponential designs and noise decay more slowly than for Gaussians.]
matrices we sample each row of X randomly from an isotropic extreme-value distribution. The number of samples n in X is incremented in steps of 10, with an initial starting value of 5. The noise ω is sampled i.i.d. from the Gaussian and extreme-value distributions, with variance 1 in the Gaussian and sub-exponential cases respectively. For each sample size n, we repeat the procedure above 100 times, and all results reported in the plots are average values over the 100 runs. We report two sets of results. Figure 1 shows percentage of success vs. sample size for the noiseless case, when y = Xθ*. A success in the noiseless case denotes exact recovery, which is possible when the RE condition is satisfied. Hence we expect the sample complexity for recovery to be of the order of the square of the Gaussian width for both the Gaussian and extreme-value distributions, as validated by the plots in Figure 1. Figure 2 shows average estimation error vs. number of samples for the noisy case, when y = Xθ* + ω. The noise is added only for runs in which exact recovery was possible in the noiseless case. For example, when n = 5 we do not have any results in Figure 2, as even noiseless recovery is not possible. For each n, the estimation errors are average values over 100 runs. As seen in Figure 2, the error decay is slower for extreme-value distributions compared to the Gaussian case.
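A minimal sketch of this experimental setup (our own reconstruction, for the ℓ₁ case only; scikit-learn's Lasso is assumed as the solver, and the extreme-value design is a standardized Gumbel):

import numpy as np
from sklearn.linear_model import Lasso

def design(n, p, kind, rng):
    if kind == "gaussian":
        return rng.standard_normal((n, p))
    # Standardized Gumbel rows: isotropic, sub-exponential (extreme-value).
    return (rng.gumbel(size=(n, p)) - np.euler_gamma) / np.sqrt(np.pi**2 / 6)

def one_run(n, p=300, s=10, kind="gaussian", rng=None):
    rng = rng or np.random.default_rng()
    theta = np.zeros(p)
    theta[:s] = 1.0
    X = design(n, p, kind, rng)
    noise = design(n, 1, kind, rng).ravel()
    y = X @ theta + noise
    lam = np.log(p) / np.sqrt(n)   # sub-exponential rate, as in Corollary 1
    fit = Lasso(alpha=lam, fit_intercept=False).fit(X, y)
    return np.linalg.norm(fit.coef_ - theta)

for kind in ("gaussian", "gumbel"):
    errs = [np.mean([one_run(n, kind=kind) for _ in range(20)])
            for n in (100, 150, 200)]
    print(kind, [round(e, 3) for e in errs])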
6  Conclusions

This paper presents a unified framework for the analysis of non-asymptotic error and structured recovery in norm-regularized regression problems when the design matrix and noise are sub-exponential, essentially generalizing the corresponding analysis and results for the sub-Gaussian case. The main observation is that the dependence on the Gaussian width is replaced by the exponential width of suitable sets associated with the norm. Together with the result on the relationship between exponential and Gaussian widths, previous analysis techniques essentially carry over to the sub-exponential case. We also show that a stronger result exists for the RE condition for the Lasso and group-lasso problems. As future work we will consider extending the stronger result for the RE condition to all norms.

Acknowledgements: This work was supported by NSF grants IIS-1447566, IIS-1447574, IIS-1422557, CCF-1451986, CNS-1314560, IIS-0953274, IIS-1029711, and by NASA grant NNX12AQ39A.
References

[1] R. Adamczak, A. E. Litvak, A. Pajor, and N. Tomczak-Jaegermann. Restricted isometry property of matrices with independent columns and neighborly polytopes by random sampling. Constructive Approximation, 34(1):61–88, 2011.
[2] A. Banerjee, S. Chen, F. Fazayeli, and V. Sivakumar. Estimation with Norm Regularization. In NIPS, 2014.
[3] P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Annals of Statistics, 37(4):1705–1732, 2009.
[4] E. J. Candes, J. Romberg, and T. Tao. Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information. IEEE Transactions on Information Theory, 52(2):489–509, 2006.
[5] E. J. Candes and T. Tao. Decoding by Linear Programming. IEEE Transactions on Information Theory, 51(12):4203–4215, 2005.
[6] E. J. Candes and T. Tao. The Dantzig selector: statistical estimation when p is much larger than n. Annals of Statistics, 35(6):2313–2351, 2007.
[7] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky. The Convex Geometry of Linear Inverse Problems. Foundations of Computational Mathematics, 12(6):805–849, 2012.
[8] S. Chatterjee, S. Chen, and A. Banerjee. Generalized Dantzig Selector: Application to the k-support norm. In NIPS, 2014.
[9] D. Hsu and S. Sabato. Heavy-tailed regression with a generalized median-of-means. In ICML, 2014.
[10] V. Koltchinskii and S. Mendelson. Bounding the smallest singular value of a random matrix without concentration. arXiv:1312.3580, 2013.
[11] G. Lecué and S. Mendelson. Sparse recovery under weak moment assumptions. arXiv:1401.2188, 2014.
[12] M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. Springer Berlin, 1991.
[13] N. Meinshausen and B. Yu. Lasso-type recovery of sparse representations for high-dimensional data. Annals of Statistics, 37(1):246–270, 2009.
[14] S. Mendelson. Learning without concentration. Journal of the ACM, to appear, 2015.
[15] S. Mendelson and G. Paouris. On generic chaining and the smallest singular value of random matrices with heavy tails. Journal of Functional Analysis, 262(9):3775–3811, 2012.
[16] S. N. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A Unified Framework for High-Dimensional Analysis of M-Estimators with Decomposable Regularizers. Statistical Science, 27(4):538–557, 2012.
[17] R. I. Oliveira. The lower tail of random quadratic forms, with applications to ordinary least squares and restricted eigenvalue properties. arXiv:1312.2903, 2013.
[18] M. Rudelson and S. Zhou. Reconstruction from anisotropic random measurements. IEEE Transactions on Information Theory, 59(6):3434–3447, 2013.
[19] M. Talagrand. The Generic Chaining. Springer Berlin, 2005.
[20] J. A. Tropp. Convex recovery of a structured signal from independent random linear measurements. In Sampling Theory - a Renaissance. To appear, 2015.
[21] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. In Y. Eldar and G. Kutyniok, editors, Compressed Sensing, pages 210–268. Cambridge University Press, Cambridge, 2012.
[22] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ₁-constrained quadratic programming (Lasso). IEEE Transactions on Information Theory, 55(5):2183–2201, 2009.
[23] P. Zhao and B. Yu. On Model Selection Consistency of Lasso. Journal of Machine Learning Research, 7:2541–2563, 2006.
| 5935 |@word version:1 norm:58 stronger:3 d2:12 decomposition:1 eng:1 boundedness:1 carry:1 moment:5 initial:1 denoting:1 past:1 existing:4 readily:1 subsequent:3 partition:3 plot:2 v:3 implying:1 isotropic:5 along:3 c2:4 ik:1 prove:2 combine:1 introduce:1 p1:1 frequently:1 examine:1 cardinality:1 increasing:1 precipitation:1 pajor:1 notation:1 bounded:3 argmin:1 developed:1 contrasting:1 unified:4 guarantee:1 remember:1 ti:6 concave:4 finance:1 kutyniok:1 exactly:1 scaled:2 k2:16 unit:6 grant:2 appear:2 before:1 t1:19 engineering:1 consequence:1 establishing:3 sivakumar:2 koltchinskii:1 dantzig:4 k:3 meinshausen:1 atomic:1 practice:1 block:1 union:3 supu:4 procedure:1 litvak:1 empirical:1 universal:3 spite:1 get:2 cannot:2 close:1 selection:1 romberg:1 convenience:1 applying:1 map:2 deterministic:1 starting:1 l:6 convex:2 focused:1 decomposable:2 recovery:13 estimator:9 nuclear:1 coordinate:1 annals:3 rip:4 exact:3 programming:1 element:7 satisfying:1 ep:1 pradeepr:1 incremented:1 complexity:19 covariates:1 ui:4 rigorously:1 depend:3 basis:2 k0:1 various:4 describe:2 whose:1 larger:2 widely:1 compressed:2 wg:12 statistic:4 gi:2 g1:1 noisy:2 final:2 sequence:4 eigenvalue:3 ledoux:1 reconstruction:2 interaction:1 ky:1 extending:1 tk:1 derive:1 develop:1 vcdimension:1 depending:3 progress:1 strong:1 c:3 involves:1 modifying:1 vc:5 centered:2 premise:1 hx:2 preliminary:1 elementary:2 hold:3 practically:1 considered:2 exp:9 major:1 bickel:1 early:1 smallest:2 estimation:23 applicable:2 currently:1 utexas:1 city:1 gaussian:63 always:1 supt:4 rather:1 zhou:1 renaissance:1 corollary:9 derived:2 focus:1 viz:1 validated:1 contrast:1 relation:2 interested:1 tao:3 compatibility:2 dual:1 subexponential:3 eldar:1 constrained:3 special:1 having:2 ng:11 sampling:2 yu:3 icml:1 future:1 t2:17 report:1 few:1 randomly:2 replaced:3 phase:1 geometry:1 cns:1 n1:5 ecology:1 highly:1 umn:1 fazayeli:1 extreme:8 pradeep:1 yielding:1 admitting:1 hg:1 regularizers:1 indexed:1 incomplete:1 euclidean:1 re:25 theoretical:1 column:1 earlier:3 cover:1 ordinary:1 introducing:1 entry:5 subset:3 predictor:1 reported:1 dependency:1 synthetic:1 vershynin:1 recht:1 density:2 negahban:1 wg2:1 decoding:1 together:1 again:2 satisfied:7 choose:1 worse:3 zhao:1 leading:1 nonasymptotic:1 parrilo:1 twin:1 satisfy:5 depends:7 analyze:1 sup:9 recover:1 kuk0:1 candes:3 contribution:4 square:3 kek:2 variance:1 yield:1 weak:3 simultaneous:1 definition:4 frequency:1 jaegermann:1 associated:5 proof:2 sampled:1 hsu:1 treatment:1 popular:2 knowledge:2 dimensionality:2 organized:1 nasa:1 follow:3 response:1 ritov:1 talagrand:2 hand:1 tropp:1 ei:2 su:1 banerjee:4 overlapping:1 building:1 k22:1 ccf:1 regularization:13 equality:2 hence:1 symmetric:1 ktk1:1 climate:2 width:52 chaining:8 generalized:2 outline:2 tn:5 l1:1 arindam:1 functional:1 anisotropic:1 banach:1 tail:6 discussed:2 extend:1 he:4 measurement:3 cambridge:2 consistency:1 mathematics:1 minnesota:1 hxi:1 vidyashankar:1 etc:1 closest:2 isometry:2 recent:4 inf:9 scenario:1 certain:2 manifested:1 inequality:7 success:3 lecu:1 seen:2 greater:1 signal:2 ii:5 multiple:3 ing:2 characterized:1 adapt:1 sphere:3 ravikumar:2 variant:1 regression:4 essentially:4 noiseless:5 metric:5 arxiv:3 c1:4 background:1 else:1 median:2 singular:2 sabato:1 rest:2 unlike:1 leveraging:1 consis:1 call:2 subgaussian:1 leverage:1 noting:1 split:1 lasso:19 gng:2 tm:2 texas:1 t0:4 heavier:3 kxuk2:9 detailed:1 tsybakov:1 oliveira:1 png:1 diameter:1 xij:2 percentage:1 canonical:1 nsf:1 group:32 key:2 threshold:1 
drawn:2 year:1 cone:1 sum:1 run:5 inverse:1 fourth:1 uncertainty:1 family:1 throughout:2 chandrasekaran:1 bound:42 hi:1 ntn:1 quadratic:2 encountered:1 msg:2 dominated:1 aspect:2 argument:15 min:1 concluding:1 structured:9 department:2 according:3 ball:5 making:1 s1:6 tent:1 restricted:5 sided:1 taken:1 previously:1 turn:1 nnx12aq39a:1 hh:3 pursuit:2 gaussians:3 observe:1 generic:8 upto:1 slower:2 rp:12 denotes:1 rudelson:1 log2:3 k1:1 tk2:2 establish:6 prof:1 move:1 already:1 quantity:5 added:1 strategy:1 concentration:5 dependence:3 distance:4 berlin:2 trivial:1 reason:1 willsky:1 assuming:1 relationship:8 tomczak:1 setup:1 lsu:1 stated:3 negative:1 design:32 countable:1 perform:1 upper:5 observation:2 finite:1 defining:1 precise:2 rn:8 sharp:4 required:1 connection:1 established:3 polytopes:1 nip:2 beyond:2 suggested:1 usually:1 below:4 sparsity:3 including:1 max:1 wainwright:2 suitable:8 satisfaction:1 overlap:1 rely:1 regularized:6 isoperimetry:1 prior:1 literature:3 sg:10 geometric:1 acknowledgement:1 asymptotic:5 lacking:1 expect:1 highlight:2 maurey:1 ingredient:1 foundation:1 sufficient:1 consistent:1 principle:1 editor:1 atleast:5 heavy:4 austin:1 row:6 repeat:1 last:1 supported:1 understand:2 absolute:1 sparse:27 curve:2 dimension:5 transition:1 made:2 collection:1 neighborly:1 social:1 transaction:4 functionals:2 selector:4 b1:9 xi:1 decade:1 tailed:2 robust:1 obtaining:3 domain:1 main:2 motivation:1 noise:18 bounding:3 body:1 sub:58 exponential:66 lie:1 admissible:3 theorem:13 kuk2:2 specific:1 normregularized:1 xt:9 showing:1 sensing:2 maxi:1 explored:1 x:2 decay:2 exists:1 mendelson:4 supplement:2 chatterjee:1 chen:2 generalizing:1 expressed:1 g2:1 u2:2 applies:1 springer:2 satisfies:8 relies:3 acm:1 goal:1 careful:1 considerable:1 change:1 determined:1 specifically:1 lemma:3 called:4 experimental:1 highdimensional:1 support:2 constructive:1 phenomenon:1 |
5,453 | 5,936 | Less is More: Nyström Computational Regularization
Alessandro Rudi¹    Raffaello Camoriano¹²    Lorenzo Rosasco¹³
¹ Università degli Studi di Genova - DIBRIS, Via Dodecaneso 35, Genova, Italy
² Istituto Italiano di Tecnologia - iCub Facility, Via Morego 30, Genova, Italy
³ Massachusetts Institute of Technology and Istituto Italiano di Tecnologia
Laboratory for Computational and Statistical Learning, Cambridge, MA 02139, USA
raffaello.camoriano@iit.it    {ale_rudi, lrosasco}@mit.edu
Abstract

We study Nyström type subsampling approaches to large scale kernel methods, and prove learning bounds in the statistical learning setting, where random sampling and high probability estimates are considered. In particular, we prove that these approaches can achieve optimal learning bounds, provided the subsampling level is suitably chosen. These results suggest a simple incremental variant of Nyström Kernel Regularized Least Squares, where the subsampling level implements a form of computational regularization, in the sense that it controls at the same time regularization and computations. Extensive experimental analysis shows that the considered approach achieves state of the art performances on benchmark large scale datasets.
1  Introduction

Kernel methods provide an elegant and effective framework to develop nonparametric statistical approaches to learning [1]. However, memory requirements make these methods unfeasible when dealing with large datasets. Indeed, this observation has motivated a variety of computational strategies to develop large scale kernel methods [2–8].

In this paper we study subsampling methods, which we broadly refer to as Nyström approaches. These methods replace the empirical kernel matrix, needed by standard kernel methods, with a smaller matrix obtained by (column) subsampling [2, 3]. Such procedures are shown to often dramatically reduce memory/time requirements while preserving good practical performances [9–12]. The goal of our study is two-fold. First, and foremost, we aim at providing a theoretical characterization of the generalization properties of such learning schemes in a statistical learning setting. Second, we wish to understand the role played by the subsampling level both from a statistical and a computational point of view. As discussed in the following, this latter question leads to a natural variant of Kernel Regularized Least Squares (KRLS), where the subsampling level controls both regularization and computations.

From a theoretical perspective, the effect of Nyström approaches has been primarily characterized by considering the discrepancy between a given empirical kernel matrix and its subsampled version [13–19]. While interesting in their own right, these latter results do not directly yield information on the generalization properties of the obtained algorithm. Results in this direction, albeit suboptimal, were first derived in [20] (see also [21, 22]), and more recently in [23, 24]. In these latter papers, sharp error analyses in expectation are derived in a fixed design regression setting for a form of Kernel Regularized Least Squares. In particular, in [23] a basic uniform sampling approach is studied, while in [24] a subsampling scheme based on the notion of leverage scores is considered. The main technical contribution of our study is an extension of these latter results to the statistical learning setting, where the design is random and high probability estimates are considered. The more general setting makes the analysis considerably more complex. Our main result gives optimal finite sample bounds for both uniform and leverage score based subsampling strategies. These methods are shown to achieve the same (optimal) learning error as kernel regularized least squares, recovered as a special case, while allowing substantial computational gains. Our analysis highlights the interplay between the regularization and subsampling parameters, suggesting that the latter can be used to control simultaneously regularization and computations. This strategy implements a form of computational regularization, in the sense that the computational resources are tailored to the generalization properties in the data. This idea is developed by considering an incremental strategy to efficiently compute learning solutions for different subsampling levels. The procedure thus obtained, which is a simple variant of classical Nyström Kernel Regularized Least Squares with uniform sampling, allows for efficient model selection and achieves state of the art results on a variety of benchmark large scale datasets.

The rest of the paper is organized as follows. In Section 2, we introduce the setting and algorithms we consider. In Section 3, we present our main theoretical contributions. In Section 4, we discuss computational aspects and experimental results.
2  Supervised learning with KRLS and Nyström approaches

Let X × R be a probability space with distribution ρ, where we view X and R as the input and output spaces, respectively. Let ρ_X denote the marginal distribution of ρ on X and ρ(·|x) the conditional distribution on R given x ∈ X. Given a hypothesis space H of measurable functions from X to R, the goal is to minimize the expected risk,
$\min_{f \in \mathcal{H}} \mathcal{E}(f), \qquad \mathcal{E}(f) = \int_{X \times \mathbb{R}} (f(x) - y)^2\, d\rho(x, y), \qquad (1)$
provided ρ is known only through a training set of (x_i, y_i)_{i=1}^n sampled identically and independently according to ρ. A basic example of the above setting is random design regression with the squared loss, in which case
$y_i = f_*(x_i) + \epsilon_i, \qquad i = 1, \dots, n, \qquad (2)$
with f_* a fixed regression function, ε₁, …, ε_n a sequence of random variables seen as noise, and x₁, …, x_n random inputs. In the following, we consider kernel methods, based on choosing a hypothesis space which is a separable reproducing kernel Hilbert space. The latter is a Hilbert space H of functions, with inner product ⟨·, ·⟩_H, such that there exists a function K : X × X → R with the following two properties: 1) for all x ∈ X, K_x(·) = K(x, ·) belongs to H, and 2) the so-called reproducing property holds: f(x) = ⟨f, K_x⟩_H, for all f ∈ H, x ∈ X [25]. The function K, called the reproducing kernel, is easily shown to be symmetric and positive definite, that is, the kernel matrix (K_N)_{i,j} = K(x_i, x_j) is positive semidefinite for all x₁, …, x_N ∈ X, N ∈ N. A classical way to derive an empirical solution to problem (1) is to consider a Tikhonov regularization approach, based on the minimization of the penalized empirical functional,
$\min_{f \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^{n} (f(x_i) - y_i)^2 + \lambda \|f\|_{\mathcal{H}}^2, \qquad \lambda > 0. \qquad (3)$
The above approach is referred to as Kernel Regularized Least Squares (KRLS) or Kernel Ridge Regression (KRR). It is easy to see that a solution f̂_λ to problem (3) exists, it is unique, and the representer theorem [1] shows that it can be written as
$\hat f_\lambda(x) = \sum_{i=1}^{n} \hat\alpha_i K(x_i, x) \qquad \text{with} \qquad \hat\alpha = (K_n + \lambda n I)^{-1} y, \qquad (4)$
where x₁, …, x_n are the training set points, y = (y₁, …, y_n), and K_n is the empirical kernel matrix. Note that this result implies that we can restrict the minimization in (3) to the space
$\mathcal{H}_n = \Big\{ f \in \mathcal{H} \;\Big|\; f = \sum_{i=1}^{n} \alpha_i K(x_i, \cdot),\ \alpha_1, \dots, \alpha_n \in \mathbb{R} \Big\}.$
Storing the kernel matrix K_n, and solving the linear system in (4), can become computationally unfeasible as n increases.
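A minimal sketch of the exact KRLS estimator in (4) (our own illustration, with an assumed Gaussian kernel) makes the bottleneck explicit: the full n × n kernel matrix is formed and an n × n linear system is solved.

import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    # K(x, x') = exp(-gamma * ||x - x'||^2); builds the full Gram block.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krls_fit(X, y, lam, gamma=1.0):
    n = X.shape[0]
    Kn = gaussian_kernel(X, X, gamma)                      # n x n: bottleneck
    return np.linalg.solve(Kn + lam * n * np.eye(n), y)    # alpha, eq. (4)

def krls_predict(alpha, X_train, X_test, gamma=1.0):
    return gaussian_kernel(X_test, X_train, gamma) @ alpha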
In the following, we consider strategies to find more efficient solutions, based on the idea of replacing H_n with
$\mathcal{H}_m = \Big\{ f \;\Big|\; f = \sum_{i=1}^{m} \alpha_i K(\tilde x_i, \cdot),\ \alpha \in \mathbb{R}^m \Big\},$
where m ≤ n and {x̃₁, …, x̃_m} is a subset of the input points in the training set. The solution f̂_{λ,m} of the corresponding minimization problem can now be written as
$\hat f_{\lambda,m}(x) = \sum_{i=1}^{m} \hat\alpha_i K(\tilde x_i, x) \qquad \text{with} \qquad \hat\alpha = (K_{nm}^\top K_{nm} + \lambda n K_{mm})^\dagger K_{nm}^\top y, \qquad (5)$
where A† denotes the Moore-Penrose pseudoinverse of a matrix A, and (K_{nm})_{ij} = K(x_i, x̃_j), (K_{mm})_{kj} = K(x̃_k, x̃_j) with i ∈ {1, …, n} and j, k ∈ {1, …, m} [2]. The above approach is related to Nyström methods, and different approximation strategies correspond to different ways to select the input subset. While our framework applies to a broader class of strategies (see Section C.1), in the following we primarily consider two techniques.

Plain Nyström. The points {x̃₁, …, x̃_m} are sampled uniformly at random without replacement from the training set.
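A corresponding sketch of the plain Nyström estimator in (5) (again our own illustration): only the n × m and m × m kernel blocks are formed, and the pseudoinverse handles possible rank deficiency of the regularized system.

import numpy as np

def gauss_k(A, B, gamma=1.0):  # same Gaussian kernel as in the sketch above
    return np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

def nystrom_krls_fit(X, y, m, lam, gamma=1.0, rng=None):
    # Plain Nystrom: m centers sampled uniformly without replacement.
    rng = rng or np.random.default_rng(0)
    n = X.shape[0]
    centers = X[rng.choice(n, size=m, replace=False)]
    Knm = gauss_k(X, centers, gamma)          # n x m
    Kmm = gauss_k(centers, centers, gamma)    # m x m
    # alpha = (Knm^T Knm + lam * n * Kmm)^dagger Knm^T y, as in eq. (5)
    alpha = np.linalg.pinv(Knm.T @ Knm + lam * n * Kmm) @ (Knm.T @ y)
    return centers, alpha

def nystrom_predict(centers, alpha, X_test, gamma=1.0):
    return gauss_k(X_test, centers, gamma) @ alpha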
Approximate leverage scores (ALS) Nyström. Recall that the leverage scores associated to the training set points x₁, …, x_n are
$(l_i(t))_{i=1}^n, \qquad l_i(t) = \big(K_n (K_n + tnI)^{-1}\big)_{ii}, \qquad i \in \{1, \dots, n\}, \qquad (6)$
for any t > 0, where (K_n)_{ij} = K(x_i, x_j). In practice, leverage scores are onerous to compute, and approximations (l̂_i(t))_{i=1}^n can be considered [16, 17, 24]. In particular, in the following we are interested in suitable approximations defined as follows:

Definition 1 (T-approximate leverage scores). Let (l_i(t))_{i=1}^n be the leverage scores associated to the training set for a given t. Let δ > 0, t₀ > 0 and T ≥ 1. We say that (l̂_i(t))_{i=1}^n are T-approximate leverage scores with confidence δ, when with probability at least 1 − δ,
$\frac{1}{T}\, l_i(t) \leq \hat l_i(t) \leq T\, l_i(t) \qquad \forall i \in \{1, \dots, n\},\ t \geq t_0.$
Given T-approximate leverage scores for t > λ₀, the points {x̃₁, …, x̃_m} are sampled from the training set independently with replacement, with the probability of selecting point i given by P_t(i) = l̂_i(t)/Σ_j l̂_j(t).

In the next section, we state and discuss our main result, showing that the KRLS formulation based on plain or approximate leverage scores Nyström provides optimal empirical solutions to problem (1).
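To fix ideas, here is a sketch of the leverage-score machinery of (6) and the ALS sampling rule. Computing exact scores already costs an n × n solve, which is precisely why T-approximate scores are used in practice; in this illustration the exact scores stand in for the approximate ones (T = 1).

import numpy as np

def leverage_scores(Kn, t):
    # l_i(t) = (K_n (K_n + t n I)^{-1})_{ii}, eq. (6); exact, O(n^3) cost.
    n = Kn.shape[0]
    return np.diag(Kn @ np.linalg.inv(Kn + t * n * np.eye(n)))

def als_sample(Kn, t, m, rng=None):
    # ALS Nystrom: m indices drawn with replacement, with P_t(i)
    # proportional to the (here exact) leverage score l_i(t).
    rng = rng or np.random.default_rng(0)
    l = leverage_scores(Kn, t)
    return rng.choice(Kn.shape[0], size=m, replace=True, p=l / l.sum())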
3
Theoretical analysis
In this section, we state and discuss our main results. We need several assumptions. The first basic
assumption is that problem (1) admits at least a solution.
Assumption 1. There exists an fH ? H such that
E(fH ) = min E(f ).
f ?H
Note that, while the minimizer might not be unique, our results apply to the case in which $f_{\mathcal{H}}$ is the
unique minimizer with minimal norm. Also, note that the above condition is weaker than assuming
the regression function in (2) to belong to $\mathcal{H}$. Finally, we note that the study of the paper can be
adapted to the case in which minimizers do not exist, but the analysis is considerably more involved
and left to a longer version of the paper.
The second assumption is a basic condition on the probability distribution.
Assumption 2. Let $z_x$ be the random variable $z_x = y - f_{\mathcal{H}}(x)$, with $x \in X$, and $y$ distributed
according to $\rho(y|x)$. Then, there exists $M, \sigma > 0$ such that $\mathbb{E}|z_x|^p \le \frac{1}{2}\, p!\, M^{p-2} \sigma^2$ for any $p \ge 2$,
almost everywhere on $X$.
The above assumption is needed to control random quantities and is related to a noise assumption in
the regression model (2). It is clearly weaker than the often considered bounded output assumption
[25], and trivially verified in classification.
The last two assumptions describe the capacity (roughly speaking the "size") of the hypothesis space
induced by $K$ with respect to $\rho$ and the regularity of $f_{\mathcal{H}}$ with respect to $K$ and $\rho$. To discuss them,
we first need the following definition.
Definition 2 (Covariance operator and effective dimensions). We define the covariance operator as
$$C : \mathcal{H} \to \mathcal{H}, \qquad \langle f, Cg \rangle_{\mathcal{H}} = \int_X f(x)\, g(x)\, d\rho_X(x), \quad \forall f, g \in \mathcal{H}.$$
Moreover, for $\lambda > 0$, we define the random variable $\mathcal{N}_x(\lambda) = \big\langle K_x, (C + \lambda I)^{-1} K_x \big\rangle_{\mathcal{H}}$ with $x \in X$
distributed according to $\rho_X$ and let
$$\mathcal{N}(\lambda) = \mathbb{E}\, \mathcal{N}_x(\lambda), \qquad \mathcal{N}_\infty(\lambda) = \sup_{x \in X} \mathcal{N}_x(\lambda).$$
We add several comments. Note that $C$ corresponds to the second moment operator, but we refer to
it as the covariance operator with an abuse of terminology. Moreover, note that $\mathcal{N}(\lambda) = \mathrm{Tr}\big(C(C + \lambda I)^{-1}\big)$ (see [26]). This latter quantity, called effective dimension or degrees of freedom, can be seen
as a measure of the capacity of the hypothesis space. The quantity $\mathcal{N}_\infty(\lambda)$ can be seen to provide a
uniform bound on the leverage scores in Eq. (6). Clearly, $\mathcal{N}(\lambda) \le \mathcal{N}_\infty(\lambda)$ for all $\lambda > 0$.
Assumption 3. The kernel $K$ is measurable, $C$ is bounded. Moreover, for all $\lambda > 0$ and a $Q > 0$,
$$\mathcal{N}_\infty(\lambda) < \infty, \tag{7}$$
$$\mathcal{N}(\lambda) \le Q\, \lambda^{-\gamma}, \qquad 0 < \gamma \le 1. \tag{8}$$
Measurability of $K$ and boundedness of $C$ are minimal conditions to ensure that the covariance
operator is a well defined linear, continuous, self-adjoint, positive operator [25]. Condition (7) is
satisfied if the kernel is bounded, $\sup_{x \in X} K(x, x) = \kappa^2 < \infty$; indeed in this case $\mathcal{N}_\infty(\lambda) \le \kappa^2/\lambda$
for all $\lambda > 0$. Conversely, it can be seen that condition (7) together with boundedness of $C$ imply
that the kernel is bounded, indeed¹ $\kappa^2 \le 2\|C\|\, \mathcal{N}_\infty(\|C\|)$.
Boundedness of the kernel implies in particular that the operator $C$ is trace class and allows to
use tools from spectral theory. Condition (8) quantifies the capacity assumption and is related to
covering/entropy number conditions (see [25] for further details). In particular, it is known that
condition (8) is ensured if the eigenvalues $(\sigma_i)_i$ of $C$ satisfy a polynomial decaying condition $\sigma_i \sim i^{-1/\gamma}$. Note that, since the operator $C$ is trace class, Condition (8) always holds for $\gamma = 1$. Here,
for space constraints and in the interest of clarity we restrict to such a polynomial condition, but the
analysis directly applies to other conditions including exponential decay or a finite rank condition
[26]. Finally, we have the following regularity assumption.
Assumption 4. There exists $s \ge 0$, $1 \le R < \infty$, such that $\|C^{-s} f_{\mathcal{H}}\|_{\mathcal{H}} < R$.
The above condition is fairly standard, and can be equivalently formulated in terms of classical
concepts in approximation theory such as interpolation spaces [25]. Intuitively, it quantifies the
degree to which $f_{\mathcal{H}}$ can be well approximated by functions in the RKHS $\mathcal{H}$ and allows to control
the bias/approximation error of a learning solution. For $s = 0$, it is always satisfied. For larger
$s$, we are assuming $f_{\mathcal{H}}$ to belong to subspaces of $\mathcal{H}$ that are the images of the fractional compact
operators $C^s$. Such spaces contain functions which, expanded on a basis of eigenfunctions of $C$,
have larger coefficients in correspondence to large eigenvalues. Such an assumption is natural in
view of using techniques such as (4), which can be seen as a form of spectral filtering, that estimate
stable solutions by discarding the contribution of small eigenvalues [27]. In the next section, we
are going to quantify the quality of empirical solutions of Problem (1) obtained by schemes of the
form (5), in terms of the quantities in Assumptions 2, 3, 4.
¹ If $\mathcal{N}_\infty(\lambda)$ is finite, then $\mathcal{N}_\infty(\|C\|) = \sup_{x \in X} \|(C + \|C\| I)^{-1/2} K_x\|^2 \ge \frac{1}{2}\|C\|^{-1} \sup_{x \in X} \|K_x\|^2$, therefore $K(x, x) \le 2\|C\|\, \mathcal{N}_\infty(\|C\|)$.
3.1 Main results
In this section, we state and discuss our main results, starting with optimal finite sample error bounds
for regularized least squares based on plain and approximate leverage score based Nyström subsampling.
Theorem 1. Under Assumptions 1, 2, 3, and 4, let $\delta > 0$, $v = \min(s, 1/2)$, $p = 1 + 1/(2v + \gamma)$
and assume
$$n \ge 1655\,\kappa^2 + 223\,\kappa^2 \log\frac{6\kappa^2}{\delta} + \left(\frac{38p}{\|C\|} \log\frac{114\,\kappa^2 p}{\|C\|\,\delta}\right)^p.$$
Then, the following inequality holds with probability at least $1 - \delta$,
$$\mathcal{E}(\hat{f}_{\lambda,m}) - \mathcal{E}(f_{\mathcal{H}}) \le q^2\, n^{-\frac{2v+1}{2v+\gamma+1}}, \quad \text{with} \quad q = 6R\left(2\|C\| + \frac{M\kappa}{\sqrt{\|C\|}} + \sqrt{\frac{Q\,\sigma^2}{\|C\|^\gamma}}\right) \log\frac{6}{\delta}, \tag{9}$$
with $\hat{f}_{\lambda,m}$ as in (5), $\lambda = \|C\|\, n^{-\frac{1}{2v+\gamma+1}}$ and
1. for plain Nyström
$$m \ge \big(67 \vee 5\,\mathcal{N}_\infty(\lambda)\big) \log\frac{12\kappa^2}{\lambda\delta};$$
2. for ALS Nyström and $T$-approximate leverage scores with subsampling probabilities $P_\lambda$,
$t_0 \ge \frac{19\kappa^2}{n} \log\frac{12n}{\delta}$ and
$$m \ge \big(334 \vee 78\,T^2 \mathcal{N}(\lambda)\big) \log\frac{48n}{\delta}.$$
We add several comments. First, the above results can be shown to be optimal in a minimax sense.
Indeed, minimax lower bounds proved in [26, 28] show that the learning rate in (9) is optimal under the considered assumptions (see Thm. 2, 3 of [26], for a discussion on minimax lower bounds
see Sec. 2 of [26]). Second, the obtained bounds can be compared to those obtained for other regularized learning techniques. Techniques known to achieve optimal error rates include Tikhonov
regularization [26, 28, 29], iterative regularization by early stopping [30, 31], spectral cut-off regularization (a.k.a. principal component regression or truncated SVD) [30, 31], as well as regularized
stochastic gradient methods [32]. All these techniques are essentially equivalent from a statistical
point of view and differ only in the required computations. For example, iterative methods allow
for a computation of solutions corresponding to different regularization levels which is more efficient than Tikhonov or SVD based approaches. The key observation is that all these methods have
the same $O(n^2)$ memory requirement. In this view, our results show that randomized subsampling
methods can break such a memory barrier, and consequently achieve much better time complexity,
while preserving optimal learning guarantees. Finally, we can compare our results with previous
analysis of randomized kernel methods. As already mentioned, results close to those in Theorem 1
are given in [23, 24] in a fixed design setting. Our results extend and generalize the conclusions of
these papers to a general statistical learning setting. Relevant results are given in [8] for a different
approach, based on averaging KRLS solutions obtained splitting the data in m groups (divide and
conquer RLS). The analysis in [8] is only in expectation, but considers random design and shows
that the proposed method is indeed optimal provided the number of splits is chosen depending on
the effective dimension N (?). This is the only other work we are aware of establishing optimal
learning rates for randomized kernel approaches in a statistical learning setting. In comparison with
Nystr?om computational regularization the main disadvantage of the divide and conquer approach is
computational and in the model selection phase where solutions corresponding to different regularization parameters and number of splits usually need to be computed.
The proof of Theorem 1 is fairly technical and lengthy. It incorporates ideas from [26] and techniques developed to study spectral filtering regularization [30, 33]. In the next section, we briefly
sketch some main ideas and discuss how they suggest an interesting perspective on regularization
techniques including subsampling.
3.2 Proof sketch and a computational regularization perspective
A key step in the proof of Theorem 1 is an error decomposition, and corresponding bound, for any
fixed $\lambda$ and $m$. Indeed, it is proved in Theorem 2 and Proposition 2 that, for $\lambda > 0$, with probability
[Figure 1 (plots omitted): Validation errors associated to 20 × 20 grids of values for m (x axis) and λ (y axis) on pumadyn32nh (left), breast cancer (center) and cpuSmall (right).]
at least $1 - \delta$,
$$\big(\mathcal{E}(\hat{f}_{\lambda,m}) - \mathcal{E}(f_{\mathcal{H}})\big)^{1/2} \lesssim R\left(\sqrt{\frac{\mathcal{N}_\infty(\lambda)}{n}} + \sqrt{\frac{\sigma^2\, \mathcal{N}(\lambda)}{n}}\right) \log\frac{6}{\delta} + R\,\mathcal{C}(m)^{1/2+v} + R\,\lambda^{1/2+v}. \tag{10}$$
The first and last term in the right hand side of the above inequality can be seen as forms of sample
and approximation errors [25] and are studied in Lemma 4 and Theorem 2. The mid term can be
seen as a computational error and depends on the considered subsampling scheme. Indeed, it is
shown in Proposition 2 that $\mathcal{C}(m)$ can be taken as,
$$\mathcal{C}_{\mathrm{pl}}(m) = \min\left\{ t > 0 \ \Big|\ \big(67 \vee 5\,\mathcal{N}_\infty(t)\big) \log\frac{12\kappa^2}{t\delta} \le m \right\},$$
for the plain Nyström approach, and
$$\mathcal{C}_{\mathrm{ALS}}(m) = \min\left\{ \frac{19\kappa^2}{n} \log\frac{12n}{\delta} \le t \le \|C\| \ \Big|\ 78\,T^2\, \mathcal{N}(t) \log\frac{48n}{\delta} \le m \right\},$$
for the approximate leverage scores approach. The bounds in Theorem 1 follow by: 1) minimizing
in $\lambda$ the sum of the first and third term, 2) choosing $m$ so that the computational error is of the
same order of the other terms. Computational resources and regularization are then tailored to the
generalization properties of the data at hand. We add a few comments. First, note that the error bound
in (10) holds for a large class of subsampling schemes, as discussed in Section C.1 in the appendix.
Then specific error bounds can be derived developing computational error estimates. Second, the
error bounds in Theorem 2 and Proposition 2, and hence in Theorem 1, easily generalize to a larger
class of regularization schemes beyond Tikhonov approaches, namely spectral filtering [30]. For
space constraints, these extensions are deferred to a longer version of the paper. Third, we note that,
in practice, optimal data driven parameter choices, e.g. based on hold-out estimates [31], can be
used to adaptively achieve optimal learning bounds.
Finally, we observe that a different perspective is derived starting from inequality (10), and noting
that the role played by $m$ and $\lambda$ can also be exchanged. Letting $m$ play the role of a regularization
parameter, $\lambda$ can be set as a function of $m$ and $m$ tuned adaptively. For example, in the case of a
plain Nyström approach, if we set
$$\lambda = \frac{\log m}{m}, \quad \text{and} \quad m = 3\, n^{\frac{1}{2v+\gamma+1}} \log n,$$
then the obtained learning solution achieves the error bound in Eq. (9). As above, the subsampling
level can also be chosen by cross-validation. Interestingly, in this case by tuning $m$ we naturally
control computational resources and regularization. An advantage of this latter parameterization
is that, as described in the following, the solution corresponding to different subsampling levels is
easy to update using Cholesky rank-one update formulas [34]. As discussed in the next section,
in practice, a joint tuning over $m$ and $\lambda$ can be done starting from small $m$ and appears to be
advantageous both for error and computational performances.
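As an illustration of this parameterization (names are our assumptions), $m$ is grown over a grid while $\lambda$ follows the $\log(m)/m$ schedule, and the best $m$ is picked on a validation set:

```python
# Using the subsampling level m as the regularization parameter, with
# lambda = log(m)/m as suggested above; an illustrative sketch.
import numpy as np

def lambda_of_m(m):
    return np.log(m) / m

candidate_m = [10, 50, 100, 500, 1000]
schedule = [(m, lambda_of_m(m)) for m in candidate_m]
# For each pair (m, lam) one would fit the Nystrom estimator on the training
# split (ideally via the incremental updates of Algorithm 1, Section 4.1)
# and keep the smallest m whose validation error has plateaued.
```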
4 Incremental updates and experimental analysis
In this section, we first describe an incremental strategy to efficiently explore different subsampling
levels and then perform extensive empirical tests aimed in particular at: 1) investigating the statistical and computational benefits of considering varying subsampling levels, and 2) comparing the
Algorithm 1: Incremental Nyström KRLS.
Input: Dataset $(x_i, y_i)_{i=1}^n$, Subsampling $(\tilde{x}_j)_{j=1}^m$, Regularization Parameter $\lambda$.
Output: Nyström KRLS estimators $\{\tilde{\alpha}_1, \dots, \tilde{\alpha}_m\}$.
Compute $\gamma_1$; $R_1 \leftarrow \sqrt{\gamma_1}$;
for $t \in \{2, \dots, m\}$ do
    Compute $A_t$, $u_t$, $v_t$;
    $R_t \leftarrow \begin{pmatrix} R_{t-1} & 0 \\ 0 & 0 \end{pmatrix}$;
    $R_t \leftarrow \mathrm{cholup}(R_t, u_t, '+')$;
    $R_t \leftarrow \mathrm{cholup}(R_t, v_t, '-')$;
    $\tilde{\alpha}_t \leftarrow R_t^{-1}\big(R_t^{-\top}(A_t^{\top} y)\big)$;
end for

[Figure 2 (plot omitted; y axis: Time (s), x axis: m from 1 to 1000; curves: Incremental Nyström, Batch Nyström): Model selection time on the cpuSmall dataset. m ∈ [1, 1000] and T = 50, 10 repetitions.]
performance of the algorithm with respect to state of the art solutions on several large scale benchmark datasets. Throughout this section, we only consider a plain Nyström approach, deferring to
future work the analysis of leverage scores based sampling techniques. Interestingly, we will see
that such a basic approach can often provide state of the art performances.
4.1 Efficient incremental updates
Algorithm 1 efficiently computes solutions corresponding to different subsampling levels, by exploiting rank-one Cholesky updates [34]. The proposed procedure allows to efficiently compute a whole
regularization path of solutions, and hence perform fast model selection² (see Sect. A). In Algorithm 1, the function cholup is the Cholesky rank-one update formula available in many linear
algebra libraries. The total cost of the algorithm is $O(nm^2 + m^3)$ time to compute $\tilde{\alpha}_2, \dots, \tilde{\alpha}_m$,
while a naive non-incremental algorithm would require $O(nm^2 T + m^3 T)$, where $T$ is the number of
analyzed subsampling levels. The following are some quantities needed by the algorithm:
$A_1 = a_1$ and $A_t = (A_{t-1}\ a_t) \in \mathbb{R}^{n \times t}$, for any $2 \le t \le m$. Moreover, for any $1 \le t \le m$, $g_t = \sqrt{1 + \gamma_t}$ and
$$u_t = \big(c_t/(1 + g_t),\ g_t\big), \qquad a_t = \big(K(\tilde{x}_t, x_1), \dots, K(\tilde{x}_t, x_n)\big), \qquad c_t = A_{t-1}^{\top} a_t + \lambda n\, b_t,$$
$$v_t = \big(c_t/(1 + g_t),\ -1\big), \qquad b_t = \big(K(\tilde{x}_t, \tilde{x}_1), \dots, K(\tilde{x}_t, \tilde{x}_{t-1})\big), \qquad \gamma_t = a_t^{\top} a_t + \lambda n\, K(\tilde{x}_t, \tilde{x}_t).$$
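Where a library cholup is unavailable, a textbook LINPACK-style rank-one update/downdate (see [34]) can be implemented directly; the sketch below is our code, not the authors'. Each call costs $O(m^2)$, consistent with the stated $O(nm^2 + m^3)$ total.

```python
# Rank-one Cholesky update/downdate: given upper-triangular R with R^T R = A,
# return the factor of A + x x^T ('+') or A - x x^T ('-').
import numpy as np

def cholup(R, x, sign='+'):
    R, x = R.copy(), x.astype(float).copy()
    n = x.size
    for k in range(n):
        if sign == '+':
            r = np.hypot(R[k, k], x[k])
        else:
            r = np.sqrt(R[k, k] ** 2 - x[k] ** 2)  # requires a valid downdate
        c, s = r / R[k, k], x[k] / R[k, k]
        R[k, k] = r
        if k + 1 < n:
            if sign == '+':
                R[k, k + 1:] = (R[k, k + 1:] + s * x[k + 1:]) / c
            else:
                R[k, k + 1:] = (R[k, k + 1:] - s * x[k + 1:]) / c
            x[k + 1:] = c * x[k + 1:] - s * R[k, k + 1:]
    return R
```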
4.2 Experimental analysis
We empirically study the properties of Algorithm 1, considering a Gaussian kernel of width $\sigma$. The
selected datasets are already divided in a training and a test part³. We randomly split the training
part in a training set and a validation set (80% and 20% of the n training points, respectively) for
parameter tuning via cross-validation. The m subsampled points for Nyström approximation are selected uniformly at random from the training set. We report the performance of the selected model
on the fixed test set, repeating the process for several trials.
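The split protocol just described amounts to the following sketch (function names are our assumptions):

```python
# 80%/20% train/validation split of the training part, used to tune
# (m, lambda, sigma) by cross-validation; an illustrative sketch.
import numpy as np

def split_train_val(X, y, frac=0.8, seed=0):
    rng = np.random.default_rng(seed)
    perm = rng.permutation(X.shape[0])
    cut = int(frac * X.shape[0])
    tr, va = perm[:cut], perm[cut:]
    return X[tr], y[tr], X[va], y[va]
```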
Interplay between λ and m. We begin with a set of results showing that incrementally exploring different subsampling levels can yield very good performance while substantially reducing the
computational requirements. We consider the pumadyn32nh (n = 8192, d = 32), the breast
cancer (n = 569, d = 30), and the cpuSmall (n = 8192, d = 12) datasets⁴. In Figure 1, we
report the validation errors associated to a 20 × 20 grid of values for $\lambda$ and $m$. The $\lambda$ values are
logarithmically spaced, while the m values are linearly spaced. The ranges and kernel bandwidths,
chosen according to preliminary tests on the data, are $\sigma = 2.66$, $\lambda \in [10^{-7}, 1]$, $m \in [10, 1000]$ for
pumadyn32nh; $\sigma = 0.9$, $\lambda \in [10^{-12}, 10^{-3}]$, $m \in [5, 300]$ for breast cancer; and $\sigma = 0.1$,
$\lambda \in [10^{-15}, 10^{-12}]$, $m \in [100, 5000]$ for cpuSmall. The main observation that can be derived
from this first series of tests is that a small m is sufficient to obtain the same results achieved with
the largest m. For example, for pumadyn32nh it is sufficient to choose $m = 62$ and $\lambda = 10^{-7}$
to obtain an average test RMSE of 0.33 over 10 trials, which is the same as the one obtained using
$m = 1000$ and $\lambda = 10^{-3}$, with a 3-fold speedup of the joint training and validation phase. Also,
it is interesting to observe that for given values of $\lambda$, large values of m can decrease the performance. This observation is consistent with the results in Section 3.1, showing that m can play the
² The code for Algorithm 1 is available at lcsl.github.io/NystromCoRe.
³ In the following we denote by n the total number of points and by d the number of dimensions.
⁴ www.cs.toronto.edu/~delve and archive.ics.uci.edu/ml/datasets
Table 1: Test RMSE comparison for exact and approximated kernel methods. The results for KRLS,
Batch Nyström, RF and Fastfood are the ones reported in [6]. n_tr is the size of the training set.

Dataset             | n_tr   | d   | Incremental Nyström RBF | KRLS RBF | Batch Nyström RBF | RF RBF | Fastfood RBF | Fastfood FFT | KRLS Matern | Fastfood Matern
Insurance Company   | 5822   | 85  | 0.23180 ± 4×10⁻⁵        | 0.231    | 0.232             | 0.266  | 0.264        | 0.266        | 0.234       | 0.235
CPU                 | 6554   | 21  | 2.8466 ± 0.0497         | 7.271    | 6.758             | 7.103  | 7.366        | 4.544        | 4.345       | 4.211
CT slices (axial)   | 42800  | 384 | 7.1106 ± 0.0772         | NA       | 60.683            | 49.491 | 43.858       | 58.425       | NA          | 14.868
Year Prediction MSD | 463715 | 90  | 0.10470 ± 5×10⁻⁵        | NA       | 0.113             | 0.123  | 0.115        | 0.106        | NA          | 0.116
Forest              | 522910 | 54  | 0.9638 ± 0.0186         | NA       | 0.837             | 0.840  | 0.840        | 0.838        | NA          | 0.976
role of a regularization parameter. Similar results are obtained for breast cancer, where for
$\lambda = 4.28 \times 10^{-6}$ and $m = 300$ we obtain a 1.24% average classification error on the test set over
20 trials, while for $\lambda = 10^{-12}$ and $m = 67$ we obtain 1.86%. For cpuSmall, with $m = 5000$ and
$\lambda = 10^{-12}$ the average test RMSE over 5 trials is 12.2, while for $m = 2679$ and $\lambda = 10^{-15}$ it is
only slightly higher, 13.3, but computing its associated solution requires less than half of the time
and approximately half of the memory.
Regularization path computation. If the subsampling level m is used as a regularization parameter,
the computation of a regularization path corresponding to different subsampling levels becomes crucial during the model selection phase. A naive approach, that consists in recomputing the solutions
of Eq. 5 for each subsampling level, would require $O(m^2 n T + m^3 L T)$ computational time, where
T is the number of solutions with different subsampling levels to be evaluated and L is the number
of Tikhonov regularization parameters. On the other hand, by using the incremental Nyström algorithm the model selection time complexity is $O(m^2 n + m^3 L)$ for the whole regularization path.
We experimentally verify this speedup on cpuSmall with 10 repetitions, setting $m \in [1, 5000]$
and T = 50. The model selection times, measured on a server with 12 × 2.10GHz Intel® Xeon®
E5-2620 v2 CPUs and 132 GB of RAM, are reported in Figure 2. The result clearly confirms the
beneficial effects of incremental Nyström model selection on the computational time.
Predictive performance comparison. Finally, we consider the performance of the algorithm on
several large scale benchmark datasets considered in [6], see Table 1. $\sigma$ has been chosen on the
basis of preliminary data analysis. $m$ and $\lambda$ have been chosen by cross-validation, starting from
small subsampling values up to $m_{\max} = 2048$, and considering $\lambda \in [10^{-12}, 1]$. After model selection, we retrain the best model on the entire training set and compute the RMSE on the test set.
We consider 10 trials, reporting the performance mean and standard deviation. The results in Table
1 compare Nyström computational regularization with the following methods (as in [6]):
• Kernel Regularized Least Squares (KRLS): Not compatible with large datasets.
• Random Fourier features (RF): As in [4], with a number of random features D = 2048.
• Fastfood RBF, FFT and Matern kernel: As in [6], with D = 2048 random features.
• Batch Nyström: Nyström method [3] with uniform sampling and m = 2048.
The above results show that the proposed incremental Nyström approach behaves really well, matching state of the art predictive performances.
Acknowledgments
The work described in this paper is supported by the Center for Brains, Minds and Machines
(CBMM), funded by NSF STC award CCF-1231216; and by FIRB project RBFR12M3AC, funded
by the Italian Ministry of Education, University and Research.
References
[1] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond (Adaptive Computation and Machine Learning). MIT Press, 2002.
[2] Alex J. Smola and Bernhard Schölkopf. Sparse Greedy Matrix Approximation for Machine Learning. In ICML, pages 911–918. Morgan Kaufmann, 2000.
[3] C. Williams and M. Seeger. Using the Nyström Method to Speed Up Kernel Machines. In NIPS, pages 682–688. MIT Press, 2000.
[4] Ali Rahimi and Benjamin Recht. Random Features for Large-Scale Kernel Machines. In NIPS, pages 1177–1184. Curran Associates, Inc., 2007.
[5] J. Yang, V. Sindhwani, H. Avron, and M. W. Mahoney. Quasi-Monte Carlo Feature Maps for Shift-Invariant Kernels. In ICML, volume 32 of JMLR Proceedings, pages 485–493. JMLR.org, 2014.
[6] Quoc V. Le, Tamás Sarlós, and Alexander J. Smola. Fastfood - Computing Hilbert Space Expansions in loglinear time. In ICML, volume 28 of JMLR Proceedings, pages 244–252. JMLR.org, 2013.
[7] Si Si, Cho-Jui Hsieh, and Inderjit S. Dhillon. Memory Efficient Kernel Approximation. In ICML, volume 32 of JMLR Proceedings, pages 701–709. JMLR.org, 2014.
[8] Yuchen Zhang, John C. Duchi, and Martin J. Wainwright. Divide and Conquer Kernel Ridge Regression. In COLT, volume 30 of JMLR Proceedings, pages 592–617. JMLR.org, 2013.
[9] S. Kumar, M. Mohri, and A. Talwalkar. Ensemble Nyström Method. In NIPS, pages 1060–1068, 2009.
[10] Mu Li, James T. Kwok, and Bao-Liang Lu. Making Large-Scale Nyström Approximation Possible. In ICML, pages 631–638. Omnipress, 2010.
[11] Kai Zhang, Ivor W. Tsang, and James T. Kwok. Improved Nyström Low-rank Approximation and Error Analysis. In ICML, pages 1232–1239. ACM, 2008.
[12] Bo Dai, Bo Xie, Niao He, Yingyu Liang, Anant Raj, Maria-Florina Balcan, and Le Song. Scalable Kernel Methods via Doubly Stochastic Gradients. In NIPS, pages 3041–3049, 2014.
[13] Petros Drineas and Michael W. Mahoney. On the Nyström Method for Approximating a Gram Matrix for Improved Kernel-Based Learning. JMLR, 6:2153–2175, December 2005.
[14] A. Gittens and M. W. Mahoney. Revisiting the Nyström method for improved large-scale machine learning. 28:567–575, 2013.
[15] Shusen Wang and Zhihua Zhang. Improving CUR Matrix Decomposition and the Nyström Approximation via Adaptive Sampling. JMLR, 14(1):2729–2769, 2013.
[16] Petros Drineas, Malik Magdon-Ismail, Michael W. Mahoney, and David P. Woodruff. Fast approximation of matrix coherence and statistical leverage. JMLR, 13:3475–3506, 2012.
[17] Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, Richard Peng, and Aaron Sidford. Uniform Sampling for Matrix Approximation. In ITCS, pages 181–190. ACM, 2015.
[18] Shusen Wang and Zhihua Zhang. Efficient Algorithms and Error Analysis for the Modified Nyström Method. In AISTATS, volume 33 of JMLR Proceedings, pages 996–1004. JMLR.org, 2014.
[19] S. Kumar, M. Mohri, and A. Talwalkar. Sampling methods for the Nyström method. JMLR, 13(1):981–1006, 2012.
[20] Corinna Cortes, Mehryar Mohri, and Ameet Talwalkar. On the Impact of Kernel Approximation on Learning Accuracy. In AISTATS, volume 9 of JMLR Proceedings, pages 113–120. JMLR.org, 2010.
[21] R. Jin, T. Yang, M. Mahdavi, Y. Li, and Z. Zhou. Improved Bounds for the Nyström Method With Application to Kernel Classification. Information Theory, IEEE Transactions on, 59(10), Oct 2013.
[22] Tianbao Yang, Yu-Feng Li, Mehrdad Mahdavi, Rong Jin, and Zhi-Hua Zhou. Nyström Method vs Random Fourier Features: A Theoretical and Empirical Comparison. In NIPS, pages 485–493, 2012.
[23] Francis Bach. Sharp analysis of low-rank kernel matrix approximations. In COLT, volume 30, 2013.
[24] A. Alaoui and M. W. Mahoney. Fast randomized kernel methods with statistical guarantees. arXiv, 2014.
[25] I. Steinwart and A. Christmann. Support Vector Machines. Springer New York, 2008.
[26] Andrea Caponnetto and Ernesto De Vito. Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 7(3):331–368, 2007.
[27] L. Lo Gerfo, Lorenzo Rosasco, Francesca Odone, Ernesto De Vito, and Alessandro Verri. Spectral Algorithms for Supervised Learning. Neural Computation, 20(7):1873–1897, 2008.
[28] I. Steinwart, D. Hush, and C. Scovel. Optimal rates for regularized least squares regression. In COLT, 2009.
[29] S. Mendelson and J. Neeman. Regularization in kernel learning. The Annals of Statistics, 38(1), 2010.
[30] F. Bauer, S. Pereverzev, and L. Rosasco. On regularization algorithms in learning theory. Journal of Complexity, 23(1):52–72, 2007.
[31] A. Caponnetto and Yuan Yao. Adaptive rates for regularization operators in learning theory. Analysis and Applications, 08, 2010.
[32] Y. Ying and M. Pontil. Online gradient descent learning algorithms. Foundations of Computational Mathematics, 8(5):561–596, 2008.
[33] Alessandro Rudi, Guillermo D. Canas, and Lorenzo Rosasco. On the Sample Complexity of Subspace Learning. In NIPS, pages 2067–2075, 2013.
[34] Gene H. Golub and Charles F. Van Loan. Matrix computations, volume 3. JHU Press, 2012.
Anna Choromanska
Courant Institute of Mathematical Sciences
New York, NY, USA
achoroma@cims.nyu.edu
John Langford
Microsoft Research
New York, NY, USA
jcl@microsoft.com
Abstract
We study the problem of multiclass classification with an extremely large number
of classes (k), with the goal of obtaining train and test time complexity logarithmic in the number of classes. We develop top-down tree construction approaches
for constructing logarithmic depth trees. On the theoretical front, we formulate a
new objective function, which is optimized at each node of the tree and creates
dynamic partitions of the data which are both pure (in terms of class labels) and
balanced. We demonstrate that under favorable conditions, we can construct logarithmic depth trees that have leaves with low label entropy. However, the objective
function at the nodes is challenging to optimize computationally. We address the
empirical problem with a new online decision tree construction procedure. Experiments demonstrate that this online algorithm quickly achieves improvement in
test error compared to more common logarithmic training time approaches, which
makes it a plausible method in computationally constrained large-k applications.
1
Introduction
The central problem of this paper is computational complexity in a setting where the number of
classes k for multiclass prediction is very large. Such problems occur in natural language (Which
translation is best?), search (What result is best?), and detection (Who is that?) tasks. Almost all
machine learning algorithms (with the exception of decision trees) have running times for multiclass
classification which are O(k) with a canonical example being one-against-all classifiers [1].
In this setting, the most efficient possible accurate approach is given by information theory [2].
In essence, any multiclass classification algorithm must uniquely specify the bits of all labels that
it predicts correctly on. Consequently, Kraft's inequality ([2] equation 5.6) implies that the expected computational complexity of predicting correctly is $\Omega(H(Y))$ per example where $H(Y)$ is
the Shannon entropy of the label. For the worst case distribution on k classes, this implies $\Omega(\log(k))$
computation is required.
Hence, our goal is achieving $O(\log(k))$ computational time per example¹ for both training and
testing, while effectively using online learning algorithms to minimize passes over the data.
The goal of logarithmic (in k) complexity naturally motivates approaches that construct a logarithmic depth hierarchy over the labels, with one label per leaf. While this hierarchy is sometimes
available through prior knowledge, in many scenarios it needs to be learned as well. This naturally
leads to a partition problem which arises at each node in the hierarchy. The partition problem is
finding a classifier: $c : X \to \{-1, 1\}$ which divides examples into two subsets with a purer set of
labels than the original set. Definitions of purity vary, but canonical examples are the number of
labels remaining in each subset, or softer notions such as the average Shannon entropy of the class
labels. Despite resulting in a classifier, this problem is fundamentally different from standard binary
classification. To see this, note that replacing $c(x)$ with $-c(x)$ is very bad for binary classification,
but has no impact on the quality of a partition². The partition problem is fundamentally non-convex
¹ Throughout the paper by logarithmic time we mean logarithmic time per example.
² The problem bears parallels to clustering in this regard.
for symmetric classes since the average $\frac{c(x) - c(x)}{2}$ of $c(x)$ and $-c(x)$ is a poor partition (the always-0
function places all points on the same side).
The choice of partition matters in problem dependent ways. For example, consider examples on a
line with label i at position i and threshold classifiers. In this case, trying to partition class labels
{1, 3} from class label 2 results in poor performance.
The partition problem is typically solved for decision tree learning via an enumerate-and-test approach amongst a small set of possible classifiers (see e.g. [3]). In the multiclass setting, it is
desirable to achieve substantial error reduction for each node in the tree which motivates using a richer set of classifiers in the nodes to minimize the number of nodes, and thereby decrease the computational complexity. The main theoretical contribution of this work is to establish a boosting algorithm for learning trees with O(k) nodes and O(log k) depth, thereby addressing the goal of logarithmic time train and test complexity. Our main theoretical result,
presented in Section 2.3, generalizes a binary boosting-by-decision-tree theorem [4] to multiclass boosting. As in all boosting results, performance is critically dependent on the quality
of the weak learner, supporting intuition that we need sufficiently rich partitioners at nodes.
The approach uses a new objective for decision tree learning, which we optimize at each
node of the tree. The objective and its theoretical properties are presented in Section 2.
A complete system with multiple partitions could be constructed top down (as the boosting theorem) or bottom up (as Filter tree [5]). A bottom up partition process appears impossible with representational constraints as shown in Section 6 in the Supplementary material so
we focus on top-down tree creation.
Whenever there are representational constraints on partitions (such as linear classifiers), finding a strong partition function requires an efficient search over this set of classifiers. Efficient searches over large function classes are routinely performed via gradient descent techniques for supervised learning, so they seem like a natural candidate. In existing literature,
examples for doing this exist when the problem is indeed binary, or when there is a prespecified hierarchy over the labels and we just need to find partitioners aligned with that hierarchy.
Neither of these cases applies: we have multiple labels and want to dynamically create the choice of partition, rather than assuming that one was handed to us. Does there exist a purity criterion amenable to a gradient descent approach? The precise objective studied in theory
fails this test due to its discrete nature, and even natural approximations are challenging to tractably
optimize under computational constraints. As a result, we use the theoretical objective as a motivation and construct a new Logarithmic Online Multiclass Tree (LOMtree) algorithm for empirical
evaluation.

[Figure 1 (plot omitted; y axis: accuracy, x axis: number of classes, from 26 to 105033; curves: OAA, LOMtree): A comparison of One-Against-All (OAA) and the Logarithmic Online Multiclass Tree (LOMtree) with One-Against-All constrained to use the same training time as the LOMtree by dataset truncation and LOMtree constrained to use the same representation complexity as One-Against-All. As the number of class labels grows, the problem becomes harder and the LOMtree becomes more dominant.]
Creating a tree in an online fashion creates a new class of problems. What if some node is initially
created but eventually proves useless because no examples go to it? At best this results in a wasteful
solution, while in practice it starves other parts of the tree which need representational complexity.
To deal with this, we design an efficient process for recycling orphan nodes into locations where
they are needed, and prove that the number of times a node is recycled is at most logarithmic in the
number of examples. The algorithm is described in Section 3 and analyzed in Section 3.1.
And is it effective? Given the inherent non-convexity of the partition problem this is unavoidably
an empirical question which we answer on a range of datasets varying from 26 to 105K classes in
Section 4. We find that under constrained training times, this approach is quite effective compared
to all baselines while dominating other O(log k) train time approaches.
What's new? To the best of our knowledge, the splitting criterion, the boosting statement, the
LOMtree algorithm, the swapping guarantee, and the experimental results are all new here.
1.1 Prior Work
Only a few authors address logarithmic time training. The Filter tree [5] addresses consistent (and
robust) multiclass classification, showing that it is possible in the statistical limit. The Filter tree
does not address the partition problem as we do here which as shown in our experimental section is
often helpful. The partition finding problem is addressed in the conditional probability tree [6], but
that paper addresses conditional probability estimation. Conditional probability estimation can be
converted into multiclass prediction [7], but doing so is not a logarithmic time operation.
Quite a few authors have addressed logarithmic testing time while allowing training time to be O(k)
or worse. While these approaches are intractable on our larger scale problems, we describe them
here for context. The partition problem can be addressed by recursively applying spectral clustering
on a confusion graph [8] (other clustering approaches include [9]). Empirically, this approach has
been found to sometimes lead to badly imbalanced splits [10]. In the context of ranking, another
approach uses k-means hierarchical clustering to recover the label sets for a given partition [11].
The more recent work [12] on the multiclass classification problem addresses it via sparse output
coding by tuning high-cardinality multiclass categorization into a bit-by-bit decoding problem. The
authors decouple the learning processes of coding matrix and bit predictors and use probabilistic
decoding to decode the optimal class label. The authors however specify a class similarity which is
O(k 2 ) to compute (see Section 2.1.1 in [12]), and hence this approach is in a different complexity
class than ours (this is also born out experimentally). The variant of the popular error correcting
output code scheme for solving multi-label prediction problems with large output spaces under the
assumption of output sparsity was also considered in [13]. Their approach in general requires O(k)
running time to decode since, in essence, the fit of each label to the predictions must be checked
and there are O(k) labels. Another approach [14] proposes iterative least-squares-style algorithms
for multi-class (and multi-label) prediction with relatively large number of examples and data dimensions, and the work of [15] focusing in particular on the cost-sensitive multiclass classification.
Both approaches however have O(k) training time.
Decision trees are naturally structured to allow logarithmic time prediction. Traditional decision
trees often have difficulties with a large number of classes because their splitting criteria are not
well-suited to the large class setting. However, newer approaches [16, 17] have addressed this effectively at significant scales in the context of multilabel classification (multilabel learning, with
missing labels, is also addressed in [18]). More specifically, the first work [16] performs brute force
optimization of a multilabel variant of the Gini index defined over the set of positive labels in the
node and assumes label independence during random forest construction. Their method makes fast
predictions, however has high training costs [17]. The second work [17] optimizes a rank sensitive
loss function (Discounted Cumulative Gain). Additionally, a well-known problem with hierarchical
classification is that the performance significantly deteriorates lower in the hierarchy [19] which
some authors solve by biasing the training distribution to reduce error propagation while simultaneously combining bottom-up and top-down approaches during training [20].
The reduction approach we use for optimizing partitions implicitly optimizes a differential objective.
A non-reductive approach to this has been tried previously [21] on other objectives yielding good
results in a different context.
2 Framework and theoretical analysis
In this section we describe the essential elements of the approach, and outline the theoretical properties of the resulting framework. We begin with high-level ideas.
2.1 Setting
We employ a hierarchical approach for learning a multiclass decision tree structure, training this
structure in a top-down fashion. We assume that we receive examples $x \in \mathcal{X} \subseteq \mathbb{R}^d$, with labels
$y \in \{1, 2, \dots, k\}$. We also assume access to a hypothesis class $\mathcal{H}$ where each $h \in \mathcal{H}$ is a binary
classifier, $h : \mathcal{X} \mapsto \{-1, 1\}$. The overall objective is to learn a tree of depth $O(\log k)$, where
each node in the tree consists of a classifier from $\mathcal{H}$. The classifiers are trained in such a way that
$h_n(x) = 1$ ($h_n$ denotes the classifier in node $n$ of the tree³) means that the example $x$ is sent to the
right subtree of node $n$, while $h_n(x) = -1$ sends $x$ to the left subtree. When we reach a leaf, we
predict according to the label with the highest frequency amongst the examples reaching that leaf.
³ Further in the paper we skip index n whenever it is clear from the context that we consider a fixed tree
node.
In the interest of computational complexity, we want to encourage the number of examples going
to the left and right to be fairly balanced. For good statistical accuracy, we want to send examples
of class i almost exclusively to either the left or the right subtree, thereby refining the purity of the
class distributions at subsequent levels in the tree. The purity of a tree node is therefore a measure
of whether the examples of each class reaching the node are then mostly sent to its one child node
(pure split) or otherwise to both children (impure split). The formal definitions of balancedness and
purity are introduced in Section 2.2. An objective expressing both criteria⁴ and resulting theoretical
properties are illustrated in the following sections. A key consideration in picking this objective is
that we want to effectively optimize it over hypotheses $h \in \mathcal{H}$, while streaming over examples in
an online fashion⁵. This seems unsuitable with some of the more standard decision tree objectives
such as Shannon or Gini entropy, which leads us to design a new objective. At the same time, we
show in Section 2.3 that under suitable assumptions, optimizing the objective also leads to effective
reduction of the average Shannon entropy over the entire tree.
2.2 An objective and analysis of resulting partitions
We now define a criterion to measure the quality of a hypothesis $h \in \mathcal{H}$ in creating partitions at a
fixed node $n$ in the tree. Let $\pi_i$ denote the proportion of label $i$ amongst the examples reaching this
node. Let $P(h(x) > 0)$ and $P(h(x) > 0 \mid i)$ denote the fraction of examples reaching $n$ for which
$h(x) > 0$, marginally and conditional on class $i$ respectively. Then we define the objective⁶:
$$J(h) = 2 \sum_{i=1}^{k} \pi_i \left| P(h(x) > 0) - P(h(x) > 0 \mid i) \right|. \tag{1}$$
We aim to maximize the objective J(h) to obtain high quality partitions. Intuitively, the objective
encourages the fraction of examples going to the right from class i to be substantially different from
the background fraction for each class i. As a concrete simple scenario, if P (h(x) > 0) = 0.5 for
some hypothesis h, then the objective prefers P (h(x) > 0|i) to be as close to 0 or 1 as possible for
each class i, leading to pure partitions. We now make these intuitions more formal.
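Empirically, $J(h)$ is straightforward to estimate from a sample. The sketch below (our names; labels in $\{0, \dots, k-1\}$, scores equal to $h(x)$ per example) implements Eq. (1):

```python
# Empirical estimate of the objective J(h) of Eq. (1).
import numpy as np

def J_objective(scores, labels, k):
    right = (scores > 0).astype(float)
    p_right = right.mean()                  # P(h(x) > 0)
    J = 0.0
    for i in range(k):
        mask = labels == i
        if mask.any():
            pi_i = mask.mean()              # class proportion pi_i
            p_right_i = right[mask].mean()  # P(h(x) > 0 | i)
            J += 2.0 * pi_i * abs(p_right - p_right_i)
    return J
```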
Definition 1 (Purity). The hypothesis $h \in \mathcal{H}$ induces a pure split if
$$\alpha := \sum_{i=1}^{k} \pi_i \min\big(P(h(x) > 0 \mid i),\ P(h(x) < 0 \mid i)\big) \le \delta,$$
where $\delta \in [0, 0.5)$, and $\delta$ is called the purity factor.
In particular, a partition is called maximally pure if $\delta = 0$, meaning that each class is sent exclusively
to the left or the right. We now define a similar definition for the balancedness of a split.
Definition 2 (Balancedness). The hypothesis $h \in \mathcal{H}$ induces a balanced split if
$$c \le \underbrace{P(h(x) > 0)}_{=\beta} \le 1 - c,$$
where $c \in (0, 0.5]$, and $\beta$ is called the balancing factor.
A partition is called maximally balanced if $\beta = 0.5$, meaning that an equal number of examples
are sent to the left and right children of the partition. The balancing factor and the purity factor
are related as shown in Lemma 1 (the proofs of Lemma 1 and the following lemma (Lemma 2) are
deferred to the Supplementary material).
Lemma 1. For any hypothesis $h$, and any distribution over examples $(x, y)$, the purity factor $\delta$ and
the balancing factor $\beta$ satisfy $\delta \le \min\{(2 - J(h))/(4\beta) - \beta,\ 0.5\}$.
A partition is called maximally pure and balanced if it satisfies both $\delta = 0$ and $\beta = 0.5$. We see
that $J(h) = 1$ for a hypothesis $h$ inducing a maximally pure and balanced partition as captured in
the next lemma. Of course we do not expect to have hypotheses producing maximally pure and
balanced splits in practice.
Lemma 2. For any hypothesis $h : \mathcal{X} \mapsto \{-1, 1\}$, the objective $J(h)$ satisfies $J(h) \in [0, 1]$.
Furthermore, if $h$ induces a maximally pure and balanced partition then $J(h) = 1$.
⁴ We want an objective to achieve its optimum for a simultaneously pure and balanced split. The standard
entropy-based criteria, such as Shannon or Gini entropy, as well as the criterion we will propose, posed in
Equation 1, satisfy this requirement (for the entropy-based criteria see [4], for our criterion see Lemma 2).
⁵ Our algorithm could also be implemented as batch or streaming, where in case of the latter one can for
example make one pass through the data per every tree level, however for massive datasets making multiple
passes through the data is computationally costly, further justifying the need for an online approach.
⁶ The proposed objective function exhibits some similarities with the so-called Carnap's measure [22, 23]
used in probability and inductive logic.
2.3 Quality of the entire tree
The above section helps us understand the quality of an individual split produced by effectively
maximizing J(h). We next reason about the quality of the entire tree as we add more and more
nodes. We measure the quality of trees using the average entropy over all the leaves in the tree, and
track the decrease of this entropy as a function of the number of nodes. Our analysis extends the
theoretical analysis in [4], originally developed to show the boosting properties of the decision trees
for binary classification problems, to the multiclass classification setting.
Given a tree $T$, we consider the entropy function $G_t$ as the measure of the quality of the tree:
$$G_t = \sum_{l \in \mathcal{L}} w_l \sum_{i=1}^{k} \pi_{l,i} \ln\frac{1}{\pi_{l,i}},$$
where the $\pi_{l,i}$'s are the probabilities that a randomly chosen data point $x$ drawn from $\mathcal{P}$, where $\mathcal{P}$ is
a fixed target distribution over $\mathcal{X}$, has label $i$ given that $x$ reaches node $l$, $\mathcal{L}$ denotes the set of all
tree leaves, $t$ denotes the number of internal tree nodes, and $w_l$ is the weight of leaf $l$ defined as the
probability that a randomly chosen $x$ drawn from $\mathcal{P}$ reaches leaf $l$ (note that $\sum_{l \in \mathcal{L}} w_l = 1$).
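For reference, $G_t$ can be computed from per-leaf statistics as in the sketch below (names are our assumptions):

```python
# Average leaf entropy G_t: leaf_weights w_l sum to one, and
# leaf_label_probs[l] holds (pi_{l,1}, ..., pi_{l,k}) for leaf l.
import numpy as np

def tree_entropy(leaf_weights, leaf_label_probs):
    G = 0.0
    for w, probs in zip(leaf_weights, leaf_label_probs):
        p = np.asarray(probs, dtype=float)
        p = p[p > 0]                  # treat 0 * ln(1/0) as 0
        G += w * np.sum(p * np.log(1.0 / p))
    return G
```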
We next state the main theoretical result of this paper (it is captured in Theorem 1). We adopt
the weak learning framework. The weak hypothesis assumption, captured in Definition 3, posits that
each node of the tree T has a hypothesis h in its hypothesis class H which guarantees simultaneously
a "weak" purity and a "weak" balancedness of the split on any distribution $\mathcal{P}$ over $\mathcal{X}$. Under this
assumption, one can use the new decision tree approach to drive the error below any threshold.
Definition 3 (Weak Hypothesis Assumption). Let $m$ denote any node of the tree $T$, and let $\beta_m = P(h_m(x) > 0)$ and $P_{m,i} = P(h_m(x) > 0 \mid i)$. Furthermore, let $\gamma \in \mathbb{R}^+$ be such that for all $m$,
$\gamma \in (0, \min(\beta_m, 1 - \beta_m)]$. We say that the weak hypothesis assumption is satisfied when for any
distribution $\mathcal{P}$ over $\mathcal{X}$ at each node $m$ of the tree $T$ there exists a hypothesis $h_m \in \mathcal{H}$ such that
$$J(h_m)/2 = \sum_{i=1}^{k} \pi_{m,i}\, |P_{m,i} - \beta_m| \ge \gamma.$$
Theorem 1. Under the Weak Hypothesis Assumption, for any $\alpha \in [0, 1]$, to obtain $G_t \le \alpha$ it suffices
to make $t \ge (1/\alpha)^{\frac{4(1-\gamma)^2 \ln k}{\gamma^2}}$ splits.
We defer the proof of Theorem 1 to the Supplementary material and provide its sketch now. The
analysis studies a tree construction algorithm where we recursively find the leaf node with the highest
weight, and choose to split it into two children. Let n be the heaviest leaf at time t. Consider splitting
it to two children. The contribution of node n to the tree entropy changes after it splits. This change
(entropy reduction) corresponds to a gap in Jensen's inequality applied to the concave function,
and thus can further be lower-bounded (we use the fact that Shannon entropy is strongly concave
with respect to the $\ell_1$-norm (see e.g., Example 2.5 in Shalev-Shwartz [24])). The obtained lower-bound
turns out to depend proportionally on $J(h_n)^2$. This implies that the larger the objective $J(h_n)$
is at time $t$, the larger the entropy reduction ends up being, which further reinforces intuitions to
maximize J. In general, it might not be possible to find any hypothesis with a large enough objective
J(hn ) to guarantee sufficient progress at this point so we appeal to a weak learning assumption. This
assumption can be used to further lower-bound the entropy reduction and prove Theorem 1.
3 The LOMtree Algorithm
The objective function of Section 2 has another convenient form which yields a simple online algorithm for tree construction and training. Note that Equation 1 can be written (details are shown in
Section 12 in the Supplementary material) as
$$J(h) = 2\, \mathbb{E}_i\left[\left| \mathbb{E}_x[\mathbf{1}(h(x) > 0)] - \mathbb{E}_x[\mathbf{1}(h(x) > 0) \mid i] \right|\right].$$
Maximizing this objective is a discrete optimization problem that can be relaxed as follows
$$J(h) = 2\, \mathbb{E}_i\left[\left| \mathbb{E}_x[h(x)] - \mathbb{E}_x[h(x) \mid i] \right|\right],$$
where $\mathbb{E}_x[h(x) \mid i]$ is the expected score of class $i$.
We next explain our empirical approach for maximizing the relaxed objective. The empirical estimates of the expectations can be easily stored and updated online in every tree node. The decision
whether to send an example reaching a node to its left or right child node is based on the sign of the
difference between the two expectations: $\mathbb{E}_x[h(x)]$ and $\mathbb{E}_x[h(x) \mid y]$, where $y$ is the label of the data
point, i.e. when $\mathbb{E}_x[h(x)] - \mathbb{E}_x[h(x) \mid y] > 0$ the data point is sent to the left, else it is sent to the right.
This procedure is conveniently demonstrated on a toy example in Section 13 in the Supplement.
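A minimal sketch of the per-node bookkeeping behind this rule is given below; the class is an illustrative simplification of ours, not the authors' implementation (the full procedure, including training of the node regressor, is Algorithm 1).

```python
# Per-node running estimates of E_x[h(x)] and E_x[h(x)|y], and the routing
# decision E_x[h(x)] - E_x[h(x)|y] > 0 => send left; a simplified sketch.
from collections import defaultdict

class NodeStats:
    def __init__(self):
        self.m = defaultdict(float)   # m[y]: sum of scores h(x) for class y
        self.n = defaultdict(int)     # n[y]: number of class-y examples seen

    def update_and_route(self, score, y):
        total = sum(self.n.values())
        E_all = sum(self.m.values()) / total if total else 0.0  # E_x[h(x)]
        E_y = self.m[y] / self.n[y] if self.n[y] else 0.0       # E_x[h(x)|y]
        go_left = (E_all - E_y) > 0
        self.m[y] += score
        self.n[y] += 1
        return go_left
```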
During training, the algorithm assigns a unique label to each node of the tree which is currently a
leaf. This is the label with the highest frequency amongst the examples reaching that leaf.
Algorithm 1 LOMtree algorithm (online tree training)
Input: regression algorithm R, max number of tree non-leaf nodes T, swap resistance RS
Subroutine SetNode(v)
    m_v = ∅ (m_v(y) - sum of the scores for class y)
    l_v = ∅ (l_v(y) - number of points of class y reaching v)
    n_v = ∅ (n_v(y) - number of points of class y which are used to train the regressor in v)
    e_v = ∅ (e_v(y) - expected score for class y)
    E_v = 0 (expected total score)
    C_v = 0 (the size of the smallest leaf⁷ in the subtree with root v)
Subroutine UpdateC(v)
    While (v ≠ r AND C_PARENT(v) ≠ C_v)
        v = PARENT(v); C_v = min(C_LEFT(v), C_RIGHT(v))⁸
Subroutine Swap(v)
    Find a leaf s for which (C_s = C_r)
    sPA = PARENT(s); sGPA = GRANDPA(s); sSIB = SIBLING(s)⁹
    If (sPA = LEFT(sGPA)) LEFT(sGPA) = sSIB Else RIGHT(sGPA) = sSIB
    UpdateC(sSIB); SetNode(s); LEFT(v) = s; SetNode(sPA); RIGHT(v) = sPA
Create root r = 0: SetNode(r); t = 1
For each example (x, y) do
    Set j = r
    While j is not a leaf do
        If (l_j(y) = ∅)
            m_j(y) = 0; l_j(y) = 0; n_j(y) = 0; e_j(y) = 0
        If (E_j > e_j(y)) c = -1 Else c = 1
        Train h_j with example (x, c): R(x, c)
        l_j(y)++; n_j(y)++; m_j(y) += h_j(x); e_j(y) = m_j(y)/n_j(y); E_j = (Σ_{i=1}^k m_j(i)) / (Σ_{i=1}^k n_j(i))¹⁰
        Set j to the child of j corresponding to h_j
    If (j is a leaf)
        l_j(y)++
        If (l_j has at least 2 non-zero entries)
            If (t < T OR C_j - max_i l_j(i) > RS(C_r + 1))
                If (t < T)
                    SetNode(LEFT(j)); SetNode(RIGHT(j)); t++
                Else Swap(j)
                C_LEFT(j) = ⌊C_j/2⌋; C_RIGHT(j) = C_j - C_LEFT(j); UpdateC(LEFT(j))
    C_j++
testing, a test example is pushed down the tree along the path from the root to the leaf, where in each
non-leaf node of the path its regressor directs the example either to the left or right child node. The
test example is then labeled with the label assigned to the leaf that this example descended to.
The training algorithm is detailed in Algorithm 1, where each tree node contains a classifier (we use linear classifiers); i.e., h_j is the regressor stored in node j and h_j(x) is the value of the prediction of h_j on example x.^11 The stopping criterion for expanding the tree is when the number of non-leaf nodes reaches a threshold T.
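For concreteness, the following Python sketch mirrors the per-example statistics update inside the training loop of Algorithm 1. The node interface (`train`, `h`, and the statistic fields) is an assumption made for illustration, and E_j is maintained via two running sums as footnote 10 describes.

```python
def update_node(node, x, y):
    """One Algorithm-1-style update at a non-leaf node (illustrative sketch).

    Assumed node fields: per-class dicts m, l, n, e; running totals sum_m,
    sum_n; overall mean E; an online regressor exposed via node.train(x, c)
    and node.h(x). None of these names come from the paper's code.
    """
    for d in (node.m, node.l, node.n, node.e):
        d.setdefault(y, 0.0)
    c = -1 if node.E > node.e[y] else 1       # binary target for the regressor
    node.train(x, c)                          # R(x, c) in Algorithm 1
    score = node.h(x)
    node.l[y] += 1
    node.n[y] += 1
    node.m[y] += score
    node.e[y] = node.m[y] / node.n[y]
    node.sum_m += score                       # running numerator of E_j
    node.sum_n += 1                           # running denominator of E_j
    node.E = node.sum_m / node.sum_n          # O(1) update, as in footnote 10
    # Descend to the child corresponding to h_j (sign convention illustrative).
    return node.left if score > 0 else node.right
```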
3.1 Swapping
Consider a scenario where the current training example descends to leaf j. The leaf can split (create
two children) if the examples that reached it in the past were coming from at least two different
classes. However, if the number of non-leaf nodes of the tree reaches threshold T , no more nodes
can be expanded and thus j cannot create children. Since the tree construction is done online, some
nodes created at early stages of training may end up useless because no examples reach them later on.
^7 The smallest leaf is the one with the smallest total number of data points reaching it in the past.
^8 PARENT(v), LEFT(v) and RIGHT(v) denote resp. the parent, and the left and right child of node v.
^9 GRANDPA(v) and SIBLING(v) denote respectively the grandparent of node v and the sibling of node v, i.e. the node which has the same parent as v.
^10 In the implementation both sums are stored as variables, thus updating E_v takes O(1) computations.
^11 We also refer to this prediction value as the "score" in this section.
Figure 2: Illustration of the swapping procedure. Left: before the swap, right: after the swap.
This prevents potentially useful splits such as the one at leaf j. This problem can be solved by recycling orphan nodes (subroutine Swap in Algorithm 1). The general idea behind node recycling is to allow nodes to split if a certain condition is met. In particular, node j splits if the following holds:
C_j − max_{i∈{1,2,...,k}} l_j(i) > R_S (C_r + 1),    (2)
where r denotes the root of the entire tree, C_j is the size of the smallest leaf in the subtree with root j (the smallest leaf being the one with the smallest total number of data points reaching it in the past), l_j is a k-dimensional vector of non-negative integers whose i-th element counts the number of data points with label i reaching leaf j in the past, and finally R_S is a "swap resistance". The subtraction of max_{i∈{1,2,...,k}} l_j(i) in Equation 2 ensures that a pure node will not be recycled.
If the condition in Inequality 2 is satisfied, the swap of the nodes is performed: an orphan leaf s, which was reached by the smallest number of examples in the past, and its parent sPA are detached from the tree and become children of node j, whereas the old sibling sSIB of the orphan node s becomes a direct child of the old grandparent sGPA. The swapping procedure is shown in Figure 2.
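The following sketch renders the recycling step in Python; the node attributes and the `find_smallest_leaf` helper are assumptions made for illustration, and the pointer rewiring follows the Swap subroutine of Algorithm 1 (the UpdateC bookkeeping is omitted here).

```python
def maybe_swap(j, root, RS, find_smallest_leaf):
    """Recycle an orphan node into leaf j when Inequality (2) holds (sketch).

    Nodes are assumed to carry C (smallest-leaf size), the per-class count
    dict l, and parent/left/right pointers; find_smallest_leaf returns a
    leaf s with C_s = C_root. All names here are illustrative.
    """
    if not j.l or j.C - max(j.l.values()) <= RS * (root.C + 1):
        return False                       # condition (2) fails: no swap
    s = find_smallest_leaf(root)
    s_pa, s_gpa, s_sib = s.parent, s.parent.parent, sibling(s)
    # Promote the old sibling to the grandparent, detaching s and s_pa.
    if s_gpa.left is s_pa:
        s_gpa.left = s_sib
    else:
        s_gpa.right = s_sib
    s_sib.parent = s_gpa
    # Reattach the recycled pair as the fresh children of j.
    reset_node(s); reset_node(s_pa)
    j.left, j.right, s.parent, s_pa.parent = s, s_pa, j, j
    return True

def sibling(v):
    p = v.parent
    return p.right if p.left is v else p.left

def reset_node(v):
    # Plays the role of SetNode in Algorithm 1: clear statistics and children.
    v.m, v.l, v.n, v.e = {}, {}, {}, {}
    v.sum_m, v.sum_n, v.E = 0.0, 0, 0.0
    v.left = v.right = None
```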
The condition captured in Inequality 2 allows us to prove that the number of times any given node is recycled is upper-bounded by the logarithm of the number of examples whenever the swap resistance is 4 or more (Lemma 3).
Lemma 3. Let the swap resistance R_S be greater than or equal to 4. Then for all sequences of examples, the number of times Algorithm 1 recycles any given node is upper-bounded by the logarithm (with base 2) of the sequence length.
4 Experiments
We address several hypotheses experimentally.
1. The LOMtree algorithm achieves true logarithmic time computation in practice.
2. The LOMtree algorithm is competitive with or better than all other logarithmic train/test
time algorithms for multiclass classification.
3. The LOMtree algorithm has statistical performance close to more common O(k) approaches.
To address these hypotheses, we conducted experiments on a variety of benchmark multiclass datasets: Isolet, Sector, Aloi, ImageNet (ImNet) and ODP.^13 The details of the datasets are provided in Table 1. The datasets were divided into training (90%) and testing (10%). Furthermore, 10% of the training dataset was used as a validation set.

Table 1: Dataset sizes.
             Isolet   Sector   Aloi     ImNet      ODP
size         52.3MB   19MB     17.7MB   104GB^12   3GB
# features   617      54K      128      6144       0.5M
# examples   7797     9619     108K     14.2M      1577418
# classes    26       105      1000     ~22K       ~105K
The baselines we compared LOMtree with are a balanced random tree of logarithmic depth (Rtree)
and the Filter tree [5]. Where computationally feasible, we also compared with a one-against-all
classifier (OAA) as a representative O(k) approach. All methods were implemented in the Vowpal
Wabbit [25] learning system and have similar levels of optimization. The regressors in the tree nodes
for LOMtree, Rtree, and Filter tree as well as the OAA regressors were trained by online gradient
descent for which we explored step sizes chosen from the set {0.25, 0.5, 0.75, 1, 2, 4, 8}. We used
linear regressors. For each method we investigated training with up to 20 passes through the data and
we selected the best setting of the parameters (step size and number of passes) as the one minimizing
the validation error. Additionally, for the LOMtree we investigated different settings of the stopping
criterion for the tree expansion: T ∈ {k−1, 2k−1, 4k−1, 8k−1, 16k−1, 32k−1, 64k−1}, and swap resistance R_S ∈ {4, 8, 16, 32, 64, 128, 256}.
^12 compressed
^13 The details of the source of each dataset are provided in the Supplementary material.
In Tables 2 and 3 we report respectively training time and per-example test time (the best performer is indicated in bold). Training time (and later reported test error) is not provided for OAA on ImageNet and ODP due to intractability^14; both are petabyte-scale computations.^15
Table 2: Training time on selected problems.
          Isolet   Sector   Aloi
LOMtree   16.27s   12.77s   51.86s
OAA       19.58s   18.37s   11m2.43s

Table 3: Per-example test time on all problems.
          Isolet   Sector   Aloi     ImNet    ODP
LOMtree   0.14ms   0.13ms   0.06ms   0.52ms   0.26ms
OAA       0.16ms   0.24ms   0.33ms   0.21s    1.05s
The first hypothesis is consistent with the experimental results. Time-wise, LOMtree significantly outperforms OAA due to building only close-to-logarithmic-depth trees. The improvement in the training time increases with the number of classes in the classification problem. For instance, on Aloi training with LOMtree is 12.8 times faster than with OAA. The same can be said about the test time, where LOMtree's per-example test times for Aloi, ImageNet and ODP are respectively 5.5, 403.8 and 4038.5 times faster than OAA's. The significant advantage of LOMtree over OAA is also captured in Figure 3.
Next, in Table 4 (the best logarithmic-time performer is indicated in bold) we report test error of logarithmic train/test time algorithms. We also show the binomial symmetrical 95% confidence intervals for our results. Clearly the second hypothesis is also consistent with the experimental results. Since the Rtree imposes a random label partition, the resulting error it obtains is generally worse than the error obtained by the competitor methods, including LOMtree, which learns the label partitioning directly from the data. At the same time LOMtree beats the Filter tree on every dataset, though for ImageNet and ODP (both have a high level of noise) the advantage of LOMtree is not as significant.

[Figure 3 plot omitted: "LOMtree vs one-against-all"; x-axis: log2(number of classes), y-axis: log2(time ratio).]
Figure 3: Logarithm of the ratio of per-example test times of OAA and LOMtree on all problems.
Table 4: Test error (%) and confidence interval on all problems.
         LOMtree       Rtree         Filter tree    OAA
Isolet   6.36±1.71     16.92±2.63    15.10±2.51     3.56±1.30
Sector   16.19±2.33    15.77±2.30    17.70±2.41     9.17±1.82
Aloi     16.50±0.70    83.74±0.70    80.50±0.75     13.78±0.65
ImNet    90.17±0.05    96.99±0.03    92.12±0.04     NA
ODP      93.46±0.12    93.85±0.12    93.76±0.12     NA
The third hypothesis is weakly consistent with the empirical results. The time advantage of LOMtree
comes with some loss of statistical accuracy with respect to OAA where OAA is tractable. We
conclude that LOMtree significantly closes the gap between other logarithmic time methods and
OAA, making it a plausible approach in computationally constrained large-k applications.
5 Conclusion
The LOMtree algorithm reduces the multiclass problem to a set of binary problems organized in a
tree structure where the partition in every tree node is done by optimizing a new partition criterion
online. The criterion guarantees pure and balanced splits leading to logarithmic training and testing
time for the tree classifier. We provide theoretical justification for our approach via a boosting
statement and empirically evaluate it on multiple multiclass datasets. Empirically, we find that this
is the best available logarithmic time approach for multiclass classification problems.
Acknowledgments
We would like to thank Alekh Agarwal, Dean Foster, Robert Schapire and Matus Telgarsky for
valuable discussions.
^14 Note however that the mechanics of testing datasets are much easier: one can simply test with effectively untrained parameters on a few examples to measure the test speed; thus the per-example test time for OAA on ImageNet and ODP is provided.
^15 Also, to the best of our knowledge, there exist no state-of-the-art results on the OAA performance on these datasets published in the literature.
References
[1] R. Rifkin and A. Klautau. In defense of one-vs-all classification. J. Mach. Learn. Res., 5:101–141, 2004.
[2] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, Inc., 1991.
[3] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees. CRC
Press LLC, Boca Raton, Florida, 1984.
[4] M. Kearns and Y. Mansour. On the boosting ability of top-down decision tree learning algorithms. Journal of Computer and Systems Sciences, 58(1):109–128, 1999 (also in STOC, 1996).
[5] A. Beygelzimer, J. Langford, and P. D. Ravikumar. Error-correcting tournaments. In ALT, 2009.
[6] A. Beygelzimer, J. Langford, Y. Lifshits, G. B. Sorkin, and A. L. Strehl. Conditional probability tree
estimation analysis and algorithms. In UAI, 2009.
[7] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[8] S. Bengio, J. Weston, and D. Grangier. Label embedding trees for large multi-class tasks. In NIPS, 2010.
[9] G. Madzarov, D. Gjorgjevikj, and I. Chorbev. A multi-class SVM classifier utilizing binary decision tree. Informatica, 33(2):225–233, 2009.
[10] J. Deng, S. Satheesh, A. C. Berg, and L. Fei-Fei. Fast and balanced: Efficient label tree learning for large
scale object recognition. In NIPS, 2011.
[11] J. Weston, A. Makadia, and H. Yee. Label partitioning for sublinear ranking. In ICML, 2013.
[12] B. Zhao and E. P. Xing. Sparse output coding for large-scale visual recognition. In CVPR, 2013.
[13] D. Hsu, S. Kakade, J. Langford, and T. Zhang. Multi-label prediction via compressed sensing. In NIPS,
2009.
[14] A. Agarwal, S. M. Kakade, N. Karampatziakis, L. Song, and G. Valiant. Least squares revisited: Scalable
approaches for multi-class prediction. In ICML, 2014.
[15] O. Beijbom, M. Saberian, D. Kriegman, and N. Vasconcelos. Guess-averse loss functions for costsensitive multiclass boosting. In ICML, 2014.
[16] R. Agarwal, A. Gupta, Y. Prabhu, and M. Varma. Multi-label learning with millions of labels: Recommending advertiser bid phrases for web pages. In WWW, 2013.
[17] Y. Prabhu and M. Varma. Fastxml: A fast, accurate and stable tree-classifier for extreme multi-label
learning. In ACM SIGKDD, 2014.
[18] H.-F. Yu, P. Jain, P. Kar, and I. S. Dhillon. Large-scale multi-label learning with missing labels. In ICML,
2014.
[19] T.-Y. Liu, Y. Yang, H. Wan, H.-J. Zeng, Z. Chen, and W.-Y. Ma. Support vector machines classification
with a very large-scale taxonomy. In SIGKDD Explorations, 2005.
[20] P. N. Bennett and N. Nguyen. Refined experts: improving classification in large taxonomies. In SIGIR,
2009.
[21] A. Montillo, J. Tu, J. Shotton, J. Winn, J.E. Iglesias, D.N. Metaxas, and A. Criminisi. Entanglement and
differentiable information gain maximization. Decision Forests for Computer Vision and Medical Image
Analysis, 2013.
[22] K. Tentori, V. Crupi, N. Bonini, and D. Osherson. Comparison of confirmation measures. Cognition, 103(1):107–119, 2007.
[23] R. Carnap. Logical Foundations of Probability. 2nd ed. Chicago: University of Chicago Press. Par. 87
(pp. 468-478), 1962.
[24] S. Shalev-Shwartz. Online learning and online convex optimization. Found. Trends Mach. Learn., 4(2):107–194, 2012.
[25] J. Langford, L. Li, and A. Strehl. http://hunch.net/?vw, 2007.
[26] Y. Nesterov. Introductory lectures on convex optimization : a basic course. Applied optimization, Kluwer
Academic Publ., 2004.
[27] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image
database. In CVPR, 2009.
5,455 | 5,938 | Collaborative Filtering with Graph Information:
Consistency and Scalable Methods
Nikhil Rao
Hsiang-Fu Yu
Pradeep Ravikumar
Inderjit S. Dhillon
{nikhilr, rofuyu, pradeepr, inderjit}@cs.utexas.edu
Department of Computer Science
University of Texas at Austin
Abstract
Low rank matrix completion plays a fundamental role in collaborative filtering
applications, the key idea being that the variables lie in a smaller subspace than
the ambient space. Often, additional information about the variables is known,
and it is reasonable to assume that incorporating this information will lead to
better predictions. We tackle the problem of matrix completion when pairwise
relationships among variables are known, via a graph. We formulate and derive
a highly efficient, conjugate gradient based alternating minimization scheme that
solves optimizations with over 55 million observations up to 2 orders of magnitude faster than state-of-the-art (stochastic) gradient-descent based methods. On
the theoretical front, we show that such methods generalize weighted nuclear norm
formulations, and derive statistical consistency guarantees. We validate our results
on both real and synthetic datasets.
1 Introduction
Low rank matrix completion approaches are among the most widely used collaborative filtering
methods, where a partially observed matrix is available to the practitioner, who needs to impute the
missing entries. Specifically, suppose there exists a ratings matrix Y ∈ R^{m×n}, and we only observe a subset of the entries Y_{ij}, (i, j) ∈ Ω, with |Ω| = N ≪ mn. The goal is to estimate Y_{i,j} for all (i, j) ∉ Ω. To this end, one typically looks to solve one of the following (equivalent) programs:

Ẑ = arg min_Z (1/2) ‖P_Ω(Y − Z)‖_F² + λ_z ‖Z‖_*    (1)

(Ŵ, Ĥ) = arg min_{W,H} (1/2) ‖P_Ω(Y − W H^T)‖_F² + (λ_w/2) ‖W‖_F² + (λ_h/2) ‖H‖_F²    (2)

where the nuclear norm ‖Z‖_*, given by the sum of singular values, is a tight convex relaxation of the non-convex rank penalty and is equivalent to the regularizer in (2). P_Ω(·) is the projection operator that only retains those entries of the matrix that lie in the set Ω.
In many cases however, one not only has the partially observed ratings matrix, but also has access
to additional information about the relationships between the variables involved. For example, one
might have access to a social network of users. Similarly, one might have access to attributes of
items, movies, etc. The nature of the attributes can be fairly arbitrary, but it is reasonable to assume
that ?similar? users/items share ?similar? attributes. A natural question to ask then, is if one can take
advantage of this additional information to make better predictions. In this paper, we assume that
the row and column variables lie on graphs. The graphs may naturally be part of the data (social
networks, product co-purchasing graphs) or they can be constructed from available features. The
idea then is to incorporate this additional structural information into the matrix completion setting.
We not only require the resulting optimization program to enforce additional constraints on Z, but
we also require it to admit efficient optimization algorithms. We show in the sections that follow that
this in fact is indeed the case. We also perform a theoretical analysis of our problem when the observed entries of Y are corrupted by additive white Gaussian noise. To summarize, the contributions
of our paper are as follows:
• We provide a scalable algorithm for matrix completion with graph structural information. Our method relies on efficient Hessian-vector multiplication schemes, and is orders of magnitude faster than (stochastic) gradient-descent based approaches.
• We make connections with other structured matrix factorization frameworks. Notably, we show that our method generalizes the weighted nuclear norm [21], and methods based on Gaussian generative models [27].
• We derive consistency guarantees for graph regularized matrix completion, and empirically show that our bound is smaller than that of traditional matrix completion, where graph information is ignored.
• We empirically validate our claims, and show that our method achieves comparable error rates to other methods, while being significantly more scalable.
Related Work and Key Differences
For convex methods for matrix factorization, Haeffele et al. [9] provided a framework to use regularizers with norms other than the Euclidean norm in (2). Abernethy et al. [1] considered a kernel
based embedding of the data, and showed that the resulting problem can be expressed as a norm minimization scheme. Srebro and Salakhutdinov [21] introduced a weighted nuclear norm, and showed
that the method enjoys superior performance as compared to standard matrix completion under a
non-uniform sampling scheme. We show that the graph based framework considered in this paper is
in fact a generalization of the weighted nuclear norm problem, with non-diagonal weight matrices.
In the context of matrix factorization with graph structural information, [5] considered a graph regularized nonnegative matrix factorization framework and proposed a gradient descent based method
to solve the problem. In the context of recommendation systems in social networks, Ma et al. [14]
modeled the weight of a graph edge1 explicitly in a re-weighted regularization framework. Li and
Yeung [13] considered a similar setting to ours, but a key point of difference between all the aforementioned methods and our paper is that we consider the partially observed ratings case. There are some works developing algorithms for the setting with partial observations [12, 26, 27]; however, none of them provides statistical guarantees. Weighted norm minimization has been considered
before ([16, 21]) in the context of low rank matrix completion. The thrust of these methods has been
to show that despite suboptimal conditions (correlated data, non-uniform sampling), the sample
complexity does not change. None of these methods use graph information. We are interested in a
complementary question: Given variables conforming to graph information, can we obtain better
guarantees under uniform sampling to those achieved by traditional methods?
2 Graph-Structured Matrix Factorization
Assume that the "true" target matrix can be factorized as Z* = W*(H*)^T, and that there exist a graph (V^w, E^w) whose adjacency matrix encodes the relationships between the m rows of W* and a graph (V^h, E^h) for the n rows of H*. In particular, two rows (or columns) connected by an edge in the graph are "close" to each other in Euclidean distance. In the context of graph-based embedding, [3, 4] proposed a smoothing term of the form

(1/2) Σ_{i,j} E^w_{ij} (w_i − w_j)² = tr(W^T Lap(E^w) W),    (3)

where Lap(E^w) := D^w − E^w is the graph Laplacian for (V^w, E^w) and D^w is the diagonal matrix with D^w_{ii} = Σ_{j∼i} E^w_{ij}. Adding (3) into the minimization problem (2) encourages solutions where w_i ≈ w_j when E^w_{ij} is large. A similar argument holds for H* and the associated graph Laplacian Lap(E^h).
^1 The authors call this the "trust" between links in a social network.
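As a quick numeric sanity check of identity (3), the snippet below (our own illustration, not from the paper) compares the pairwise form with the trace form using SciPy's graph Laplacian.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian

rng = np.random.default_rng(0)
m, k = 6, 3
E = rng.random((m, m)); E = (E + E.T) / 2   # symmetric edge weights
np.fill_diagonal(E, 0)
W = rng.standard_normal((m, k))

L = laplacian(E)                            # Lap(E) = D - E
lhs = 0.5 * sum(E[i, j] * np.sum((W[i] - W[j]) ** 2)
                for i in range(m) for j in range(m))
rhs = np.trace(W.T @ L @ W)
print(np.allclose(lhs, rhs))                # expected: True
```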
We would thus not only want the target matrix to be low rank, but also want the variables W, H to be faithful to the underlying graph structure. To this end, we consider the following problem:

min_{W,H} (1/2) ‖P_Ω(Y − W H^T)‖_F² + (λ_L/2) {tr(W^T Lap(E^w) W) + tr(H^T Lap(E^h) H)} + (λ_w/2) ‖W‖_F² + (λ_h/2) ‖H‖_F²    (4)

≡ min_{W,H} (1/2) ‖P_Ω(Y − W H^T)‖_F² + (1/2) tr(W^T L_w W) + (1/2) tr(H^T L_h H)    (5)

where L_w := λ_L Lap(E^w) + λ_w I_m, and L_h is defined similarly. Note that we subsume the regularization parameters in the definition of L_w, L_h, and that ‖W‖_F² = tr(W^T I_m W).
The regularizer in (5) encourages solutions that are smooth with respect to the corresponding graphs. However, the Laplacian matrix can be replaced by other (positive semi-definite) matrices that encourage structure by different means. Indeed, a very general class of Laplacian-based regularizers was considered in [20], where one can replace L_w by a function:

⟨x, γ(Lap(E)) x⟩  where  γ(Lap(E)) ≜ Σ_{i=1}^{|V|} γ(λ_i) q_i q_i^T,

where {(λ_i, q_i)} constitute the eigen-system of Lap(E) and γ(λ_i) is a scalar function of the eigenvalues. Our case corresponds to γ(·) being the identity function. We briefly summarize other
schemes that fit neatly into (5), apart from the graph regularizer we consider:
Covariance matrices for variables: [27] proposed a kernelized probabilistic matrix factorization (KPMF), which is a generative model that incorporates covariance information of the variables into matrix factorization. They assumed that each row of W*, H* is generated according to a multivariate Gaussian, and solving the corresponding MAP estimation procedure yields exactly (5), with L_w = C_w^{−1} and L_h = C_h^{−1}, where C_w, C_h are the associated covariance matrices.
Feature matrices for variables: Assume that there is a feature matrix X ∈ R^{m×d} for the objects associated with rows. For such X, one can construct a graph (and hence a Laplacian) using various methods such as k-nearest neighbors, ε-nearest neighbors, etc. Moreover, one can assume that there exists a kernel k(x_i, x_j) that encodes pairwise relations, and we can use the kernel Gram matrix as a Laplacian.
We can thus see that problem (5) is a very general scheme, and can incorporate information available
in many different forms. In the sequel, we assume the matrices Lw , Lh are given. In the theoretical
analysis in Section 5, for ease of exposition, we further assume that the minimum eigenvalues of
Lw , Lh are unity. A general (nonzero) minimum eigenvalue will merely introduce multiplicative
constants in our bounds.
3 GRALS: Graph Regularized Alternating Least Squares
In this section, we propose efficient algorithms for (5), which is convex with respect to W or H
separately. This allows us to employ alternating minimization methods [25] to solve the problem.
When Y is fully observed, Li and Yeung [13] propose an alternating minimization scheme using
block steepest descent. We deal with the partially observed setting, and propose to apply conjugate
gradient (CG), which is known to converge faster than steepest descent, to solve each subproblem.
We propose a very efficient Hessian-vector multiplication routine that results in the algorithm being
highly scalable, compared to the (stochastic) gradient descent approaches in [14, 27].
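At a high level, the resulting alternating scheme looks as follows; this is a structural sketch only, with the two CG subproblem solvers passed in (the corresponding Hessian-vector products are sketched below, after Sections 3.1 and 3.2).

```python
import numpy as np

def grals_als(Y_obs, L_w, L_h, k, n_iters, solve_W, solve_H):
    """Outer alternating loop for problem (5); a sketch, not the authors' code.

    solve_H / solve_W run CG on the respective subproblems; Y_obs is the
    observed ratings matrix, possibly sparse. Names are illustrative.
    """
    m, n = Y_obs.shape
    rng = np.random.default_rng(0)
    W = rng.standard_normal((m, k)) / np.sqrt(k)
    H = rng.standard_normal((n, k)) / np.sqrt(k)
    for _ in range(n_iters):
        H = solve_H(Y_obs, W, L_h, H)   # minimize (6) over H with W fixed
        W = solve_W(Y_obs, H, L_w, W)   # the symmetric subproblem over W
    return W, H
```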
We assume that Y ∈ R^{m×n}, W ∈ R^{m×k} and H ∈ R^{n×k}. When optimizing H with W fixed, we obtain the following sub-problem:

min_H f(H) = (1/2) ‖P_Ω(Y − W H^T)‖_F² + (1/2) tr(H^T L_h H).    (6)
Optimizing W while H is fixed is similar, and thus we only show the details for solving (6). Since L_h is nonsingular, (6) is strongly convex.^2 We first present our algorithm for the fully observed case, since it sets the groundwork for the partially observed setting.
^2 In fact, a singular L_h can be handled using proximal updates, and our algorithm will still apply.
Algorithm 1 Hv-Multiplication for g(s)
• Given: matrices L_h, W
• Initialization: G = W^T W
• Multiplication ∇²g(s₀)s:
  1. Input: S ∈ R^{n×k} s.t. s = vec(S)
  2. A ← SG + L_h S
  3. Return: vec(A)

Algorithm 2 Hv-Multiplication for g_Ω(s)
• Given: matrices L_h, W, Ω
• Multiplication ∇²g_Ω(s₀)s:
  1. Input: S ∈ R^{k×n} s.t. s = vec(S)
  2. Compute K = [k_1, . . . , k_n] s.t. k_j ← Σ_{i∈Ω_j} (w_i^T s_j) w_i
  3. A ← K + S L_h
  4. Return: vec(A)
As in [5, 13] among others, there may be scenarios where Y is completely observed, and the goal
is to find the row/column embeddings that conform to the corresponding graphs. In this case, the
loss term in (6) is simply kY
W H T k2F . Thus, setting rf (H) = 0 is equivalent to solving the
following Sylvester equation for an n ? k matrix H:
HW T W + Lh H = Y T W.
(7)
(7) admits a closed form solution. However the standard Bartels-Stewart algorithm for the Sylvester
equation requires transforming both W T W and Lh into Schur form (diagonal in our case where
W T W and Lh are symmetric) by the QR algorithm, which is time consuming for a large Lh . Thus,
we consider applying conjugate gradient (CG) to minimize f (H) directly. We define the following
quadratic function:
1
T
g(s) := sT M s vec Y T W s, s 2 Rnk , M = Ik ? Lh + (W T W ) ? In
2
It is not hard to show that f (H) = g(vec(H)) and so we apply CG to minimize g(s).
The most crucial step in CG is the Hessian-vector multiplication. Using the identity (B^T ⊗ A) vec(X) = vec(AXB), it follows that

(I_k ⊗ L_h) s = vec(L_h S)  and  ((W^T W) ⊗ I_n) s = vec(S W^T W),

where vec(S) = s. Thus the Hessian-vector multiplication can be implemented by a series of matrix multiplications as follows:

M s = vec(L_h S + S (W^T W)),

where W^T W can be pre-computed and stored in O(k²) space. The details are presented in Algorithm 1. The time complexity for a single CG iteration is O(nnz(L_h)k + nk²), where nnz(·) is the number of non-zeros. Since in most practical applications k is generally small, the complexity is essentially O(nnz(L_h)k) as long as nk ≤ nnz(L_h).
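A compact way to realize this in code is to wrap the Hessian-vector product of Algorithm 1 in a SciPy `LinearOperator` and hand it to CG. The sketch below is an illustration under that design, not the authors' implementation; a single (row-major) vec convention is used throughout, and L_h is assumed positive definite so the system is symmetric positive definite.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def solve_H_full(Y, W, L_h, H0):
    """Solve H(W^T W) + L_h H = Y^T W (Eq. 7) by CG; an illustrative sketch."""
    n, k = Y.shape[1], W.shape[1]
    G = W.T @ W                              # k x k, precomputed once

    def matvec(s):
        S = s.reshape(n, k)                  # s = vec(S)
        return (S @ G + L_h @ S).ravel()     # vec(S G + L_h S), Algorithm 1

    M = LinearOperator((n * k, n * k), matvec=matvec, dtype=np.float64)
    b = (Y.T @ W).ravel()                    # right-hand side vec(Y^T W)
    s, info = cg(M, b, x0=H0.ravel())
    return s.reshape(n, k)
```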
3.2 Partially Observed Case
In this case, the loss term of (6) becomes Σ_{(i,j)∈Ω} (Y_{ij} − w_i^T h_j)², where w_i^T is the i-th row of W and h_j is the j-th column of H^T. Similar to the fully observed case, we can define:

g_Ω(s) := (1/2) s^T M_Ω s − vec(W^T Y)^T s,

where M_Ω = B̃ + L_h ⊗ I_k, and B̃ ∈ R^{nk×nk} is a block diagonal matrix with n diagonal blocks B_j ∈ R^{k×k}, B_j = Σ_{i∈Ω_j} w_i w_i^T, where Ω_j = {i : (i, j) ∈ Ω}. Again, we can see f(H) = g_Ω(vec(H^T)). Note that the transpose H^T is used here instead of H, which is used in the fully observed case.

For a given s, let S = [s_1, . . . , s_j, . . . , s_n] be a matrix such that vec(S) = s and K = [k_1, . . . , k_j, . . . , k_n] with k_j = B_j s_j. Then B̃s = vec(K). Note that since n can be very large in practice, it may not be feasible to compute and store all B_j in the beginning. Alternatively, B_j s_j can be computed in O(|Ω_j|k) time as follows:

B_j s_j = Σ_{i∈Ω_j} (w_i^T s_j) w_i.

Thus B̃s can be computed in O(|Ω|k) time, and the Hessian-vector multiplication M_Ω s can be done in O(|Ω|k + nnz(L_h)k) time. See Algorithm 2 for a detailed procedure. As a result, each CG iteration for minimizing g_Ω(s) is also very cheap.
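The sketch below implements the Algorithm 2 product with vectorized NumPy accumulation over the observed entries; the input format (parallel `rows`, `cols`, `vals` arrays) and all names are assumptions made for illustration.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def solve_H_partial(rows, cols, vals, W, L_h, n, H0):
    """CG solve of subproblem (6) with partial observations; a sketch.

    rows/cols/vals are parallel NumPy arrays listing the observed entries
    Y[rows[t], cols[t]] = vals[t]; row j of S below holds h_j.
    """
    m, k = W.shape
    vals = np.asarray(vals, dtype=float)

    def matvec(s):
        S = s.reshape(n, k)
        # Block-diagonal part: k_j = sum_{i in Omega_j} (w_i^T s_j) w_i
        proj = np.einsum('tk,tk->t', W[rows], S[cols])   # w_i^T s_j per entry
        K = np.zeros((n, k))
        np.add.at(K, cols, proj[:, None] * W[rows])
        # Graph part: (L_h kron I_k) s corresponds to L_h S in this layout
        return (K + L_h @ S).ravel()

    M = LinearOperator((n * k, n * k), matvec=matvec, dtype=np.float64)
    b = np.zeros((n, k))                     # b_j = sum_{i in Omega_j} Y_ij w_i
    np.add.at(b, cols, vals[:, None] * W[rows])
    s, info = cg(M, b.ravel(), x0=H0.ravel())
    return s.reshape(n, k)
```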
Remark on Convergence. In [2], it is shown that any local minimizer of (5) is a global minimizer
of (5) if k is larger than the true rank of the underlying matrix.3 From [25], the alternating minimization procedure is guaranteed to globally converge to a block coordinate-wise minimum4 of (5).
The converged point might not be a local minimizer, but it still yields good performance in practice.
Most importantly, since the updates are cheap to perform, our algorithm scales well to large datasets.
4 Convex Connection via Generalized Weighted Nuclear Norm
We now show that the regularizer in (5) can be cast as a generalized version of the weighted nuclear
norm. The weights in our case will correspond to the scaling factors introduced on the matrices
W, H due to the eigenvalues of the shifted graph Laplacians Lw , Lh respectively.
4.1 A weighted atomic norm
From [7], we know that the nuclear norm is the gauge function induced by the atomic set A_* = {w_i h_i^T : ‖w_i‖ = ‖h_i‖ = 1}. Note that all rank-one matrices in A_* have unit Frobenius norm. Now, assume P = [p_1, . . . , p_m] ∈ R^{m×m} is a basis of R^m and S_p^{1/2} is a diagonal matrix with (S_p^{1/2})_{ii} ≥ 0 encoding the "preference" over the space spanned by p_i. The more the preference, the larger the value. Similarly, consider the basis Q and the preference S_q^{1/2} for R^n. Let A = P S_p^{1/2} and B = Q S_q^{1/2}, and consider the following "preferential" atomic set:

A := { α_i = w_i h_i^T : w_i = A u_i, h_i = B v_i, ‖u_i‖ = ‖v_i‖ = 1 }.    (8)

Clearly, each atom in A has non-unit Frobenius norm. This atomic set allows for biasing of the solutions towards certain atoms. We then define a corresponding atomic norm:

‖Z‖_A = inf { Σ_{α_i∈A} |c_i|  s.t.  Z = Σ_{α_i∈A} c_i α_i }.    (9)

It is not hard to verify that ‖Z‖_A is a norm and {Z : ‖Z‖_A ≤ τ} is closed and convex.
4.2 Equivalence to Graph Regularization
The graph regularization (5) can be shown to be a special case of the atomic norm (9), as a consequence of the following result:

Theorem 1. For any A = P S_p^{1/2}, B = Q S_q^{1/2}, and corresponding weighted atomic set A,

‖Z‖_A = inf_{W,H} (1/2) { ‖A^{−1} W‖_F² + ‖B^{−1} H‖_F² }  s.t.  Z = W H^T.
We prove this result in Appendix A. Theorem 1 immediately leads us to the following equivalence
result:
Corollary 1. Let L_w = U_w S_w U_w^T and L_h = U_h S_h U_h^T be the eigen-decompositions of L_w and L_h. We have

tr(W^T L_w W) = ‖A^{−1} W‖_F²  and  tr(H^T L_h H) = ‖B^{−1} H‖_F²,

where A = U_w S_w^{−1/2} and B = U_h S_h^{−1/2}. As a result, ‖M‖_A with the preference pair (U_w, S_w^{−1/2}) for the column space and the preference pair (U_h, S_h^{−1/2}) for the row space is a weighted atomic norm equivalent for the graph regularization using L_w and L_h.
The results above allow us to obtain the dual weighted atomic norm for a matrix Z:

‖Z‖*_A = ‖A^T Z B‖ = ‖S_w^{−1/2} U_w^T Z U_h S_h^{−1/2}‖,    (10)

which is a weighted spectral norm. An elementary proof of this result can be found in Appendix B.

^3 The authors actually show this for a more general class of regularizers.
^4 Nash equilibrium is used in [25].

Note that we can then write
‖Z‖_A = ‖A^{−1} Z B^{−T}‖_* = ‖S_w^{1/2} U_w^{−1} Z U_h^{−T} S_h^{1/2}‖_*.    (11)
In [21], the authors consider a norm similar to (11), but with A, B being diagonal matrices. In the
spirit of their nomenclature, we refer to the norm in (11) as the generalized weighted nuclear norm.
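A small numeric check of the identity behind Corollary 1, tr(W^T L_w W) = ‖A^{−1}W‖_F², can be run as follows (our own illustration; the random positive definite matrix stands in for a shifted Laplacian).

```python
import numpy as np

rng = np.random.default_rng(1)
m, k = 8, 3
M_ = rng.standard_normal((m, m))
L_w = M_ @ M_.T + np.eye(m)                # positive definite stand-in for L_w
W = rng.standard_normal((m, k))

lam, U = np.linalg.eigh(L_w)               # L_w = U S U^T
A = U @ np.diag(lam ** -0.5)               # A = U_w S_w^{-1/2} (Corollary 1)
lhs = np.trace(W.T @ L_w @ W)
rhs = np.linalg.norm(np.linalg.solve(A, W), 'fro') ** 2   # ||A^{-1} W||_F^2
print(np.allclose(lhs, rhs))               # expected: True
```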
5 Statistical Consistency in the Presence of Noisy Measurements
In this section, we derive theoretical guarantees for graph regularized low rank matrix estimators. We first introduce some additional notation. We assume that there is an m × n matrix Z* of rank k with ‖Z*‖_F = 1, and N = |Ω| entries of Z* are uniformly sampled^5 and revealed to us (i.e., Y = P_Ω(Z*)). We further assume a one-to-one mapping between the set of observed indices Ω and {1, 2, . . . , N} so that the t-th measurement is given by

y_t = Y_{i(t),j(t)} = ⟨e_{i(t)} e_{j(t)}^T, Z*⟩ + (ν/√(mn)) ξ_t,  ξ_t ∼ N(0, 1),    (12)

where ⟨·,·⟩ denotes the matrix trace inner product, ν is the noise scale, and (i(t), j(t)) is a randomly selected coordinate pair from [m]×[n]. Let A, B be the corresponding matrices defined in Corollary 1 for the given L_w, L_h. W.l.o.g., we assume that the minimum singular value of both L_w and L_h is 1. We then define the following graph-based complexity measures:
following graph based complexity measures:
?g (Z) :=
p
mn
kA 1 ZB
kA 1 ZB
T
k1
,
Tk
F
g (Z)
:=
kA 1 ZB
kA 1 ZB
T
k?
Tk
F
(13)
where k ? k1 is the element-wise `1 norm. Finally, we assume that the true matrix Z ? can be
expressed as a linear combination of atoms from (8) (we define ?? := ?g (Z ? )):
Z ? = AU ? (V ? )T B T , U ? 2 Rm?k , V ? 2 Rn?k ,
(14)
Our goal in this section will be to characterize the solution to the following convex program, where the constraint set precludes selection of overly complex matrices in the sense of (13):

Ẑ = arg min_{Z∈C} (1/N) ‖P_Ω(Y − Z)‖_F² + λ ‖Z‖_A,  where  C := { Z : α_g(Z) β_g(Z) ≤ c α₀ √( N / log(m + n) ) },    (15)

where c α₀ is a constant depending on α*.
A quick note on solving (15): since ‖·‖_A is a weighted nuclear norm, one can resort to proximal point methods [6], or greedy methods developed specifically for atomic norm constrained minimization [18, 22]. The latter are particularly attractive, since the greedy step reduces to computing the maximum singular vectors, which can be efficiently computed using power methods. However, such methods first involve computing the eigen-decompositions of the graph Laplacians, and then storing the large, dense matrices A, B. We refrain from resorting to such methods in Section 6, and instead use the efficient framework derived in Section 3. We now state our main theoretical result:
Theorem 2. Suppose we observe N entries of the form (12) from a matrix Z* ∈ R^{m×n}, with α* := α_g(Z*), which can be represented using at most k atoms from (8). Let Ẑ be the minimizer of the convex problem (15) with λ ≥ C₁ ν √( (m + n) log(m + n) / N ). Then, with high probability, we have

‖Ẑ − Z*‖_F² ≤ C α*² max{1, ν²} · k(m + n) log(m + n) / N + O( α*² / N ),

where C, C₁ are positive constants.

See Appendix C for the detailed proof. A proof sketch is as follows:

^5 Our results can be generalized to non-uniform sampling schemes as well.
Proof Sketch: There are three major portions of the proof:
• Using the fact that Z* has unit Frobenius norm and can be expressed as a combination of at most k atoms, we can show ‖Z*‖_A ≤ √k (Appendix C.1).
• Using (10), we can derive a bound for the dual norm of the gradient of the loss L(Z), given by ‖∇L(Z)‖*_A = ‖S_w^{−1/2} U_w^T ∇L(Z) U_h S_h^{−1/2}‖ (Appendix C.2).
• Finally, using (13), we define a notion of restricted strong convexity (RSC) over the set in which the "error" matrices Z* − Ẑ lie. The proof follows closely along the lines of the equivalent result in [16], with appropriate modifications to accommodate our generalized weighted nuclear norm (Appendix C.3).
5.1 Comparison to Standard Matrix Completion
It is instructive to consider our result in the context of noisy matrix completion with uniform samples. In this case, one would replace L_w, L_h by identity matrices, effectively ignoring the available graph information. Specifically, the "standard" notion of spikiness (α_n := √(mn) ‖Z‖_∞ / ‖Z‖_F) defined in [16] will apply, and the corresponding error bound (Theorem 2) will have α* replaced by α_n(Z*). In general, it is hard to quantify the relationship between α_g and α_n, and a detailed comparison is an interesting topic for future work. However, we show below using simulations for various scenarios that the former is much smaller than the latter. We generate m × m matrices of rank k = 10, M = UΣV^T, with U, V being random orthonormal matrices and Σ having diagonal elements picked from a uniform[0, 1] distribution. We generate graphs at random using the schemes discussed below, and set Z = A M B^T, with A, B as defined in Corollary 1. We then compute α_n, α_g for various m.
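The two measures are straightforward to compute; a sketch (with function and variable names of our choosing) is given below.

```python
import numpy as np

def spikiness_measures(Z, A, B):
    """alpha_n (standard) and alpha_g from (13); an illustrative helper."""
    m, n = Z.shape
    alpha_n = np.sqrt(m * n) * np.abs(Z).max() / np.linalg.norm(Z, 'fro')
    T = np.linalg.solve(B, np.linalg.solve(A, Z).T).T     # A^{-1} Z B^{-T}
    alpha_g = np.sqrt(m * n) * np.abs(T).max() / np.linalg.norm(T, 'fro')
    return alpha_n, alpha_g
```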
Comparing α_g to α_n: Most real-world graphs exhibit a power law degree distribution. We generated graphs with the i-th node having degree m · i^p, for varying negative values of p. Figure 1(a) shows that as p → 0 from below, the gain from using our norm over the standard nuclear norm is clear. We also observe that in general the weighted formulation is never worse than the unweighted one (the dotted magenta line is α_n/α_g = 1). The same applies for random graphs, where there is an edge between each (i, j) with varying probability p (Figure 1(b)).
[Figure 1 plots omitted; panels: (a) Power Law, (b) Random, (c) Sample Complexity. Panels (a) and (b) plot α_n/α_g for m = 100, 200, 300; panel (c) plots MSE vs. number of measurements for GWNN and NN.]
Figure 1: (a), (b): Ratio of spikiness measures for traditional matrix completion and our formulation. (c): Sample complexity for the nuclear norm (NN) and generalized weighted nuclear norm (GWNN).
Sample Complexity: We tested the sample complexity needed to recover an m = n = 200, k = 20 matrix, generated from a power-law distributed graph with p = 0.5. Figure 1(c) again shows that the atomic formulation requires fewer examples to achieve an accurate recovery. We average the results over 10 independent runs, and we used [18] to solve the atomic norm constrained problem.
6 Experiments on Real Datasets
Comparison to Related Formulations: We compare GRALS to other methods that incorporate
side information for matrix completion: the ADMM method of [12] that regularizes the entire target
matrix; using known features (IMC) [10, 24]; and standard matrix completion (MC). We use the
MOVIELENS 100k dataset,^6 which has user/movie features along with the ratings matrix. The dataset
contains user features (such as age (numeric), gender (binary), and occupation), which we map
^6 http://grouplens.org/datasets/movielens/
into a 22-dimensional feature vector per user. We then construct a 10-nearest neighbor graph using the Euclidean distance metric. We do the same for the movies, except in this case we have an 18-dimensional feature vector per movie. For IMC, we use the feature vectors directly. We trained a model of rank 10, and chose optimal parameters by cross validation. Table 1 shows the RMSE obtained for the methods considered. Figure 2 shows that the ADMM method, while obtaining a reasonable RMSE, does not scale well, since one has to compute an SVD at each iteration.

[Figure 2 plot omitted: RMSE vs. log10(time) (s) on MOVIELENS 100k for ADMM, MC, and GRALS.]
Figure 2: Time comparison of different methods on MOVIELENS 100k.

Table 1: RMSE on the MOVIELENS dataset.
Method        RMSE
IMC           1.653
Global mean   1.154
User mean     1.063
Movie mean    1.033
ADMM          0.996
MC            0.973
GRALS         0.945

Table 2: Data statistics.
Dataset            # users   # items   # ratings     # links      rank used
Flixster ([11])    147,612   48,794    8,196,077     2,538,746    10
Douban ([14])      129,490   58,541    16,830,839    1,711,802    10
YahooMusic ([8])   249,012   296,111   55,749,965    57,248,136   20
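A sketch of the 10-nearest-neighbor graph construction described above, using scikit-learn's k-NN graph and SciPy's Laplacian (our own rendering of the preprocessing, not the authors' script):

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import laplacian

def knn_laplacian(X, n_neighbors=10):
    """Symmetrized k-NN adjacency over the rows of X and its graph Laplacian."""
    G = kneighbors_graph(X, n_neighbors, mode='connectivity',
                         include_self=False)
    E = ((G + G.T) > 0).astype(float)      # symmetrize the adjacency
    return laplacian(E)                    # Lap(E) = D - E (sparse)

# Example with the feature dimensions used in the text (MovieLens 100k sizes).
L_users = knn_laplacian(np.random.default_rng(0).random((943, 22)), 10)
L_movies = knn_laplacian(np.random.default_rng(1).random((1682, 18)), 10)
```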
Scalability of GRALS: We now demonstrate that the proposed GRALS method is more efficient than other state-of-the-art methods for solving the graph-regularized matrix factorization problem (5). We compare GRALS to the SGD method in [27], and GD: ALS with simple gradient descent. We consider three large-scale real-world collaborative filtering datasets with graph information: see Table 2 for details.^7 We randomly select 90% of the ratings as the training set and use the remaining 10% as the test set. All the experiments are performed on an Intel machine with a Xeon E5-2680 v2 (Ivy Bridge) CPU and enough RAM. Figure 3 shows orders of magnitude improvement in time compared to SGD. More experimental results are provided in the supplementary material.
(a) Flixster
(b) Douban
(c) YahooMusic
Figure 3: Comparison of GRALS, GD, and SGD. The x-axis is the computation time in log-scale.
7 Discussion
In this paper, we have considered the problem of collaborative filtering with graph information for users and/or items, and showed that it can be cast as a generalized weighted nuclear norm problem. We derived statistical consistency guarantees for our method, and developed a highly scalable alternating minimization method. Experiments on large real-world datasets show that our method achieves roughly two orders of magnitude speedups over competing approaches.
Acknowledgments
This research was supported by NSF grant CCF-1320746. H.-F. Yu acknowledges support from an Intel PhD
fellowship. NR was supported by an ICES fellowship.
^7 See more details in Appendix D.
References
[1] Jacob Abernethy, Francis Bach, Theodoros Evgeniou, and Jean-Philippe Vert. Low-rank matrix factorization with attributes. arXiv preprint cs/0611124, 2006.
[2] Francis Bach, Julien Mairal, and Jean Ponce. Convex sparse matrix factorizations. CoRR, abs/0812.1869, 2008.
[3] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In NIPS, volume 14, pages 585–591, 2001.
[4] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003.
[5] Deng Cai, Xiaofei He, Jiawei Han, and Thomas S Huang. Graph regularized nonnegative matrix factorization for data representation. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 33(8):1548–1560, 2011.
[6] Jian-Feng Cai, Emmanuel J Candès, and Zuowei Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2010.
[7] Venkat Chandrasekaran, Benjamin Recht, Pablo A Parrilo, and Alan S Willsky. The convex geometry of linear inverse problems. Foundations of Computational Mathematics, 12(6):805–849, 2012.
[8] Gideon Dror, Noam Koenigstein, Yehuda Koren, and Markus Weimer. The Yahoo! music dataset and KDD-Cup'11. In KDD Cup, pages 8–18, 2012.
[9] Benjamin Haeffele, Eric Young, and Rene Vidal. Structured low-rank matrix factorization: Optimality, algorithm, and applications to image processing. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 2007–2015, 2014.
[10] Prateek Jain and Inderjit S Dhillon. Provable inductive matrix completion. arXiv preprint arXiv:1306.0626, 2013.
[11] Mohsen Jamali and Martin Ester. A matrix factorization technique with trust propagation for recommendation in social networks. In Proceedings of the Fourth ACM Conference on Recommender Systems, RecSys '10, pages 135–142, 2010.
[12] Vassilis Kalofolias, Xavier Bresson, Michael Bronstein, and Pierre Vandergheynst. Matrix completion on graphs. (EPFL-CONF-203064), 2014.
[13] Wu-Jun Li and Dit-Yan Yeung. Relation regularized matrix factorization. In 21st International Joint Conference on Artificial Intelligence, 2009.
[14] Hao Ma, Dengyong Zhou, Chao Liu, Michael R. Lyu, and Irwin King. Recommender systems with social regularization. In Proceedings of the Fourth ACM International Conference on Web Search and Data Mining, WSDM '11, pages 287–296, Hong Kong, China, 2011.
[15] Paolo Massa and Paolo Avesani. Trust-aware bootstrapping of recommender systems. ECAI Workshop on Recommender Systems, pages 29–33, 2006.
[16] Sahand Negahban and Martin J Wainwright. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. The Journal of Machine Learning Research, 13(1):1665–1697, 2012.
[17] Sahand N Negahban, Pradeep Ravikumar, Martin J Wainwright, and Bin Yu. A unified framework for high-dimensional analysis of m-estimators with decomposable regularizers. Statistical Science, 27(4):538–557, 2012.
[18] Nikhil Rao, Parikshit Shah, and Stephen Wright. Conditional gradient with enhancement and truncation for atomic-norm regularization. NIPS Workshop on Greedy Algorithms, 2013.
[19] Benjamin Recht. A simpler approach to matrix completion. The Journal of Machine Learning Research, 12:3413–3430, 2011.
[20] Alexander J Smola and Risi Kondor. Kernels and regularization on graphs. In Learning Theory and Kernel Machines, pages 144–158. Springer, 2003.
[21] Nathan Srebro and Ruslan R Salakhutdinov. Collaborative filtering in a non-uniform world: Learning with the weighted trace norm. In Advances in Neural Information Processing Systems, pages 2056–2064, 2010.
[22] Ambuj Tewari, Pradeep K Ravikumar, and Inderjit S Dhillon. Greedy algorithms for structurally constrained high dimensional problems. In Advances in Neural Information Processing Systems, pages 882–890, 2011.
[23] Roman Vershynin. A note on sums of independent random matrices after Ahlswede–Winter. Lecture notes, 2009.
[24] Miao Xu, Rong Jin, and Zhi-Hua Zhou. Speedup matrix completion with side information: Application to multi-label learning. In Advances in Neural Information Processing Systems, pages 2301–2309, 2013.
[25] Yangyang Xu and Wotao Yin. A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion. SIAM Journal on Imaging Sciences, 6(3):1758–1789, 2013.
[26] Zhou Zhao, Lijun Zhang, Xiaofei He, and Wilfred Ng. Expert finding for question answering via graph regularized matrix completion. Knowledge and Data Engineering, IEEE Transactions on, PP(99), 2014.
[27] Tinghui Zhou, Hanhuai Shan, Arindam Banerjee, and Guillermo Sapiro. Kernelized probabilistic matrix factorization: Exploiting graphs and side information. In SDM, volume 12, pages 403–414. SIAM, 2012.
5,456 | 5,939 | Efficient and Parsimonious Agnostic Active Learning
Tzu-Kuo Huang, Microsoft Research, NYC (tkhuang@microsoft.com)
Alekh Agarwal, Microsoft Research, NYC (alekha@microsoft.com)
Daniel Hsu, Columbia University (djhsu@cs.columbia.edu)
John Langford, Microsoft Research, NYC (jcl@microsoft.com)
Robert E. Schapire, Microsoft Research, NYC (schapire@microsoft.com)
Abstract
We develop a new active learning algorithm for the streaming setting satisfying
three important properties: 1) It provably works for any classifier representation
and classification problem including those with severe noise. 2) It is efficiently
implementable with an ERM oracle. 3) It is more aggressive than all previous
approaches satisfying 1 and 2. To do this, we create an algorithm based on a
newly defined optimization problem and analyze it. We also conduct the first experimental analysis of all efficient agnostic active learning algorithms, evaluating
their strengths and weaknesses in different settings.
1 Introduction
Given a label budget, what is the best way to learn a classifier?
Active learning approaches to this question are known to yield exponential improvements over supervised learning under strong assumptions [7]. Under much weaker assumptions, streaming-based
agnostic active learning [2, 4, 5, 9, 18] is particularly appealing since it is known to work for any
classifier representation and any label distribution with an i.i.d. data source.1 Here, a learning algorithm decides for each unlabeled example in sequence whether or not to request a label, never
revisiting this decision. Restated then: What is the best possible active learning algorithm which
works for any classifier representation, any label distribution, and is computationally tractable?
Computational tractability is a critical concern, because most known algorithms for this setting [e.g.,
2, 16, 18] require explicit enumeration of classifiers, implying exponentially-worse computational
complexity compared to typical supervised learning algorithms. Active learning algorithms based
on empirical risk minimization (ERM) oracles [4, 5, 13] can overcome this intractability by using
passive classification algorithms as the oracle to achieve a computationally acceptable solution.
Achieving generality, robustness, and acceptable computation has a cost. For the above methods [4,
5, 13], a label is requested on nearly every unlabeled example where two empirically good classifiers
disagree. This results in a poor label complexity, well short of information-theoretic limits [6] even
for general robust solutions [18]. Until now.
In Section 3, we design a new algorithm called ACTIVE C OVER (AC) for constructing query probability functions that minimize the probability of querying inside the disagreement region?the set
of points where good classifiers disagree?and never query otherwise. This requires a new algorithm that maintains a parsimonious cover of the set of empirically good classifiers. The cover is
a result of solving an optimization problem (in Section 4) specifying the properties of a desirable
1
See the monograph of Hanneke [11] for an overview of the existing literature, including alternative settings
where additional assumptions are placed on the data source (e.g., separability) [8, 3, 1].
1
query probability function. The cover size provides a practical knob between computation and label
complexity, as demonstrated by the complexity analysis we present in Section 4.
Also in Section 3, we prove that AC effectively maintains a set of good classifiers, achieves good
generalization error, and has a label complexity bound tighter than previous approaches. The label
complexity bound depends on the disagreement coefficient [10], which does not completely capture
the advantage of the algorithm. In the end of Section 3 we provide an example of a hard active
learning problem where AC is substantially superior to previous tractable approaches. Together,
these results show that AC is better and sometimes substantially better in theory.
Do agnostic active learning algorithms work in practice? No previous works have addressed this
question empirically. Doing so is important because analysis cannot reveal the degree to which existing classification algorithms effectively provide an ERM oracle. We conduct an extensive study in
Section 5 by simulating the interaction of the active learning algorithm with a streaming supervised
dataset. Results on a wide array of datasets show that agnostic active learning typically outperforms
passive learning, and the magnitude of improvement depends on how carefully the active learning
hyper-parameters are chosen.
More details (theory, proofs and empirical evaluation) are in the long version of this paper [14].
2 Preliminaries
Let P be a distribution over X × {±1}, and let H ⊆ {±1}^X be a set of binary classifiers, which
we assume is finite for simplicity.² Let E_X[·] denote expectation with respect to X ~ P_X, the
marginal of P over X. The expected error of a classifier h ∈ H is err(h) := Pr_{(X,Y)~P}(h(X) ≠ Y), and the error minimizer is denoted by h* := argmin_{h∈H} err(h). The (importance weighted)
empirical error of h ∈ H on a multiset S of importance weighted and labeled examples drawn from
X × {±1} × R⁺ is err(h, S) := Σ_{(x,y,w)∈S} w · 1(h(x) ≠ y)/|S|. The disagreement region for a
subset of classifiers A ⊆ H is DIS(A) := {x ∈ X | ∃h, h' ∈ A such that h(x) ≠ h'(x)}. The regret
of a classifier h ∈ H relative to another h' ∈ H is reg(h, h') := err(h) − err(h'), and the analogous
empirical regret on S is reg(h, h', S) := err(h, S) − err(h', S). When the second classifier h' in
(empirical) regret is omitted, it is taken to be the (empirical) error minimizer in H.

A streaming-based active learner receives i.i.d. labeled examples (X₁, Y₁), (X₂, Y₂), ... from P one
at a time; each label Y_i is hidden unless the learner decides on the spot to query it. The goal is to
produce a classifier h ∈ H with low error err(h), while querying as few labels as possible. In the
IWAL framework [4], a decision whether or not to query a label is made randomly: the learner
picks a probability p ∈ [0, 1], and queries the label with that probability. Whenever p > 0, an
unbiased error estimate can be produced using inverse probability weighting [12]. Specifically, for
any classifier h, an unbiased estimator E of err(h) based on (X, Y) ~ P and p is as follows: if Y
is queried, then E = 1(h(X) ≠ Y)/p; else, E = 0. It is easy to check that E[E] = err(h). Thus,
when the label is queried, we produce the importance weighted labeled example (X, Y, 1/p).³
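To make this concrete, here is a minimal sketch (ours, not from the paper) of the inverse-probability-weighted estimator; `stream` and `query_prob` are hypothetical stand-ins for the data source and the learner's querying rule:

```python
import random

def iw_error_estimate(h, stream, query_prob):
    """Unbiased estimate of err(h) from a stream of (x, y) pairs.

    query_prob(x) returns the probability p in (0, 1] of querying the
    label of x; a queried example contributes 1(h(x) != y) / p, an
    unqueried one contributes 0, so the expectation equals err(h).
    """
    total, n = 0.0, 0
    for x, y in stream:
        p = query_prob(x)
        n += 1
        if p > 0 and random.random() < p:      # label is queried
            total += (h(x) != y) / p           # importance-weighted loss
    return total / n
```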
3 Algorithm and Statistical Guarantees
Our new algorithm, shown as Algorithm 1, breaks the example stream into epochs. The algorithm
admits any epoch schedule so long as the epoch lengths satisfy τ_{m+1} ≤ 2τ_m. For technical reasons,
we always query the first 3 labels to kick-start the algorithm. At the start of epoch m, AC computes
a query probability function P_m : X → [0, 1] which will be used for sampling the data points to
query during the epoch. This is done by maintaining a few objects of interest during each epoch
in Step 4: (1) the best classifier h_{m+1} on the sample Z̃_m collected so far, where Z̃_m has a mix of
queried and predicted labels; (2) a radius Δ_m, which is based on the level of concentration we want
various empirical quantities to satisfy; and (3) the set A_{m+1} consisting of all the classifiers with
empirical regret at most γΔ_m on Z̃_m. Within the epoch, P_m determines the probability of querying
an example in the disagreement region for this set A_m of "good" classifiers; examples outside this
²The assumption that H is finite can be relaxed to VC-classes using standard arguments.
³If the label is not queried, we produce an ignored example of weight zero; its only purpose is to maintain
the correct count of querying opportunities. This ensures that 1/|S| is the correct normalization in err(h, S).
Algorithm 1 ACTIVE COVER (AC)
input: Constants c1, c2, c3, confidence δ, error radius γ, parameters α, β, ξ for (OP), epoch schedule
0 = τ_0 < 3 = τ_1 < τ_2 < τ_3 < ... < τ_M satisfying τ_{m+1} ≤ 2τ_m for m ≥ 1.
initialize: epoch m = 0, Z̃_0 := ∅, Δ_0 := c1·√(ε_1) + c2·ε_1 log 3, where ε_m := 32 log(|H|τ_m/δ)/τ_m.
1: Query the labels {Y_i}_{i=1}^3 of the first three unlabeled examples {X_i}_{i=1}^3, and set A_1 := H,
   P_1 ≡ P_min,1 = 1, and S = {(X_j, Y_j, 1)}_{j=1}^3.
2: for i = 4, ..., n, do
3:   if i = τ_m + 1 then
4:     Set Z̃_m = Z̃_{m−1} ∪ S, and S = ∅. Let
         h_{m+1} := argmin_{h∈H} err(h, Z̃_m),  Δ_m := c1·√(ε_m err(h_{m+1}, Z̃_m)) + c2·ε_m log τ_m,  and
         A_{m+1} := { h ∈ H | err(h, Z̃_m) − err(h_{m+1}, Z̃_m) ≤ γΔ_m }.
5:     Compute the solution P_{m+1}(·) to the problem (OP) and increment m := m + 1.
6:   end if
7:   if next unlabeled point X_i ∈ D_m := DIS(A_m), then
8:     Toss coin with bias P_m(X_i); add example (X_i, Y_i, 1/P_m(X_i)) to S if outcome is heads,
       otherwise add (X_i, 1, 0) to S (see Footnote 3).
9:   else
10:    Add example with predicted label (X_i, h_m(X_i), 1) to S.
11:  end if
12: end for
13: Return h_{M+1} := argmin_{h∈H} err(h, Z̃_M).
region are not queried but given labels predicted by hm (so error estimates are not unbiased). AC
computes Pm by solving the optimization problem (OP), which is further discussed below.
The objective function of (OP) encourages small query probabilities in order to minimize the label
complexity. The constraints (1) in (OP) bound the variance in our importance-weighted regret estimates for every h ∈ H. This is key to ensuring good generalization as we will later use Bernstein-style bounds which rely on our random variables having a small variance. More specifically, the
LHS of the constraints measures the variance in our empirical regret estimates for h, measured only
on the examples in the disagreement region D_m. This is because the importance weights in the form
of 1/P_m(X) are only applied to these examples; outside this region we use the predicted labels with
an importance weight of 1. The RHS of the constraint consists of three terms. The first term ensures
the feasibility of the problem, as P(X) ≥ 1/(2α²) for X ∈ D_m will always satisfy the constraints.
The second empirical regret term makes the constraints easy to satisfy for bad hypotheses; this is
crucial to rule out large label complexities in case there are bad hypotheses that disagree very often
with h_m. A benefit of this is easily seen when −h_m ∈ H, which might have a terrible regret, but
would force a near-constant query probability on the disagreement region if β = 0. Finally, the
third term will be on the same order as the second one for hypotheses in Am , and is only included
to capture the allowed level of slack in our constraints which will be exploited for the efficient implementation in Section 4. In addition to controlled variance, good concentration also requires the
random variables of interest to be appropriately bounded. This is ensured through the constraints (2),
which impose a minimum query probability on the disagreement region. Outside the disagreement
region, we use the predicted label with an importance weight of 1, so that our estimates will always
be bounded (albeit biased) in this region. Note that this optimization problem is written with respect
to the marginal distribution of the data points PX , meaning that we might have infinitely many of
the latter constraints. In Section 4, we describe how to solve this optimization problem efficiently,
and using access to only unlabeled examples drawn from PX .
Algorithm 1 requires several input parameters, which must satisfy:
    α ≥ 1,   β ≥ 1/(8nM log n),   ξ² ≤ 1/(√(nM) log n),   γ ≥ 216,   c1 ≥ 2√6,   c2 ≥ 216c1²,   c3 ≥ 1.
The first three parameters, α, β and ξ, control the tightness of the variance constraints (1). The next
three parameters, γ, c1 and c2, control the threshold that defines the set of empirically good classifiers;
c3 is used in the minimum probability (4) and can be simply set to 1.
Optimization Problem (OP) to compute P_m

    min_P   E_X[ 1/(1 − P(X)) ]

    s.t.    E_X[ 1(h(X) ≠ h_m(X) ∧ X ∈ D_m)/P(X) ] ≤ b_m(h)   for all h ∈ H,            (1)
            0 ≤ P(x) ≤ 1 for all x ∈ X,   and   P(x) ≥ P_min,m for all x ∈ D_m,          (2)

    where   I_h^m(X) = 1(h(X) ≠ h_m(X) ∧ X ∈ D_m),
            b_m(h) = 2α² E_X[I_h^m(X)] + 2β² γΔ_{m−1} τ_{m−1} reg(h, h_m, Z̃_{m−1}) + ξ τ_{m−1} Δ²_{m−1},   (3)
            P_min,m = min{ c3 / √( τ_{m−1} err(h_m, Z̃_{m−1}) + ε_{m−1} τ_{m−1} log(nM/δ) ),  1/2 }.          (4)
Epoch Schedules: The algorithm takes an arbitrary epoch schedule subject to τ_m < τ_{m+1} ≤ 2τ_m.
Two natural extremes are unit-length epochs, τ_m = m, and doubling epochs, τ_{m+1} = 2τ_m. The
main difference lies in the number of times (OP) is solved, which is a substantial computational
consideration. Unless otherwise stated, we assume the doubling epoch schedule where the query
probability and ERM classifier are recomputed only O(log n) times.
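For orientation, the following sketch (ours) shows the control flow of Algorithm 1 under the doubling schedule; `erm`, `solve_op`, and `in_dis` are hypothetical helpers standing in for the ERM oracle, the (OP) solver, and the disagreement test discussed in Section 4:

```python
import random

def active_cover(stream, H, erm, solve_op, in_dis):
    """Skeleton of Algorithm 1 with a doubling epoch schedule.
    erm(H, Z) returns the empirical error minimizer; solve_op returns
    the query probability P_m; in_dis tests x in DIS(A_m)."""
    S, Z = [], []                       # current-epoch and cumulative samples
    h_m, P_m = None, lambda x: 1.0      # query everything until the first epoch ends
    tau = 3                             # doubling schedule: 3, 6, 12, ...
    for i, (x, y) in enumerate(stream, start=1):
        if i == tau + 1:                # epoch boundary (Step 4)
            Z, S = Z + S, []
            h_m = erm(H, Z)             # best classifier on queried + predicted labels
            P_m = solve_op(H, Z, h_m)   # query probability for the new epoch
            tau *= 2
        if h_m is None or in_dis(x, H, Z, h_m):
            p = P_m(x)
            if random.random() < p:
                S.append((x, y, 1.0 / p))   # queried, importance weight 1/p
            else:
                S.append((x, 1, 0.0))       # placeholder of weight zero
        else:
            S.append((x, h_m(x), 1.0))      # predicted label outside DIS(A_m)
    return erm(H, Z + S)
```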
Generalization and Label Complexity. We present guarantees on the generalization error and
label complexity of Algorithm 1 assuming a solver for (OP), which we provide in the next section.
Our first theorem provides a bound on generalization error. Define
m
1 X
(?j ? ?j?1 )E(X,Y )?P [1(h(X) 6= Y ? X ? DIS(Aj ))],
?m j=1
p
??0 := ?0 and ??m := c1 m errm (h? ) + c2 m log ?m for m ? 1.
errm (h) :=
Essentially ??m is a population counterpart of the quantity ?m used in Algorithm 1, and crucially
relies on errm (h? ), the true error of h? restricted to the disagreement region at epoch m. This quan?
tity captures the inherent noisiness of the problem, and modulates the transition between O(1/ n)
to O(1/n) type error bounds as we see next.
?
Theorem 1. Pick any 0 < ? < 1/e such that |H|/? > 192. Then recalling that h? =
arg minh?H err(h), we have for all epochs m = 1, 2, . . . , M , with probability at least 1 ? ?
reg(h, h? ) ? 16???m for all h ? Am+1 ,
reg(h? , hm+1 , Z?m ) ? 216?m .
and
(5)
(6)
The proof is in Section 7.2.2 of [14]. Since we use γ ≥ 216, the bound (6) implies that h* ∈ A_m for
all epochs m. This also maintains that all the predicted labels used by our algorithm are identical to
those of h*, since no disagreement amongst classifiers in A_m was observed on those examples. This
observation will be critical to our proofs, where we will exploit the fact that using labels predicted
by h* instead of observed labels on certain examples only introduces a bias in favor of h*, thereby
ensuring that we never mistakenly drop the optimal classifier from A_m. The bound (5) shows that
every classifier in A_{m+1} has a small regret to h*. Since the ERM classifier h_{m+1} is always in A_{m+1},
this yields our main generalization error bound on the classifier h_{M+1} output by Algorithm 1.
Additionally, it also clarifies the definition of the sets A_m as the set of good classifiers: these are
classifiers which indeed have small population regret relative to h*. In a realizable setting where h*
has zero error, Δ̃_m = Õ(1/τ_m), leading to an Õ(1/n) regret after n unlabeled examples are presented
to the algorithm. On the other extreme, if err_m(h*) is a constant, then the regret is O(1/√n). There
are also interesting regimes in between, where err(h*) might be a constant, but err_m(h*) measured
over the disagreement region decreases rapidly. More specifically, we show in Appendix E of [14]
that the expected regret of the classifier returned by Algorithm 1 achieves the optimal rate [6] under
the Tsybakov [17] noise condition.
Next, we provide a label complexity guarantee in terms of the disagreement coefficient [11]:
    θ = θ(h*) := sup_{r>0} P_X{ x | ∃h ∈ H s.t. h*(x) ≠ h(x), P_X{x' | h(x') ≠ h*(x')} ≤ r } / r.

Theorem 2. With probability at least 1 − δ, the number of label queries made by Algorithm 1 after
n examples over M epochs is 4θ err_M(h*) n + θ · Õ( √(n err_M(h*) log(|H|/δ)) + log(|H|/δ) ).
The theorem is proved in Appendix D of [14]. The first term of the label complexity bound is
linear in the number of unlabeled examples, but can be quite small if θ is small, or if err_M(h*) ≈
0; it is indeed 0 in the realizable setting. The second term grows at most as Õ(√n), but also
becomes a constant for realizable problems. Consequently, we attain a logarithmic label complexity
in the realizable setting. In noisy settings, our label complexity improves upon that of predecessors
such as [5, 13]. Beygelzimer et al. [5] obtain a label complexity of θ√n, exponentially worse
for realizable problems. A related algorithm, Oracular CAL [13], has label complexity scaling
with √(n err(h*)) but a worse dependence on θ. In all comparisons the use of err_M(h*) provides a
qualitatively superior analysis to all previous results depending on err(h*) since this captures the
fact that noisy labels outside the disagreement region do not affect the label complexity. Finally,
as in our regret analysis, we show in Appendix E of [14] that the label complexity of Algorithm 1
achieves the information-theoretic lower bound [6] under Tsybakov's low-noise condition [17].
Section 4.2.2 of [14] gives an example where the label complexity of Algorithm 1 is significantly
smaller than both IWAL and Oracular CAL by virtue of rarely querying in the disagreement region.
The example considers a distribution and a classifier space with the following structure: (i) for most
examples a single good classifier predicts differently from the remaining classifiers; (ii) on a few
examples, half the classifiers predict one way and half the other. In the first case, little advantage is
gained from a label because it provides evidence against only a single classifier. ACTIVE C OVER
queries over the disagreement region with a probability close to P_min in case (i) and probability 1 in
case (ii), while others query with probability Ω(1) everywhere, implying O(√n) times more queries.
4 Efficient implementation
The computation of hm is an ERM operation, which can be performed efficiently whenever an efficient passive learner is available. However, several other hurdles remain. Testing for x ? DIS(Am )
in the algorithm, as well as finding a solution to (OP) are considerably more challenging. The epoch
schedule helps, but (OP) is still solved O(log n) times, necessitating an extremely efficient solver.
Starting with the first issue, we follow Dasgupta et al. [9] who cleverly observed that x ∈ D_m :=
DIS(A_m) can be efficiently determined using a single call to an ERM oracle. Specifically, to apply
their method, we use the oracle to find⁴ h' = argmin{ err(h, Z̃_{m−1}) | h ∈ H, h(x) ≠ h_m(x) }. It
can then be argued that x ∈ D_m = DIS(A_m) if and only if the easily-measured regret of h' (that is,
reg(h', h_m, Z̃_{m−1})) is at most γΔ_{m−1}. Solving (OP) efficiently is a much bigger challenge because
it is enormous: There is one variable P (x) for every point x ? X , one constraint (1) for each
classifier h and bound constraints (2) on P (x) for every x. This leads to infinitely many variables
and constraints, with an ERM oracle being the only computational primitive available.
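As a sketch (ours), with `cerm` a hypothetical constrained-ERM helper built from the unconstrained oracle as in [15], the disagreement test costs one oracle call:

```python
def in_disagreement_region(x, H, Z, h_m, threshold, cerm, emp_err):
    """Test x in DIS(A_m) with one (constrained) ERM call.

    cerm(H, Z, constraint) returns the empirical error minimizer over
    classifiers satisfying the constraint; emp_err(h, Z) is err(h, Z);
    threshold plays the role of gamma * Delta_{m-1}.
    """
    # Best classifier forced to disagree with h_m at x.
    h_alt = cerm(H, Z, constraint=lambda h: h(x) != h_m(x))
    # x is in the disagreement region iff this forced classifier
    # is still empirically "good", i.e. its regret is small.
    return emp_err(h_alt, Z) - emp_err(h_m, Z) <= threshold
```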
We eliminate the bound constraints using barrier functions. Notice that the objective EX [1/(1 ?
P (x))] is already a barrier at P (x) = 1. To enforce the lower bound (2), we modify the objective to
    E_X[ 1/(1 − P(X)) ] + μ² E_X[ 1(X ∈ D_m)/P(X) ],                      (7)

where μ is a parameter chosen momentarily to ensure P(x) ≥ P_min,m for all x ∈ D_m. Thus, the
modified goal is to minimize (7) over non-negative P subject only to (1). We solve the problem in
the dual, where we have a large but finite number of optimization variables, and efficiently maximize
the dual using coordinate ascent with access to an ERM oracle over H. Let λ_h ≥ 0 denote the

⁴See Appendix F of [15] for how to deal with one constraint with an unconstrained oracle.
Algorithm 2 Coordinate ascent algorithm to solve (OP)
input Accuracy parameter θ > 0.  initialize λ ← 0.
1: loop
2:   Rescale: λ ← s · λ, where s = argmax_{s∈[0,1]} D(s · λ).
3:   Find ĥ = argmax_{h∈H}  E_X[ I_h^m(X)/P_λ(X) ] − b_m(h).
4:   if E_X[ I_ĥ^m(X)/P_λ(X) ] − b_m(ĥ) ≤ θ then
5:     return λ
6:   else
7:     Update λ_ĥ ← λ_ĥ + 2 · ( E_X[ I_ĥ^m(X)/P_λ(X) ] − b_m(ĥ) ) / E_X[ I_ĥ^m(X)/q_λ(X)³ ].
8:   end if
9: end loop
Lagrange multiplier for the constraint (1) for classifier h. Then for any λ, we can minimize the
Lagrangian over each primal variable P(X), yielding the solution

    P_λ(x) = 1(x ∈ D_m) · q_λ(x)/(1 + q_λ(x)),   where   q_λ(x) = √( μ² + Σ_{h∈H} λ_h I_h^m(x) ),     (8)

and I_h^m(x) = 1(h(x) ≠ h_m(x) ∧ x ∈ D_m). Clearly, μ/(1 + μ) ≤ P_λ(x) ≤ 1 for all x ∈ D_m, so
all the bound constraints (2) in (OP) are satisfied if we choose μ = 2P_min,m. Plugging the solution
P_λ into the Lagrangian, we obtain the dual problem of maximizing the dual objective

    D(λ) = E_X[ 1(X ∈ D_m)(1 + q_λ(X))² ] − Σ_{h∈H} λ_h b_m(h) + C_0                                   (9)
over λ ≥ 0. The constant C_0 is equal to 1 − Pr(D_m), where Pr(D_m) = Pr(X ∈ D_m). An algorithm
to approximately solve this problem is presented in Algorithm 2. The algorithm takes a parameter
θ > 0 specifying the degree to which all of the constraints (1) are to be approximated. Since D is
concave, the rescaling step can be solved using a straightforward numerical line search. The main
implementation challenge is in finding the most violated constraint (Step 3). Fortunately, this step
can be reduced to a single call to an ERM oracle. To see this, note that the constraint violation on
classifier h can be written as

    E_X[ I_h^m(X)/P(X) ] − b_m(h) = E_X[ 1(X ∈ D_m) · ( 1/P(X) − 2α² ) · 1(h(X) ≠ h_m(X)) ]
        − 2β² γΔ_{m−1} τ_{m−1} ( err(h, Z̃_{m−1}) − err(h_m, Z̃_{m−1}) ) − ξ τ_{m−1} Δ²_{m−1}.

The second term of the right-hand expression is simply the scaled risk (classification error) of h with
respect to the actual labels. The first term is the risk of h in predicting samples which have been
labeled according to h_m with importance weights of 1/P(x) − 2α² if x ∈ D_m and 0 otherwise; note
that these weights may be positive or negative. The last two terms do not depend on h. Thus, given
access to P_X (or samples approximating it, discussed shortly), the most violated constraint can be
found by solving an ERM problem defined on the labeled samples in Z̃_{m−1} and samples drawn from
P_X labeled by h_m, with appropriate importance weights detailed in Appendix F.1 of [14]. When all
primal constraints are approximately satisfied, the algorithm stops. We have the following guarantee
on the convergence of the algorithm.

Theorem 3. When run on the m-th epoch, Algorithm 2 halts in at most Pr(D_m)/(8P³_min,m θ²)
iterations and outputs a solution λ̂ ≥ 0 such that P_λ̂ satisfies the simple bound constraints in (2)
exactly, the variance constraints in (1) up to an additive factor of θ, and

    E_X[ 1/(1 − P_λ̂(X)) ] ≤ E_X[ 1/(1 − P*(X)) ] + 4P_min,m Pr(D_m),                                  (10)

where P* is the solution to (OP). Furthermore, ||λ̂||₁ ≤ Pr(D_m)/θ.

If θ is set to ξ² τ_{m−1} Δ²_{m−1}, an amount of constraint violation tolerable in our analysis, the number
of iterations (hence the number of ERM oracle calls) in Theorem 3 is at most O(τ²_{m−1}). The proof
is in Appendix F.2 of [14].
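A minimal sketch of this solver (ours; `most_violated` stands in for the ERM reduction of Step 3, and the rescaling line search of Step 2 is omitted for brevity):

```python
import math

def q_lam(x, lam, mu, disagrees):
    # q(x) = sqrt(mu^2 + sum_h lam[h] * I_h^m(x)), cf. equation (8)
    return math.sqrt(mu ** 2 + sum(l for h, l in lam.items() if disagrees(h, x)))

def coordinate_ascent(samples, b_m, theta, mu, in_dm, disagrees, most_violated):
    """Dual coordinate ascent for (OP), following Algorithm 2.
    samples approximate P_X; disagrees(h, x) is 1(h(x) != h_m(x))."""
    lam = {}
    dm = [x for x in samples if in_dm(x)]
    n = float(len(samples))
    while True:
        h = most_violated(lam)                 # one ERM oracle call (Step 3)
        # E[I_h / P_lambda] - b_m(h), using 1/P = (1 + q)/q on D_m.
        grad = sum((1 + q_lam(x, lam, mu, disagrees)) / q_lam(x, lam, mu, disagrees)
                   for x in dm if disagrees(h, x)) / n - b_m(h)
        if grad <= theta:                      # all constraints hold up to theta
            return lam
        curv = sum(1.0 / q_lam(x, lam, mu, disagrees) ** 3
                   for x in dm if disagrees(h, x)) / n
        lam[h] = lam.get(h, 0.0) + 2.0 * grad / curv   # Step 7 update
```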
Table 1: Summary of performance metrics

             OAC    IWAL0  IWAL1  ORA-OAC  ORA-IWAL0  ORA-IWAL1  PASSIVE
AUC-GAIN*    0.151  0.150  0.142  0.125    0.115      0.121      0.095
AUC-GAIN     0.065  0.085  0.081  0.078    0.073      0.075      0.072
Solving (OP) with expectation over samples: So far we considered solving (OP) defined on the
unlabeled data distribution PX , which is unavailable in practice. A natural substitute for PX is an
i.i.d. sample drawn from it. In Appendix F.3 of [14] we show that solving a properly-defined sample
variant of (OP) leads to a solution to the original (OP) with similar guarantees as in Theorem 3.
5 Experiments with Agnostic Active Learning
While AC is efficient in the number of ERM oracle calls, it needs to store all past examples, resulting
in large space complexity. As Theorem 3 suggests, the query probability function (8) may need as
many as O(τ_i²) classifiers, further increasing storage demand. Aiming at scalable implementation,
we consider an online approximation of AC, given in Section 6.1 of [14]. The main differences
from AC are: (1) instead of a batch ERM oracle, it invokes an online oracle; and (2) instead of
repeatedly solving (OP) from scratch, it maintains a fixed-size set of classifiers (and hence non-zero
dual variables), called the cover, for representing the query probability, and updates the cover with
every new example in a manner similar to the coordinate ascent algorithm for solving (OP). We
conduct an empirical comparison of the following efficient agnostic active learning algorithms:

OAC: Online approximation of ACTIVE COVER (Algorithm 3 in Section 6.1 of [14]).
IWAL0 and IWAL1: The algorithm of [5] and a variant that uses a tighter threshold.
ORA-OAC, ORA-IWAL0, and ORA-IWAL1: Oracular-CAL [13] versions of OAC, IWAL0 and IWAL1.
PASSIVE: Passive learning on a labeled sub-sample drawn uniformly at random.
Details about these algorithms are in Section 6.2 of [14]. The high-level differences among these
algorithms are best explained in the context of the disagreement region: OAC does importance-weighted querying of labels with an optimized query probability in the disagreement region, while
using predicted labels outside; IWAL0 and IWAL1 maintain a non-zero minimum query probability
everywhere; ORA - OAC, ORA - IWAL0 and ORA - IWAL1 query labels in their respective disagreement
regions with probability 1, using predicted labels otherwise.
We implemented these algorithms in Vowpal Wabbit (http://hunch.net/?vw/), a fast learning system based on online convex optimization, using logistic regression as the ERM oracle. We
performed experiments on 22 binary classification datasets with varying sizes (10³ to 10⁶) and diverse feature characteristics. Details about the datasets are in Appendix G.1 of [14]. Our goal is to
evaluate the test error improvement per label query achieved by different algorithms. To simulate the
streaming setting, we randomly permuted the datasets, ran the active learning algorithms through the
first 80% of data, and evaluated the learned classifiers on the remaining 20%. We repeated this process 9 times to reduce variance due to random permutation. For each active learning algorithm, we
obtain the test error rates of classifiers trained at doubling numbers of label queries starting from 10
to 10240. Formally, let error_{a,p}(d, j, q) denote the test error of the classifier returned by algorithm
a using hyper-parameter setting p on the j-th permutation of dataset d immediately after hitting the
q-th label budget, 10·2^{q−1}, 1 ≤ q ≤ 11. Let query_{a,p}(d, j, q) be the actual number of label queries
made, which can be smaller than 10·2^{q−1} when algorithm a reaches the end of the training data
before hitting that label budget. To evaluate an algorithm, we consider the area under its curve of
test error against log number of label queries:

    AUC_{a,p}(d, j) = (1/2) Σ_{q=1}^{10} [ error_{a,p}(d, j, q+1) + error_{a,p}(d, j, q) ] · log₂( query_{a,p}(d, j, q+1) / query_{a,p}(d, j, q) ).
A good active learning algorithm has a small value of AUC, which indicates that the test error
decreases quickly as the number of label queries increases. We use a logarithmic scale for the
number of label queries to focus on the performance with few label queries where active learning is
the most relevant. More details about hyper-parameters are in Appendix G.2 of [14].
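As a small illustration (ours), the statistic is a trapezoidal sum over the log-spaced budget grid:

```python
import math

def auc(errors, queries):
    """Area under test error vs. log2(#label queries), given the eleven
    per-budget values errors[q] and queries[q] for q = 0, ..., 10."""
    total = 0.0
    for q in range(10):
        width = math.log2(queries[q + 1] / queries[q])
        total += 0.5 * (errors[q + 1] + errors[q]) * width   # trapezoid
    return total
```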
[Figure 1: Average relative improvement in test error vs. number of label queries (10¹ to 10⁴, log scale) for OAC, IWAL0, IWAL1, ORA-OAC, ORA-IWAL0, ORA-IWAL1, and PASSIVE, relative to the baseline. (a) Best hyper-parameter per dataset; (b) best fixed hyper-parameter.]
We measure the performance of each algorithm a by the following two aggregated metrics:

    AUC-GAIN*(a) := mean_d max_p median_{1≤j≤9} [ (AUC_base(d, j) − AUC_{a,p}(d, j)) / AUC_base(d, j) ],
    AUC-GAIN(a)  := max_p mean_d median_{1≤j≤9} [ (AUC_base(d, j) − AUC_{a,p}(d, j)) / AUC_base(d, j) ],
where AUCbase denotes the AUC of PASSIVE using a default hyper-parameter setting, i.e., a learning rate of 0.4 (see Appendix G.2 of [14]). The first metric shows the maximal gain each algorithm
achieves with the best hyper-parameter setting for each dataset, while the second shows the gain by
using the single hyper-parameter setting that performs the best on average across datasets.
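A sketch of the two aggregations (ours; the dictionary layout is hypothetical):

```python
from statistics import mean, median

def auc_gain(auc, auc_base, datasets, params, fixed_param=True):
    """Relative AUC improvement over the PASSIVE baseline.
    auc[(d, p, j)] and auc_base[(d, j)] hold per-permutation AUCs,
    with j indexing the 9 random permutations. fixed_param=True gives
    AUC-GAIN (one p for all datasets); False gives AUC-GAIN*."""
    def gain(d, p):
        return median((auc_base[(d, j)] - auc[(d, p, j)]) / auc_base[(d, j)]
                      for j in range(9))
    if fixed_param:
        return max(mean(gain(d, p) for d in datasets) for p in params)
    return mean(max(gain(d, p) for p in params) for d in datasets)
```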
Results and Discussions. Table 1 gives a summary of the performances of different algorithms.
When using hyper-parameters optimized on a per-dataset basis (top row in Table 1), OAC achieves
the largest improvement over the PASSIVE baseline, with IWAL0 achieving almost the same improvement and IWAL1 improving slightly less. Oracular-CAL variants perform worse, but still do better
than PASSIVE with the best learning rate for each dataset, which leads to an average of 9.5% improvement in AUC over the default learning rate. When using the best fixed hyper-parameter setting
across all datasets (bottom row in Table 1), all active learning algorithms achieve less improvement
compared with PASSIVE (7% improvement with the best fixed learning rate). In particular, OAC gets
only 6.5% improvement. This suggests that careful tuning of hyper-parameters is critical for OAC
and an important direction for future work.
Figure 1(a) describes the behaviors of different algorithms in more detail. For each algorithm a we
identify the best fixed hyper-parameter setting
    p* := argmax_p mean_d median_{1≤j≤9} [ (AUC_base(d, j) − AUC_{a,p}(d, j)) / AUC_base(d, j) ],      (11)
and plot the relative test error improvement by a using p? averaged across all datasets at the 11 label
budgets:
    { ( 10·2^{q−1},  mean_d median_{1≤j≤9} [ (error_base(d, j, q) − error_{a,p*}(d, j, q)) / error_base(d, j, q) ] ) }_{q=1}^{11}.    (12)
All algorithms, including PASSIVE, perform similarly during the first few hundred label queries.
IWAL0 performs the best at label budgets larger than 80, while IWAL1 does almost as well. ORA-OAC is the next best, followed by ORA-IWAL1 and ORA-IWAL0. OAC performs worse than PASSIVE
except at label budgets between 320 and 1280. In Figure 1(b), we plot results obtained by each
algorithm a using the best hyper-parameter setting for each dataset d:
AUCbase (d, j) ? AUCa,p (d, j)
?
pd := arg max median
.
(13)
p
1?j?9
AUCbase (d, j)
As expected, all algorithms perform better, but OAC benefits the most from using the best hyperparameter setting per dataset. Appendix G.3 of [14] gives more detailed results, including test error
rates obtained by all algorithms at different label query budgets for individual datasets.
In sum, when using the best fixed hyper-parameter setting, IWAL0 outperforms other algorithms.
When using the best hyper-parameter setting tuned for each dataset, OAC and IWAL0 perform equally
well and better than other algorithms.
References
[1] Maria-Florina Balcan and Phil Long. Active and passive learning of linear separators under
log-concave distributions. In Conference on Learning Theory, pages 288–316, 2013.
[2] Maria-Florina Balcan, Alina Beygelzimer, and John Langford. Agnostic active learning. In
Proceedings of the 23rd international conference on Machine learning, pages 65–72. ACM, 2006.
[3] Maria-Florina Balcan, Andrei Broder, and Tong Zhang. Margin based active learning. In
Proceedings of the 20th annual conference on Learning theory, pages 35–50. Springer-Verlag, 2007.
[4] A. Beygelzimer, S. Dasgupta, and J. Langford. Importance weighted active learning. In ICML, 2009.
[5] A. Beygelzimer, D. Hsu, J. Langford, and T. Zhang. Agnostic active learning without constraints. In NIPS, 2010.
[6] R. M. Castro and R. D. Nowak. Minimax bounds for active learning. Information Theory, IEEE
Transactions on, 54(5):2339–2353, 2008.
[7] D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine
Learning, 15:201–221, 1994.
[8] S. Dasgupta. Coarse sample complexity bounds for active learning. In Advances in Neural
Information Processing Systems 18, 2005.
[9] S. Dasgupta, D. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. In NIPS, 2007.
[10] S. Hanneke. Theoretical Foundations of Active Learning. PhD thesis, Carnegie Mellon University, 2009.
[11] Steve Hanneke. Theory of disagreement-based active learning. Foundations and Trends in
Machine Learning, 7(2-3):131–309, 2014.
[12] D. G. Horvitz and D. J. Thompson. A generalization of sampling without replacement from a
finite universe. J. Amer. Statist. Assoc., 47:663–685, 1952. ISSN 0162-1459.
[13] Daniel J. Hsu. Algorithms for Active Learning. PhD thesis, University of California at San Diego, 2010.
[14] Tzu-Kuo Huang, Alekh Agarwal, Daniel J. Hsu, John Langford, and Robert E. Schapire. Efficient and parsimonious agnostic active learning. arXiv preprint arXiv:1506.08669, 2015.
[15] Nikos Karampatziakis and John Langford. Online importance weight aware updates. In UAI
2011, Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence,
Barcelona, Spain, July 14-17, 2011, pages 392–399, 2011.
[16] Vladimir Koltchinskii. Rademacher complexities and bounding the excess risk in active learning. J. Mach. Learn. Res., 11:2457–2485, December 2010.
[17] A. B. Tsybakov. Optimal aggregation of classifiers in statistical learning. Ann. Statist., 32:135–166, 2004.
[18] Chicheng Zhang and Kamalika Chaudhuri. Beyond disagreement-based agnostic active learning. In Advances in Neural Information Processing Systems, pages 442–450, 2014.
5,457 | 594 | Statistical and Dynamical Interpretation of ISIH
Data from Periodically Stimulated Sensory Neurons
John K. Douglass and Frank Moss
Department of Biology and Department of Physics
University of Missouri at St. Louis
St. Louis, MO 63121
Andre Longtin
Department of Physics
University of Ottawa
Ottawa, Ontario, Canada KIN 6N5
Abstract
We interpret the time interval data obtained from periodically stimulated
sensory neurons in terms of two simple dynamical systems driven by noise
with an embedded weak periodic function called the signal: 1) a bistable
system defined by two potential wells separated by a barrier, and 2) a FitzHugh-Nagumo system. The implementation is by analog simulation: electronic circuits which mimic the dynamics. For a given signal frequency, our
simulators have only two adjustable parameters, the signal and noise intensities. We show that experimental data obtained from the periodically stimulated mechanoreceptor in the crayfish tail fan can be accurately approximated
by these simulations. Finally, we discuss stochastic resonance in the two
models.
1 INTRODUCTION
It is well known that sensory information is transmitted to the brain using a
code which must be based on the time intervals between neural firing events or
the mean firing rate. However, in any collection of such data, and even when
the sensory system is stimulated with a periodic signal, statistical analyses have
shown that a significant fraction of the intervals are random, having no coherent relationship to the stimulus. We call this component the "noise". It is clear
that coherent and incoherent subsets of such data must be separated. Moreover,
the noise intensity depends upon the stimulus intensity in a nonlinear manner
through, for example, efferent connections in the visual system (Kaplan and
Barlow, 1980) and is often much larger (sometimes several orders of magnitude
larger!) than can be accounted for by equilibrium statistical mechanics (Denk
and Webb, 1992). Evidence that the noise in networks of neurons can dynamically alter the properties of the membrane potential and time constants has also
been accumulated (Kaplan and Barlow, 1976; Treutlein and Schulten, 1985;
Bernander, Koch and Douglas, 1992). Recently, based on comparisons of interspike interval histograms (ISIH's) obtained from passive analog simulations of
simple bistable systems, with those from auditory neurons, it was suggested that
the noise intensity may play a critical role in the ability of the living system to
sense the stimulus intensity (Longtin, Bulsara and Moss, 1991). In this work, it
is shown that in the simulations, ISIH's are reproduced provided that noise is
added to a weak signal, i.e. one that cannot cause firing by itself. All of these
processes are essentially nonlinear, and they indicate the ultimate futility of
simply measuring the "background spontaneous rate" and later subtracting it
from spike rates obtained with a stimulus applied. Indeed, they raise serious
doubts regarding the applicability of any linear transform theory to neural problems.
9
In this paper, we investigate the possibility that the noise can enhance the ability
of a sensory neuron to transmit information about periodic stimuli. The present
study relies on two objects, the ISIH and the power spectrum, both familiar
measurements in electrophysiology. These are obtained from analog simulations
of two simple dynamical systems, 1) the overdamped motion of a particle in a
bistable, quartic potential; and 2) the FitzHugh-Nagumo model. The results of
these simulations are compared with those from experiments on the mechanoreceptor in the tailfan of the crayfish Procambarus clarkii.
2 THE ANALOG SIMULATOR
Previously, we made detailed comparisons of ISIH's obtained from a variety of
sensory modalities (Longtin, Bulsara and Moss, 1991) with those measured on
the bistable system,

    ẋ = x − x³ + ξ(t) + ε sin(ωt),                                     (1)

where ε is the stimulus intensity and ξ is a quasi white, Gaussian noise, defined
by ⟨ξ(t)ξ(s)⟩ = (D/τ) exp(−|t − s|/τ), with D the noise intensity and τ a
(dimensionless) noise correlation time. Quasi white means that the actual noise
correlation time is at least one order of magnitude smaller than the integrator
time constant (the "clock" by which the simulator measures time). It was shown
that the neurophysiological data could be satisfactorily matched by data from
the simulation by adjusting either the noise intensity or the stimulus intensity
provided that the other quantity had a value not very different from the height
of the potential barrier. Moreover, bistable dynamical systems of the type represented by Eq. (1) (and many others as well) have been frequently used to
demonstrate stochastic resonance (SR), an essentially nonlinear process
whereby the signal-to-noise ratio (SNR) of a weak signal can be enhanced by
the noise. Below we show that SR can be demonstrated in a typical excitable
system of the type often used to model sensory neurons. This raises a tantalizing question: can SR be discovered as a naturally occurring phenomenon in
living systems? More information can be found in a recent review and workshop proceedings (Moss, 1993; Chialvo and Apkarian, 1993; Longtin, 1993).
There is, however, a significant difference between the dynamics represented by
Eq. (1) and the more usual neuron models which are excitable systems. A
simple example of the latter is the FitzHugh-Nagumo (FN) model, the ISIH's of
which have recently been studied (Longtin, 1993). The FN model is an excitable system controlled by a bifurcation parameter. When the voltage variable is
perturbed past a certain boundary, a large excursion, identified with a neural
firing event, occurs. Thus a detenninistic refractory period is built into the
model as the time required for the execution of a single firing event. By contrast, in the bistable system, a firing event is represented by the transition from
well A to well B. Before another firing can occur, the system must be reset by
a reverse transition from B to A, which is essentially stochastic. The bistable
system thus exhibits a statistical distribution of refractory periods. The FN
system is not bistable, but, depending on the value of the bifurcation parameter,
it can be either periodically firing (oscillating) or residing on a fixed point. The
FN model used here is defined by (Longtin, 1993),
    v̇ = v(v − 0.5)(1 − v) − w + ξ(t),                                   (2)
    ẇ = v − w − [b + ε sin(ωt)],                                         (3)

where v is the fast variable (action potential) to which the noise ξ is added, w is
the recovery variable to which the signal is added, and b is the bifurcation parameter. The range of behaviors is given by: b > 0.65, fixed point; and b ≤ 0.65,
oscillating. We operate far into the fixed point regime at b = 0.9, so that
bursts of sustained oscillations do not occur. Thus single spikes at more-or-less random times but with some coherence with the signal are generated. A
schematic diagram of the analog simulator is shown in Fig. 1. The simulator is
constructed of standard electronic chips: voltage multipliers (X) and operational
amplifier summers (+).

[Fig. 1: Analog simulator of the FitzHugh-Nagumo model: a noise generator and a signal generator (ε sin ωt) drive a fast "action potential" integrator for v and a slow "recovery" integrator for w; the output v(t) is digitized by a PC running Asyst software to produce P(T) and P(ω). The characteristic response times are determined by the integrator time constants as shown; the noise correlation time was τ_n = 10⁻⁵ s.]

The fast variable, v(t), was digitized and analyzed for
the ISIH and the power spectrum by the PC shown. Note that the noise correlation time, 10- 5 s, is equal to the fast variable integrator time constant and is
much larger than the slow variable time constant. This noise is, therefore, colored. Analog simulator designs, nonlinear experiments and colored noise have
recently been reviewed (Moss and McClintock, 1989). Below we compare data
from this simulator with electrophysiological data from the crayfish.
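A corresponding sketch for the FN model (ours; the separate fast and slow integrator time constants of the circuit are not reproduced here, and the spike-detection thresholds are illustrative):

```python
import numpy as np

def simulate_fn(D, eps, omega, b=0.9, dt=1e-4, n_steps=5_000_000, seed=0):
    """Euler-Maruyama integration of the FN model (2)-(3):
        v' = v(v - 0.5)(1 - v) - w + xi(t)
        w' = v - w - (b + eps*sin(omega*t))
    Spike times are detected as upward threshold crossings of v."""
    rng = np.random.default_rng(seed)
    v, w = 0.0, 0.0
    spikes, above = [], False
    for k in range(n_steps):
        t = k * dt
        dv = (v * (v - 0.5) * (1.0 - v) - w) * dt \
             + np.sqrt(2.0 * D * dt) * rng.standard_normal()
        dw = (v - w - (b + eps * np.sin(omega * t))) * dt
        v, w = v + dv, w + dw
        if not above and v > 0.5:          # upward crossing = one firing event
            spikes.append(t)
            above = True
        elif above and v < 0.25:           # hysteresis re-arms the detector
            above = False
    return np.asarray(spikes)
```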
3 EXPERIMENTS WITH CRAYFISH MECHANORECEPTOR CELLS
Single hair mechanoreceptor cells of the crayfish tailfan represent a simple and
robust system lacking known efferents. A simple system is necessary, since we
are searching for a specific dynamical behavior which might be masked in a
more complex physiology. In this system, small motions of the hairs (as small
as a few tens of nanometers) are transduced into spike trains which travel up
the sensory neuron to the caudal ganglion. These neurons show a range of
spontaneous firing rates (internal noise). In this experiment, a neuron with a
relatively high internal noise was chosen. Other experiments and more details
are described elsewhere (Bulsara, Douglass and Moss, 1993). The preparation
consisted of a piece of the tailfan from which the sensory nerve bundle and
ganglion were exposed surgically. This appendage was sinusoidally moved
through the saline solution by an electromagnetic transducer. Extracellular
recordings from an identified hair cell were made using standard methods. The
preparations typically persisted in good physiological condition for 8 to 12
hours. An example ISIH is shown in the upper panel of Fig. 2. The stimulus
period was T₀ = 14 ms. Note the peak sequence at the integer multiples of T₀
(Longtin et al., 1991). This ISIH was measured in about 15 minutes for which
about 8K spikes were obtained. An ISIH obtained from the FN simulator in the
same time and including about the same number of spikes is shown in the lower
panel. The similarity demonstrates that neurophysiological ISIH's can easily be
mimicked with FN models as well as with bistable models. Our model is also
able to reproduce non renewal effects (data not shown) which occur at high frequency and! or low stimulus or noise intensity, and for which the first peak in
the ISIH is not the one of maximum amplitude.
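Computing the ISIH from simulated firing times is a one-line histogram (our sketch; bin width and range are illustrative):

```python
import numpy as np

def isih(spike_times, bin_width=0.5e-3, t_max=160e-3):
    """Interspike-interval histogram: bin the intervals between
    successive firing events, as in Fig. 2 (times in seconds)."""
    intervals = np.diff(np.asarray(spike_times))
    edges = np.arange(0.0, t_max + bin_width, bin_width)
    counts, _ = np.histogram(intervals, bins=edges)
    density = counts / (counts.sum() * bin_width)   # probability density
    return edges[:-1], density
```

For a periodic stimulus, the density shows peaks at integer multiples of the stimulus period: firings are phase-locked to the signal but skip a random number of cycles.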
We turn now to the question of whether SR, based on the power spectrum, can
be demonstrated in such excitable systems. The power spectrum typically
shows a sharp peak due to the signal at frequency ω₀, riding on a broad noise
background. An example, measured on the FN simulator, is shown in the left
panel in Fig. 3. This spectrum was obtained for a constant signal intensity set
just above threshold and for the stated external noise intensity. The SNR, in
decibels, is defined as the ratio of the strength S(ω) of the signal feature to the
noise amplitude, N(ω), measured at the base of the signal feature: SNR =
10 log₁₀(S/N). The panel on the right of Fig. 3 shows the SNR's obtained from a
large number of such power spectra, each measured for a different noise intensity. Clearly there is an optimal noise intensity which maximizes the SNR.
This is, to our knowledge, the first demonstration of SR based on the power
spectra in an excitable system. Just as for the bistable systems (Moss, 1993),
when the external noise intensity is too low, the signal is not "sampled" frequently
enough and the SNR is low. By contrast, when the noise intensity is too
high, the signal becomes randomized.

[Fig. 2: ISIH's obtained from the crayfish stimulated at 68.6 Hz (upper), and from the FN simulator driven at the same frequency with b = 0.9, V_noise = 0.022 V_rms and V_sig = 0.53 V_rms (lower); both panels plot probability density against time (ms), with peaks at integer multiples of the stimulus period.]

The occurrence of a maximum in the
SNR is thus motivated. SR has also been studied using well residence time probability densities, which are analogous to the physiological ISIH's (Longtin et
al., 1991), and was further studied in the FN system (Longtin, 1993). In these
cases, it is observed that the individual peak heights pass through maxima as the
noise intensity is varied, thus demonstrating SR, similar to that shown in Fig. 3,
based on the ISIH (or residence time probability density).
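A sketch of the SNR measurement and the noise sweep that traces out the SR curve (ours; the peak and baseline windows are illustrative and assume a long trace):

```python
import numpy as np

def snr_db(trace, dt, f0):
    """SNR (in dB) of the spectral line at f0: 10*log10(S/N), with S the
    power at the signal peak and N the noise floor at the base of the peak."""
    psd = np.abs(np.fft.rfft(trace)) ** 2
    freqs = np.fft.rfftfreq(len(trace), dt)
    peak = int(np.argmin(np.abs(freqs - f0)))
    s = psd[peak]
    base = np.r_[psd[max(peak - 10, 0):peak - 2], psd[peak + 3:peak + 11]].mean()
    return 10.0 * np.log10(s / base)

# Sweeping the noise intensity traces out the stochastic-resonance curve:
# the SNR rises, peaks at an optimal noise level, then falls again as the
# signal becomes randomized. Schematically:
#   for D in np.logspace(-4, -1, 15):
#       v_trace = ...  # record v(t) from the FN simulation at noise level D
#       print(D, snr_db(v_trace, dt, omega / (2 * np.pi)))
```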
[Fig. 3: A power spectrum from the FN simulator stimulated by a 20 Hz signal, for b = 0.9, ε = 0.25 V and V_noise = 0.021 V_rms (left; power spectral density versus frequency in kHz). The SNR's versus noise voltage measured in the FN system, showing SR at V_noise ≈ 10 mV_rms (right). Similar SR results based on the ISIH have been obtained by Longtin (1993) and by Chialvo and Apkarian (1993).]

4 DISCUSSION

We have shown that physiological measurements such as the familiar ISIH patterns obtained from periodically stimulated sensory neurons can be easily mimicked by analog simulations of simple noisy systems, in particular bistable systems for which the refractory period is strictly stochastic and excitable systems
for which the refractory period is deterministic. Further, we have shown that
SR, based on SNR's obtained from the power spectrum, can be demonstrated for
the FitzHugh-Nagumo model. It is worth emphasizing that these results are
possible only because the systems are inherently nonlinear. The signal alone is
too weak to cause firing events in either the bistable or the excitable models.
Thus these results suggest that biological systems may be able to detect weak
stimuli in the presence of background noise which they could not otherwise
detect. Careful behavioral studies will be necessary to decide this question,
however, a recent and interesting psychophysics experiment using human interpretations of ambiguous figures, presented in sequences with both coherent
and random components points directly to this possibility (Chialvo and
Apkarian, 1993).
Acknowledgements
This work was supported by the Office of Naval Research grant N00014-92-J-1235 and by NSERC (Canada).
References
Bernander, Ö., Koch, C. and Douglas, R. (1992) Network activity determines
spatio-temporal integration in single cells, in Advances in Neural Information
Processing Systems 3; R. Lippman, J. Moody and D. Touretzky, editors;
Morgan Kaufmann, San Mateo, CA. 43-50
Bulsara, A., Douglass, J. and Moss, F. (1993) Nonlinear Resonance: Noise-assisted information processing in physical and neurophysiological systems. Nav. Res. Rev., in press.
Chialvo, D. and Apkarian, V. (1993) Modulated noisy biological dynamics: three examples; in Proceedings of the NATO ARW on Stochastic Resonance in Physics and Biology, edited by F. Moss, A. Bulsara, and M. F. Shlesinger, J. Stat. Phys. 70, forthcoming.
Denk, W. and Webb, W. (1992) Forward and reverse transduction at the limit of sensitivity studied by correlating electrical and mechanical fluctuations in frog saccular hair cells. Hear. Res. 60, 89-102.
Kaplan, E. and Barlow, R. (1976) Energy, quanta and Limulus vision. Vision Res. 16, 745-751.
Kaplan, E. and Barlow, R. (1980) Circadian clock in Limulus brain increases response and decreases noise of retinal photoreceptors. Nature 286, 393.
Longtin, A. (1993) Stochastic resonance in neuron models, in Proceedings of the NATO ARW on Stochastic Resonance in Physics and Biology, edited by F. Moss, A. Bulsara, and M. F. Shlesinger, J. Stat. Phys. 70, forthcoming.
Longtin, A., Bulsara, A. and Moss, F. (1991) Time interval sequences in bistable systems and the noise-induced transmission of information by sensory neurons. Phys. Rev. Lett. 67, 656-659.
Moss, F. (1993) Stochastic resonance: from the ice ages to the monkey's ear; in Some Problems in Statistical Physics, edited by G. H. Weiss, SIAM, Philadelphia, in press.
Moss, F. and McClintock, P.V.E., editors (1989) Noise in Nonlinear Dynamical Systems, Vols. 1-3, Cambridge University Press.
Treutlein, H. and Schulten, K. (1985) Noise induced limit cycles of the Bonhoeffer-Van der Pol model of neural pulses. Ber. Bunsenges. Phys. Chem. 89, 710.
| 594 |@word pulse:1 simulation:8 saccular:1 past:1 must:3 john:1 fn:11 periodically:8 interspike:1 alone:1 sys:1 colored:2 tems:1 height:2 burst:1 constructed:1 transducer:1 sustained:1 behavioral:1 manner:1 indeed:1 behavior:2 frequently:2 mechanic:1 simulator:12 brain:2 integrator:3 actual:1 becomes:1 provided:2 moreover:2 matched:1 circuit:1 transduced:1 panel:4 maximizes:1 nav:1 monkey:1 voi:1 temporal:1 futility:1 demonstrates:1 uo:1 grant:1 louis:2 before:1 ice:1 limit:2 firing:10 fluctuation:1 might:1 frog:1 studied:4 mateo:1 dynamically:1 range:2 satisfactorily:1 lippman:1 physiology:1 suggest:1 cannot:1 dimensionless:1 deterministic:1 demonstrated:3 recovery:2 searching:1 analogous:1 transmit:1 spontaneous:2 play:1 enhanced:1 approximated:1 observed:1 role:1 electrical:1 cycle:1 decrease:1 edited:3 pol:1 denk:1 dynamic:3 raise:2 surgically:1 exposed:1 apkarian:4 upon:1 easily:2 chip:1 represented:3 train:1 separated:2 fast:4 larger:3 otherwise:1 ability:2 transform:1 itself:1 noisy:2 reproduced:1 sequence:3 chialvo:4 subtracting:1 reset:1 sll:1 gen:2 ontario:1 moved:1 transmission:1 circadian:1 oscillating:2 object:1 depending:1 oo:2 stat:2 measured:6 eq:2 indicate:1 bulsara:7 stochastic:8 human:1 bistable:13 electromagnetic:1 biological:2 strictly:1 koch:2 residing:1 exp:1 equilibrium:1 limulus:2 mo:1 travel:1 wet:1 caudal:1 clearly:1 gaussian:1 voltage:3 office:1 naval:1 contrast:2 sense:1 detect:2 el:1 accumulated:1 typically:2 mechanoreceptor:4 quasi:2 reproduce:1 resonance:7 renewal:1 bifurcation:3 psychophysics:1 integration:1 equal:1 having:1 msl:1 biology:3 broad:1 alter:1 mimic:1 others:1 stimulus:10 serious:1 few:1 c1j:2 missouri:1 individual:1 familiar:2 saline:1 amplifier:1 investigate:1 possibility:2 analyzed:1 pc:2 bundle:1 detenninistic:1 necessary:2 re:4 sinusoidally:1 gn:1 measuring:1 applicability:1 ottawa:2 subset:1 snr:9 masked:1 too:3 perturbed:1 periodic:3 nns:1 st:2 density:2 peak:4 randomized:1 sensitivity:1 siam:1 shlesinger:2 physic:5 enhance:1 moody:1 ear:1 external:2 doubt:1 potential:6 retinal:1 depends:1 piece:1 later:1 kaufmann:1 characteristic:1 weak:5 accurately:1 worth:1 phys:4 touretzky:1 andre:1 energy:1 frequency:5 naturally:1 di:1 efferent:2 sampled:1 auditory:1 adjusting:1 noooi4:1 knowledge:1 electrophysiological:1 amplitude:2 nerve:1 response:2 wei:1 just:2 correlation:4 clock:2 nonlinear:7 treutlein:2 vols:1 riding:1 effect:1 consisted:1 multiplier:1 barlow:4 white:2 sin:1 ambiguous:1 whereby:1 m:1 demonstrate:1 tn:2 motion:2 passive:1 recently:3 physical:1 refractory:4 analog:8 interpretation:4 tail:1 interpret:1 ims:1 significant:2 measurement:2 cambridge:1 ai:2 particle:1 had:1 similarity:1 gt:2 base:1 recent:2 quartic:1 driven:2 introducnon:1 reverse:2 certain:1 xe:1 der:1 transmitted:1 morgan:1 period:5 living:2 signal:18 multiple:1 nagumo:5 controlled:1 schematic:1 hair:4 n5:1 vision:2 essentially:3 longtin:16 histogram:1 represent:1 sometimes:1 cell:5 c1:3 chern:1 background:3 interval:5 diagram:1 modality:1 operate:1 sr:10 recording:1 hz:2 induced:2 call:1 integer:1 presence:1 iii:2 enough:1 variety:1 forthcoming:2 identified:2 regarding:1 whether:1 motivated:1 ultimate:1 wo:1 cause:2 action:2 clear:1 detailed:1 ten:1 threshold:1 demonstrating:1 douglas:9 appendage:1 fraction:1 decide:1 electronic:2 excursion:1 residence:2 oscillation:1 coherence:1 summer:1 fan:1 activity:1 strength:1 occur:3 nanometer:1 arw:1 fitzhugh:5 relatively:1 extracellular:1 department:3 membrane:1 smaller:1 rev:2 previously:1 discus:1 turn:1 occurrence:1 
mimicked:2 added:3 quantity:1 spike:5 question:3 occurs:1 usual:1 exhibit:1 code:1 relationship:1 ratio:2 demonstration:1 webb:2 frank:1 stated:1 kaplan:4 implementation:1 design:1 adjustable:1 upper:2 neuron:16 bemander:2 digitized:1 persisted:1 discovered:1 varied:1 sharp:1 canada:2 intensity:17 required:1 mechanical:1 connection:1 isih:20 coherent:3 hour:1 able:2 suggested:1 dynamical:8 below:2 pattern:1 regime:1 hear:1 built:1 including:1 power:8 event:5 critical:1 incoherent:1 excitable:7 philadelphia:1 moss:17 review:1 acknowledgement:1 embedded:1 lacking:1 interesting:1 versus:1 age:1 editor:2 elsewhere:1 accounted:1 supported:1 ber:1 barrier:2 van:1 boundary:1 lett:1 transition:2 quantum:1 sensory:14 forward:1 collection:1 made:2 san:1 far:1 nato:2 correlating:1 photoreceptors:1 spatio:1 spectrum:9 vet:1 reviewed:1 stimulated:10 nature:1 robust:1 ca:1 inherently:1 operational:1 complex:1 noise:39 fig:8 crayfish:6 en:1 transduction:1 slow:2 schulten:2 kin:1 minute:1 emphasizing:1 specific:1 decibel:1 showing:1 physiological:3 evidence:1 sit:1 workshop:1 magnitude:2 execution:1 occurring:1 bonhoeffer:1 electrophysiology:1 tantalizing:1 simply:1 lt:1 ganglion:2 neurophysiological:3 visual:1 nserc:1 determines:1 relies:1 careful:1 typical:1 determined:1 wt:2 called:1 pas:1 experimental:1 internal:2 oise:1 vct:1 latter:1 modulated:1 overdamped:1 preparation:2 phenomenon:1 |
5,458 | 5,940 | Matrix Completion with Noisy Side Information
Kai-Yang Chiang†   Cho-Jui Hsieh‡   Inderjit S. Dhillon†
†University of Texas at Austin, {kychiang,inderjit}@cs.utexas.edu
‡University of California at Davis, chohsieh@ucdavis.edu
Abstract
We study the matrix completion problem with side information. Side information
has been considered in several matrix completion applications, and has been empirically shown to be useful in many cases. Recently, researchers studied the effect
of side information for matrix completion from a theoretical viewpoint, showing
that sample complexity can be significantly reduced given completely clean features. However, since in reality most given features are noisy or only weakly informative, the development of a model to handle a general feature set, and investigation of how much noisy features can help matrix recovery, remains an important
issue. In this paper, we propose a novel model that balances between features and
observations simultaneously in order to leverage feature information yet be robust
to feature noise. Moreover, we study the effect of general features in theory and
show that by using our model, the sample complexity can be lower than matrix
completion as long as features are sufficiently informative. This result provides
a theoretical insight into the usefulness of general side information. Finally, we
consider synthetic data and two applications ? relationship prediction and semisupervised clustering ? and show that our model outperforms other methods for
matrix completion that use features both in theory and practice.
1
Introduction
Low rank matrix completion is an important topic in machine learning and has been successfully
applied to many practical applications [22, 12, 11]. One promising direction in this area is to exploit
the side information, or features, to help matrix completion tasks. For example, in the famous Netflix problem, besides rating history, profiles of users and/or genres of movies might also be given, and one could possibly leverage such side information for better prediction. Observing that such
additional features are usually available in real applications, how to better incorporate features into
matrix completion becomes an important problem with both theoretical and practical aspects.
Several approaches have been proposed for matrix completion with side information, and most of
them empirically show that features are useful for certain applications [1, 28, 9, 29, 33]. However,
there is surprisingly little analysis on the effect of features for general matrix completion. More recently, Jain and Dhillon [18] and Xu et al. [35] provided non-trivial guarantees on matrix completion
with side information. They showed that if 'perfect' features are given, under certain conditions,
one can substantially reduce the sample complexity by solving a feature-embedded objective. This
result suggests that completely informative features are extremely powerful for matrix completion,
and the algorithm has been successfully applied in many applications [29, 37]. However, this model
is still quite restrictive since if features are not perfect, it fails to guarantee recoverability and could
even suffer poor performance in practice. A more general model with recovery analysis to handle
noisy features is thus desired.
In this paper, we study the matrix completion problem with general side information. We propose a
dirty statistical model which balances between feature and observation information simultaneously
to complete a matrix. As a result, our model can leverage feature information, yet is robust to noisy
features. Furthermore, we provide a theoretical foundation to show the effectiveness of our model.
We formally quantify the quality of features and show that the sample complexity of our model
1
depends on feature quality. Two noticeable results could thus be inferred: first, unlike [18, 35],
given any feature set, our model is guaranteed to achieve recovery with at most $O(n^{3/2})$ samples in a distribution-free manner, where $n$ is the dimensionality of the matrix. Second, if features are reasonably good, we can improve the sample complexity to $o(n^{3/2})$. We emphasize that since $\Omega(n^{3/2})$ is the lower bound on the sample complexity of distribution-free, trace-norm regularized matrix completion [32], our result suggests that even noisy features could asymptotically reduce the number
of observations needed in matrix completion. In addition, we empirically show that our model outperforms other completion methods on synthetic data as well as in two applications: relationship
prediction and semi-supervised clustering. Our contribution can be summarized as follows:
• We propose a dirty statistical model for matrix completion with general side information, where the matrix is learned by balancing features and pure observations simultaneously.
• We quantify the effectiveness of features in the matrix completion problem.
• We show that our model is guaranteed to recover the matrix with any feature set, and moreover, that the sample complexity can be lower than standard matrix completion given informative features.
The paper is organized as follows. Section 2 states some related research. In Section 3, we introduce
our proposed model for matrix completion with general side information. We theoretically analyze
the effectiveness of features in our model in Section 4, and show experimental results in Section 5.
2
Related Work
Matrix completion has been widely applied to many machine learning tasks, such as recommender
systems [22], social network analysis [12] and clustering [11]. Several theoretical foundations have
also been established. One remarkable milestone is the strong guarantee provided by Candès et al. [7, 5], who prove that $O(n\,\mathrm{polylog}\,n)$ observations are sufficient for exact recovery provided entries are uniformly sampled at random. Several works also study recovery under non-uniform distributional assumptions [30, 10], the distribution-free setting [32], and noisy observations [21, 4].
Several works also consider side information in matrix completion [1, 28, 9, 29, 33]. Although most of them found that features are helpful for certain applications [28, 33] and the cold-start setting [29] based on experimental support, their proposed methods focus on non-convex matrix factorization formulations without theoretical guarantees. Compared to them, our model mainly focuses on a convex trace-norm regularized objective and on theoretical insight into the effect of features. On the other hand, Jain and Dhillon [18] (also see [38]) studied an inductive matrix completion objective to incorporate side information, and followup work [35] considers a similar formulation with a trace-norm regularized objective. Both of them show that recovery guarantees could be attained with
cannot recover the underlying matrix and could suffer poor performance in practice. We will have a
detailed discussion on inductive matrix completion model in Section 3.
Our proposed model is also related to the family of dirty statistical models [36], where the model
parameter is expressed as the sum of a number of parameter components, each of which has its
own structure. Dirty statistical models have been proposed mostly for robust matrix completion,
graphical model estimation, and multi-task learning to decompose the sparse component (noise) and
low-rank component (model parameters) [6, 8, 19]. Our proposed algorithm is completely different.
We aim to decompose the model into two parts: the part that can be described by side information
and the part that has to be recovered purely by observations.
3
A Dirty Statistical Model for Matrix Completion with Features
Let $R \in \mathbb{R}^{n_1\times n_2}$ be the underlying rank-$k$ matrix that we aim to recover, where $k \ll \min(n_1, n_2)$ so that $R$ is low-rank. Let $\Omega$ be the set of observed entries sampled from $R$, with cardinality $|\Omega| = m$. Furthermore, let $X \in \mathbb{R}^{n_1\times d_1}$ and $Y \in \mathbb{R}^{n_2\times d_2}$ be the feature set, where each row $x_i$ (or $y_i$) denotes the feature of the $i$-th row (or column) entity of $R$. Both $d_1, d_2 \ll \min(n_1, n_2)$, but they can be either smaller or larger than $k$. Thus, given the set of observations $\Omega$ and the feature set $X$ and $Y$ as side information, the goal is to recover the underlying low rank matrix $R$.

To begin with, consider an ideal case where the given features are 'perfect' in the following sense:

$$\mathrm{col}(R) \subseteq \mathrm{col}(X) \quad \text{and} \quad \mathrm{row}(R) \subseteq \mathrm{col}(Y). \qquad (1)$$

Such a feature set can be thought of as perfect, since it fully describes the true latent feature space of $R$. Then, instead of recovering the low rank matrix $R$ directly, one can recover a smaller matrix
$M \in \mathbb{R}^{d_1\times d_2}$ such that $R = XMY^T$. The resulting formulation, called inductive matrix completion (or IMC in brief) [18], is shown to be both theoretically preferred [18, 35] and useful in real
applications [37, 29]. Details of this model can be found in [18, 35].
However, in practice, most given features X and Y will not be perfect. In fact, they could be quite
noisy or only weakly correlated to the latent feature space of R. Though in some cases applying
IMC with imperfect X, Y might still yield decent performance, in many other cases, the performance
drastically drops when features become noisy. This weakness of IMC can also be empirically seen
in Section 5. Therefore, a more robust model is desired to better handle noisy features.
We now introduce a dirty statistical model for matrix completion with (possibly noisy) features.
The core concept of our model is to learn the underlying matrix by balancing feature information
and observations. Specifically, we propose to learn $R$ jointly from two parts: one is the low rank estimate from the feature space, $XMY^T$, and the other part, $N$, is the part outside the feature space. Thus, $N$ can be used to capture the information that noisy features fail to describe, which is then estimated from pure observations. Naturally, both $XMY^T$ and $N$ are preferred to be low rank, since they are aggregated to estimate a low rank matrix $R$. This further leads to a preference for $M$ to be low rank as well, since one could expect that only a small subspace of $X$ and a subspace of $Y$ are jointly effective in forming the low rank space $XMY^T$. Putting all of the above together, we consider solving the following problem:
$$\min_{M,N}\; \sum_{(i,j)\in\Omega} \ell\big((XMY^T + N)_{ij},\, R_{ij}\big) \;+\; \lambda_M \|M\|_* \;+\; \lambda_N \|N\|_*, \qquad (2)$$

where $M$ and $N$ are regularized with the trace norm because of the low rank prior. The underlying matrix $R$ can thus be estimated by $XM^*Y^T + N^*$. We refer to our model as DirtyIMC for convenience.
To solve the convex problem (2), we propose an alternating minimization scheme that solves for $N$ and $M$ iteratively. Our algorithm is stated in detail in Appendix A. One remark on this algorithm is that it is guaranteed to converge to a global optimum, since the problem is jointly convex in $M$ and $N$.
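Since Appendix A is not reproduced in this extract, here is a minimal proximal-gradient sketch of such an alternating scheme, assuming a squared loss in place of the generic loss $\ell$; the step sizes, iteration count, and all function names are our own illustrative choices, not the authors' algorithm.

```python
import numpy as np

def svt(A, lam):
    """Singular value thresholding: the prox operator of lam * ||.||_*."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - lam, 0.0)) @ Vt

def dirty_imc(R_obs, mask, X, Y, lam_M=0.1, lam_N=0.1, iters=200):
    """Alternating proximal-gradient sketch for
        min_{M,N} 0.5 * || P_Omega(X M Y^T + N - R) ||_F^2
                  + lam_M * ||M||_*  +  lam_N * ||N||_*,
    i.e. objective (2) with a squared loss standing in for ell."""
    n1, d1 = X.shape
    n2, d2 = Y.shape
    M, N = np.zeros((d1, d2)), np.zeros((n1, n2))
    # The M-gradient is Lipschitz with constant at most (||X|| * ||Y||)^2.
    LM = (np.linalg.norm(X, 2) * np.linalg.norm(Y, 2)) ** 2
    for _ in range(iters):
        res = mask * (X @ M @ Y.T + N - R_obs)   # residual on Omega
        M = svt(M - (X.T @ res @ Y) / LM, lam_M / LM)
        res = mask * (X @ M @ Y.T + N - R_obs)
        N = svt(N - res, lam_N)                  # N-gradient is 1-Lipschitz
    return M, N
```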
The parameters $\lambda_M$ and $\lambda_N$ are crucial for controlling the relative importance of features and residual. When $\lambda_M = \infty$, $M$ is forced to $0$, so features are disregarded and (2) becomes a standard matrix completion objective. Another special case is $\lambda_N = \infty$, in which $N$ is forced to $0$ and the objective becomes IMC. Intuitively, with an appropriate ratio $\lambda_M/\lambda_N$, the proposed model can incorporate the useful part of the features, yet be robust to the noisy part by compensating with pure observations. Some natural questions arise from here: How should we quantify the quality of features? What are the right $\lambda_M$ and $\lambda_N$ given a feature set? And beyond intuition, how much can we benefit from features using our model in theory? We formally answer these questions in Section 4.
4
Theoretical Analysis
Now we analyze the usefulness of features in our model from a theoretical perspective. We first
quantify the quality of features and show that with reasonably good features, our model achieves
recovery with lower sample complexity. Finally, we compare our results to matrix completion and
IMC. Due to space limitations, detailed proofs of Theorems and Lemmas are left in Appendix B.
4.1
Preliminaries
Recall that our goal is to recover a rank-$k$ matrix $R$ given the observed entry set $\Omega$ and the features $X$ and $Y$ described in Section 3. Recovering the matrix with our model (Equation (2)) is equivalent to solving the hard-constraint problem:

$$\min_{M,N}\; \sum_{(i,j)\in\Omega} \ell\big((XMY^T + N)_{ij},\, R_{ij}\big), \quad \text{subject to}\;\; \|M\|_* \le \mathcal{M},\;\; \|N\|_* \le \mathcal{N}. \qquad (3)$$
For simplicity, we will consider $d = \max(d_1, d_2) = O(1)$, so that the feature dimensions do not grow as a function of $n$. We assume each entry $(i,j) \in \Omega$ is sampled i.i.d. under an unknown distribution, with index set $\{(i_\alpha, j_\alpha)\}_{\alpha=1}^m$. Also, each entry of $R$ is assumed to be bounded, i.e. $\max_{ij} |R_{ij}| \le \mathcal{R}$ (so that the trace norm of $R$ is in $O(\sqrt{n_1 n_2})$). Such a circumstance is consistent with real scenarios like the Netflix problem, where users rate movies on a scale from 1 to 5. For convenience, let $\Theta = (M, N)$ be any feasible solution, and let $\bar{\Theta} = \{(M, N) \mid \|M\|_* \le \mathcal{M},\, \|N\|_* \le \mathcal{N}\}$ be the feasible solution set. Also, let $f_\Theta(i,j) = x_i^T M y_j + N_{ij}$ be the estimation function for $R_{ij}$ parameterized by $\Theta$, and let $\mathcal{F}_\Theta = \{f_\Theta \mid \Theta \in \bar{\Theta}\}$ be the set of feasible functions. We are interested in the following two '$\ell$-risk' quantities:

• Expected $\ell$-risk: $R_\ell(f) = \mathbb{E}_{(i,j)}\big[\ell(f(i,j), R_{ij})\big]$.
• Empirical $\ell$-risk: $\hat{R}_\ell(f) = \frac{1}{m}\sum_{(i,j)\in\Omega} \ell(f(i,j), R_{ij})$.
Thus, our model solves for the $\Theta$ that parameterizes $f^* = \arg\min_{f\in\mathcal{F}_\Theta} \hat{R}_\ell(f)$, and it is sufficient to show that recovery can be attained if $R_\ell(f^*)$ approaches zero with large enough $n$ and $m$.
4.2
Measuring the Quality of Features
We now link the quality of features to Rademacher complexity, a learning theoretic tool to measure
the complexity of a function class. We will show that quality features result in a lower model
complexity and thus a smaller error bound. Under such a viewpoint, the upper bound of Rademacher
complexity could be used for measuring the quality of features.
To begin with, we apply the following Lemma to bound the expected ?-risk.
Lemma 1 (Bound on Expected ?-risk [2]). Let ? be a loss function with Lipschitz constant L?
bounded by B with respect to its first argument, and ? be a constant where 0 < ? < 1. Let R(F? )
be the Rademacher complexity of the function class F? (w.r.t. ? and associated with ?) defined as:
m
#
"
1 !
?? ?(f (i? , j? ), Ri? j? ) ,
(4)
R(F? ) = E? sup
f ?F? m ?=1
where each ?? takes values {?1} with equal probability. Then with probability at least 1 ? ?, for
%
all f ? F? we have:
1
"
#
? ? (f ) + 2E? R(F? ) + B log ? .
R? (f ) ? R
2m
"
#
? ? and model complexity E? R(F? ) have to be
Apparently, to guarantee a small enough R? , both R
"
#
bounded. The next key lemma shows that, the model complexity term E? R(F? ) is related to the
feature quality in matrix completion context.
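As a side illustration, the Rademacher complexity in (4) can be approximated by Monte Carlo once one restricts the sup to a finite set of candidate functions; the sketch below is only a numerical aid to the definition (the array layout and sample counts are our own assumptions), not part of the analysis.

```python
import numpy as np

def empirical_rademacher(losses, n_draws=1000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity in (4),
    restricted to a finite set of candidate functions: losses[f, a] holds
    ell(f(i_a, j_a), R_{i_a j_a}) for candidate f and observed entry a."""
    rng = np.random.default_rng(seed)
    num_f, m = losses.shape
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, m))  # Rademacher signs
    # For each sign draw, take the sup over candidates of the signed mean.
    return np.mean(np.max(sigma @ losses.T, axis=1) / m)
```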
Before diving into the details, we first provide an intuition on the meaning of 'good' features. Consider any imperfect feature set which violates (1). One can imagine that such a feature set is perturbed
by some misleading noise which is not correlated to the true latent features. However, features
should still be effective if such noise does not weaken the true latent feature information too much.
Thus, if a large portion of true latent features lies on the informative part of the feature spaces X
and Y , they should still be somewhat informative and helpful for recovering the matrix R.
More formally, the model complexity can be bounded in terms of $\mathcal{M}$ and $\mathcal{N}$ by the following lemma:

Lemma 2. Let $\mathcal{X} = \max_i \|x_i\|_2$, $\mathcal{Y} = \max_i \|y_i\|_2$ and $n = \max(n_1, n_2)$. Then the model complexity of the function class $\mathcal{F}_\Theta$ is upper bounded by:

$$\mathbb{E}_\sigma\big[\mathfrak{R}(\mathcal{F}_\Theta)\big] \;\le\; 2L_\ell\,\mathcal{M}\mathcal{X}\mathcal{Y}\sqrt{\frac{\log 2d}{m}} \;+\; \min\left(2L_\ell\,\mathcal{N}\sqrt{\frac{\log 2n}{m}},\;\; 9CL_\ell B\,\frac{\mathcal{N}(\sqrt{n_1}+\sqrt{n_2})}{m}\right).$$
Then, by Lemmas 1 and 2, one can carefully construct a feasible solution set (by setting $\mathcal{M}$ and $\mathcal{N}$) such that both $\hat{R}_\ell(f^*)$ and $\mathbb{E}_\sigma[\mathfrak{R}(\mathcal{F}_\Theta)]$ are controlled to be reasonably small. We now suggest a witness pair of $\mathcal{M}$ and $\mathcal{N}$ constructed as follows. Let $\gamma$ be defined as:

$$\gamma = \min\left(\frac{\min_i \|x_i\|}{\mathcal{X}},\; \frac{\min_i \|y_i\|}{\mathcal{Y}}\right).$$

Let $T_\tau(\cdot): \mathbb{R}^+ \to \mathbb{R}^+$ be the thresholding operator where $T_\tau(x) = x$ if $x \ge \tau$ and $T_\tau(x) = 0$ otherwise. In addition, let $X = \sum_{i=1}^{d_1} \sigma_i u_i v_i^T$ be the reduced SVD of $X$, and define $X_\tau = \sum_{i=1}^{d_1} \sigma_1 T_\tau(\sigma_i/\sigma_1)\, u_i v_i^T$ to be the '$\tau$-informative' part of $X$. The $\tau$-informative part of $Y$, denoted $Y_\tau$, is defined similarly. Now consider setting $\mathcal{M} = \|\hat{M}\|_*$ and $\mathcal{N} = \|R - X_\tau \hat{M} Y_\tau^T\|_*$, where

$$\hat{M} = \arg\min_M \|X_\tau M Y_\tau^T - R\|_F^2 = (X_\tau^T X_\tau)^{-1} X_\tau^T R\, Y_\tau (Y_\tau^T Y_\tau)^{-1}$$

is the optimal solution for approximating $R$ under the informative feature spaces $X_\tau$ and $Y_\tau$. Then the following lemma shows that the trace norm of $\hat{M}$ will not grow as $n$ increases.

Lemma 3. Fix $\tau, \gamma \in (0, 1]$, and let $d_\tau = \min(\mathrm{rank}(X_\tau), \mathrm{rank}(Y_\tau))$. Then with some universal constant $C'$:

$$\|\hat{M}\|_* \;\le\; \frac{d_\tau\, \mathcal{R}}{C'\, \gamma^2 \tau^2\, \mathcal{X}\mathcal{Y}}.$$
Moreover, by combining Lemma 1 - 3, we can upper bound R? (f ? ) of DirtyIMC as follows:
? ?? and N = ?R ? X? M
? Y T ?? . Then with
Theorem 1. Consider problem (3) with M = ?M
?
probability at least 1 ? ?, the expected ?-risk of an optimal solution (N ? , M ? ) will be bounded by:
%
&
&
&
'
(
?
?
?
log 1?
n
+
n
)
N
(
log
2n
log
2d
4L
d
1
2
?
R? (f ? ) ? min 4L? N
+ ? 2 2 2
, 36CL? B
+B
.
m
m
C? ? ?
m
2m
4.3
Sample Complexity Analysis
From Theorem 1, we can derive the following sample complexity guarantee for our model. For simplicity, we assume $k = O(1)$, so it will not grow as $n$ increases in the following discussion.

Corollary 1. Suppose we aim to '$\epsilon$-recover' $R$, in the sense that $\mathbb{E}_{(i,j)}\big[\ell\big(N_{ij} + (XMY^T)_{ij},\, R_{ij}\big)\big] < \epsilon$ for an arbitrarily small $\epsilon$. Then for the DirtyIMC model, $O\big(\min(\mathcal{N}\sqrt{n},\, \mathcal{N}^2 \log n)/\epsilon^2\big)$ observations are sufficient for $\epsilon$-recovery, provided a sufficiently large $n$.
Corollary 1 suggests that the sample complexity of our model only depends on the trace norm of the residual $N$. This matches the intuition of good features stated in Section 4.2, because $X\hat{M}Y^T$ will cover most of $R$ if features are good; as a result, $\mathcal{N}$ will be small and one can enjoy a small sample complexity by exploiting quality features.
We also compare our sample complexity result with other models. First, suppose features are perfect (so that $\mathcal{N} = O(1)$); our result suggests that only $O(\log n)$ samples are required for recovery. This matches the result of [35], in which the authors show that given perfect features, $O(\log n)$ observations are enough for exact recovery by solving the IMC objective. However, IMC does not guarantee recovery when features are not perfect, while our result shows that recovery is still attainable by DirtyIMC with $O\big(\min(\mathcal{N}\sqrt{n},\, \mathcal{N}^2\log n)/\epsilon^2\big)$ samples. We will also empirically justify this result in Section 5.

On the other hand, for standard matrix completion (i.e. no features are considered), the most well-known guarantee is that under certain conditions, one can achieve $O(n\,\mathrm{poly}\log n)$ sample complexity for both $\epsilon$-recovery [34] and exact recovery [5]. However, these bounds only hold with distributional assumptions on the observed entries. For sample complexity without any distributional assumptions, Shamir et al. [32] recently showed that $O(n^{3/2})$ entries are sufficient for $\epsilon$-recovery, and this bound is tight if no further distribution of observed entries is assumed. Compared to those results, our analysis also requires no assumptions on the distribution of observed entries, and our sample complexity is $O(n^{3/2})$ as well in the worst case, by the fact that $\mathcal{N} \le \|R\|_* = O(n)$. Notice that it is reasonable to meet the lower bound $\Omega(n^{3/2})$ even given features, since in an extreme case $X, Y$ could be random matrices with no correlation to $R$, so the given information is the same as that in standard matrix completion.

However, in many applications, features will be far from random, and our result provides a theoretical insight showing that features can be useful even if they are imperfect. Indeed, as long as features are informative enough that $\mathcal{N} = o(n)$, our sample complexity will be asymptotically lower than $O(n^{3/2})$. Here we provide two concrete instances of such a scenario. In the first scenario, we consider the rank-$k$ matrix $R$ to be generated from the random orthogonal model [5] as follows:
Theorem 2. Let $R \in \mathbb{R}^{n\times n}$ be generated from the random orthogonal model, where $U = \{u_i\}_{i=1}^k$, $V = \{v_i\}_{i=1}^k$ are random orthogonal bases, and $\sigma_1 \dots \sigma_k$ are singular values of arbitrary magnitude. Let $\sigma_t$ be the largest singular value such that $\lim_{n\to\infty} \sigma_t/\sqrt{n} = 0$. Then, given the noisy features $X, Y$ where $X_{:i} = u_i$ (and $Y_{:i} = v_i$) if $i < t$, and $X_{:i}$ (and $Y_{:i}$) is any basis orthogonal to $U$ (and $V$) if $i \ge t$, $o(n)$ samples are sufficient for DirtyIMC to achieve $\epsilon$-recovery.

Theorem 2 suggests that, under the random orthogonal model, if features are not too noisy in the sense that noise only corrupts the true subspace associated with smaller singular values, we can approximately recover $R$ with only $o(n)$ observations. An empirical justification of this result is presented in Appendix C. Another scenario considers $R$ to be the product of two rank-$k$ Gaussian matrices:
Theorem 3. Let $R = UV^T$ be a rank-$k$ matrix, where $U, V \in \mathbb{R}^{n\times k}$ are true latent row/column features with each $U_{ij}, V_{ij} \sim N(0, \sigma^2)$ i.i.d. Suppose now we are given a feature set $X, Y$ where $g(n)$ row items and $h(n)$ column items have corrupted features. Moreover, each corrupted row/column item has a perturbed feature $x_i = u_i + \Delta u_i$ and $y_i = v_i + \Delta v_i$, where $\|\Delta u\|_\infty \le \epsilon_1$ and $\|\Delta v\|_\infty \le \epsilon_2$ for some constants $\epsilon_1$ and $\epsilon_2$. Then for the DirtyIMC model (3), with high probability, $O\big(\max(\sqrt{g(n)}, \sqrt{h(n)})\, n \log n\big)$ observations are sufficient for $\epsilon$-recovery.

[Figure 1 omitted: six panels of relative error for SVDfeature, MC, IMC, and DirtyIMC; the top row plots error against feature noise level $\rho_f$ at sparsity $\rho_s = 0.1, 0.25, 0.4$, and the bottom row plots error against sparsity $\rho_s$ at feature noise levels $\rho_f = 0.1, 0.5, 0.9$.]
Figure 1: Performance of various methods for matrix completion under different sparsity and feature quality. Compared to other feature-based completion methods, the top figures show that DirtyIMC is less sensitive to noisy features for each $\rho_s$, and the bottom figures show that the error of DirtyIMC always decreases to 0 with more observations, given any feature quality.
Theorem 3 suggests that if features have good quality, in the sense that items with corrupted features are not too numerous, e.g. $g(n), h(n) = O(\log n)$, then the sample complexity of DirtyIMC can be $O(n\sqrt{\log n}\,\log n) = o(n^{3/2})$ as well. Thus, both Theorem 2 and Theorem 3 provide concrete examples showing that given imperfect yet informative features, the sample complexity of our model can be asymptotically lower than the lower bound for pure matrix completion (which is $\Omega(n^{3/2})$).
5
Experimental Results
In this section, we show the effectiveness of the DirtyIMC model (2) for matrix completion with
features on both synthetic datasets and real-world applications. For synthetic datasets, we show
that DirtyIMC model better recovers low rank matrices under various quality of features. For real
applications, we consider relationship prediction and semi-supervised clustering, where the current
state-of-the-art methods are based on matrix completion and IMC respectively. We show that by
applying DirtyIMC model to these two problems, we can further improve performance by making
better use of features.
5.1
Synthetic Experiments
We consider matrix recovery with features on synthetic data generated as follows. We create a low rank matrix $R = UV^T$, with true latent row/column spaces $U, V \in \mathbb{R}^{200\times 20}$, $U_{ij}, V_{ij} \sim N(0, 1/20)$. We then randomly sample $\rho_s$ percent of entries $\Omega$ from $R$ as observations, and construct a perfect feature set $X^*, Y^* \in \mathbb{R}^{200\times 40}$ which satisfies (1). To examine performance under different quality of features, we generate features $X, Y$ with a noise parameter $\rho_f$, where $X$ and $Y$ are derived by replacing $\rho_f$ percent of the bases of $X^*$ (and $Y^*$) with bases orthogonal to $X^*$ (and $Y^*$). We then consider recovering the underlying matrix $R$ given $X$, $Y$ and a subset $\Omega$ of $R$.

We compare our DirtyIMC model (2) with standard trace-norm regularized matrix completion (MC) and two other feature-based completion methods: IMC [18] and SVDfeature [9]. The standard relative error $\|\hat{R} - R\|_F / \|R\|_F$ is used to evaluate a recovered matrix $\hat{R}$. For each method, we select parameters from the set $\{10^\lambda\}_{\lambda=-3}^{2}$ and report the one with the best recovery. All results are averaged over 5 random trials.
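A minimal sketch of this synthetic setup follows; the helper names, and the choice of which basis directions get corrupted, are our own assumptions.

```python
import numpy as np

def make_synthetic(n=200, k=20, d=40, rho_s=0.25, rho_f=0.5, seed=0):
    """Generate R = U V^T with N(0, 1/20) factors, an observation mask,
    and features whose span is partially corrupted as described above."""
    rng = np.random.default_rng(seed)
    U = rng.normal(0.0, np.sqrt(1.0 / 20), (n, k))
    V = rng.normal(0.0, np.sqrt(1.0 / 20), (n, k))
    R = U @ V.T
    mask = rng.random((n, n)) < rho_s            # observed entries Omega

    def features(F):
        # Orthonormalize [F | random]; the first d columns then contain
        # col(F), giving a perfect feature set satisfying (1).
        Q, _ = np.linalg.qr(np.hstack([F, rng.normal(size=(n, n - k))]))
        feat = Q[:, :d].copy()
        n_noise = int(rho_f * d)                 # corrupt rho_f of the bases
        feat[:, :n_noise] = Q[:, d:d + n_noise]  # orthogonal noise directions
        return feat

    return R, mask, features(U), features(V)

def rel_error(R_hat, R):
    return np.linalg.norm(R_hat - R) / np.linalg.norm(R)
```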
Figure 1 shows the recovery of each method under each sparsity level $\rho_s = 0.1, 0.25, 0.4$, and each feature noise level $\rho_f = 0.1, 0.5$ and $0.9$. We first observe that in the top figures, IMC and
Method        Accuracy          AUC
DirtyIMC      0.9474±0.0009     0.9506
MF-ALS [16]   0.9412±0.0011     0.9020
IMC [18]      0.9139±0.0016     0.9109
HOC-3         0.9242±0.0010     0.9432
HOC-5 [12]    0.9297±0.0011     0.9480
Table 1: Relationship prediction on Epinions. Compared with other approaches, the DirtyIMC model gives the best performance in terms of both accuracy and AUC.
SVDfeature perform similarly under different $\rho_s$. This suggests that with sufficient observations, the performance of IMC and SVDfeature mainly depends on feature quality and is not affected much by the number of observations. As a result, given good features (Figure 1d), they achieve smaller error than MC with few observations, but as features become noisy (Figures 1e-1f), they suffer poor performance by trying to learn the underlying matrix under biased feature spaces. Another interesting finding is that when good features are given (Figure 1d), IMC (and SVDfeature) still fails to achieve 0 relative error as the number of observations increases, which reconfirms that IMC cannot guarantee recoverability when features are not perfect. On the other hand, we see that the performance of DirtyIMC improves with both better features and more observations. In particular, it makes use of informative features to achieve lower error than MC, and is also less sensitive to noisy features than IMC and SVDfeature. Some finer recovery results over $\rho_s$ and $\rho_f$ can be found in Appendix C.
5.2
Real-world Applications
Relationship Prediction in Signed Networks. As the first application, we consider the relationship prediction problem in an online review website, Epinions [26], where people can write reviews and trust or distrust others based on their reviews. Such a social network can be modeled as a signed network where trust/distrust relations are modeled as positive/negative edges between entities [24], and the problem is to predict the unknown relationship between any two users given the network. A state-of-the-art approach is the low rank model [16, 12], where one first conducts matrix completion on the adjacency matrix and then uses the sign of the completed matrix for relationship prediction. Therefore, if features of users are available, we can also consider the low rank model by using our model for the matrix completion step. This approach can be regarded as an improvement over [16] that incorporates feature information.
In this dataset, there are about $n = 105\mathrm{K}$ users and $m = 807\mathrm{K}$ observed relationship pairs, of which 15% are distrust relationships. In addition to the who-trusts-whom information, we also have a user feature matrix $Z \in \mathbb{R}^{n\times 41}$, where for each user a 41-dimensional feature vector is collected based on the user's review history, such as the number of positive/negative reviews the user gave/received. We then consider the low-rank model in [16], where matrix completion is conducted by DirtyIMC with the non-convex relaxation (5) (DirtyIMC), by IMC [18] (IMC), and by the matrix factorization proposed in [16] (MF-ALS), along with two other prediction methods, HOC-3 and HOC-5 [12]. Note that both row and column entities are users, so $X = Y = Z$ is set for both the DirtyIMC and IMC models.
We conduct the experiment using 10-fold cross validation on the observed edges, where the parameters are chosen from the set $\bigcup_{\lambda=-3}^{2}\{10^\lambda,\, 5\times 10^\lambda\}$. The averaged accuracy and AUC of each method are reported in Table 1. We first observe that IMC performs worse than MF-ALS even though IMC takes features into account. This is because the features are only weakly related to the relationship matrix, and as a result, IMC is misled by these noisy features. On the other hand, DirtyIMC performs the best among all prediction methods. In particular, it performs slightly better than MF-ALS in terms of accuracy, and much better in terms of AUC. This shows that DirtyIMC can still exploit weakly informative features without being trapped by feature noise.
Semi-supervised Clustering. We now consider the semi-supervised clustering problem as another application. Given $n$ items, an item feature matrix $Z \in \mathbb{R}^{n\times d}$, and $m$ pairwise constraints specifying whether items $i$ and $j$ are similar or dissimilar, the goal is to find a clustering of the items such that most similar items are within the same cluster.

We notice that the problem can indeed be solved by matrix completion. Consider $S \in \mathbb{R}^{n\times n}$ to be the signed similarity matrix defined as $S_{ij} = 1$ (or $-1$) if items $i$ and $j$ are similar (or dissimilar), and $0$ if the similarity is unknown. Then solving semi-supervised clustering becomes equivalent to finding a clustering of the symmetric signed graph $S$, where the goal is to cluster nodes so that most edges within the same group are positive and most edges between groups are negative [12]. As a result, a matrix completion approach [12] can be applied to solve the signed graph clustering problem on $S$.

Apparently, the above solution is not optimal for semi-supervised clustering, as it disregards features. Many semi-supervised clustering algorithms have thus been proposed that take both item features
[Figure 2 omitted: three panels (Segment, Covtype, Mushroom) plotting pairwise error against the number of observed pairs for K-means, SignMC, MCCC, and DirtyIMC.]
Figure 2: Semi-supervised clustering on real-world datasets. For the Mushroom dataset, where features are almost ideal, both MCCC and DirtyIMC achieve 0 error rate. For Segment and Covtype, where features are noisier, our model outperforms MCCC, as its error decreases given more constraints.
Dataset      number of items n   feature dimension d   number of clusters k
Mushrooms    8124                112                   2
Segment      2319                19                    7
Covtype      11455               54                    7
Table 2: Statistics of the semi-supervised clustering datasets.
and constraints into consideration [13, 25, 37]. The current state-of-the-art method is the MCCC algorithm [37], which essentially solves semi-supervised clustering with the IMC objective. In [37], the authors show that by running k-means on the top-$k$ eigenvectors of the completed matrix $ZMZ^T$, MCCC outperforms other state-of-the-art algorithms.
We now consider solving semi-supervised clustering with our DirtyIMC model. Our algorithm, summarized in Algorithm 2 in Appendix D, first completes the pairwise matrix with the DirtyIMC objective (2) instead of IMC (with both $X$ and $Y$ set to $Z$), and then runs k-means on the top-$k$ eigenvectors of the completed matrix to obtain a clustering. This algorithm can be viewed as an improved version of MCCC that handles noisy features $Z$.
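A schematic sketch of this pipeline, reusing the dirty_imc sketch from Section 3 (Algorithm 2 itself lives in Appendix D and is not reproduced in this extract; the k-means call and default parameter values are illustrative):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def dirty_imc_cluster(S, mask, Z, k, lam_M=0.1, lam_N=0.1):
    """Complete the pairwise matrix with the DirtyIMC objective using
    X = Y = Z, then run k-means on the top-k eigenvectors."""
    M, N = dirty_imc(S, mask, Z, Z, lam_M, lam_N)  # sketch from Section 3
    S_hat = Z @ M @ Z.T + N
    S_hat = 0.5 * (S_hat + S_hat.T)                # enforce symmetry
    vals, vecs = np.linalg.eigh(S_hat)
    top = vecs[:, np.argsort(vals)[-k:]]           # top-k eigenvectors
    _, labels = kmeans2(top, k, minit='++')
    return labels
```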
We now compare our algorithm with k-means, signed graph clustering with matrix completion [12] (SignMC), and MCCC [37]. Note that since MCCC has been shown to outperform most other state-of-the-art semi-supervised clustering algorithms in [37], comparing with MCCC is sufficient to demonstrate the effectiveness of our algorithm. We perform each method on three real-world datasets: Mushrooms, Segment and Covtype¹. All of them are classification benchmarks where features and ground-truth classes of items are both available; their statistics are summarized in Table 2. For each dataset, we randomly sample $m = [1, 5, 10, 15, 20, 25, 30] \times n$ pairwise constraints, and perform each algorithm to derive a clustering $\pi$, where $\pi_i$ is the cluster index of item $i$. We then evaluate $\pi$ by the following pairwise error with respect to the ground truth:
$$\left(\sum_{(i,j):\,\pi_i^* = \pi_j^*} \mathbf{1}(\pi_i \ne \pi_j) \;+\; \sum_{(i,j):\,\pi_i^* \ne \pi_j^*} \mathbf{1}(\pi_i = \pi_j)\right) \Bigg/ \frac{n(n-1)}{2},$$

where $\pi_i^*$ is the ground-truth class of item $i$.
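Computed directly, this pairwise error is just the fraction of unordered item pairs whose same-cluster/different-cluster relation disagrees with the ground truth; a small sketch:

```python
import numpy as np

def pairwise_error(pi, pi_star):
    """Fraction of unordered item pairs whose same/different-cluster
    relation under pi disagrees with the ground truth pi_star."""
    pi, pi_star = np.asarray(pi), np.asarray(pi_star)
    same = pi[:, None] == pi[None, :]
    same_star = pi_star[:, None] == pi_star[None, :]
    iu = np.triu_indices(len(pi), k=1)   # all pairs with i < j
    return np.mean(same[iu] != same_star[iu])
```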
Figure 2 shows the result of each method on all three datasets. We first see that for the Mushrooms dataset, where features are perfect (100% training accuracy can be attained by a linear SVM for classification), both MCCC and DirtyIMC obtain a perfect clustering, which shows that MCCC is indeed effective with perfect features. For the Segment and Covtype datasets, we observe that the performance of k-means and MCCC is dominated by feature quality. Although MCCC still benefits from constraint information, as it outperforms k-means, it clearly does not make the best use of constraints: its performance does not improve even as the number of constraints increases. On the other hand, the error rate of SignMC can always be decreased to 0 by increasing $m$. However, since it disregards features, it suffers a much higher error rate than methods with features when constraints are few. We again see that DirtyIMC combines the advantages of MCCC and SignMC, as it makes use of features when few constraints are observed, yet simultaneously leverages constraint information to avoid being trapped by feature noise. This experiment shows that our model outperforms state-of-the-art approaches for semi-supervised clustering.
Acknowledgement. We thank David Inouye and Hsiang-Fu Yu for helpful comments and discussions. This research was supported by NSF grants CCF-1320746 and CCF-1117055.
¹All datasets are available at http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/. For Covtype, we subsample from the entire dataset to make each cluster have a balanced size.
References
[1] J. Abernethy, F. Bach, T. Evgeniou, and J.-P. Vert. A new approach to collaborative filtering: Operator estimation with spectral regularization. JMLR, 10:803-826, 2009.
[2] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. JMLR, 3:463-482, 2003.
[3] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA 02178-9998, 1999.
[4] E. Candès and Y. Plan. Matrix completion with noise. Proceedings of the IEEE, 98(6):925-936, 2010.
[5] E. Candès and B. Recht. Exact matrix completion via convex optimization. Commun. ACM, 55(6):111-119, 2012.
[6] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? J. ACM, 58(3):11:1-11:37, 2011.
[7] E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. Inf. Theor., 56(5):2053-2080, 2010.
[8] V. Chandrasekaran, P. A. Parrilo, and A. S. Willsky. Latent variable graphical model selection via convex optimization. The Annals of Statistics, 2012.
[9] T. Chen, W. Zhang, Q. Lu, K. Chen, Z. Zheng, and Y. Yu. SVDFeature: A toolkit for feature-based collaborative filtering. JMLR, 13:3619-3622, 2012.
[10] Y. Chen, S. Bhojanapalli, S. Sanghavi, and R. Ward. Coherent matrix completion. In ICML, 2014.
[11] Y. Chen, A. Jalali, S. Sanghavi, and H. Xu. Clustering partially observed graphs via convex optimization. JMLR, 15(1):2213-2238, 2014.
[12] K.-Y. Chiang, C.-J. Hsieh, N. Natarajan, I. S. Dhillon, and A. Tewari. Prediction and clustering in signed networks: A local to global perspective. JMLR, 15:1177-1213, 2014.
[13] J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon. Information-theoretic metric learning. In ICML, pages 209-216, 2007.
[14] U. Feige and G. Schechtman. On the optimality of the random hyperplane rounding technique for max cut. Random Struct. Algorithms, 20(3):403-440, 2002.
[15] L. Grippo and M. Sciandrone. Globally convergent block-coordinate techniques for unconstrained optimization. Optimization Methods and Software, 10:587-637, 1999.
[16] C.-J. Hsieh, K.-Y. Chiang, and I. S. Dhillon. Low rank modeling of signed networks. In KDD, 2012.
[17] C.-J. Hsieh and P. A. Olsen. Nuclear norm minimization via active subspace selection. In ICML, 2014.
[18] P. Jain and I. S. Dhillon. Provable inductive matrix completion. CoRR, abs/1306.0626, 2013.
[19] A. Jalali, P. Ravikumar, S. Sanghavi, and C. Ruan. A dirty model for multi-task learning. In NIPS, 2010.
[20] S. M. Kakade, K. Sridharan, and A. Tewari. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. In NIPS, pages 793-800, 2008.
[21] R. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. JMLR, 2010.
[22] Y. Koren, R. M. Bell, and C. Volinsky. Matrix factorization techniques for recommender systems. IEEE Computer, 42:30-37, 2009.
[23] B. Laurent and P. Massart. Adaptive estimation of a quadratic functional by model selection. The Annals of Statistics, 28(5):1302-1338, 2000.
[24] J. Leskovec, D. Huttenlocher, and J. Kleinberg. Predicting positive and negative links in online social networks. In WWW, 2010.
[25] Z. Li and J. Liu. Constrained clustering by spectral kernel learning. In ICCV, 2009.
[26] P. Massa and P. Avesani. Trust-aware bootstrapping of recommender systems. In Proceedings of ECAI 2006 Workshop on Recommender Systems, pages 29-33, 2006.
[27] R. Meir and T. Zhang. Generalization error bounds for Bayesian mixture algorithms. JMLR, 2003.
[28] A. K. Menon, K.-P. Chitrapura, S. Garg, D. Agarwal, and N. Kota. Response prediction using collaborative filtering with hierarchies and side-information. In KDD, pages 141-149, 2011.
[29] N. Natarajan and I. S. Dhillon. Inductive matrix completion for predicting gene-disease associations. Bioinformatics, 30(12):60-68, 2014.
[30] S. Negahban and M. J. Wainwright. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. JMLR, 13(1):1665-1697, 2012.
[31] M. Rudelson and R. Vershynin. Smallest singular value of a random rectangular matrix. Comm. Pure Appl. Math, pages 1707-1739, 2009.
[32] O. Shamir and S. Shalev-Shwartz. Matrix completion with the trace norm: Learning, bounding, and transducing. JMLR, 15(1):3401-3423, 2014.
[33] D. Shin, S. Cetintas, K.-C. Lee, and I. S. Dhillon. Tumblr blog recommendation with boosted inductive matrix completion. In CIKM, pages 203-212, 2015.
[34] N. Srebro and A. Shraibman. Rank, trace-norm and max-norm. In COLT, pages 545-560, 2005.
[35] M. Xu, R. Jin, and Z.-H. Zhou. Speedup matrix completion with side information: Application to multi-label learning. In NIPS, 2013.
[36] E. Yang and P. Ravikumar. Dirty statistical models. In NIPS, 2013.
[37] J. Yi, L. Zhang, R. Jin, Q. Qian, and A. Jain. Semi-supervised clustering by input pattern assisted pairwise similarity matrix completion. In ICML, 2013.
[38] K. Zhong, P. Jain, and I. S. Dhillon. Efficient matrix sensing using rank-1 Gaussian measurements. In International Conference on Algorithmic Learning Theory (ALT), 2015.
| 5940 |@word trial:1 kulis:1 version:1 norm:12 d2:4 hsieh:4 attainable:1 liu:1 outperforms:6 recovered:3 current:2 comparing:1 yet:5 mushroom:5 belmont:1 informative:13 kdd:2 drop:1 chohsieh:1 website:1 item:15 core:1 chiang:3 provides:2 math:1 node:1 preference:1 zhang:3 along:1 constructed:1 become:2 combine:1 introduce:2 manner:1 pairwise:8 theoretically:2 indeed:3 expected:4 cand:5 examine:1 multi:2 ry:1 compensating:1 globally:1 little:1 xti:1 cardinality:1 increasing:1 becomes:4 provided:4 begin:2 moreover:4 underlying:7 bounded:6 bhojanapalli:1 what:1 substantially:1 shraibman:1 finding:2 bootstrapping:1 guarantee:10 milestone:1 grant:1 enjoy:1 bertsekas:1 before:1 positive:4 local:1 meet:1 laurent:1 approximately:1 might:2 signed:8 garg:1 studied:2 suggests:7 specifying:1 appl:1 factorization:3 averaged:2 practical:2 yj:1 practice:4 block:1 cold:1 svdfeature:12 shin:1 area:1 empirical:2 universal:1 bell:1 significantly:1 thought:1 vert:1 jui:1 suggest:1 cannot:2 convenience:2 selection:3 operator:2 risk:8 applying:2 context:1 www:2 equivalent:2 convex:9 rectangular:1 simplicity:2 recovery:22 pure:5 qian:1 insight:3 regarded:1 nuclear:1 oh:1 handle:4 coordinate:1 justification:1 annals:2 controlling:1 imagine:1 suppose:3 user:10 exact:4 shamir:2 programming:1 hierarchy:1 natarajan:2 cut:1 distributional:3 huttenlocher:1 observed:12 bottom:1 csie:1 rij:7 capture:1 worst:1 solved:1 decrease:3 balanced:1 intuition:3 disease:1 convexity:1 complexity:31 ui:6 r200:2 comm:1 multilabel:1 weakly:4 solving:4 tight:1 depend:1 segment:5 purely:1 completely:3 basis:1 various:2 genre:1 jain:6 describe:1 effective:3 outside:1 shalev:1 abernethy:1 quite:2 kai:1 widely:1 larger:1 solve:6 otherwise:1 statistic:4 ward:1 jointly:3 noisy:23 online:2 hoc:4 advantage:1 propose:5 product:1 zm:1 combining:1 achieve:7 exploiting:1 cluster:5 rademacher:4 perfect:14 help:2 derive:2 completion:59 ij:2 received:1 noticeable:1 solves:1 strong:2 recovering:3 c:1 quantify:4 direction:1 libsvmtools:1 violates:1 adjacency:1 fix:1 generalization:1 investigation:1 decompose:2 preliminary:1 ntu:1 theor:1 assisted:1 hold:1 sufficiently:2 considered:2 ground:3 wright:1 algorithmic:1 predict:1 achieves:1 smallest:1 estimation:4 utexas:1 sensitive:2 largest:1 create:1 successfully:2 tool:1 weighted:1 minimization:2 clearly:1 gaussian:3 always:2 aim:3 avoid:1 zhou:1 zhong:1 boosted:1 corollary:2 derived:1 focus:2 improvement:1 rank:26 mainly:2 sense:3 helpful:3 entire:1 uij:2 interested:1 corrupts:1 tao:1 issue:1 arg:2 among:1 classification:2 denoted:1 colt:1 development:1 plan:1 art:6 special:1 constrained:1 ruan:1 equal:1 construct:2 evgeniou:1 aware:1 yu:2 icml:4 minf:1 report:1 others:1 sanghavi:3 few:3 randomly:2 simultaneously:4 n1:5 ab:1 zheng:1 weakness:1 mixture:1 extreme:1 edge:4 fu:1 orthogonal:6 conduct:2 desired:2 theoretical:10 nij:2 weaken:1 leskovec:1 instance:1 column:6 modeling:1 cover:1 measuring:2 entry:11 subset:1 uniform:1 usefulness:2 rounding:1 conducted:1 too:3 reported:1 answer:1 perturbed:2 corrupted:3 synthetic:6 cho:1 vershynin:1 recht:1 international:1 negahban:1 lee:1 together:1 concrete:2 again:1 rn1:2 possibly:2 worse:1 li:2 account:1 parrilo:1 rn2:1 summarized:3 depends:2 vi:6 apparently:2 observing:1 portion:1 analyze:2 sup:1 netflix:2 recover:8 start:1 contribution:1 collaborative:3 accuracy:5 who:2 yield:2 ofthe:1 massa:1 famous:1 bayesian:1 mc:9 lu:1 researcher:1 finer:1 history:2 suffers:1 volinsky:1 naturally:1 proof:1 associated:2 recovers:1 sampled:3 dataset:5 recall:1 
dimensionality:1 improves:1 organized:1 carefully:1 attained:3 higher:1 supervised:14 response:1 improved:2 formulation:3 though:2 furthermore:2 correlation:1 hand:5 replacing:1 trust:4 nonlinear:1 keshavan:1 chitrapura:1 quality:17 menon:1 scientific:1 semisupervised:1 effect:4 concept:1 true:7 ccf:2 inductive:6 regularization:2 symmetric:1 dhillon:10 iteratively:1 auc:4 davis:2 trying:1 complete:1 theoretic:2 demonstrate:1 performs:3 percent:2 meaning:1 consideration:1 novel:1 recently:3 functional:1 empirically:5 association:1 imc:29 refer:1 epinions:2 measurement:1 reconfirms:1 unconstrained:1 similarly:2 toolkit:1 similarity:3 base:3 own:1 showed:2 perspective:2 diving:1 commun:1 wellknown:1 scenario:4 inf:1 certain:4 blog:1 arbitrarily:1 yi:5 seen:1 additional:1 somewhat:1 aggregated:1 converge:1 semi:14 match:2 cross:1 long:2 bach:1 ravikumar:2 controlled:1 prediction:13 circumstance:1 essentially:1 metric:1 kernel:1 agarwal:1 addition:3 completes:1 grow:3 singular:4 limn:1 crucial:1 biased:1 unlike:1 massart:1 comment:1 subject:1 sridharan:1 effectiveness:5 structural:1 near:1 yang:2 leverage:4 ideal:2 enough:4 decent:1 gave:1 followup:1 reduce:2 imperfect:5 parameterizes:1 texas:1 whether:1 bartlett:1 suffer:3 remark:1 useful:5 tewari:2 detailed:2 eigenvectors:2 reduced:2 generate:1 http:1 outperform:1 meir:1 nsf:1 notice:2 sign:1 estimated:2 trapped:2 cikm:1 write:1 affected:1 group:2 putting:1 key:1 clean:1 asymptotically:3 relaxation:2 graph:4 sum:1 enforced:2 run:1 parameterized:1 powerful:1 family:1 reasonable:1 almost:1 chandrasekaran:1 distrust:3 appendix:5 bound:15 ki:2 guaranteed:3 koren:1 convergent:1 fold:1 quadratic:1 constraint:11 n3:9 ri:1 software:1 kota:1 kleinberg:1 dominated:1 aspect:1 argument:1 extremely:1 min:11 optimality:1 dirtyimc:35 speedup:1 poor:3 smaller:5 describes:1 slightly:1 feige:1 kakade:1 tw:1 making:1 intuitively:1 iccv:1 sij:1 restricted:1 equation:1 remains:1 fail:1 cjlin:1 needed:1 available:4 apply:1 observe:3 appropriate:1 spectral:2 sciandrone:1 alternative:1 struct:1 tumblr:1 denotes:1 clustering:26 dirty:8 top:4 completed:3 graphical:2 running:1 rudelson:1 exploit:2 restrictive:1 prof:1 approximating:1 objective:9 question:2 quantity:1 jalali:2 subspace:4 mx:1 link:2 thank:1 entity:3 athena:1 topic:1 whom:1 considers:1 collected:1 trivial:1 provable:1 willsky:1 besides:1 index:2 relationship:11 mini:2 ratio:1 balance:2 modeled:2 mostly:1 trace:10 stated:2 negative:4 unknown:3 perform:3 recommender:4 upper:4 observation:22 datasets:9 benchmark:1 jin:2 witness:1 yijt:1 rn:5 recoverability:2 arbitrary:1 inferred:1 rating:1 david:1 pair:5 required:1 california:1 coherent:1 learned:1 ucdavis:1 established:1 nip:4 trans:1 beyond:1 usually:1 pattern:1 xm:8 sparsity:8 max:5 maxij:1 wainwright:1 power:1 natural:1 regularized:5 predicting:2 residual:2 misled:1 transducing:1 scheme:1 improve:2 movie:2 misleading:1 brief:1 inouye:1 prior:1 review:5 acknowledgement:1 relative:8 embedded:1 fully:1 expect:1 loss:1 interesting:1 limitation:1 filtering:3 srebro:1 remarkable:1 validation:1 foundation:2 sufficient:8 consistent:1 thresholding:1 viewpoint:2 vij:2 balancing:2 austin:1 row:8 surprisingly:1 supported:1 free:3 ecai:1 drastically:1 side:18 taking:1 sparse:1 benefit:2 dimension:2 world:4 author:2 adaptive:1 far:1 social:3 emphasize:1 preferred:2 gene:1 global:2 active:1 assumed:2 xi:4 shwartz:1 latent:8 reality:1 table:4 promising:1 learn:3 reasonably:3 robust:6 sra:1 cl:2 poly:1 montanari:1 bounding:1 noise:16 arise:1 profile:1 n2:6 subsample:1 
xu:3 hsiang:1 fails:2 col:3 lie:1 jmlr:9 theorem:8 down:1 showing:2 maxi:2 sensing:1 covtype:6 svm:1 alt:1 incorporating:1 mendelson:1 workshop:1 corr:1 importance:1 magnitude:1 disregarded:1 margin:1 chen:4 mf:4 rd1:1 expressed:1 partially:1 inderjit:2 grippo:1 recommendation:1 truth:3 satisfies:1 acm:2 ma:2 goal:4 viewed:1 lipschitz:1 feasible:4 hard:1 specifically:1 uniformly:1 justify:1 hyperplane:1 lemma:10 principal:1 called:1 experimental:3 e:5 svd:1 disregard:2 schechtman:1 formally:3 select:1 support:1 people:1 dissimilar:2 bioinformatics:1 incorporate:3 evaluate:2 d1:3 correlated:2 |
5,459 | 5,941 | Learning with Symmetric Label Noise: The
Importance of Being Unhinged
Brendan van Rooyen†,‡   Aditya Krishna Menon‡,†   Robert C. Williamson†,‡
†The Australian National University   ‡National ICT Australia
{brendan.vanrooyen, aditya.menon, bob.williamson}@nicta.com.au
Abstract
Convex potential minimisation is the de facto approach to binary classification.
However, Long and Servedio [2010] proved that under symmetric label noise
(SLN), minimisation of any convex potential over a linear function class can result in classification performance equivalent to random guessing. This ostensibly
shows that convex losses are not SLN-robust. In this paper, we propose a convex,
classification-calibrated loss and prove that it is SLN-robust. The loss avoids the
Long and Servedio [2010] result by virtue of being negatively unbounded. The
loss is a modification of the hinge loss, where one does not clamp at zero; hence,
we call it the unhinged loss. We show that the optimal unhinged solution is equivalent to that of a strongly regularised SVM, and is the limiting solution for any
convex potential; this implies that strong $\ell_2$ regularisation makes most standard learners SLN-robust. Experiments confirm the unhinged loss's SLN-robustness is
learners SLN-robust. Experiments confirm the unhinged loss? SLN-robustness is
borne out in practice. So, with apologies to Wilde [1895], while the truth is rarely
pure, it can be simple.
1
Learning with symmetric label noise
Binary classification is the canonical supervised learning problem. Given an instance space $X$, and samples from some distribution $D$ over $X \times \{\pm 1\}$, the goal is to learn a scorer $s: X \to \mathbb{R}$ with low misclassification error on future samples drawn from $D$. Our interest is in the more realistic scenario where the learner observes samples from some corruption $\bar{D}$ of $D$, in which labels have some constant probability of being flipped, and the goal is still to perform well with respect to $D$. This problem is
known as learning from symmetric label noise (SLN learning) [Angluin and Laird, 1988].
Long and Servedio [2010] showed that there exist linearly separable $D$ where, when the learner observes some corruption $\bar{D}$ with symmetric label noise of any nonzero rate, minimisation of any convex potential over a linear function class results in classification performance on $D$ that is equivalent to random guessing. Ostensibly, this establishes that convex losses are not 'SLN-robust' and
motivates the use of non-convex losses [Stempfel and Ralaivola, 2009, Masnadi-Shirazi et al., 2010,
Ding and Vishwanathan, 2010, Denchev et al., 2012, Manwani and Sastry, 2013].
In this paper, we propose a convex loss and prove that it is SLN-robust. The loss avoids the result
of Long and Servedio [2010] by virtue of being negatively unbounded. The loss is a modification of the hinge loss where one does not clamp at zero; thus, we call it the unhinged loss. This
loss has several appealing properties, such as being the unique convex loss satisfying a notion of
'strong' SLN-robustness (Proposition 5), being classification-calibrated (Proposition 6), being consistent when minimised on $\bar{D}$ (Proposition 7), and having a simple optimal solution that is the difference of two kernel means (Equation 8). Finally, we show that this optimal solution is equivalent to that of a strongly regularised SVM (Proposition 8), and of any twice-differentiable convex potential (Proposition 9), implying that strong $\ell_2$ regularisation endows most standard learners with SLN-robustness.
The classifier resulting from minimising the unhinged loss is not new [Devroye et al., 1996, Chapter 10], [Schölkopf and Smola, 2002, Section 1.2], [Shawe-Taylor and Cristianini, 2004, Section
5.1]. However, establishing this classifier's (strong) SLN-robustness, uniqueness thereof, and its
equivalence to a highly regularised SVM solution, to our knowledge is novel.
2 Background and problem setup
Fix an instance space X. We denote by D a distribution over X × {±1}, with random variables
(X, Y) ∼ D. Any D may be expressed via the class-conditionals (P, Q) = (P(X | Y = 1), P(X | Y = −1)) and base rate π = P(Y = 1), or via the marginal M = P(X) and class-probability
function η : x ↦ P(Y = 1 | X = x). We interchangeably write D as $D_{P,Q,\pi}$ or $D_{M,\eta}$.
2.1 Classifiers, scorers, and risks
A scorer is any function s : X → ℝ. A loss is any function ℓ : {±1} × ℝ → ℝ. We use $\ell_{-1}$, $\ell_1$ to
refer to ℓ(−1, ·) and ℓ(1, ·). The ℓ-conditional risk $L_\ell : [0,1] \times \mathbb{R} \to \mathbb{R}$ is defined as $L_\ell : (\eta, v) \mapsto \eta\cdot\ell_1(v) + (1-\eta)\cdot\ell_{-1}(v)$. Given a distribution D, the ℓ-risk of a scorer s is defined as
$$L^D_\ell(s) := \mathbb{E}_{(X,Y)\sim D}\left[\ell(Y, s(X))\right], \tag{1}$$
so that $L^D_\ell(s) = \mathbb{E}_{X\sim M}\left[L_\ell(\eta(X), s(X))\right]$. For a set S, $L^D_\ell(S)$ is the set of ℓ-risks for all scorers in S.
A function class is any $F \subseteq \mathbb{R}^X$. Given some F, the set of restricted Bayes-optimal scorers for a
loss ℓ are those scorers in F that minimise the ℓ-risk:
$$S^{D,F,*}_\ell := \operatorname*{Argmin}_{s \in F}\ L^D_\ell(s).$$
The set of (unrestricted) Bayes-optimal scorers is $S^{D,*}_\ell = S^{D,F,*}_\ell$ for $F = \mathbb{R}^X$. The restricted
ℓ-regret of a scorer is its excess risk over that of any restricted Bayes-optimal scorer:
$$\operatorname{regret}^{D,F}_\ell(s) := L^D_\ell(s) - \inf_{t \in F} L^D_\ell(t).$$
Binary classification is concerned with the zero-one loss, $\ell_{01} : (y, v) \mapsto [\![yv < 0]\!] + \tfrac{1}{2}[\![v = 0]\!]$.
A loss ℓ is classification-calibrated if all its Bayes-optimal scorers are also optimal for zero-one
loss: $(\forall D)\ S^{D,*}_\ell \subseteq S^{D,*}_{01}$. A convex potential is any loss $\ell : (y,v) \mapsto \phi(yv)$, where $\phi : \mathbb{R} \to \mathbb{R}_+$ is
convex, non-increasing, differentiable with $\phi'(0) < 0$, and $\phi(+\infty) = 0$ [Long and Servedio, 2010,
Definition 1]. All convex potentials are classification-calibrated [Bartlett et al., 2006, Theorem 2.1].
2.2 Learning with symmetric label noise (SLN learning)
The problem of learning with symmetric label noise (SLN learning) is the following [Angluin and
Laird, 1988, Kearns, 1998, Blum and Mitchell, 1998, Natarajan et al., 2013]. For some notional
'clean' distribution D, which we would like to observe, we instead observe samples from some
corrupted distribution SLN(D, σ), for some σ ∈ [0, 1/2). The distribution SLN(D, σ) is such that
the marginal distribution of instances is unchanged, but each label is independently flipped with
probability σ. The goal is to learn a scorer from these corrupted samples such that $L^D_{01}(s)$ is small.
For any quantity in D, we denote its corrupted counterparts in SLN(D, σ) with a bar, e.g. $\bar{M}$ for
the corrupted marginal distribution, and $\bar{\eta}$ for the corrupted class-probability function; additionally,
when σ is clear from context, we will occasionally refer to SLN(D, σ) by $\bar{D}$. It is easy to check that
the corrupted marginal distribution $\bar{M} = M$, and [Natarajan et al., 2013, Lemma 7]
$$(\forall x \in X)\quad \bar{\eta}(x) = (1 - 2\sigma)\cdot\eta(x) + \sigma. \tag{2}$$
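The corruption process, and the identity in Equation 2, are easy to simulate; the following sketch (our own illustration in Python, not from the paper) flips each label independently with probability σ and checks the corrupted class-probability numerically.

```python
import numpy as np

def corrupt_sln(y, sigma, rng):
    """Flip each label in {-1, +1} independently with probability sigma."""
    flips = rng.random(len(y)) < sigma
    return np.where(flips, -y, y)

# Check Equation 2: if P(Y = 1 | x) = eta, the corrupted class-probability
# should be (1 - 2*sigma) * eta + sigma.
rng = np.random.default_rng(0)
eta, sigma, n = 0.8, 0.3, 10**6
y = np.where(rng.random(n) < eta, 1, -1)
y_bar = corrupt_sln(y, sigma, rng)
print(np.mean(y_bar == 1))             # approximately 0.62
print((1 - 2 * sigma) * eta + sigma)   # exactly 0.62
```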
3 SLN-robustness: formalisation
We consider learners (ℓ, F) for a loss ℓ and a function class F, with learning being the search for
some s ∈ F that minimises the ℓ-risk. Informally, (ℓ, F) is 'robust' to symmetric label noise (SLN-robust) if minimising ℓ over F gives the same classifier on both the clean distribution D, which
the learner would like to observe, and SLN(D, σ) for any σ ∈ [0, 1/2), which the learner actually
observes. We now formalise this notion, and review what is known about SLN-robust learners.
3.1 SLN-robust learners: a formal definition
For some fixed instance space X, let Δ denote the set of distributions on X × {±1}. Given a notional
'clean' distribution D, $N_{\mathrm{sln}} : \Delta \to 2^\Delta$ returns the set of possible corrupted versions of D the learner
may observe, where labels are flipped with unknown probability σ:
$$N_{\mathrm{sln}} : D \mapsto \left\{ \mathrm{SLN}(D, \sigma) \;\middle|\; \sigma \in \left[0, \tfrac{1}{2}\right) \right\}.$$
Equipped with this, we define our notion of SLN-robustness.
Definition 1 (SLN-robustness). We say that a learner (ℓ, F) is SLN-robust if
$$(\forall D \in \Delta)\ (\forall \bar{D} \in N_{\mathrm{sln}}(D))\quad L^D_{01}\big(S^{\bar{D},F,*}_\ell\big) = L^D_{01}\big(S^{D,F,*}_\ell\big). \tag{3}$$
That is, SLN-robustness requires that for any level of label noise in the observed distribution $\bar{D}$, the
classification performance (w.r.t. D) of the learner is the same as if the learner directly observes D.
Unfortunately, a widely adopted class of learners is not SLN-robust, as we will now see.
3.2 Convex potentials with linear function classes are not SLN-robust
Fix X = ℝ^d, and consider learners with a convex potential ℓ, and a function class of linear scorers
$$F_{\mathrm{lin}} = \{x \mapsto \langle w, x\rangle \mid w \in \mathbb{R}^d\}.$$
This captures e.g. the linear SVM and logistic regression, which are widely studied in theory and
applied in practice. Disappointingly, these learners are not SLN-robust: Long and Servedio [2010,
Theorem 2] give an example where, when learning under symmetric label noise, for any convex
potential ℓ, the corrupted ℓ-risk minimiser over $F_{\mathrm{lin}}$ has classification performance equivalent to
random guessing on D. This implies that $(\ell, F_{\mathrm{lin}})$ is not SLN-robust¹ as per Definition 1.
Proposition 1 (Long and Servedio [2010, Theorem 2]). Let X = ℝ^d for any d ≥ 2. Pick any convex
potential ℓ. Then, $(\ell, F_{\mathrm{lin}})$ is not SLN-robust.
3.3 The fallout: what learners are SLN-robust?
In light of Proposition 1, there are two ways to proceed in order to obtain SLN-robust learners: either
we change the class of losses ℓ, or we change the function class F.
The first approach has been pursued in a large body of work that embraces non-convex losses
[Stempfel and Ralaivola, 2009, Masnadi-Shirazi et al., 2010, Ding and Vishwanathan, 2010,
Denchev et al., 2012, Manwani and Sastry, 2013]. While such losses avoid the conditions of Proposition 1, this does not automatically imply that they are SLN-robust when used with $F_{\mathrm{lin}}$. In Appendix
B, we present evidence that some of these losses are in fact not SLN-robust when used with $F_{\mathrm{lin}}$.
The second approach is to consider suitably rich F that contains the Bayes-optimal scorer for D,
e.g. by employing a universal kernel. With this choice, one can still use a convex potential loss, and
in fact, owing to Equation 2, any classification-calibrated loss.
Proposition 2. Pick any classification-calibrated ℓ. Then, $(\ell, \mathbb{R}^X)$ is SLN-robust.
Both approaches have drawbacks. The first approach has a computational penalty, as it requires
optimising a non-convex loss. The second approach has a statistical penalty, as estimation rates
with a rich F will require a larger sample size. Thus, it appears that SLN-robustness involves a
computational-statistical tradeoff. However, there is a variant of the first option: pick a loss that is
convex, but not a convex potential. Such a loss would afford the computational and statistical advantages of minimising convex risks with linear scorers. Manwani and Sastry [2013] demonstrated
that square loss, ℓ(y, v) = (1 − yv)², is one such loss. We will show that there is a simpler loss that
is convex and SLN-robust, but is not in the class of convex potentials by virtue of being negatively
unbounded. To derive this loss, we first re-interpret robustness via a noise-correction procedure.
¹ Even if we were content with a difference of ε ∈ [0, 1/2] between the clean and corrupted minimisers'
performance, Long and Servedio [2010, Theorem 2] implies that in the worst case ε = 1/2.
4 A noise-corrected loss perspective on SLN-robustness
We now re-express SLN-robustness to reason about optimal scorers on the same distribution, but
with two different losses. This will help characterise a set of 'strongly SLN-robust' losses.
4.1 Reformulating SLN-robustness via noise-corrected losses
Given any σ ∈ [0, 1/2), Natarajan et al. [2013, Lemma 1] showed how to associate with a loss ℓ a
noise-corrected counterpart $\bar{\ell}$ such that $L^{\bar{D}}_{\bar{\ell}}(s) = L^D_\ell(s)$. The loss $\bar{\ell}$ is defined as follows.
Definition 2 (Noise-corrected loss). Given any loss ℓ and σ ∈ [0, 1/2), the noise-corrected loss $\bar{\ell}$ is
$$(\forall y \in \{\pm 1\})\ (\forall v \in \mathbb{R})\quad \bar{\ell}(y, v) = \frac{(1-\sigma)\cdot \ell(y,v) - \sigma\cdot \ell(-y,v)}{1-2\sigma}. \tag{4}$$
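Definition 2 translates directly into code. A minimal sketch (ours, with hypothetical function names):

```python
def noise_corrected(loss, sigma):
    """Noise-corrected counterpart of `loss` per Definition 2.

    `loss(y, v)` is any loss on labels y in {-1, +1} and scores v;
    sigma in [0, 1/2) is the label-flip probability.
    """
    assert 0 <= sigma < 0.5
    def corrected(y, v):
        return ((1 - sigma) * loss(y, v) - sigma * loss(-y, v)) / (1 - 2 * sigma)
    return corrected

hinge = lambda y, v: max(0.0, 1.0 - y * v)
hinge_bar = noise_corrected(hinge, sigma=0.2)
print(hinge_bar(1, 2.0))  # ((0.8 * 0.0) - (0.2 * 3.0)) / 0.6 = -1.0
```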
Since $\bar{\ell}$ depends on the unknown parameter σ, it is not directly usable to design an SLN-robust
learner. Nonetheless, it is a useful theoretical device, since, by construction, for any F, $S^{\bar{D},F,*}_{\bar{\ell}} = S^{D,F,*}_{\ell}$. This means that a sufficient condition for (ℓ, F) to be SLN-robust is for $S^{\bar{D},F,*}_{\bar{\ell}} = S^{\bar{D},F,*}_{\ell}$.
Ghosh et al. [2015, Theorem 1] proved a sufficient condition on ℓ such that this holds, namely,
$$(\exists C \in \mathbb{R})\ (\forall v \in \mathbb{R})\quad \ell_1(v) + \ell_{-1}(v) = C. \tag{5}$$
Interestingly, Equation 5 is necessary for a stronger notion of robustness, which we now explore.
4.2 Characterising a stronger notion of SLN-robustness
As the first step towards a stronger notion of robustness, we rewrite (with a slight abuse of notation)
$$L^D_\ell(s) = \mathbb{E}_{(X,Y)\sim D}\left[\ell(Y, s(X))\right] = \mathbb{E}_{(Y,S)\sim R(D,s)}\left[\ell(Y, S)\right] := L_\ell(R(D,s)),$$
where R(D, s) is a distribution over labels and scores. Standard SLN-robustness requires that label
noise does not change the ℓ-risk minimisers, i.e. that if s is such that $L_\ell(R(D,s)) \le L_\ell(R(D,s'))$
for all s', the same relation holds with $\bar{D}$ in place of D. Strong SLN-robustness strengthens this
notion by requiring that label noise does not affect the ordering of all pairs of joint distributions over
labels and scores. (This of course trivially implies SLN-robustness.) As with the definition of $\bar{D}$,
given a distribution R over labels and scores, let $\bar{R}$ be the corresponding distribution where labels
are flipped with probability σ. Strong SLN-robustness can then be made precise as follows.
Definition 3 (Strong SLN-robustness). Call a loss ℓ strongly SLN-robust if for every σ ∈ [0, 1/2),
$$(\forall R, R')\quad L_\ell(R) \le L_\ell(R') \iff L_\ell(\bar{R}) \le L_\ell(\bar{R}').$$
We now re-express strong SLN-robustness using a notion of order equivalence of loss pairs, which
simply requires that two losses order all distributions over labels and scores identically.
Definition 4 (Order equivalent loss pairs). Call a pair of losses $(\ell, \tilde{\ell})$ order equivalent if
$$(\forall R, R')\quad L_\ell(R) \le L_\ell(R') \iff L_{\tilde{\ell}}(R) \le L_{\tilde{\ell}}(R').$$
Clearly, order equivalence of $(\ell, \bar{\ell})$ implies $S^{\bar{D},F,*}_{\bar{\ell}} = S^{\bar{D},F,*}_{\ell}$, which in turn implies SLN-robustness.
It is thus not surprising that we can relate order equivalence to strong SLN-robustness of ℓ.
Proposition 3. A loss ℓ is strongly SLN-robust iff for every σ ∈ [0, 1/2), $(\ell, \bar{\ell})$ are order equivalent.
This connection now lets us exploit a classical result in decision theory about order equivalent losses
being affine transformations of each other. Combined with the definition of $\bar{\ell}$, this lets us conclude
that the sufficient condition of Equation 5 is also necessary for strong SLN-robustness of ℓ.
Proposition 4. A loss ℓ is strongly SLN-robust if and only if it satisfies Equation 5.
We now return to our original goal, which was to find a convex ℓ that is SLN-robust for $F_{\mathrm{lin}}$ (and
ideally more general function classes). The above suggests that to do so, it is reasonable to consider
those losses that satisfy Equation 5. Unfortunately, it is evident that if ℓ is convex, non-constant, and
bounded below by zero, then it cannot possibly be admissible in this sense. But we now show that
removing the boundedness restriction allows for the existence of a convex admissible loss.
5 The unhinged loss: a convex, strongly SLN-robust loss
Consider the following simple, but non-standard convex loss:
$$\ell^{\mathrm{unh}}_1(v) = 1 - v \quad\text{and}\quad \ell^{\mathrm{unh}}_{-1}(v) = 1 + v.$$
Compared to the hinge loss, the loss does not clamp at zero, i.e. it does not have a hinge. (Thus, peculiarly, it is negatively unbounded, an issue we discuss in §5.3.) Thus, we call this the unhinged loss².
The loss has a number of attractive properties, the most immediate being its SLN-robustness.
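For concreteness, here is the unhinged loss next to the hinge loss it modifies (a minimal sketch; the vectorised form is our own):

```python
import numpy as np

def unhinged_loss(y, v):
    """Unhinged loss: the hinge loss without the clamp at zero."""
    return 1.0 - y * v               # can be arbitrarily negative

def hinge_loss(y, v):
    return np.maximum(0.0, 1.0 - y * v)

v = np.linspace(-3, 3, 7)
print(unhinged_loss(+1, v))  # keeps decreasing past yv = 1
print(hinge_loss(+1, v))     # identical until yv = 1, then clamped at 0
# Note unhinged_loss(1, v) + unhinged_loss(-1, v) == 2 for every v,
# which is exactly the admissibility condition of Equation 5.
```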
5.1 The unhinged loss is strongly SLN-robust
Since $\ell^{\mathrm{unh}}_1(v) + \ell^{\mathrm{unh}}_{-1}(v) = 2$, Proposition 4 implies that $\ell^{\mathrm{unh}}$ is strongly SLN-robust, and thus that
$(\ell^{\mathrm{unh}}, F)$ is SLN-robust for any F. Further, the following uniqueness property is not hard to show.
Proposition 5. Pick any convex loss ℓ. Then,
$$(\exists C \in \mathbb{R})\ \ell_1(v) + \ell_{-1}(v) = C \iff (\exists A, B, D \in \mathbb{R})\ \ell_1(v) = -A\cdot v + B,\ \ell_{-1}(v) = A\cdot v + D.$$
That is, up to scaling and translation, $\ell^{\mathrm{unh}}$ is the only convex loss that is strongly SLN-robust.
Returning to the case of linear scorers, the above implies that $(\ell^{\mathrm{unh}}, F_{\mathrm{lin}})$ is SLN-robust. This does
not contradict Proposition 1, since $\ell^{\mathrm{unh}}$ is not a convex potential as it is negatively unbounded. Intuitively, this property allows the loss to offset the penalty incurred by instances that are misclassified
with high margin by awarding a 'gain' for instances that are correctly classified with high margin.
5.2 The unhinged loss is classification calibrated
SLN-robustness is by itself insufficient for a learner to be useful. For example, a loss that is uniformly zero is strongly SLN-robust, but is useless as it is not classification-calibrated. Fortunately,
the unhinged loss is classification-calibrated, as we now establish. For technical reasons (see §5.3),
we operate with $F_B = [-B, +B]^X$, the set of scorers with range bounded by B ∈ [0, ∞).
Proposition 6. Fix $\ell = \ell^{\mathrm{unh}}$. For any $D_{M,\eta}$ and B ∈ [0, ∞), $S^{D,F_B,*}_\ell = \{x \mapsto B\cdot\operatorname{sign}(2\eta(x) - 1)\}$.
Thus, for every B ∈ [0, ∞), the restricted Bayes-optimal scorer over $F_B$ has the same sign as the
Bayes-optimal classifier for 0-1 loss. In the limiting case where $F = \mathbb{R}^X$, the optimal scorer is
attainable if we operate over the extended reals $\mathbb{R} \cup \{\pm\infty\}$, so that $\ell^{\mathrm{unh}}$ is classification-calibrated.
5.3 Enforcing boundedness of the loss
While the classification-calibration of $\ell^{\mathrm{unh}}$ is encouraging, Proposition 6 implies that its (unrestricted) Bayes-risk is −∞. Thus, the regret of every non-optimal scorer s is identically +∞, which
hampers analysis of consistency. In orthodox decision theory, analogous theoretical issues arise
when attempting to establish basic theorems with unbounded losses [Ferguson, 1967, pg. 78].
We can side-step this issue by restricting attention to bounded scorers, so that $\ell^{\mathrm{unh}}$ is effectively
bounded. By Proposition 6, this does not affect the classification-calibration of the loss. In the context of linear scorers, boundedness of scorers can be achieved by regularisation: instead of working
with $F_{\mathrm{lin}}$, one can instead use $F_{\mathrm{lin},\lambda} = \{x \mapsto \langle w, x\rangle \mid \|w\|_2 \le 1/\sqrt{\lambda}\}$, where λ > 0, so
that $F_{\mathrm{lin},\lambda} \subseteq F_{R/\sqrt{\lambda}}$ for $R = \sup_{x\in X}\|x\|_2$. Observe that as $(\ell^{\mathrm{unh}}, F)$ is SLN-robust for any F,
$(\ell^{\mathrm{unh}}, F_{\mathrm{lin},\lambda})$ is SLN-robust for any λ > 0. As we shall see in §6.3, working with $F_{\mathrm{lin},\lambda}$ also lets us
establish SLN-robustness of the hinge loss when λ is large.
5.4 Unhinged loss minimisation on corrupted distribution is consistent
Using bounded scorers makes it possible to establish a surrogate regret bound for the unhinged loss.
This shows classification consistency of unhinged loss minimisation on the corrupted distribution.
Proposition 7. Fix $\ell = \ell^{\mathrm{unh}}$. Then, for any D, σ ∈ [0, 1/2), B ∈ [1, ∞), and scorer s ∈ $F_B$,
$$\operatorname{regret}^D_{01}(s) \le \operatorname{regret}^{D,F_B}_\ell(s) = \frac{1}{1-2\sigma}\cdot\operatorname{regret}^{\bar{D},F_B}_\ell(s).$$
Standard rates of convergence via generalisation bounds are also trivial to derive; see the Appendix.
² This loss has been considered in Sriperumbudur et al. [2009], Reid and Williamson [2011] in the context
of maximum mean discrepancy; see the Appendix. The analysis of its SLN-robustness is to our knowledge
novel.
6 Learning with the unhinged loss and kernels
We now show that the optimal solution for the unhinged loss when employing regularisation and
kernelised scorers has a simple form. This sheds further light on SLN-robustness and regularisation.
6.1 The centroid classifier optimises the unhinged loss
Consider minimising the unhinged risk over the class of kernelised scorers $F_{H,\lambda} = \{s : x \mapsto \langle w, \Phi(x)\rangle_H \mid \|w\|_H \le 1/\sqrt{\lambda}\}$ for some λ > 0, where Φ : X → H is a feature mapping into a
reproducing kernel Hilbert space H with kernel k. Equivalently, given a distribution³ D, we want
$$w^*_{\mathrm{unh},\lambda} = \operatorname*{argmin}_{w \in H}\ \mathbb{E}_{(X,Y)\sim D}\left[1 - Y\cdot\langle w, \Phi(X)\rangle\right] + \frac{\lambda}{2}\langle w, w\rangle_H. \tag{6}$$
The first-order optimality condition implies that
$$w^*_{\mathrm{unh},\lambda} = \frac{1}{\lambda}\,\mathbb{E}_{(X,Y)\sim D}\left[Y\cdot\Phi(X)\right], \tag{7}$$
which is the kernel mean map of D [Smola et al., 2007], and thus the optimal unhinged scorer is
$$s^*_{\mathrm{unh},\lambda} : x \mapsto \frac{1}{\lambda}\,\mathbb{E}_{(X,Y)\sim D}\left[Y\cdot k(X,x)\right] = x \mapsto \frac{1}{\lambda}\left(\pi\cdot\mathbb{E}_{X\sim P}[k(X,x)] - (1-\pi)\cdot\mathbb{E}_{X\sim Q}[k(X,x)]\right). \tag{8}$$
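Training is therefore just averaging: the plugin estimate of Equation 8 (cf. footnote 3) is a difference of empirical kernel means. A sketch, assuming a Gaussian kernel (our choice for illustration):

```python
import numpy as np

def rbf(A, B, bandwidth=1.0):
    """Gaussian kernel matrix k(a, b) for the rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * bandwidth ** 2))

def unhinged_scorer(X_train, y_train, lam=1.0):
    """Plugin estimate of Equation 8: s(x) = (1/lam) * mean_i y_i k(x_i, x)."""
    def score(X_test):
        return rbf(X_train, X_test).T @ y_train / (lam * len(y_train))
    return score
```

Since λ only rescales the scores, sign-based classification is unaffected by its value (a point made in Section 6.2 below).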
From Equation 8, the unhinged solution is equivalent to a nearest centroid classifier [Manning et al.,
2008, pg. 181] [Tibshirani et al., 2002] [Shawe-Taylor and Cristianini, 2004, Section 5.1]. Equation
8 gives a simple way to understand the SLN-robustness of $(\ell^{\mathrm{unh}}, F_{H,\lambda})$, as the optimal scorers on
the clean and corrupted distributions only differ by a scaling (see the Appendix):
$$(\forall x \in X)\quad \mathbb{E}_{(X,Y)\sim D}\left[Y\cdot k(X,x)\right] = \frac{1}{1-2\sigma}\,\mathbb{E}_{(X,Y)\sim \bar{D}}\left[Y\cdot k(X,x)\right]. \tag{9}$$
Interestingly, Servedio [1999, Theorem 4] established that a nearest centroid classifier (which they
termed 'AVERAGE') is robust to a general class of label noise, but required the assumption that
M is uniform over the unit sphere. Our result establishes that SLN robustness of the classifier
holds without any assumptions on M. In fact, Ghosh et al. [2015, Theorem 1] lets one quantify the
unhinged loss's performance under a more general noise model; see the Appendix for discussion.
6.2 Practical considerations
We note several points relating to practical usage of the unhinged loss with kernelised scorers. First,
cross-validation is not required to select λ, since changing λ only changes the magnitude of scores,
not their sign. Thus, for the purposes of classification, one can simply use λ = 1.
Second, we can easily extend the scorers to use a bias regularised with strength $0 < \lambda_b \ne \lambda$. Tuning
$\lambda_b$ is equivalent to computing $s^*_{\mathrm{unh},\lambda}$ as per Equation 8, and tuning a threshold on a holdout set.
Third, when H = ℝ^d for d small, we can store $w^*_{\mathrm{unh},\lambda}$ explicitly, and use this to make predictions.
For high (or infinite) dimensional H, we can either make predictions directly via Equation 8, or
use random Fourier features [Rahimi and Recht, 2007] to (approximately) embed H into some low-dimensional ℝ^d, and then store $w^*_{\mathrm{unh},\lambda}$ as usual. (The latter requires a translation-invariant kernel.)
We now show that under some assumptions, $w^*_{\mathrm{unh},\lambda}$ coincides with the solution of two established
methods; the Appendix discusses some further relationships, e.g. to the maximum mean discrepancy.
³ Given a training sample S ∼ Dⁿ, we can use plugin estimates as appropriate.
6.3 Equivalence to a highly regularised SVM and other convex potentials
There is an interesting equivalence between the unhinged solution and that of a highly regularised
SVM. This has been noted in e.g. Hastie et al. [2004, Section 6], which showed how SVMs approach
a nearest centroid classifier, which is of course the optimal unhinged solution.
Proposition 8. Pick any D and Φ : X → H with $R = \sup_{x\in X}\|\Phi(x)\|_H < \infty$. For any λ > 0, let
$$w^*_{\mathrm{hinge},\lambda} = \operatorname*{argmin}_{w\in H}\ \mathbb{E}_{(X,Y)\sim D}\left[\max(0,\ 1 - Y\cdot\langle w, \Phi(X)\rangle_H)\right] + \frac{\lambda}{2}\langle w, w\rangle_H$$
be the soft-margin SVM solution. Then, if λ ≥ R², $w^*_{\mathrm{hinge},\lambda} = w^*_{\mathrm{unh},\lambda}$.
Since $(\ell^{\mathrm{unh}}, F_{H,\lambda})$ is SLN-robust, it follows that for $\ell^{\mathrm{hinge}} : (y,v) \mapsto \max(0, 1-yv)$, $(\ell^{\mathrm{hinge}}, F_{H,\lambda})$
is similarly SLN-robust provided λ is sufficiently large. That is, strong $\ell_2$ regularisation (and a
bounded feature map) endows the hinge loss with SLN-robustness⁴. Proposition 8 can be generalised
to show that $w^*_{\mathrm{unh},\lambda}$ is the limiting solution of any twice differentiable convex potential. This shows
that strong $\ell_2$ regularisation endows most learners with SLN-robustness. Intuitively, with strong
regularisation, one only considers the behaviour of a loss near zero; since a convex potential φ has
φ′(0) < 0, it will behave similarly to its linear approximation around zero, viz. the unhinged loss.
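Proposition 8 is straightforward to probe numerically; the sketch below (using scikit-learn, our tooling choice, on synthetic data we made up) checks that a very heavily ℓ₂-regularised linear SVM aligns with the centroid direction of Equation 7.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200)).astype(int)

w_unh = (y[:, None] * X).mean(axis=0)   # Equation 7, up to the 1/lambda factor

# Tiny C corresponds to heavy l2 regularisation (large lambda).
svm = LinearSVC(C=1e-6, loss="hinge", dual=True, fit_intercept=False).fit(X, y)
w_svm = svm.coef_.ravel()

cos = w_svm @ w_unh / (np.linalg.norm(w_svm) * np.linalg.norm(w_unh))
print(cos)  # should be very close to 1
```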
Proposition 9. Pick any D, bounded feature mapping Φ : X → H, and twice differentiable convex
potential φ with φ″([−1, 1]) bounded. Let $w^*_{\phi,\lambda}$ be the minimiser of the regularised φ-risk. Then,
$$\lim_{\lambda\to\infty}\ \left\|\ \frac{w^*_{\phi,\lambda}}{\|w^*_{\phi,\lambda}\|_H} - \frac{w^*_{\mathrm{unh},\lambda}}{\|w^*_{\mathrm{unh},\lambda}\|_H}\ \right\|_H = 0.$$
6.4 Equivalence to Fisher Linear Discriminant with whitened data
For binary classification on $D_{M,\eta}$, the Fisher Linear Discriminant (FLD) finds a weight vector proportional to the minimiser of square loss $\ell_{\mathrm{sq}} : (y, v) \mapsto (1 - yv)^2$ [Bishop, 2006, Section 4.1.5],
$$w^*_{\mathrm{sq},\lambda} = \left(\mathbb{E}_{X\sim M}[XX^\top] + \lambda I\right)^{-1}\cdot \mathbb{E}_{(X,Y)\sim D}[Y\cdot X]. \tag{10}$$
By Equation 9, and the fact that the corrupted marginal $\bar{M} = M$, $w^*_{\mathrm{sq},\lambda}$ is only changed by a scaling
factor under label noise. This provides an alternate proof of the fact that $(\ell_{\mathrm{sq}}, F_{\mathrm{lin}})$ is SLN-robust⁵
[Manwani and Sastry, 2013, Theorem 2]. Clearly, the unhinged loss solution $w^*_{\mathrm{unh},\lambda}$ is equivalent to
the FLD and square loss solution $w^*_{\mathrm{sq},\lambda}$ when the input data is whitened, i.e. $\mathbb{E}_{X\sim M}[XX^\top] = I$. With
a well-specified F, e.g. with a universal kernel, both the unhinged and square loss asymptotically
recover the optimal classifier, but the unhinged loss does not require a matrix inversion. With a
misspecified F, one cannot in general argue for the superiority of the unhinged loss over square loss,
or vice-versa, as there is no universally good surrogate to the 0-1 loss [Reid and Williamson, 2010,
Appendix A]; the Appendix illustrates examples where both losses may underperform.
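The whitening claim is also easy to verify directly; a small sketch on our own synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 3)) @ rng.normal(size=(3, 3))  # correlated features
y = np.sign(X[:, 0] + 0.5 * rng.normal(size=5000))

# Whiten so that E[X X^T] = I; then the two directions coincide.
cov = X.T @ X / len(X)
Xw = X @ np.linalg.inv(np.linalg.cholesky(cov)).T
w_unh = (y[:, None] * Xw).mean(axis=0)                    # Equation 7
w_sq = np.linalg.solve(Xw.T @ Xw / len(Xw) + 1e-8 * np.eye(3),
                       (y[:, None] * Xw).mean(axis=0))    # Equation 10
print(w_unh / np.linalg.norm(w_unh))
print(w_sq / np.linalg.norm(w_sq))                        # same direction
```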
7 SLN-robustness of unhinged loss: empirical illustration
We now illustrate that the unhinged loss's SLN-robustness is empirically manifest. We reiterate
that with high regularisation, the unhinged solution is equivalent to an SVM (and in the limit any
classification-calibrated loss) solution. Thus, we do not aim to assert that the unhinged loss is
'better' than other losses, but rather, to demonstrate that its SLN-robustness is not purely theoretical.
We first show that the unhinged risk minimiser performs well on the example of Long
and Servedio [2010] (henceforth LS10). Figure 1 shows the distribution D, where X =
{(1, 0), (γ, 5γ), (γ, −γ)} ⊂ ℝ², with marginal distribution M = (1/4, 1/4, 1/2) and all three instances
are deterministically positive. We pick γ = 1/2. The unhinged minimiser perfectly classifies all
three points, regardless of the level of label noise (Figure 1). The hinge minimiser is perfect when
there is no noise, but with even a small amount of noise, achieves a 50% error rate.
⁴ Long and Servedio [2010, Section 6] show that ℓ₁ regularisation does not endow SLN-robustness.
⁵ Square loss escapes the result of Long and Servedio [2010] since it is not monotone decreasing.
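The LS10 construction is small enough to reproduce in a few lines. The sketch below (our simplification, with the marginal assigned to the three points in the order listed above) draws noisy samples and shows the unhinged minimiser keeps a zero 0-1 error at every noise level:

```python
import numpy as np

gamma = 0.5
pts = np.array([[1.0, 0.0], [gamma, 5 * gamma], [gamma, -gamma]])
probs = np.array([0.25, 0.25, 0.5])  # assumed order-matching assignment of M
rng = np.random.default_rng(0)

def sample(n, sigma):
    idx = rng.choice(3, size=n, p=probs)
    y = np.ones(n)
    flips = rng.random(n) < sigma
    return pts[idx], np.where(flips, -y, y)

for sigma in [0.0, 0.2, 0.4]:
    X, y = sample(10_000, sigma)
    w = (y[:, None] * X).mean(axis=0)       # unhinged minimiser, Equation 7
    err = np.mean(np.sign(pts @ w) != 1.0)  # 0-1 error on the clean points
    print(sigma, err)                        # error stays 0 at all noise levels
```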
[Figure 1: the LS10 dataset, plotted with the Unhinged, Hinge (0% noise), and Hinge (1% noise) solutions.]

            Hinge          t-logistic     Unhinged
σ = 0       0.00 ± 0.00    0.00 ± 0.00    0.00 ± 0.00
σ = 0.1     0.15 ± 0.27    0.00 ± 0.00    0.00 ± 0.00
σ = 0.2     0.21 ± 0.30    0.00 ± 0.00    0.00 ± 0.00
σ = 0.3     0.38 ± 0.37    0.22 ± 0.08    0.00 ± 0.00
σ = 0.4     0.42 ± 0.36    0.22 ± 0.08    0.00 ± 0.00
σ = 0.49    0.47 ± 0.38    0.39 ± 0.23    0.34 ± 0.48

Table 1: Mean and standard deviation of the 0-1 error over 125 trials on LS10. Grayed cells
denote the best performer at that noise rate.
We next consider empirical risk minimisers from a random training sample: we construct a training
set of 800 instances, injected with varying levels of label noise, and evaluate classification performance on a test set of 1000 instances. We compare the hinge, t-logistic (for t = 2) [Ding and
Vishwanathan, 2010] and unhinged minimisers using a linear scorer without a bias term, and regularisation strength λ = 10⁻¹⁶. From Table 1, even at 40% label noise, the unhinged classifier is able
to find a perfect solution. By contrast, both other losses suffer at even moderate noise rates.
We next report results on some UCI datasets, where we additionally tune a threshold so as to ensure
the best training set 0-1 accuracy. Table 2 summarises results on a sample of four datasets. (The
Appendix contains results with more datasets, performance metrics, and losses.) Even at noise close
to 50%, the unhinged loss is often able to learn a classifier with some discriminative power.
(a) iris
            Hinge          t-Logistic     Unhinged
σ = 0       0.00 ± 0.00    0.00 ± 0.00    0.00 ± 0.00
σ = 0.1     0.01 ± 0.03    0.01 ± 0.03    0.00 ± 0.00
σ = 0.2     0.06 ± 0.12    0.04 ± 0.05    0.00 ± 0.01
σ = 0.3     0.17 ± 0.20    0.09 ± 0.11    0.02 ± 0.07
σ = 0.4     0.35 ± 0.24    0.24 ± 0.16    0.13 ± 0.22
σ = 0.49    0.60 ± 0.20    0.49 ± 0.20    0.45 ± 0.33

(b) housing
            Hinge          t-Logistic     Unhinged
σ = 0       0.05 ± 0.00    0.05 ± 0.00    0.05 ± 0.00
σ = 0.1     0.06 ± 0.01    0.07 ± 0.02    0.05 ± 0.00
σ = 0.2     0.06 ± 0.01    0.08 ± 0.03    0.05 ± 0.00
σ = 0.3     0.08 ± 0.04    0.11 ± 0.05    0.05 ± 0.01
σ = 0.4     0.14 ± 0.10    0.24 ± 0.13    0.09 ± 0.10
σ = 0.49    0.45 ± 0.26    0.49 ± 0.16    0.46 ± 0.30

(c) usps0v7
            Hinge          t-Logistic     Unhinged
σ = 0       0.00 ± 0.00    0.00 ± 0.00    0.00 ± 0.00
σ = 0.1     0.10 ± 0.08    0.11 ± 0.02    0.00 ± 0.00
σ = 0.2     0.19 ± 0.11    0.15 ± 0.02    0.00 ± 0.00
σ = 0.3     0.31 ± 0.13    0.22 ± 0.03    0.01 ± 0.00
σ = 0.4     0.39 ± 0.13    0.33 ± 0.04    0.02 ± 0.02
σ = 0.49    0.50 ± 0.16    0.48 ± 0.04    0.34 ± 0.21

(d) splice
            Hinge          t-Logistic     Unhinged
σ = 0       0.05 ± 0.00    0.04 ± 0.00    0.19 ± 0.00
σ = 0.1     0.15 ± 0.03    0.24 ± 0.00    0.19 ± 0.01
σ = 0.2     0.21 ± 0.03    0.24 ± 0.00    0.19 ± 0.01
σ = 0.3     0.25 ± 0.03    0.24 ± 0.00    0.19 ± 0.03
σ = 0.4     0.31 ± 0.05    0.24 ± 0.00    0.22 ± 0.05
σ = 0.49    0.48 ± 0.09    0.40 ± 0.24    0.45 ± 0.08

Table 2: Mean and standard deviation of the 0-1 error over 125 trials on UCI datasets.
8 Conclusion and future work
We proposed a convex, classification-calibrated loss, proved that it is robust to symmetric label noise
(SLN-robust), showed it is the unique loss that satisfies a notion of strong SLN-robustness, established that it is optimised by the nearest centroid classifier, and showed that most convex potentials,
such as the SVM, are also SLN-robust when highly regularised. So, with apologies to Wilde [1895]:
While the truth is rarely pure, it can be simple.
Acknowledgments
NICTA is funded by the Australian Government through the Department of Communications and
the Australian Research Council through the ICT Centre of Excellence Program. The authors thank
Cheng Soon Ong for valuable comments on a draft of this paper.
References
Dana Angluin and Philip Laird. Learning from noisy examples. Machine Learning, 2(4):343–370, 1988.
Peter L. Bartlett, Michael I. Jordan, and Jon D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer-Verlag New York, Inc., 2006.
Avrim Blum and Tom Mitchell. Combining labeled and unlabeled data with co-training. In Conference on Computational Learning Theory (COLT), pages 92–100, 1998.
Vasil Denchev, Nan Ding, Hartmut Neven, and S.V.N. Vishwanathan. Robust classification with adiabatic quantum optimization. In International Conference on Machine Learning (ICML), pages 863–870, 2012.
Luc Devroye, László Györfi, and Gábor Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, 1996.
Nan Ding and S.V.N. Vishwanathan. t-logistic regression. In Advances in Neural Information Processing Systems (NIPS), pages 514–522. Curran Associates, Inc., 2010.
Thomas S. Ferguson. Mathematical Statistics: A Decision Theoretic Approach. Academic Press, 1967.
Aritra Ghosh, Naresh Manwani, and P. S. Sastry. Making risk minimization tolerant to label noise. Neurocomputing, 160:93–107, 2015.
Trevor Hastie, Saharon Rosset, Robert Tibshirani, and Ji Zhu. The entire regularization path for the support vector machine. Journal of Machine Learning Research, 5:1391–1415, December 2004. ISSN 1532-4435.
Michael Kearns. Efficient noise-tolerant learning from statistical queries. Journal of the ACM, 5(6):392–401, November 1998.
Philip M. Long and Rocco A. Servedio. Random classification noise defeats all convex potential boosters. Machine Learning, 78(3):287–304, 2010. ISSN 0885-6125.
Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. Introduction to Information Retrieval. Cambridge University Press, New York, NY, USA, 2008. ISBN 0521865719, 9780521865715.
Naresh Manwani and P. S. Sastry. Noise tolerance under risk minimization. IEEE Transactions on Cybernetics, 43(3):1146–1151, June 2013.
Hamed Masnadi-Shirazi, Vijay Mahadevan, and Nuno Vasconcelos. On the design of robust classifiers for computer vision. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010.
Nagarajan Natarajan, Inderjit S. Dhillon, Pradeep D. Ravikumar, and Ambuj Tewari. Learning with noisy labels. In Advances in Neural Information Processing Systems (NIPS), pages 1196–1204, 2013.
Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems (NIPS), pages 1177–1184, 2007.
Mark D. Reid and Robert C. Williamson. Composite binary losses. Journal of Machine Learning Research, 11:2387–2422, December 2010.
Mark D. Reid and Robert C. Williamson. Information, divergence and risk for binary experiments. Journal of Machine Learning Research, 12:731–817, March 2011.
Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels, volume 129. MIT Press, 2002.
Rocco A. Servedio. On PAC learning using Winnow, Perceptron, and a Perceptron-like algorithm. In Conference on Computational Learning Theory (COLT), 1999.
John Shawe-Taylor and Nello Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
Alex Smola, Arthur Gretton, Le Song, and Bernhard Schölkopf. A Hilbert space embedding for distributions. In Algorithmic Learning Theory (ALT), 2007.
Bharath K. Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Gert R. G. Lanckriet, and Bernhard Schölkopf. Kernel choice and classifiability for RKHS embeddings of probability distributions. In Advances in Neural Information Processing Systems (NIPS), 2009.
Guillaume Stempfel and Liva Ralaivola. Learning SVMs from sloppily labeled data. In Artificial Neural Networks (ICANN), volume 5768, pages 884–893. Springer Berlin Heidelberg, 2009.
Robert Tibshirani, Trevor Hastie, Balasubramanian Narasimhan, and Gilbert Chu. Diagnosis of multiple cancer types by shrunken centroids of gene expression. Proceedings of the National Academy of Sciences, 99(10):6567–6572, 2002.
Oscar Wilde. The Importance of Being Earnest, 1895.
Scalable Semi-Supervised Aggregation of Classifiers
Yoav Freund
UC San Diego
yfreund@cs.ucsd.edu
Akshay Balsubramani
UC San Diego
abalsubr@cs.ucsd.edu
Abstract
We present and empirically evaluate an efficient algorithm that learns to aggregate the predictions of an ensemble of binary classifiers. The algorithm uses the
structure of the ensemble predictions on unlabeled data to yield significant performance improvements. It does this without making assumptions on the structure or
origin of the ensemble, without parameters, and as scalably as linear learning. We
empirically demonstrate these performance gains with random forests.
1 Introduction
Ensemble-based learning is a very successful approach to learning classifiers, including well-known
methods like boosting [1], bagging [2], and random forests [3]. The power of these methods has
been clearly demonstrated in open large-scale learning competitions such as the Netflix Prize [4]
and the ImageNet Challenge [5]. In general, these methods train a large number of 'base' classifiers
and then combine them using a (possibly weighted) majority vote. By aggregating over classifiers,
ensemble methods reduce the variance of the predictions, and sometimes also reduce the bias [6].
The ensemble methods above rely solely on a labeled training set of data. In this paper we propose
an ensemble method that uses a large unlabeled data set in addition to the labeled set. Our work is
therefore at the intersection of semi-supervised learning [7, 8] and ensemble learning.
This paper is based on recent theoretical results of the authors [9]. Our main contributions here are
to extend and apply those results with a new algorithm in the context of random forests [3] and to
perform experiments in which we show that, when the number of labeled examples is small, our
algorithm?s performance is at least that of random forests, and often significantly better.
For the sake of completeness, we provide an intuitive introduction to the analysis given in [9]. How
can unlabeled data help in the context of ensemble learning? Consider a simple example with six
equiprobable data points. The ensemble consists of six classifiers, partitioned into three 'A' rules
and three 'B' rules. Suppose that the 'A' rules each have error 1/3 and the 'B' rules each have error
1/6.¹ If given only this information, we might take the majority vote over the six rules, possibly
giving lower weights to the 'A' rules because they have higher errors.
Suppose, however, that we are given the unlabeled information in Table 1. The columns of this table
correspond to the six classifiers and the rows to the six unlabeled examples. Each entry corresponds
to the prediction of the given classifier on the given example. As we see, the main difference between
the 'A' rules and the 'B' rules is that any two 'A' rules disagree with probability 1/3, whereas the
'B' rules always agree. For this example, it can be seen (e.g. proved by contradiction) that the only
possible true labeling of the unlabeled data that is consistent with Table 1 and with the errors of the
classifiers is that all the examples are labeled '+'.
Consequently, we conclude that the majority vote over the 'A' rules has zero error, performing
significantly better than any of the base rules. In contrast, giving the 'B' rules equal weight would
¹ We assume that (bounds on) the errors are, with high probability, true on the actual distribution. Such
bounds can be derived using large deviation bounds or bootstrap-type methods.
result in a rule with error 1/6. Crucially, our reasoning to this point has solely used the structure
of the unlabeled examples along with the error rates in Table 1 to constrain our search for the true
labeling.
         A classifiers         B classifiers
         h1    h2    h3        h4    h5    h6
x1       −     +     +         +     +     +
x2       −     +     +         +     +     +
x3       +     −     +         +     +     +
x4       +     −     +         +     +     +
x5       +     +     −         +     +     +
x6       +     +     −         −     −     −
error    1/3   1/3   1/3       1/6   1/6   1/6

Table 1: An example of the utility of unlabeled examples in ensemble learning
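The 'only possible true labeling' claim can be verified exhaustively. The sketch below (our own; the exact placement of the − entries follows the reconstruction of Table 1 above) enumerates all 2⁶ labelings and keeps those matching the stated error rates:

```python
import itertools
import numpy as np

# Rows are x1..x6; columns are the three "A" rules then the three "B" rules.
F = np.ones((6, 6), dtype=int)
F[0, 0] = F[1, 0] = -1   # A1 errs on x1, x2
F[2, 1] = F[3, 1] = -1   # A2 errs on x3, x4
F[4, 2] = F[5, 2] = -1   # A3 errs on x5, x6
F[5, 3:] = -1            # every B rule errs on x6

errors = np.array([2, 2, 2, 1, 1, 1]) / 6  # 1/3 for A rules, 1/6 for B rules

consistent = [z for z in itertools.product([-1, 1], repeat=6)
              if np.allclose((F != np.array(z)[:, None]).mean(axis=0), errors)]
print(consistent)  # [(1, 1, 1, 1, 1, 1)]: all-positive is the only labeling
```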
By such reasoning alone, we have correctly predicted according to a weighted majority vote. This
example provides some insight into the ways in which unlabeled data can be useful:
- When combining classifiers, diversity is important. It can be better to combine less accurate
rules that disagree with each other than to combine more accurate rules that tend to agree.
- The bounds on the errors of the rules can be seen as a set of constraints on the true labeling.
A complementary set of constraints is provided by the unlabeled examples. These sets of
constraints can be combined to improve the accuracy of the ensemble classifier.
The above setup was recently introduced and analyzed in [9]. That paper characterizes the problem
as a zero-sum game between a predictor and an adversary. It then describes the minimax solution of
the game, which corresponds to an efficient algorithm for transductive learning.
In this paper, we build on the worst-case framework of [9] to devise an efficient and practical semi-supervised aggregation algorithm for random forests. To achieve this, we extend the framework to
handle specialists: classifiers which only venture to predict on a subset of the data, and abstain
from predicting on the rest. Specialists can be very useful in targeting regions of the data on which
to precisely suggest a prediction.
The high-level idea of our algorithm is to artificially generate new specialists from the ensemble.
We incorporate these, and the targeted information they carry, into the worst-case framework of [9].
The resulting aggregated predictor inherits the advantages of the original framework:
(A) Efficient: Learning reduces to solving a scalable p-dimensional convex optimization, and
test-time prediction is as efficient and parallelizable as p-dimensional linear prediction.
(B) Versatile/robust: No assumptions about the structure or origin of the predictions or labels.
(C) No introduced parameters: The aggregation method is completely data-dependent.
(D) Safe: Accuracy guaranteed to be at least that of the best classifier in the ensemble.
We develop these ideas in the rest of this paper, reviewing the core worst-case setting of [9] in Section
2, and specifying how to incorporate specialists and the resulting learning algorithm in Section 3.
Then we perform an exploratory evaluation of the framework on data in Section 4. Though the
framework of [9] and our extensions can be applied to any ensemble of arbitrary origin, in this
paper we focus on random forests, which have been repeatedly demonstrated to have state-of-the-art, robust classification performance in a wide variety of situations [10]. We use a random forest
as a base ensemble whose predictions we aggregate. But unlike conventional random forests, we
do not simply take a majority vote over tree predictions, instead using an unlabeled-data-dependent
aggregation strategy dictated by the worst-case framework we employ.
2 Preliminaries
A few definitions are required to discuss these issues concretely, following [9]. Write $[a]_+ = \max(0, a)$ and $[n] = \{1, 2, \ldots, n\}$. All vector inequalities are componentwise.
We first consider an ensemble H = {h₁, …, h_p} and unlabeled data x₁, …, x_n on which we wish
to predict. As in [9], the predictions and labels are allowed to be randomized, represented by values
in [−1, 1] instead of just the two values {−1, 1}. The ensemble's predictions on the unlabeled data
are denoted by F:
$$F = \begin{pmatrix} h_1(x_1) & h_1(x_2) & \cdots & h_1(x_n) \\ \vdots & \vdots & \ddots & \vdots \\ h_p(x_1) & h_p(x_2) & \cdots & h_p(x_n) \end{pmatrix} \in [-1,1]^{p \times n} \tag{1}$$
We use vector notation for the rows and columns of F: $\mathbf{h}_i = (h_i(x_1), \cdots, h_i(x_n))^\top$ and $\mathbf{x}_j = (h_1(x_j), \cdots, h_p(x_j))^\top$. The true labels on the test data T are represented by $z = (z_1; \ldots; z_n) \in [-1,1]^n$. The labels z are hidden from the predictor, but we assume the predictor has knowledge of
a correlation vector $b \in (0,1]^p$ such that $\frac{1}{n}\sum_j h_i(x_j)\, z_j \ge b_i$, i.e. $\frac{1}{n} F z \ge b$. These p constraints
on z exactly represent upper bounds on individual classifier error rates, which can be estimated from
the training set w.h.p. when all the data are drawn i.i.d., in a standard way also used by empirical
risk minimization (ERM) methods that simply predict with the minimum-error classifier [9].
2.1 The Transductive Binary Classification Game
The idea of [9] is to formulate the ensemble aggregation problem as a two-player zero-sum game
between a predictor and an adversary. In this game, the predictor is the first player, who plays
$g = (g_1; g_2; \ldots; g_n)$, a randomized label $g_i \in [-1,1]$ for each example $\{x_i\}_{i=1}^n$. The adversary
then sets the labels $z \in [-1,1]^n$ under the ensemble classifier error constraints defined by b.² The
predictor's goal is to minimize the worst-case expected classification error on the test data (w.r.t.
the randomized labelings z and g), which is just $\frac{1}{2}\left(1 - \frac{1}{n} z^\top g\right)$. This is equivalently viewed as
maximizing worst-case correlation $\frac{1}{n} z^\top g$. To summarize concretely, we study the following game:
$$V := \max_{g \in [-1,1]^n}\ \min_{z \in [-1,1]^n,\ \frac{1}{n} F z \ge b}\ \frac{1}{n} z^\top g \tag{2}$$
The minimax theorem ([1], p. 144) applies to the game (2), and there is an optimal strategy $g^*$ such
that $\min_{z \in [-1,1]^n,\ \frac{1}{n} Fz \ge b}\ \frac{1}{n} z^\top g^* \ge V$, guaranteeing worst-case prediction error $\frac{1}{2}(1 - V)$ on the n unlabeled
data. This optimal strategy $g^*$ is a simple function of a particular weighting over the p hypotheses:
a nonnegative p-vector.
Definition 1 (Slack Function). Let σ ≥ 0^p be a weight vector over H (not necessarily a distribution).
The vector of ensemble predictions is $F^\top\sigma = (\mathbf{x}_1^\top\sigma, \ldots, \mathbf{x}_n^\top\sigma)$, whose elements' magnitudes are
the margins. The prediction slack function is
$$\gamma(\sigma, b) := \gamma(\sigma) := -b^\top\sigma + \frac{1}{n}\sum_{j=1}^n \left[\, |\mathbf{x}_j^\top\sigma| - 1 \,\right]_+ \tag{3}$$
and this is convex in σ. The optimal weight vector σ* is any minimizer $\sigma^* \in \operatorname*{argmin}_{\sigma \ge 0^p}\ \gamma(\sigma)$.
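Because γ is an average over examples of a hinge-like convex function of σ, projected (stochastic) subgradient descent applies off the shelf; a minimal sketch (ours, not the authors' implementation):

```python
import numpy as np

def slack(sigma, b, F):
    """Equation 3: gamma(sigma) = -b.sigma + mean_j [ |x_j.sigma| - 1 ]_+ ."""
    margins = F.T @ sigma
    return -b @ sigma + np.mean(np.maximum(np.abs(margins) - 1.0, 0.0))

def minimize_slack(b, F, steps=2000, lr=0.1):
    p, n = F.shape
    sigma = np.zeros(p)
    for t in range(steps):
        margins = F.T @ sigma
        active = np.abs(margins) > 1.0   # examples with nonzero slack
        grad = -b + F[:, active] @ np.sign(margins[active]) / n
        sigma = np.maximum(sigma - lr / np.sqrt(t + 1.0) * grad, 0.0)
    return sigma                          # estimate of the optimal sigma*
```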
The main result of [9] uses these to describe the minimax equilibrium of the game (2).
Theorem 2 ([9]). The minimax value of the game (2) is $V = -\gamma(\sigma^*)$. The minimax optimal
predictions are defined as follows: for all j ∈ [n],
$$g_j^* := g_j(\sigma^*) = \begin{cases} \mathbf{x}_j^\top\sigma^* & |\mathbf{x}_j^\top\sigma^*| < 1 \\ \operatorname{sgn}(\mathbf{x}_j^\top\sigma^*) & \text{otherwise.} \end{cases}$$
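In code, the minimax-optimal prediction of Theorem 2 is simply a clipped p-dimensional linear score (continuing the sketch above):

```python
def predict(sigma, F):
    """Theorem 2: x_j.sigma where |x_j.sigma| < 1, its sign otherwise."""
    return np.clip(F.T @ sigma, -1.0, 1.0)
```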
² Since b is calculated from the training set and deviation bounds, we assume the problem is feasible w.h.p.
2.2 Interpretation
Theorem 2 suggests a statistical learning algorithm for aggregating the p ensemble classifiers' predictions: estimate b from the training (labeled) set, optimize the convex slack function γ(σ) to find
σ*, and finally predict with $g_j(\sigma^*)$ on each example j in the test set. The resulting predictions are
guaranteed to have low error, as measured by V. In particular, it is easy to prove [9] that V is at least
$\max_i b_i$, the performance of the best classifier.
The slack function (3) merits further scrutiny. Its first term depends only on the labeled data and
not the unlabeled set, while the second term $\frac{1}{n}\sum_{j=1}^n [\,|\mathbf{x}_j^\top\sigma| - 1\,]_+$ incorporates only unlabeled
information. These two terms trade off smoothly: as the problem setting becomes fully supervised
and unlabeled information is absent, the first term dominates, and σ* tends to put all its weight on
the best single classifier like ERM.
Indeed, this viewpoint suggests a (loose) interpretation of the second term as an unsupervised regularizer for the otherwise fully supervised optimization of the 'average' error $b^\top\sigma$. It turns out that
a change in the regularization factor corresponds to different constraints on the true labels z:
Theorem 3 ([9]). Let $V_\alpha := \max_{g\in[-1,1]^n}\ \min_{z\in[-\alpha,\alpha]^n,\ \frac{1}{n} Fz \ge b}\ \frac{1}{n} z^\top g$ for any α > 0. Then
$$V_\alpha = -\min_{\sigma \ge 0^p}\left[ -b^\top\sigma + \frac{\alpha}{n}\sum_{j=1}^n \left[\, |\mathbf{x}_j^\top\sigma| - 1 \,\right]_+ \right].$$
So the regularized optimization assumes each $z_i \in [-\alpha, \alpha]$. For α < 1, this is equivalent to assuming the usual binary labels (α = 1), and then adding uniform random label noise: flipping the label
w.p. ½(1 − α) on each of the n examples independently. This encourages 'clipping' of the ensemble
predictions $\mathbf{x}_j^\top\sigma^*$ to the σ*-weighted majority vote predictions, as specified by g*.
2.3 Advantages and Disadvantages
This formulation has several significant merits that would seem to recommend its use in practical
situations. It is very efficient: once b is estimated (a scalable task, given the labeled set), the
slack function γ is effectively an average over convex functions of i.i.d. unlabeled examples, and
consequently is amenable to standard convex optimization techniques [9] like stochastic gradient
descent (SGD) and variants. These only operate in p dimensions, independent of n (which is ≫ p).
The slack function is Lipschitz and well-behaved, resulting in stable approximate learning.
Moreover, test-time prediction is extremely efficient, because it only requires the p-dimensional
weighting σ* and can be computed example-by-example on the test set using only a dot product
in ℝ^p. The form of g* and its dependence on σ* facilitates interpretation as well, as it resembles
familiar objects: sigmoid link functions for linear classifiers.
Other advantages of this method also bear mention: it makes no assumptions on the structure of H
or F, is provably robust against the worst case, and adds no input parameters that need tuning. These
benefits are notable because they will be inherited by our extension of the framework in this paper.
However, this algorithm's practical performance can still be mediocre on real data, which is often
easier to predict than an adversarial setup would have us believe. As a result, we seek to add more
information in the form of constraints on the adversary, to narrow the gap between it and reality.
3 Learning with Specialists
To address this issue, we examine a generalized scenario in which each classifier in the ensemble
can abstain on any subset of the examples instead of predicting ±1. It is a specialist that predicts
only over a subset of the input, and we think of its abstain/participate decision being randomized in
the same way as the randomized label on each example. In this section, we extend the framework of
Section 2.1 to arbitrary specialists, and discuss the semi-supervised learning algorithm that results.
In our formulation, suppose that for a classifier i ∈ [p] and an example x, the classifier decides
to abstain with probability $1 - v_i(x)$. But if the decision is to participate, the classifier predicts
$h_i(x) \in [-1, 1]$ as previously. Our only assumption on $\{v_i(x_1), \ldots, v_i(x_n)\}$ is the reasonable one
that $\sum_{j=1}^n v_i(x_j) > 0$, so classifier i is not a worthless specialist that abstains everywhere.
The constraint on classifier i is now not on its correlation with z on the entire test set, but on the
average correlation with z restricted to occasions on which it participates. So for some $[b_S]_i \in [0,1]$,
$$\sum_{j=1}^n \frac{v_i(x_j)}{\sum_{k=1}^n v_i(x_k)}\ h_i(x_j)\, z_j \ \ge\ [b_S]_i \tag{4}$$
Define $\rho_i(x_j) := \frac{v_i(x_j)}{\sum_{k=1}^n v_i(x_k)}$ (a distribution over j ∈ [n]) for convenience. Now redefine our
unlabeled data matrix as follows:
$$S = n \begin{pmatrix} \rho_1(x_1)h_1(x_1) & \rho_1(x_2)h_1(x_2) & \cdots & \rho_1(x_n)h_1(x_n) \\ \vdots & \vdots & \ddots & \vdots \\ \rho_p(x_1)h_p(x_1) & \rho_p(x_2)h_p(x_2) & \cdots & \rho_p(x_n)h_p(x_n) \end{pmatrix} \tag{5}$$
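Building S is mechanical once participation is known; a sketch assuming hard (0/1) participation and SciPy sparse storage (both our choices):

```python
import numpy as np
from scipy import sparse

def specialist_matrix(participation, predictions):
    """Assemble S (Equation 5) from (p, n) participation and prediction arrays.

    participation[i, j] = v_i(x_j) in {0, 1}; predictions[i, j] = h_i(x_j).
    """
    counts = participation.sum(axis=1, keepdims=True)  # sum_k v_i(x_k), > 0
    rho = participation / counts                       # rho_i(x_j)
    n = participation.shape[1]
    return sparse.csr_matrix(n * rho * predictions)    # zeros where abstaining
```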
Then the constraints (4) can be written as $\frac{1}{n} S z \ge b_S$, analogous to the initial prediction game (2).
To summarize, our specialist ensemble aggregation game is stated as
$$V_S := \max_{g \in [-1,1]^n}\ \min_{z \in [-1,1]^n,\ \frac{1}{n} S z \ge b_S}\ \frac{1}{n} z^\top g \tag{6}$$
We can immediately solve this game from Thm. 2, with $(S, b_S)$ simply taking the place of (F, b).
Theorem 4 (Solution of the Specialist Aggregation Game). The awake ensemble prediction w.r.t.
weighting σ ≥ 0^p on example $x_i$ is $[S^\top\sigma]_i = n\sum_{j=1}^p \rho_j(x_i)\, h_j(x_i)\, \sigma_j$. The slack function is now
$$\gamma_S(\sigma) := \frac{1}{n}\sum_{j=1}^n \left[\, \big|[S^\top\sigma]_j\big| - 1 \,\right]_+ - b_S^\top\sigma \tag{7}$$
The minimax value of this game is $V_S = \max_{\sigma \ge 0^p}[-\gamma_S(\sigma)] = -\gamma_S(\sigma_S^*)$. The minimax optimal
predictions are defined as follows: for all i ∈ [n],
$$[g_S^*]_i = g_S(\sigma_S^*)_i = \begin{cases} [S^\top\sigma_S^*]_i & \big|[S^\top\sigma_S^*]_i\big| < 1 \\ \operatorname{sgn}\big([S^\top\sigma_S^*]_i\big) & \text{otherwise.} \end{cases}$$
In the no-specialists case, the vector $\rho_i$ is the uniform distribution $(\tfrac{1}{n}, \ldots, \tfrac{1}{n})$ for any i ∈ [p], and
the problem reduces to the prediction game (2). As in the original prediction game, the minimax
equilibrium depends on the data only through the ensemble predictions, but these are now of a
different form. Each example is now weighted proportionally to $\rho_j(x_i)$. So on any given example
$x_i$, only hypotheses which participate on it will be counted; and those that specialize the most
narrowly, and participate on the fewest other examples, will have more influence on the eventual
prediction $g_i$, ceteris paribus.
3.1 Creating Specialists for an Algorithm
We can now present the main ensemble aggregation method of this paper, which creates specialists from the ensemble, adding them as additional constraints (rows of S). The algorithm,
HedgeClipper, is given in Fig. 1, and instantiates our specialist learning framework with a random forest [3]. As an initial exploration of the framework here, random forests are an appropriate
base ensemble because they are known to exhibit state-of-the-art, robust classification performance in a wide variety of situations [10]. Their well-known advantages also include scalability, robustness (to corrupt data and parameter choices), and
interpretability; each of these benefits is shared by our aggregation algorithm, which consequently
inherits them all.
Furthermore, decision trees are a natural fit as the ensemble classifiers because they are inherently
hierarchical. Intuitively (and indeed formally too [11]), they act like nearest-neighbor (NN) predictors w.r.t. a distance that is 'adaptive' to the data. So each tree in a random forest represents a
somewhat different, nonparametric partition of the data space into regions in which one of the labels
?1 dominates. Each such region corresponds exactly to a leaf of the tree.
The idea of HedgeClipper is simply to consider each leaf in the forest as a specialist, which
predicts only on the data falling into it. By the NN intuition above, these specialists can be viewed
as predicting on data that is near them, where the supervised training of the tree attempts to define
the purest possible partitioning of the space. A pure partitioning results in many specialists with
$[b_S]_i \approx 1$, each of which contributes to the awake ensemble prediction w.r.t. σ* over its domain, to
influence it towards the correct label (inasmuch as $[b_S]_i$ is high).
Though the idea is complex in concept for a large forest with many arbitrarily overlapping leaves
from different trees, it fits the worst-case specialist framework of the previous sections. So the
algorithm is still essentially linear learning with convex optimization, as we have described.
Algorithm 1 HedgeClipper
Input: Labeled set L, unlabeled set U
1: Using L, grow trees T = {T₁, …, T_p} (regularised; see Sec. 3.2)
2: Using L, estimate b_S on T and its leaves
3: Using U, (approximately) optimize (7) to estimate σ*_S
Output: The estimated weighting σ*_S, for use at test time
Figure 1: At left is algorithm HedgeClipper. At right is a schematic of how the forest structure is related
to the unlabeled data matrix S, with a given example x highlighted. The two colors in the matrix represent ±1
predictions, and white cells abstentions.
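A minimal end-to-end rendering of Algorithm 1 under our own assumptions (labels in {−1, +1}; scikit-learn's RandomForestClassifier, whose `apply` method returns leaf indices; all helper names are ours, and b would properly be estimated on held-out labeled data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def hedgeclipper_specialists(X_lab, y_lab, X_unlab, n_trees=100, min_leaf=50):
    # Step 1: grow regularised trees; min_leaf enforces nontrivial leaf weight.
    rf = RandomForestClassifier(n_estimators=n_trees,
                                min_samples_leaf=min_leaf).fit(X_lab, y_lab)
    leaves_lab, leaves_unlab = rf.apply(X_lab), rf.apply(X_unlab)
    rows, b = [], []
    n = len(X_unlab)
    for t in range(n_trees):             # one specialist per (tree, leaf)
        for leaf in np.unique(leaves_unlab[:, t]):
            on_lab = leaves_lab[:, t] == leaf
            if not on_lab.any():
                continue                 # no labeled data to estimate b_S
            vote = np.sign(y_lab[on_lab].mean() + 1e-12)
            b.append(vote * y_lab[on_lab].mean())      # [b_S]_i estimate
            mask = leaves_unlab[:, t] == leaf
            rows.append(vote * mask / mask.sum() * n)  # row of S, Equation 5
    # Step 3: minimize the slack function (7) over sigma >= 0, e.g. with the
    # projected subgradient routine sketched in Section 2.
    return np.array(rows), np.array(b)
```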
3.2 Discussion
Trees in random forests have thousands of leaves or more in practice. As we are advocating adding so many extra specialists to the ensemble for the optimization, it is natural to ask whether this erodes some of the advantages we have claimed earlier.
Computationally, it does not. When ν_j(x_i) = 0, i.e. classifier j abstains deterministically on x_i, then the value of h_j(x_i) is irrelevant. So storing S in a sparse matrix format is natural in our setup, with the accompanying performance gain in computing Sᵀσ while learning σ* and predicting with it. This turns out to be crucial to efficiency: each tree induces a partitioning of the data, so the set of rows corresponding to any tree contains n nonzero entries in total. This is seen in Fig. 1.
Statistically, the situation is more complex. On one hand, there is no danger of overfitting in the traditional sense, regardless of how many specialists are added. Each additional specialist can only shrink the constraint set that the adversary must follow in the game (6). It only adds information about z, and therefore the game value V_S must improve, if the game is solved exactly.
However, for learning we are only concerned with approximately optimizing the objective over σ and solving the game. This presents several statistical challenges. Standard optimization methods do not converge as well in high ambient dimension, even given the structure of our problem. In addition, random forests practically perform best when each tree is grown to overfit. In our case, on any sizable test set, small leaves would cause some entries of S to have large magnitude, near 1. This can foil an algorithm like HEDGECLIPPER by causing it to vary wildly during the optimization, particularly since those leaves' [b_S]_i values are only roughly estimated.
From an optimization perspective, some of these issues can be addressed by e.g. (pseudo-)second-order methods [12], whose effect would be interesting to explore in future work. Our implementation opts for another approach: to grow trees constrained to have a nontrivial minimum weight per leaf. Of course, there are many other ways to handle this, including using the tree structure beyond the leaves; we just aim to conduct an exploratory evaluation here, as several of these areas remain ripe for future research.
4 Experimental Evaluation
We now turn to evaluating HEDGECLIPPER on publicly available datasets. Our implementation uses minibatch SGD to optimize (6), runs in Python on top of the popular open-source learning package scikit-learn, and runs out-of-core (n-independent memory), taking advantage of the scalability of our formulation.³ The datasets are drawn from UCI/LibSVM as well as data mining sites like Kaggle, and no further preprocessing was done on the data. We refer to "Base RF" as the forest of constrained trees from which our implementation draws its specialists. We restrict the training data available to the algorithm, using mostly supervised datasets because these far outnumber medium/large-scale public semi-supervised datasets. Unused labeled examples are combined with the test examples (and the extra unlabeled set, if any is provided) to form the set of unlabeled data used by the algorithm. Further information and discussion on the protocol is in the appendix.
Class-imbalanced and noisy sets are included to demonstrate the aforementioned practical advantages of HEDGECLIPPER. Therefore, AUC is an appropriate measure of performance, and these results are in Table 2. Results are averaged over 10 runs, each drawing a different random subsample of labeled data. The best results according to a paired t-test are in bold.
We find that the use of unlabeled data is sufficient to achieve improvements over even traditionally
overfitted RFs in many cases. Notably, in most cases there is a significant benefit given by unlabeled
data in our formulation, as compared to the base RF used. The boosting-type methods also perform
fairly well, as we discuss in the next section.
Figure 2: Class-conditional "awake ensemble prediction" (xᵀσ*) distributions, on SUSY. Rows (top to bottom): {1K, 10K, 100K} labels. Columns (left to right): ε = {1.0, 0.3, 3.0}, and the base RF.
The awake ensemble prediction values xᵀσ on the unlabeled set are a natural way to visualize and explore the operation of the algorithm on the data, in an analogous way to the margin distribution in boosting [6]. One representative sample is in Fig. 2, on SUSY, a dataset with many (5M) examples, roughly evenly split between ±1. These plots demonstrate that our algorithm produces much more peaked class-conditional ensemble prediction distributions than random forests, suggesting margin-based learning applications. Changing ε alters the aggressiveness of the clipping, inducing a more or less peaked distribution. The other datasets without dramatic label imbalance show very similar qualitative behavior in these respects, and these plots help choose ε in practice (see appendix).
Toy datasets with extremely low dimension seem to exhibit little to no significant improvement from our method. We believe this is because the distinct feature splits found by the random forest are few in number, and it is the diversity in ensemble predictions that enables HEDGECLIPPER to clip (weighted majority vote) dramatically and achieve its performance gains.
On the other hand, given a large quantity of data, our algorithm is able to learn significant structure; the minimax structure appears appreciably close to reality, as evinced by the results on large datasets.
5 Related and Future Work
This paper's framework and algorithms are superficially reminiscent of boosting, another paradigm
that uses voting behavior to aggregate an ensemble and has a game-theoretic intuition [1, 15]. There
is some work on semi-supervised versions of boosting [16], but it departs from this principled structure and has little in common with our approach. Classical boosting algorithms like AdaBoost [17]
make no attempt to use unlabeled data. It is an interesting open problem to incorporate boosting
ideas into our formulation, particularly since the two boosting-type methods acquit themselves well
³ It is possible to make this footprint independent of d as well by hashing features [13], not done here.
Dataset | # training | HEDGECLIPPER | Random Forest | Base RF | AdaBoost Trees | MART [14] | Logistic Regression
kagg-prot | 10 | 0.567 | 0.509 | 0.500 | 0.520 | 0.497 | 0.510
kagg-prot | 100 | 0.714 | 0.665 | 0.656 | 0.681 | 0.666 | 0.688
ssl-text | 10 | 0.586 | 0.517 | 0.512 | 0.556 | 0.553 | 0.501
ssl-text | 100 | 0.765 | 0.551 | 0.542 | 0.596 | 0.569 | 0.552
kagg-cred | 100 | 0.558 | 0.518 | 0.501 | 0.528 | 0.542 | 0.502
kagg-cred | 1K | 0.602 | 0.534 | 0.510 | 0.585 | 0.565 | 0.505
kagg-cred | 10K | 0.603 | 0.563 | 0.535 | 0.586 | 0.566 | 0.510
a1a | 100 | 0.779 | 0.619 | 0.525 | 0.680 | 0.682 | 0.725
a1a | 1K | 0.808 | 0.714 | 0.655 | 0.734 | 0.722 | 0.768
w1a | 100 | 0.543 | 0.510 | 0.505 | 0.502 | 0.513 | 0.509
w1a | 1K | 0.651 | 0.592 | 0.520 | 0.695 | 0.689 | 0.671
covtype | 100 | 0.735 | 0.703 | 0.661 | 0.709 | 0.732 | 0.515
covtype | 1K | 0.764 | 0.761 | 0.715 | 0.730 | 0.761 | 0.524
covtype | 10K | 0.809 | 0.822 | 0.785 | 0.759 | 0.788 | 0.515
ssl-secstr | 10 | 0.572 | 0.574 | 0.535 | 0.563 | 0.557 | 0.557
ssl-secstr | 100 | 0.656 | 0.645 | 0.610 | 0.643 | 0.637 | 0.629
ssl-secstr | 1K | 0.687 | 0.682 | 0.646 | 0.690 | 0.689 | 0.683
SUSY | 1K | 0.776 | 0.769 | 0.764 | 0.747 | 0.771 | 0.720
SUSY | 10K | 0.785 | 0.787 | 0.784 | 0.787 | 0.789 | 0.759
SUSY | 100K | 0.799 | 0.797 | 0.797 | 0.797 | 0.796 | 0.779
epsilon | 1K | 0.651 | 0.659 | 0.645 | 0.718 | 0.726 | 0.774
webspam-uni | 1K | 0.936 | 0.928 | 0.920 | 0.923 | 0.928 | 0.840
webspam-uni | 10K | 0.967 | 0.970 | 0.957 | 0.945 | 0.953 | 0.901
Table 2: Area under ROC curve for HEDGECLIPPER vs. supervised ensemble algorithms.
in our results, and can pack information parsimoniously into many fewer ensemble classifiers than
random forests.
There is a long-recognized connection between transductive and semi-supervised learning, and our
method bridges these two settings. Popular variants on supervised learning such as the transductive
SVM [18] and graph-based or nearest-neighbor algorithms, which dominate the semi-supervised
literature [8], have shown promise largely in data-poor regimes because they face major scalability
challenges. Our focus on ensemble aggregation instead allows us to keep a computationally inexpensive linear formulation and avoid considering the underlying feature space of the data. Largely
unsupervised ensemble methods have been explored especially in applications like crowdsourcing,
in which the method of [19] gave rise to a plethora of Bayesian methods under various conditional
independence generative assumptions on F [20]. Using tree structure to construct new features has
been applied successfully, though without guarantees [21].
Learning with specialists has been studied in an adversarial online setting as in the work of Freund
et al. [22]. Though that paper's setting and focus is different from ours, the optimal algorithms it derives also depend on each specialist's average error on the examples on which it is awake.
Finally, we re-emphasize the generality of our formulation, which leaves many interesting questions
to be explored. The specialists we form are not restricted to being trees; there are other ways of
dividing the data like clustering methods. Indeed, the ensemble can be heterogeneous and even
incorporate other semi-supervised methods. Our method is complementary to myriad classification
algorithms, and we hope to stimulate inquiry into the many research avenues this opens.
Acknowledgements
The authors acknowledge support from the National Science Foundation under grant IIS-1162581.
References
[1] Robert E. Schapire and Yoav Freund. Boosting: Foundations and Algorithms. The MIT Press, 2012.
[2] Leo Breiman. Bagging predictors. Machine Learning, 24(2):123–140, 1996.
[3] Leo Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
[4] Yehuda Koren. The BellKor solution to the Netflix grand prize. 2009.
[5] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 2015.
[6] Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. Annals of Statistics, pages 1651–1686, 1998.
[7] Olivier Chapelle, Bernhard Schölkopf, and Alexander Zien. Semi-Supervised Learning. MIT Press, 2006.
[8] Xiaojin Zhu and Andrew B. Goldberg. Introduction to semi-supervised learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 3(1):1–130, 2009.
[9] Akshay Balsubramani and Yoav Freund. Optimally combining classifiers using unlabeled data. In Conference on Learning Theory, 2015.
[10] Rich Caruana, Nikos Karampatziakis, and Ainur Yessenalina. An empirical evaluation of supervised learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, pages 96–103. ACM, 2008.
[11] Yi Lin and Yongho Jeon. Random forests and adaptive nearest neighbors. Journal of the American Statistical Association, 101(474):578–590, 2006.
[12] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011.
[13] Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. Feature hashing for large scale multitask learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1113–1120. ACM, 2009.
[14] Jerome H. Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics, pages 1189–1232, 2001.
[15] Yoav Freund and Robert E. Schapire. Game theory, on-line prediction and boosting. In Proceedings of the Ninth Annual Conference on Computational Learning Theory, pages 325–332. ACM, 1996.
[16] P. Kumar Mallapragada, Rong Jin, Anil K. Jain, and Yi Liu. SemiBoost: Boosting for semi-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(11):2000–2014, 2009.
[17] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci., 55(1):119–139, 1997.
[18] Thorsten Joachims. Transductive inference for text classification using support vector machines. In Proceedings of the Sixteenth International Conference on Machine Learning, pages 200–209. Morgan Kaufmann Publishers Inc., 1999.
[19] Alexander Philip Dawid and Allan M. Skene. Maximum likelihood estimation of observer error-rates using the EM algorithm. Applied Statistics, pages 20–28, 1979.
[20] Fabio Parisi, Francesco Strino, Boaz Nadler, and Yuval Kluger. Ranking and combining multiple predictors without labeled data. Proceedings of the National Academy of Sciences, 111(4):1253–1258, 2014.
[21] Frank Moosmann, Bill Triggs, and Frederic Jurie. Fast discriminative visual codebooks using randomized clustering forests. In Twentieth Annual Conference on Neural Information Processing Systems (NIPS'06), pages 985–992. MIT Press, 2007.
[22] Yoav Freund, Robert E. Schapire, Yoram Singer, and Manfred K. Warmuth. Using and combining predictors that specialize. In Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing, pages 334–343. ACM, 1997.
[23] Predicting a Biological Response. 2012. https://www.kaggle.com/c/bioresponse.
[24] Give Me Some Credit. 2011. https://www.kaggle.com/c/GiveMeSomeCredit.
5,461 | 5,943 | Spherical Random Features for Polynomial Kernels
Jeffrey Pennington
Felix X. Yu
Sanjiv Kumar
Google Research
{jpennin, felixyu, sanjivk}@google.com
Abstract
Compact explicit feature maps provide a practical framework to scale kernel methods to large-scale learning, but deriving such maps for many types of kernels
remains a challenging open problem. Among the commonly used kernels for nonlinear classification are polynomial kernels, for which low approximation error
has thus far necessitated explicit feature maps of large dimensionality, especially
for higher-order polynomials. Meanwhile, because polynomial kernels are unbounded, they are frequently applied to data that has been normalized to unit ℓ2
norm. The question we address in this work is: if we know a priori that data is
normalized, can we devise a more compact map? We show that a putative affirmative answer to this question based on Random Fourier Features is impossible
in this setting, and introduce a new approximation paradigm, Spherical Random
Fourier (SRF) features, which circumvents these issues and delivers a compact
approximation to polynomial kernels for data on the unit sphere. Compared to
prior work, SRF features are less rank-deficient, more compact, and achieve better kernel approximation, especially for higher-order polynomials. The resulting
predictions have lower variance and typically yield better classification accuracy.
1 Introduction
Kernel methods such as nonlinear support vector machines (SVMs) [1] provide a powerful framework for nonlinear learning, but they often come with significant computational cost. Their training
complexity varies from O(n²) to O(n³), which becomes prohibitive when the number of training
examples, n, grows to the millions. Testing also tends to be slow, with an O(nd) complexity for
d-dimensional vectors.
Explicit kernel maps provide a practical alternative for large-scale applications since they rely on
properties of linear methods, which can be trained in O(n) time [2, 3, 4] and applied in O(d) time,
independent of n. The idea is to determine an explicit nonlinear map Z(·) : ℝ^d → ℝ^D such that K(x, y) ≈ ⟨Z(x), Z(y)⟩, and to perform linear learning in the resulting feature space. This
procedure can utilize the fast training and testing of linear methods while still preserving much of
the expressive power of the nonlinear methods.
Following this reasoning, Rahimi and Recht [5] proposed a procedure for generating such a nonlinear map, derived from the Monte Carlo integration of an inverse Fourier transform arising from
Bochner's theorem [6]. Explicit nonlinear random feature maps have also been proposed for other
types of kernels, such as intersection kernels [7], generalized RBF kernels [8], skewed multiplicative
histogram kernels [9], additive kernels [10], and semigroup kernels [11].
Another type of kernel that is used widely in many application domains is the polynomial kernel [12, 13], defined by K(x, y) = (⟨x, y⟩ + q)^p, where q is the bias and p is the degree of the polynomial. Approximating polynomial kernels with explicit nonlinear maps is a challenging problem, but
substantial progress has been made in this area recently. Kar and Karnick [14] catalyzed this line of research by introducing the Random Maclaurin (RM) technique, which approximates ⟨x, y⟩^p by the product $\prod_{i=1}^{p} \langle w_i, x \rangle \prod_{i=1}^{p} \langle w_i, y \rangle$, where w_i is a vector consisting of Bernoulli random variables. Another technique, Tensor Sketch [15], offers further improvement by instead writing ⟨x, y⟩^p as ⟨x^(p), y^(p)⟩, where x^(p) is the p-level tensor product of x, and then estimating this tensor product with a convolution of count sketches.
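To fix ideas, the core of the RM estimate for the homogeneous kernel ⟨x, y⟩^p fits in a few lines (a sketch only, with our own names; the full method also randomizes the polynomial degree using the kernel's Maclaurin coefficients):

```python
import numpy as np

def random_maclaurin(X, p, D, seed=0):
    """D features whose inner products estimate <x, y>**p in expectation."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Z = np.ones((n, D))
    for _ in range(p):                     # one Rademacher factor per level
        W = rng.choice([-1.0, 1.0], size=(d, D))
        Z *= X @ W                         # multiply in <w_j, x>
    return Z / np.sqrt(D)                  # E[Z(x) . Z(y)] = <x, y>**p
```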
Although these methods are applicable to any real-valued input data, in practice polynomial kernels are commonly used on ℓ2-normalized input data [15] because they are otherwise unbounded. Moreover, much of the theoretical analysis developed in former work is based on normalized vectors [16],
and it has been shown that utilizing norm information improves the estimates of random projections
[17]. Therefore, a natural question to ask is, if we know a priori that data is ℓ2-normalized, can we
come up with a better nonlinear map?1 Answering this question is the main focus of this work and
will lead us to the development of a new form of kernel approximation.
Restricting the input domain to the unit sphere implies that ‖x − y‖² = 2 − 2⟨x, y⟩, ∀ x, y ∈ S^{d−1},
so that a polynomial kernel can be viewed as a shift-invariant kernel in this restricted domain. As
such, one might expect the random feature maps developed in [5] to be applicable in this case. Unfortunately, this expectation turns out to be false because Bochner?s theorem cannot be applied in
this setting. The obstruction is an inherent limitation of polynomial kernels and is examined extensively in Section 3.1. In Section 3.2, we propose an alternative formulation that overcomes these
limitations by approximating the Fourier transform of the kernel function as the positive projection of an indefinite combination of Gaussians. We provide a bound on the approximation error
of these Spherical Random Fourier (SRF) features in Section 4, and study their performance on a
variety of standard datasets including a large-scale experiment on ImageNet in Section 5 and in the
Supplementary Material.
Compared to prior work, the SRF method is able to achieve lower kernel approximation error with
compact nonlinear maps, especially for higher-order polynomials. The variance in kernel approximation error is much lower than that of existing techniques, leading to more stable predictions. In
addition, it does not suffer from the rank deficiency problem seen in other methods. Before describing the SRF method in detail, we begin by reviewing the method of Random Fourier Features.
2 Background: Random Fourier Features
In [5], a method for the explicit construction of compact nonlinear randomized feature maps was
presented. The technique relies on two important properties of the kernel: i) the kernel is shift-invariant, i.e. K(x, y) = K(z) where z = x − y, and ii) the function K(z) is positive definite on ℝ^d. Property (ii) guarantees that the Fourier transform of K(z),

$$
k(\mathbf{w}) = \frac{1}{(2\pi)^{d/2}} \int d^d z \, K(\mathbf{z}) \, e^{i \langle \mathbf{w}, \mathbf{z} \rangle},
$$

admits an interpretation as a probability distribution. This fact follows from Bochner's celebrated
characterization of positive definite functions,
Theorem 1. (Bochner [6]) A function K ∈ C(ℝ^d) is positive definite on ℝ^d if and only if it is the Fourier transform of a finite non-negative Borel measure on ℝ^d.
A consequence of Bochner's theorem is that the inverse Fourier transform of k(w) can be interpreted as the computation of an expectation, i.e.,

$$
K(\mathbf{z}) = \frac{1}{(2\pi)^{d/2}} \int d^d w \, k(\mathbf{w}) \, e^{-i \langle \mathbf{w}, \mathbf{z} \rangle}
= \mathbb{E}_{\mathbf{w} \sim p(\mathbf{w})} \left[ e^{-i \langle \mathbf{w}, \mathbf{x} - \mathbf{y} \rangle} \right]
= 2\, \mathbb{E}_{\substack{\mathbf{w} \sim p(\mathbf{w}) \\ b \sim U(0, 2\pi)}} \left[ \cos(\langle \mathbf{w}, \mathbf{x} \rangle + b) \cos(\langle \mathbf{w}, \mathbf{y} \rangle + b) \right], \qquad (1)
$$

where p(w) = (2π)^{−d/2} k(w) and U(0, 2π) is the uniform distribution on [0, 2π). If the above expectation is approximated using Monte Carlo with D random samples w_i, then K(x, y) ≈ ⟨Z(x), Z(y)⟩ with Z(x) = √(2/D) [cos(w₁ᵀx + b₁), ..., cos(w_Dᵀx + b_D)]ᵀ. This identification is
¹ We are not claiming total generality of this setting; nevertheless, in cases where the vector length carries useful information and should be preserved, it could be added as an additional feature before normalization.
made possible by property (i), which guarantees that the functional dependence on x and y factorizes
multiplicatively in frequency space.
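As a concrete instance, for the Gaussian kernel exp(−‖x − y‖²/(2σ²)) the distribution p(w) is itself Gaussian, and the feature map takes a few lines (a standard sketch, not specific to this paper):

```python
import numpy as np

def rff_gaussian(X, D, sigma=1.0, seed=0):
    """Random Fourier features for K(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, D))  # w ~ N(0, I / sigma^2)
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)
```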
Such Random Fourier Features have been used to approximate different types of positive-definite
shift-invariant kernels, including the Gaussian kernel, the Laplacian kernel, and the Cauchy kernel.
However, they have not yet been applied to polynomial kernels, because this class of kernels does
not satisfy the positive-definiteness prerequisite for the application of Bochner?s theorem. This
statement may seem counter-intuitive given the known result that polynomial kernels K(x, y) are
positive definite kernels. The subtlety is that this statement does not necessarily imply that the
associated single variable functions K(z) = K(x − y) are positive definite on ℝ^d for all d. We
will prove this fact in the next section, along with the construction of an efficient and effective
modification of the Random Fourier method that can be applied to polynomial kernels defined on
the unit sphere.
3 Polynomial kernels on the unit sphere
In this section, we consider approximating the polynomial kernel defined on S^{d−1} × S^{d−1},

$$
K(\mathbf{x}, \mathbf{y}) = \left(1 - \frac{\|\mathbf{x} - \mathbf{y}\|^2}{a^2}\right)^{p} = \alpha \left(q + \langle \mathbf{x}, \mathbf{y} \rangle\right)^{p}, \qquad (2)
$$

with q = a²/2 − 1 and α = (2/a²)^p. We will restrict our attention to p ≥ 1, a ≥ 2.
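The identity in eqn. (2) is easy to sanity-check numerically on unit vectors (a throwaway verification, with arbitrary parameter choices):

```python
import numpy as np

rng = np.random.default_rng(0)
a, p, d = 4.0, 10, 256
x = rng.normal(size=d); x /= np.linalg.norm(x)
y = rng.normal(size=d); y /= np.linalg.norm(y)

q, alpha = a**2 / 2 - 1, (2 / a**2)**p
lhs = (1 - np.sum((x - y)**2) / a**2)**p
rhs = alpha * (q + x @ y)**p
assert np.isclose(lhs, rhs)
```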
The kernel is a shift-invariant radial function of the single variable z = x − y, which with a slight abuse of notation we write as K(x, y) = K(z) = K(z), where the last form is a function of the scalar z = ‖z‖.² In Section 3.1, we show that the Fourier transform of K(z) is not a non-negative function of w, so a straightforward application of Bochner's theorem to produce Random Fourier Features as in [5] is impossible in this case. Nevertheless, in Section 3.2, we propose a fast and accurate approximation of K(z) by a surrogate positive definite function which enables us to construct compact Fourier features.
3.1 Obstructions to Random Fourier Features
Because z = ‖x − y‖ = √(2 − 2 cos θ) ≤ 2, the behavior of K(z) for z > 2 is undefined and arbitrary, since it does not affect the original kernel function in eqn. (2). On the other hand, it should be specified in order to perform the Fourier transform, which requires an integration over all values of z. We first consider the natural choice of K(z) = 0 for z > 2, before showing that all other choices lead to the same conclusion.
Lemma 1. The Fourier transform of {K(z), z ≤ 2; 0, z > 2} is not a non-negative function of w for any values of a, p, and d.
Proof. (See the Supplementary Material for details.) A direct calculation gives

$$
k(w) = \sum_{i=0}^{p} \frac{p!}{(p-i)!} \left(1 - \frac{4}{a^2}\right)^{p-i} \left(\frac{2}{a^2}\right)^{i} \left(\frac{2}{w}\right)^{d/2+i} J_{d/2+i}(2w), \qquad (3)
$$

where J_ν(z) is the Bessel function of the first kind. Expanding for large w yields

$$
k(w) \approx \left(1 - \frac{4}{a^2}\right)^{p} \left(\frac{2}{w}\right)^{d/2} \frac{1}{\sqrt{\pi w}} \cos\!\left(2w - (d+1)\frac{\pi}{4}\right),
$$

which takes negative values for some w for all a > 2, p, and d.
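The sign changes are easy to observe numerically. The snippet below implements our reconstruction of eqn. (3) with scipy's Bessel functions (parameters are arbitrary; any a > 2 exhibits the effect):

```python
import numpy as np
from scipy.special import jv, factorial

def k_truncated(w, a=4.0, p=3, d=8):
    """Fourier transform of {K(z), z <= 2; 0, z > 2}, per eqn. (3)."""
    total = np.zeros_like(w)
    for i in range(p + 1):
        coeff = (factorial(p) / factorial(p - i)
                 * (1 - 4 / a**2)**(p - i) * (2 / a**2)**i)
        total += coeff * (2 / w)**(d / 2 + i) * jv(d / 2 + i, 2 * w)
    return total

w = np.linspace(0.1, 60.0, 5000)
print(k_truncated(w).min())  # strictly negative, so k(w) is not a density
```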
So a Monte Carlo approximation of K(z) as in eqn. (1) is impossible in this case. However, there is
still the possibility of defining the behavior of K(z) for z > 2 differently, and in such a way that the
Fourier transform is positive and integrable on ℝ^d. The latter condition should hold for all d, since
the vector dimensionality d can vary arbitrarily depending on input data.
We now show that such a function cannot exist. To this end, we first recall a theorem due to Schoenberg regarding completely monotone functions,
² We also follow this practice in frequency space, i.e., if k(w) is radial, we also write k(w) = k(w).
Definition 1. A function f is said to be completely monotone on an interval [a, b] ⊂ ℝ if it is continuous on the closed interval, f ∈ C([a, b]), infinitely differentiable in its interior, f ∈ C^∞((a, b)), and (−1)^l f^{(l)}(x) ≥ 0 for x ∈ (a, b), l = 0, 1, 2, . . .
Theorem 2. (Schoenberg [18]) A function φ is completely monotone on [0, ∞) if and only if φ ∘ ‖·‖² is positive definite and radial on ℝ^d for all d.
Together with Theorem 1, Theorem 2 shows that φ(z) = K(√z) must be completely monotone if k(w) is to be interpreted as a probability distribution. We now establish that φ(z) cannot be completely monotone and simultaneously satisfy φ(z) = K(√z) for z ≤ 2.
Proposition 1. The function φ(z) = K(√z) is completely monotone on [0, a²].
Proof. From the definition of φ, φ(z) = (1 − z/a²)^p; φ is continuous on [0, a²], infinitely differentiable on (0, a²), and its derivatives vanish for l > p. They obey

$$
(-1)^l \varphi^{(l)}(z) = \frac{p!}{(p-l)!} \frac{\varphi(z)}{(a^2 - z)^l} \ge 0,
$$

where the inequality follows since z < a². Therefore φ is completely monotone on [0, a²].
Theorem 3. Suppose f is a completely monotone polynomial of degree n on the interval [0, c], c < ∞, with f(c) = 0. Then there is no completely monotone function on [0, ∞) that agrees with f on [0, a] for any nonzero a < c.
Proof. Let g ∈ C([0, ∞)) ∩ C^∞((0, ∞)) be a non-negative function that agrees with f on [0, a] and let h = g − f. We show that for all non-negative integers m there exists a point ξ_m satisfying a < ξ_m ≤ c such that h^{(m)}(ξ_m) > 0. For m = 0, the point ξ_0 = c obeys h(ξ_0) = g(ξ_0) − f(ξ_0) = g(ξ_0) > 0 by the definition of g. Now, suppose there is a point ξ_m such that a < ξ_m ≤ c and h^{(m)}(ξ_m) > 0. The mean value theorem then guarantees the existence of a point ξ_{m+1} such that a < ξ_{m+1} < ξ_m and

$$
h^{(m+1)}(\xi_{m+1}) = \frac{h^{(m)}(\xi_m) - h^{(m)}(a)}{\xi_m - a} = \frac{h^{(m)}(\xi_m)}{\xi_m - a} > 0,
$$

where we have utilized the fact that h^{(m)}(a) = 0 and the induction hypothesis. Noting that f^{(m)} = 0 for all m > n, this result implies that g^{(m)}(ξ_m) > 0 for all m > n. Therefore g cannot be completely monotone.
Corollary 1. There does not exist a finite non-negative Borel measure on ℝ^d whose Fourier transform agrees with K(z) on [0, 2].
3.2 Spherical Random Fourier features
From the section above, we see that Bochner's theorem cannot be directly applied to the polynomial kernel. In addition, it is impossible to construct a positive integrable k̃(w) whose inverse Fourier transform K̃(z) equals K(z) exactly on [0, 2]. Despite this result, it is nevertheless possible to find a K̃(z) that is a good approximation of K(z) on [0, 2], which is all that is necessary given that we will be approximating K̃(z) by Monte Carlo integration anyway. We present our method of Spherical Random Fourier (SRF) features in this section.
We recall a characterization of radial functions that are positive definite on ℝ^d for all d, due to Schoenberg.
Theorem 4. (Schoenberg [18]) A continuous function f : [0, ∞) → ℝ is positive definite and radial on ℝ^d for all d if and only if it is of the form $f(r) = \int_0^{\infty} e^{-r^2 t^2} \, d\mu(t)$, where μ is a finite non-negative Borel measure on [0, ∞).
This characterization motivates an approximation for K(z) as a sum of N Gaussians, $\tilde{K}(z) = \sum_{i=1}^{N} c_i e^{-\gamma_i^2 z^2}$. To increase the accuracy of the approximation, we allow the c_i to take negative values. Doing so enables its Fourier transform (which is also a sum of Gaussians) to become negative. We circumvent this problem by mapping those negative values to zero,

$$
\tilde{k}(w) = \max\!\left(0, \; \sum_{i=1}^{N} c_i \left(\frac{1}{\sqrt{2}\,\gamma_i}\right)^{d} e^{-w^2 / 4\gamma_i^2}\right), \qquad (4)
$$

and simply defining K̃(z) as its inverse Fourier transform. Owing to the max in eqn. (4), it is not possible to calculate an analytical expression for K̃(z).
[Figure 1 panels: for each polynomial order, K(z) and its approximation K̃(z) on z ∈ [0, 2] (left), and the corresponding pdf p(w) (right).]
Figure 1: K(z), its approximation K̃(z), and the corresponding pdf p(w) for d = 256, a = 2 for polynomial orders (a) 10 and (b) 20. Higher-order polynomials are approximated better; see eqn. (6).
Algorithm 1 Spherical Random Fourier (SRF) Features
Input: A polynomial kernel K(x, y) = K(z), z = ‖x − y‖₂, ‖x‖₂ = 1, ‖y‖₂ = 1, with bias a ≥ 2, order p ≥ 1, input dimensionality d and feature dimensionality D.
Output: A randomized feature map Z(·) : ℝ^d → ℝ^D such that ⟨Z(x), Z(y)⟩ ≈ K(x, y).
1. Solve argmin_{k̃} ∫₀² dz [K(z) − K̃(z)]² for k̃(w), where K̃(z) is the inverse Fourier transform of k̃(w), whose form is given in eqn. (4). Let p(w) = (2π)^{−d/2} k̃(w).
2. Draw D iid samples w₁, ..., w_D from p(w).
3. Draw D iid samples b₁, ..., b_D ∈ ℝ from the uniform distribution on [0, 2π].
4. Z(x) = √(2/D) [cos(w₁ᵀx + b₁), ..., cos(w_Dᵀx + b_D)]ᵀ
Thankfully, this isn't necessary, since we can evaluate it numerically by performing a one-dimensional numerical integral,

$$
\tilde{K}(z) = \int_0^{\infty} dw \, w \, \tilde{k}(w) \, (w/z)^{d/2 - 1} J_{d/2-1}(wz),
$$

which is well-approximated using a fixed-width grid in w and z, and can be computed via a single matrix multiplication. We then optimize the following cost function, which is just the MSE between K(z) and our approximation of it,

$$
L = \frac{1}{2} \int_0^2 dz \, [K(z) - \tilde{K}(z)]^2, \qquad (5)
$$

which defines an optimal probability distribution p(w) through eqn. (4) and the relation p(w) = (2π)^{−d/2} k̃(w). We can then follow the Random Fourier Feature [5] method to generate the nonlinear maps. The entire SRF process is summarized in Algorithm 1. Note that for any given set of kernel parameters (a, p, d), p(w) can be pre-computed, independently of the data.
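A compact numerical sketch of Algorithm 1 is given below. It deviates from the paper in two declared ways: the mixture is fit to K(z) directly in z-space by least squares rather than by minimizing eqn. (5) after clipping, and the γ grid and grid resolutions are our own choices; the per-component normalizers (√2 γ_i)^{−d} are handled in log space to avoid overflow at large d:

```python
import numpy as np

def srf_features(X, a=4.0, p=10, D=2048, N=10, seed=0):
    """Simplified sketch of SRF (Algorithm 1) for unit-norm rows of X."""
    n, d = X.shape

    # Fit K~(z) = sum_i c_i exp(-gamma_i^2 z^2) to K(z) on [0, 2].
    z = np.linspace(0.0, 2.0, 400)
    K = (1.0 - z**2 / a**2)**p
    gammas = np.sqrt(p) / a * np.logspace(-0.5, 0.5, N)
    B = np.exp(-np.outer(z**2, gammas**2))
    c, *_ = np.linalg.lstsq(B, K, rcond=None)

    # Radial part of p(w) on a grid, with the clip of eqn. (4), in log space.
    w = np.linspace(1e-3, 60.0, 8000)
    log_terms = (-w[None, :]**2 / (4 * gammas[:, None]**2)
                 - d * np.log(np.sqrt(2.0) * gammas[:, None]))
    m = log_terms.max(axis=0)
    s = (c[:, None] * np.exp(log_terms - m)).sum(axis=0)   # signed mixture
    log_k = np.where(s > 0, m + np.log(np.maximum(s, 1e-300)), -np.inf)
    log_pdf = log_k + (d - 1) * np.log(w)                  # density of ||w||
    pdf = np.exp(log_pdf - log_pdf.max())

    # Sample ||w|| by inverse CDF; directions uniformly on the sphere.
    rng = np.random.default_rng(seed)
    cdf = np.cumsum(pdf); cdf /= cdf[-1]
    radii = np.interp(rng.random(D), cdf, w)
    dirs = rng.normal(size=(D, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    W = radii[:, None] * dirs
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)
```

Since p(w) depends only on (a, p, d), everything before the sampling step can be cached and reused across datasets that share the same kernel parameters.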
4 Approximation error
The total MSE comes from two sources: error approximating the function, i.e. L from eqn. (5), and error from Monte Carlo sampling. The expected MSE of Monte Carlo converges at a rate of O(1/D) and a bound on the supremum of the absolute error was given in [5]. Therefore, we focus on analyzing the first type of error.

We describe a simple method to obtain an upper bound on L. Consider the function $\tilde{K}(z) = e^{-\frac{p}{a^2} z^2}$, which is a special case of eqn. (4) obtained by setting N = 1, c₁ = 1, and $\gamma_1 = \sqrt{p/a^2}$. The MSE between K(z) and this function thus provides an upper bound on our approximation error,

$$
L = \frac{1}{2} \int_0^2 dz \, [K(z) - \tilde{K}(z)]^2 \le \frac{1}{2} \int_0^a dz \, [\tilde{K}(z) - K(z)]^2
$$
$$
= \frac{1}{2} \int_0^a dz \left[ \exp\!\left(-\frac{2p}{a^2} z^2\right) + \left(1 - \frac{z^2}{a^2}\right)^{2p} - 2 \exp\!\left(-\frac{p}{a^2} z^2\right) \left(1 - \frac{z^2}{a^2}\right)^{p} \right]
$$
$$
= \frac{a}{4} \sqrt{\frac{\pi}{2p}} \, \mathrm{erf}(\sqrt{2p}) + \frac{a \sqrt{\pi}}{4} \frac{\Gamma(2p+1)}{\Gamma(2p+\frac{3}{2})} - \frac{a \sqrt{\pi}}{2} \frac{\Gamma(p+1)}{\Gamma(p+\frac{3}{2})} \, M\!\left(\tfrac{1}{2}, \, p + \tfrac{3}{2}, \, -p\right).
$$
[Figure 2 comprises twelve panels of kernel-approximation MSE versus log₂(D), each comparing RM, TS, and SRF, for polynomial orders (a) p = 3, (b) p = 7, (c) p = 10, (d) p = 20, with one row of panels per dataset.]
Figure 2: Comparison of MSE of kernel approximation on different datasets with various polynomial orders (p) and feature map dimensionalities. The first to third rows show results of usps, gisette, adult, respectively. SRF gives better kernel approximation, especially for large p.
In the first line we have used the fact that the integrand is positive and a ≥ 2. The three terms on the second line are integrated using the standard integral definitions of the error function, beta function, and Kummer's confluent hypergeometric function [19], respectively. To expose the functional dependence of this result more clearly, we perform an expansion for large p. We use the asymptotic expansions of the error function and the Gamma function,

$$
\mathrm{erf}(z) = 1 - \frac{e^{-z^2}}{z \sqrt{\pi}} \sum_{k=0}^{\infty} (-1)^k \frac{(2k-1)!!}{(2z^2)^k},
\qquad
\log \Gamma(z) = z \log z - z - \frac{1}{2} \log \frac{z}{2\pi} + \sum_{k=2}^{\infty} \frac{B_k}{k(k-1)} \, z^{1-k},
$$

where B_k are Bernoulli numbers. For the third term, we write the series representation of M(a, b, z),

$$
M(a, b, z) = \frac{\Gamma(b)}{\Gamma(a)} \sum_{k=0}^{\infty} \frac{\Gamma(a+k)}{\Gamma(b+k)} \frac{z^k}{k!},
$$

expand each term for large p, and sum the result. All together, we obtain the following bound,
$$
L \lesssim \frac{105}{4096} \sqrt{\frac{\pi}{2}} \, \frac{a}{p^{5/2}}, \qquad (6)
$$

which decays at a rate of O(p^{−2.5}) and becomes negligible for higher-order polynomials. This is remarkable, as the approximation error of previous methods increases as a function of p. Figure 1 shows two kernel functions K(z), their approximations K̃(z), and the corresponding pdfs p(w).
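Our reading of the bound is easy to probe numerically; the snippet below measures L from eqn. (5) for the single-Gaussian surrogate and compares it against the leading-order estimate (6):

```python
import numpy as np

a = 4.0
z = np.linspace(0.0, 2.0, 20001)
for p in (5, 10, 20, 40):
    err = (1 - z**2 / a**2)**p - np.exp(-p * z**2 / a**2)
    L = 0.5 * np.trapz(err**2, z)                           # eqn. (5)
    est = 105.0 / 4096.0 * np.sqrt(np.pi / 2) * a / p**2.5  # eqn. (6)
    print(f"p={p:3d}  L={L:.3e}  estimate={est:.3e}  ratio={L/est:.2f}")
```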
5 Experiments
We compare the SRF method with Random Maclaurin (RM) [14] and Tensor Sketch (TS) [15],
the other polynomial kernel approximation approaches. Throughout the experiments, we choose the
number of Gaussians, N , to equal 10, though the specific number had negligible effect on the results.
The bias term is set as a = 4. Other choices such as a = 2, 3 yield similar performance; results
with a variety of parameter settings can be found in the Supplementary Material. The error bars and
standard deviations are obtained by conducting experiments 10 times across the entire dataset.
Dataset, p | Method | D = 2^9 | D = 2^10 | D = 2^11 | D = 2^12 | D = 2^13 | D = 2^14 | Exact
usps, p = 3 | RM | 87.29 ± 0.87 | 89.11 ± 0.53 | 90.43 ± 0.49 | 91.09 ± 0.44 | 91.48 ± 0.31 | 91.78 ± 0.32 | 96.21
usps, p = 3 | TS | 89.85 ± 0.35 | 90.99 ± 0.42 | 91.37 ± 0.19 | 91.68 ± 0.19 | 91.85 ± 0.18 | 91.90 ± 0.23 | 96.21
usps, p = 3 | SRF | 90.91 ± 0.32 | 92.08 ± 0.32 | 92.50 ± 0.48 | 93.10 ± 0.26 | 93.31 ± 0.16 | 93.28 ± 0.24 | 96.21
usps, p = 7 | RM | 88.86 ± 1.08 | 91.01 ± 0.44 | 92.70 ± 0.38 | 94.03 ± 0.30 | 94.54 ± 0.30 | 94.97 ± 0.26 | 96.51
usps, p = 7 | TS | 92.30 ± 0.52 | 93.59 ± 0.20 | 94.53 ± 0.20 | 94.84 ± 0.10 | 95.06 ± 0.23 | 95.27 ± 0.12 | 96.51
usps, p = 7 | SRF | 92.44 ± 0.31 | 93.85 ± 0.32 | 94.79 ± 0.19 | 95.06 ± 0.21 | 95.37 ± 0.12 | 95.51 ± 0.17 | 96.51
usps, p = 10 | RM | 88.95 ± 0.60 | 91.41 ± 0.46 | 93.27 ± 0.28 | 94.29 ± 0.34 | 95.19 ± 0.21 | 95.53 ± 0.25 | 96.56
usps, p = 10 | TS | 92.41 ± 0.48 | 93.85 ± 0.34 | 94.75 ± 0.26 | 95.31 ± 0.28 | 95.55 ± 0.25 | 95.91 ± 0.17 | 96.56
usps, p = 10 | SRF | 92.63 ± 0.46 | 94.33 ± 0.33 | 95.18 ± 0.26 | 95.60 ± 0.27 | 95.78 ± 0.23 | 95.85 ± 0.16 | 96.56
usps, p = 20 | RM | 88.67 ± 0.98 | 91.09 ± 0.42 | 93.22 ± 0.39 | 94.32 ± 0.27 | 95.24 ± 0.27 | 95.62 ± 0.24 | 96.81
usps, p = 20 | TS | 91.73 ± 0.88 | 93.92 ± 0.28 | 94.68 ± 0.28 | 95.26 ± 0.31 | 95.90 ± 0.20 | 96.07 ± 0.19 | 96.81
usps, p = 20 | SRF | 92.27 ± 0.48 | 94.30 ± 0.46 | 95.48 ± 0.39 | 95.97 ± 0.32 | 96.18 ± 0.23 | 96.28 ± 0.15 | 96.81
gisette, p = 3 | RM | 89.53 ± 1.43 | 92.77 ± 0.40 | 94.49 ± 0.48 | 95.90 ± 0.31 | 96.69 ± 0.33 | 97.01 ± 0.26 | 98.00
gisette, p = 3 | TS | 93.52 ± 0.60 | 95.28 ± 0.71 | 96.12 ± 0.36 | 96.76 ± 0.40 | 97.06 ± 0.19 | 97.12 ± 0.27 | 98.00
gisette, p = 3 | SRF | 91.72 ± 0.92 | 94.39 ± 0.65 | 95.62 ± 0.47 | 96.50 ± 0.40 | 96.91 ± 0.36 | 97.05 ± 0.19 | 98.00
gisette, p = 7 | RM | 89.44 ± 1.44 | 92.77 ± 0.57 | 95.15 ± 0.60 | 96.37 ± 0.46 | 96.90 ± 0.46 | 97.27 ± 0.22 | 97.90
gisette, p = 7 | TS | 92.89 ± 0.66 | 95.29 ± 0.39 | 96.32 ± 0.47 | 96.66 ± 0.34 | 97.16 ± 0.25 | 97.58 ± 0.25 | 97.90
gisette, p = 7 | SRF | 92.75 ± 1.01 | 94.85 ± 0.53 | 96.42 ± 0.49 | 97.07 ± 0.30 | 97.50 ± 0.24 | 97.53 ± 0.15 | 97.90
gisette, p = 10 | RM | 89.91 ± 0.58 | 93.16 ± 0.40 | 94.94 ± 0.72 | 96.19 ± 0.49 | 96.88 ± 0.23 | 97.15 ± 0.40 | 98.10
gisette, p = 10 | TS | 92.48 ± 0.62 | 94.61 ± 0.60 | 95.72 ± 0.53 | 96.60 ± 0.58 | 96.99 ± 0.28 | 97.41 ± 0.20 | 98.10
gisette, p = 10 | SRF | 92.42 ± 0.85 | 95.10 ± 0.47 | 96.35 ± 0.42 | 97.15 ± 0.34 | 97.57 ± 0.23 | 97.75 ± 0.14 | 98.10
gisette, p = 20 | RM | 89.40 ± 0.98 | 92.46 ± 0.67 | 94.37 ± 0.55 | 95.67 ± 0.43 | 96.14 ± 0.55 | 96.63 ± 0.40 | 98.00
gisette, p = 20 | TS | 90.49 ± 1.07 | 92.88 ± 0.42 | 94.43 ± 0.69 | 95.41 ± 0.71 | 96.24 ± 0.44 | 96.97 ± 0.28 | 98.00
gisette, p = 20 | SRF | 92.12 ± 0.62 | 94.22 ± 0.45 | 95.85 ± 0.54 | 96.94 ± 0.29 | 97.47 ± 0.24 | 97.75 ± 0.32 | 98.00
Table 1: Comparison of classification accuracy (in %) on different datasets for different polynomial orders (p) and varying feature map dimensionality (D). The Exact column refers to the accuracy of exact polynomial kernel trained with libSVM. More results are given in the Supplementary Material.
[Figure 3 panels: (a) log eigenvalue ratio versus eigenvalue rank, (b) MSE versus log₂(D), and (c) classification accuracy versus log₂(D), each comparing RM, TS, and SRF with and without CRAFT.]
Figure 3: Comparison of CRAFT features on usps dataset with polynomial order p = 10 and feature maps of dimension D = 2^12. (a) Logarithm of ratio of ith-leading eigenvalue of the approximate kernel to that of the exact kernel, constructed using 1,000 points. CRAFT features are projected from 2^14-dimensional maps. (b) Mean squared error. (c) Classification accuracy.
Kernel approximation. The main focus of this work is to improve the quality of kernel approximation, which we measure by computing the mean squared error (MSE) between the exact kernel and
its approximation across the entire dataset. Figure 2 shows MSE as a function of the dimensionality
(D) of the nonlinear maps. SRF provides lower MSE than other methods, especially for higher order
polynomials. This observation is consistent with our theoretical analysis in Section 4. As a corollary,
SRF provides more compact maps with the same kernel approximation error. Furthermore, SRF is
stable in terms of the MSE, whereas TS and RM have relatively large variance.
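The metric itself is simple to reproduce for any feature map (a hypothetical harness; `feature_map` stands in for the RM, TS, or SRF construction, and rows of X are assumed unit-norm):

```python
import numpy as np

def kernel_mse(X, feature_map, a=4.0, p=10):
    """Mean squared error between the exact kernel and its approximation."""
    Z = feature_map(X)                                   # (n, D) features
    K_exact = (1.0 - (2.0 - 2.0 * X @ X.T) / a**2)**p    # uses ||x|| = ||y|| = 1
    return np.mean((Z @ Z.T - K_exact)**2)
```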
Classification with linear SVM. We train linear classifiers with liblinear [3] and evaluate classification accuracy on various datasets, two of which are summarized in Table 1; additional results are
available in the Supplementary Material. As expected, accuracy improves with higher-dimensional
nonlinear maps and higher-order polynomials. It is important to note that better kernel approximation does not necessarily lead to better classification performance because the original kernel might
not be optimal for the task [20, 21]. Nevertheless, we observe that SRF features tend to yield better
classification performance in most cases.
Rank-Deficiency. Hamid et al. [16] observe that RM and TS produce nonlinear features that are
rank deficient. Their approximation quality can be improved by first mapping the input to a higher
dimensional feature space, and then randomly projecting it to a lower dimensional space. This
method is known as CRAFT.

Figure 4: Computational time to generate randomized feature maps for 1,000 random samples on a fixed hardware with p = 3. (a) d = 1,000. (b) d = D.
Figure 5: Doubly stochastic gradient learning curves with RFF and SRF features on ImageNet.

Figure 3(a) shows the logarithm of the ratio of the ith eigenvalue
of the various approximate kernel matrices to that of the exact kernel. For a full-rank, accurate
approximation, this value should be constant and equal to zero, which is close to the case for SRF.
RM and TS deviate from zero significantly, demonstrating their rank-deficiency.
Figures 3(b) and 3(c) show the effect of the CRAFT method on MSE and classification accuracy.
CRAFT improves RM and TS but it has no or even a negative effect on SRF. These observations all
indicate that the SRF is less rank-deficient than RM and TS.
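The eigenvalue diagnostic of Figure 3(a) takes only a few lines (a sketch; `Z` is any (n, D) feature matrix for n unit-norm points):

```python
import numpy as np

def log_eig_ratio(Z, X, a=4.0, p=10, eps=1e-12):
    """log of the i-th eigenvalue of the approximate Gram matrix over the
    i-th eigenvalue of the exact one; a flat curve at zero indicates a
    full-rank, faithful approximation."""
    K_exact = (1.0 - (2.0 - 2.0 * X @ X.T) / a**2)**p
    ev_apx = np.linalg.eigvalsh(Z @ Z.T)[::-1]
    ev_ext = np.linalg.eigvalsh(K_exact)[::-1]
    return np.log(np.maximum(ev_apx, eps) / np.maximum(ev_ext, eps))
```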
Computational Efficiency. Both RM and SRF have computational complexity O(ndD), whereas TS scales as O(np(d + D log D)), where D is the number of nonlinear maps, n is the number of samples, d is the original feature dimension, and p is the polynomial order. Therefore the scalability of TS is better than SRF when D is of the same order as d (O(D log D) vs. O(D²)). However, the computational cost of SRF does not depend on p, making SRF more efficient for higher-order polynomials. Moreover, there is little computational overhead involved in the SRF method, which enables it to outperform TS for practical values of D, even though it is asymptotically inferior. As shown in Figure 4(a), even for the low-order case (p = 3), SRF is more efficient than TS for a fixed d = 1000. In Figure 4(b), where d = D, SRF is still more efficient than TS up to D ≲ 4000.
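Because the SRF map is a single dense matrix product followed by a cosine, its cost is easy to measure directly (a minimal timing harness, not the benchmark used in the paper):

```python
import time
import numpy as np

def time_srf_map(n=1000, d=1000, D=4096, reps=5, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, d))
    W = rng.normal(size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    t0 = time.perf_counter()
    for _ in range(reps):
        np.cos(X @ W + b)            # O(n d D) work, independent of p
    return (time.perf_counter() - t0) / reps
```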
Large-scale Learning. We investigate the scalability of the SRF method on the ImageNet 2012
dataset, which consists of 1.3 million 256 × 256 color images from 1000 classes. We employ the
doubly stochastic gradient method of Dai et al. [22], which utilizes two stochastic approximations: one from random training points and the other from random features associated with the kernel.
We use the same architecture and parameter settings as [22] (including the fixed convolutional neural
network parameters), except we replace the RFF kernel layer with an ℓ2 normalization step and an
SRF kernel layer with parameters a = 4 and p = 10. The learning curves in Figure 5 suggest that
SRF features may perform better than RFF features on this large-scale dataset. We also evaluate
the model with multi-view testing, in which max-voting is performed on 10 transformations of the
test set. We obtain Top-1 test error of 44.4%, which is comparable to the 44.5% error reported in
[22]. These results demonstrate that the unit norm restriction does not have a negative impact on
performance in this case, and that polynomial kernels can be successfully scaled to large datasets
using the SRF method.
6 Conclusion
We have described a novel technique to generate compact nonlinear features for polynomial kernels
applied to data on the unit sphere. It approximates the Fourier transform of kernel functions as
the positive projection of an indefinite combination of Gaussians. It achieves more compact maps
compared to the previous approaches, especially for higher-order polynomials. SRF also shows less
feature redundancy, leading to lower kernel approximation error. Performance of SRF is also more
stable than the previous approaches due to reduced variance. Moreover, the proposed approach
could easily extend beyond polynomial kernels: the same techniques would apply equally well to
any shift-invariant radial kernel function, positive definite or not. In the future, we would also like
to explore adaptive sampling procedures tuned to the training data distribution in order to further
improve the kernel approximation accuracy, especially when D is large, i.e. when the Monte-Carlo
error is low and the kernel approximation error dominates.
Acknowledgments. We thank the anonymous reviewers for their valuable feedback and Bo Xie for
facilitating experiments with the doubly stochastic gradient method.
References
[1] Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine Learning, 20(3):273–297, 1995.
[2] T. Joachims. Training linear SVMs in linear time. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 217–226. ACM, 2006.
[3] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research, 9:1871–1874, 2008.
[4] Shai Shalev-Shwartz, Yoram Singer, Nathan Srebro, and Andrew Cotter. Pegasos: Primal estimated sub-gradient solver for SVM. Mathematical Programming, 127(1):3–30, 2011.
[5] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems, pages 1177–1184, 2007.
[6] Salomon Bochner. Harmonic Analysis and the Theory of Probability. Dover Publications, 1955.
[7] Subhransu Maji and Alexander C. Berg. Max-margin additive classifiers for detection. In International Conference on Computer Vision, pages 40–47. IEEE, 2009.
[8] V. Sreekanth, Andrea Vedaldi, Andrew Zisserman, and C. Jawahar. Generalized RBF feature maps for efficient detection. In British Machine Vision Conference, 2010.
[9] Fuxin Li, Catalin Ionescu, and Cristian Sminchisescu. Random Fourier approximations for skewed multiplicative histogram kernels. In Pattern Recognition, pages 262–271. Springer, 2010.
[10] A. Vedaldi and A. Zisserman. Efficient additive kernels via explicit feature maps. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(3):480–492, 2012.
[11] Jiyan Yang, Vikas Sindhwani, Quanfu Fan, Haim Avron, and Michael Mahoney. Random Laplace feature maps for semigroup kernels on histograms. In Computer Vision and Pattern Recognition (CVPR), pages 971–978. IEEE, 2014.
[12] Hideki Isozaki and Hideto Kazawa. Efficient support vector classifiers for named entity recognition. In Proceedings of the 19th International Conference on Computational Linguistics, Volume 1, pages 1–7. Association for Computational Linguistics, 2002.
[13] Kwang In Kim, Keechul Jung, and Hang Joon Kim. Face recognition using kernel principal component analysis. Signal Processing Letters, IEEE, 9(2):40–42, 2002.
[14] Purushottam Kar and Harish Karnick. Random feature maps for dot product kernels. In International Conference on Artificial Intelligence and Statistics, pages 583–591, 2012.
[15] Ninh Pham and Rasmus Pagh. Fast and scalable polynomial kernels via explicit feature maps. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 239–247. ACM, 2013.
[16] Raffay Hamid, Ying Xiao, Alex Gittens, and Dennis Decoste. Compact random feature maps. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 19–27, 2014.
[17] Ping Li, Trevor J. Hastie, and Kenneth W. Church. Improving random projections using marginal information. In Learning Theory, pages 635–649. Springer, 2006.
[18] Isaac J. Schoenberg. Metric spaces and completely monotone functions. Annals of Mathematics, pages 811–841, 1938.
[19] E. E. Kummer. De integralibus quibusdam definitis et seriebus infinitis. Journal für die reine und angewandte Mathematik, 17:228–242, 1837.
[20] Felix X. Yu, Sanjiv Kumar, Henry Rowley, and Shih-Fu Chang. Compact nonlinear maps and circulant extensions. arXiv preprint arXiv:1503.03893, 2015.
[21] Dmitry Storcheus, Mehryar Mohri, and Afshin Rostamizadeh. Foundations of coupled nonlinear dimensionality reduction. arXiv preprint arXiv:1509.08880, 2015.
[22] Bo Dai, Bo Xie, Niao He, Yingyu Liang, Anant Raj, Maria-Florina F. Balcan, and Le Song. Scalable kernel methods via doubly stochastic gradients. In Advances in Neural Information Processing Systems, pages 3041–3049, 2014.
sketch:3 eqn:8 hand:1 dennis:1 expressive:1 nonlinear:19 google:2 defines:1 quality:2 fuxin:1 grows:1 effect:3 normalized:5 former:1 semigroup:2 nonzero:1 i2:3 skewed:2 width:1 inferior:1 die:1 generalized:2 pdf:1 demonstrate:1 delivers:1 balcan:1 reasoning:1 image:1 harmonic:1 novel:1 recently:1 functional:2 volume:1 million:2 extend:1 interpretation:1 approximates:2 slight:1 numerically:1 association:1 he:1 significant:1 rd:14 approx:2 grid:1 mathematics:1 had:1 dot:1 henry:1 stable:3 purushottam:1 raj:1 kar:2 inequality:1 arbitrarily:1 yi:6 devise:1 integrable:2 preserving:1 seen:1 additional:2 dai:2 isozaki:1 determine:1 paradigm:1 bochner:9 bessel:1 signal:1 ii:2 catalin:1 full:1 rahimi:2 hideto:1 calculation:1 offer:1 sphere:5 lin:1 jiyan:1 equally:1 laplacian:1 impact:1 prediction:2 scalable:2 florina:1 vision:3 expectation:3 metric:1 arxiv:4 histogram:3 kernel:79 normalization:2 preserved:1 addition:2 background:1 whereas:2 interval:3 source:1 hz:3 tend:1 deficient:3 seem:1 integer:1 ee:1 noting:1 yang:1 variety:2 affect:1 zi:2 architecture:1 restrict:1 hastie:1 idea:1 regarding:1 shift:4 expression:1 song:1 suffer:1 useful:1 obstruction:2 extensively:1 hardware:1 svms:2 reduced:1 generate:3 outperform:1 exist:2 estimated:1 arising:1 ionescu:1 write:3 redundancy:1 indefinite:2 shih:1 nevertheless:4 demonstrating:1 libsvm:1 kenneth:1 utilize:1 asymptotically:1 subgradient:1 monotone:11 sum:3 inverse:5 letter:1 powerful:1 named:1 throughout:1 chih:1 utilizes:1 putative:1 circumvents:1 draw:2 comparable:1 bound:5 layer:2 haim:1 fan:2 jpennin:1 deficiency:3 alex:1 n3:1 fourier:30 integrand:1 nathan:1 kumar:2 performing:1 relatively:1 combination:2 across:2 ur:1 wi:2 nomial:1 gittens:1 modification:1 making:1 projecting:1 invariant:4 restricted:1 remains:1 mathematik:1 turn:1 count:1 describing:1 ninh:1 singer:1 know:2 end:1 available:1 gaussians:5 prerequisite:1 apply:1 obey:1 observe:2 differen:1 alternative:2 corinna:1 kazawa:1 jd:2 original:5 existence:1 top:2 vikas:1 linguistics:2 harish:1 felixyu:1 yoram:1 especially:7 establish:1 approximating:5 tensor:4 question:4 added:1 dependence:2 niao:1 surrogate:1 said:1 gradient:4 ihw:3 thank:1 entity:1 cauchy:1 induction:1 afshin:1 length:1 vali:1 multiplicatively:1 ratio:3 rasmus:1 vladimir:1 sreekanth:1 ying:1 liang:1 unfortunately:1 statement:2 claiming:1 negative:13 motivates:1 perform:4 upper:2 convolution:1 observation:2 datasets:5 finite:3 t:38 defining:2 arbitrary:1 bk:2 specified:1 anant:1 imagenet:3 srf:60 hypergeometric:1 hideki:1 address:1 able:1 adult:1 bar:1 beyond:1 pattern:3 including:3 max:4 wz:1 power:1 natural:2 rely:1 circumvent:1 improve:2 imply:1 library:1 church:1 coupled:1 ues:1 isn:1 deviate:1 prior:2 discovery:2 multiplication:1 asymptotic:1 xiang:1 expect:1 limitation:2 srebro:1 remarkable:1 foundation:1 degree:2 consistent:1 xiao:1 dd:1 row:1 mohri:1 jung:1 bias:3 allow:1 circulant:1 kwang:1 face:1 absolute:1 curve:2 dimension:2 feedback:1 karnick:2 commonly:2 made:2 projected:1 adaptive:1 far:1 transaction:1 approximate:3 compact:12 hang:1 dmitry:1 overcomes:1 supremum:1 b1:3 xi:2 shwartz:1 continuous:3 thankfully:1 table:2 expanding:1 angewandte:1 improving:1 sminchisescu:1 expansion:2 mse:23 mehryar:1 necessarily:2 meanwhile:1 poly:1 domain:3 main:2 joon:1 n2:1 w1t:2 facilitating:1 en:1 borel:3 definiteness:1 slow:1 explicit:9 answering:1 vanish:1 third:2 hw:2 theorem:14 british:1 specific:1 jen:1 showing:1 r2:1 decay:1 admits:1 svm:2 cortes:1 dominates:1 exists:1 restricting:1 false:1 pennington:1 vapnik:1 
ci:2 rui:1 margin:1 intersection:1 simply:1 explore:1 infinitely:2 bo:3 subtlety:1 chang:2 sindhwani:1 springer:2 relies:1 acm:4 viewed:1 rbf:2 catalyzed:1 replace:1 except:1 lemma:1 principal:1 total:2 craft:14 shiftinvariant:1 ew:1 berg:1 support:3 latter:1 alexander:1 evaluate:3 |
Fast and Guaranteed Tensor Decomposition via
Sketching
Yining Wang, Hsiao-Yu Tung, Alex Smola
Machine Learning Department
Carnegie Mellon University, Pittsburgh, PA 15213
{yiningwa,htung}@cs.cmu.edu
alex@smola.org
Anima Anandkumar
Department of EECS
University of California Irvine
Irvine, CA 92697
a.anandkumar@uci.edu
Abstract
Tensor CANDECOMP/PARAFAC (CP) decomposition has wide applications in
statistical learning of latent variable models and in data mining. In this paper,
we propose fast and randomized tensor CP decomposition algorithms based on
sketching. We build on the idea of count sketches, but introduce many novel ideas
which are unique to tensors. We develop novel methods for randomized computation of tensor contractions via FFTs, without explicitly forming the tensors.
Such tensor contractions are encountered in decomposition methods such as tensor power iterations and alternating least squares. We also design novel colliding
hashes for symmetric tensors to further save time in computing the sketches. We
then combine these sketching ideas with existing whitening and tensor power iterative techniques to obtain the fastest algorithm on both sparse and dense tensors.
The quality of approximation under our method does not depend on properties
such as sparsity, uniformity of elements, etc. We apply the method for topic modeling and obtain competitive results.
Keywords: Tensor CP decomposition, count sketch, randomized methods, spectral methods, topic modeling
1 Introduction
In many data-rich domains such as computer vision, neuroscience and social networks consisting
of multi-modal and multi-relational data, tensors have emerged as a powerful paradigm for handling the data deluge. An important operation with tensor data is its decomposition, where the
input tensor is decomposed into a succinct form. One of the popular decomposition methods is the
CANDECOMP/PARAFAC (CP) decomposition, also known as canonical polyadic decomposition
[12, 5], where the input tensor is decomposed into a succinct sum of rank-1 components. The CP
decomposition has found numerous applications in data mining [4, 18, 20], computational neuroscience [10, 21], and recently, in statistical learning for latent variable models [1, 30, 28, 6]. For
latent variable modeling, these methods yield consistent estimates under mild conditions such as
non-degeneracy and require only polynomial sample and computational complexity [1, 30, 28, 6].
Given the importance of tensor methods for large-scale machine learning, there has been an increasing interest in scaling up tensor decomposition algorithms to handle gigantic real-world data
tensors [27, 24, 8, 16, 14, 2, 29]. However, the previous works fall short in many ways, as described
subsequently. In this paper, we design and analyze efficient randomized tensor methods using ideas
from sketching [23]. The idea is to maintain a low-dimensional sketch of an input tensor and then
perform implicit tensor decomposition using existing methods such as tensor power updates, alternating least squares or online tensor updates. We obtain the fastest decomposition methods for both
sparse and dense tensors. Our framework can easily handle modern machine learning applications
with billions of training instances, and at the same time, comes with attractive theoretical guarantees.
Our main contributions are as follows:
Efficient tensor sketch construction: We propose efficient construction of tensor sketches when
the input tensor is available in factored forms such as in the case of empirical moment tensors, where
the factor components correspond to rank-1 tensors over individual data samples. We construct
the tensor sketch via efficient FFT operations on the component vectors. Sketching each rank-1
component takes O(n + b log b) operations where n is the tensor dimension and b is the sketch
length. This is much faster than the $O(n^p)$ complexity for brute-force computations of a $p$th-order
tensor. Since empirical moment tensors are available in the factored form with N components,
where N is the number of samples, it takes O((n + b log b)N ) operations to compute the sketch.
Implicit tensor contraction computations: Almost all tensor manipulations can be expressed in
terms of tensor contractions, which involve multilinear combinations of different tensor fibres [19].
For example, tensor decomposition methods such as tensor power iterations, alternating least squares
(ALS), whitening and online tensor methods all involve tensor contractions. We propose a highly
efficient method to directly compute the tensor contractions without forming the input tensor explicitly. In particular, given the sketch of a tensor, each tensor contraction can be computed in
$O(n + b\log b)$ operations, regardless of the order of the source and destination tensors. This significantly accelerates the brute-force implementation, which requires $O(n^p)$ complexity for a $p$th-order tensor contraction. In addition, in many applications, the input tensor is not directly available and needs
to be computed from samples, such as the case of empirical moment tensors for spectral learning
of latent variable models. In such cases, our method results in huge savings by combining implicit
tensor contraction computation with efficient tensor sketch construction.
Novel colliding hashes for symmetric tensors: When the input tensor is symmetric, which is the
case for empirical moment tensors that arise in spectral learning applications, we propose a novel
colliding hash design by replacing the Boolean ring with the complex ring C to handle multiplicities.
As a result, it makes the sketch building process much faster and avoids repetitive FFT operations.
Though the computational complexity remains the same, the proposed colliding hash design results
in significant speed-up in practice by reducing the actual number of computations.
Theoretical and empirical guarantees: We show that the quality of the tensor sketch does not
depend on sparseness, uniform entry distribution, or any other properties of the input tensor. On the
other hand, previous works assume specific settings such as sparse tensors [24, 8, 16], or tensors
having entries with similar magnitude [27]. Such assumptions are unrealistic, and in practice, we
may have both dense and spiky tensors, for example, unordered word trigrams in natural language
processing. We prove that our proposed randomized method for tensor decomposition does not lead
to any significant degradation of accuracy.
Experiments on synthetic and real-world datasets show highly competitive results. We demonstrate
a 10x to 100x speed-up over exact methods for decomposing dense, high-dimensional tensors. For
topic modeling, we show a significant reduction in computational time over existing spectral LDA
implementations with small performance loss. In addition, our proposed algorithm outperforms
collapsed Gibbs sampling when running time is constrained. We also show that if a Gibbs sampler is
initialized with our output topics, it converges within several iterations and outperforms a randomly
initialized Gibbs sampler run for much more iterations. Since our proposed method is efficient and
avoids local optima, it can be used to accelerate the slow burn-in phase in Gibbs sampling.
Related Works: There have been many works on deploying efficient tensor decomposition methods [27, 24, 8, 16, 14, 2, 29]. Most of these works except [27, 2] implement the alternating least
squares (ALS) algorithm [12, 5]. However, this is extremely expensive since the ALS method is
run in the input space, which requires $O(n^3)$ operations to execute one least squares step on an
n-dimensional (dense) tensor. Thus, they are only suited for extremely sparse tensors.
An alternative method is to first reduce the dimension of the input tensor through procedures such as
whitening to $O(k)$ dimensions, where $k$ is the tensor rank, and then carry out ALS in the dimension-reduced space on a $k \times k \times k$ tensor [13]. This results in a significant reduction of computational complexity when the rank is small ($k \ll n$). Nonetheless, in practice, such complexity is still
prohibitively high as k could be several thousands in many settings. To make matters even worse,
when the tensor corresponds to empirical moments computed from samples, such as in spectral
learning of latent variable models, it is actually much slower to construct the reduced-dimension $k \times k \times k$ tensor from training data than to decompose it, since the number of training samples is typically very large. Another alternative is to carry out online tensor decomposition, as opposed to batch operations in the above works. Such methods are extremely fast [14], but can suffer from high variance. The sketching ideas developed in this paper will improve our ability to handle larger sizes of mini-batches and therefore result in reduced variance in online tensor methods.

Table 1: Summary of notations. See also Appendix F.

| Variables                                 | Operator                                               | Meaning              |
| $a, b \in \mathbb{C}^n$                   | $a \circ b \in \mathbb{C}^n$                           | Element-wise product |
| $a, b \in \mathbb{C}^n$                   | $a * b \in \mathbb{C}^n$                               | Convolution          |
| $a, b \in \mathbb{C}^n$                   | $a \otimes b \in \mathbb{C}^{n \times n}$              | Tensor product       |
| $a \in \mathbb{C}^n$                      | $a^{\otimes 3} \in \mathbb{C}^{n \times n \times n}$   | $a \otimes a \otimes a$ |
| $A, B \in \mathbb{C}^{n \times m}$        | $A \odot B \in \mathbb{C}^{n^2 \times m}$              | Khatri-Rao product   |
| $T \in \mathbb{C}^{n \times n \times n}$  | $T_{(1)} \in \mathbb{C}^{n \times n^2}$                | Mode expansion       |
Another alternative method is to consider a randomized sampling of the input tensor in each iteration
of tensor decomposition [27, 2]. However, such methods can be expensive due to I/O calls and
are sensitive to the sampling distribution. In particular, [27] employs uniform sampling, which is
incapable of handling tensors with spiky elements. Though non-uniform sampling is adopted in [2],
it requires an additional pass over the training data to compute the sampling distribution. In contrast,
our sketch based method takes only one pass of the data.
2 Preliminaries
Tensor, tensor product and tensor decomposition A 3rd-order tensor¹ $\mathbf{T}$ of dimension $n$ has $n^3$ entries. Each entry can be represented as $T_{ijk}$ for $i, j, k \in \{1, \cdots, n\}$. For an $n \times n \times n$ tensor $\mathbf{T}$ and a vector $u \in \mathbb{R}^n$, we define two forms of tensor products (contractions) as follows:
$$\mathbf{T}(u,u,u) = \sum_{i,j,k=1}^{n} T_{i,j,k}\, u_i u_j u_k; \qquad \mathbf{T}(I,u,u) = \Big(\sum_{j,k=1}^{n} T_{1,j,k}\, u_j u_k,\ \cdots,\ \sum_{j,k=1}^{n} T_{n,j,k}\, u_j u_k\Big).$$
Note that $\mathbf{T}(u,u,u) \in \mathbb{R}$ and $\mathbf{T}(I,u,u) \in \mathbb{R}^n$. For two complex tensors $\mathbf{A}, \mathbf{B}$ of the same order and dimension, the inner product is defined as $\langle \mathbf{A}, \mathbf{B} \rangle := \sum_{l} A_l B_l$, where $l$ ranges over all tuples that index the tensors. The Frobenius norm of a tensor is simply $\|\mathbf{A}\|_F = \sqrt{\langle \mathbf{A}, \mathbf{A} \rangle}$.
The rank-$k$ CP decomposition of a 3rd-order $n$-dimensional tensor $\mathbf{T} \in \mathbb{R}^{n \times n \times n}$ involves scalars $\{\lambda_i\}_{i=1}^{k}$ and $n$-dimensional vectors $\{a_i, b_i, c_i\}_{i=1}^{k}$ such that the residual $\|\mathbf{T} - \sum_{i=1}^{k} \lambda_i\, a_i \otimes b_i \otimes c_i\|_F^2$ is minimized. Here $\mathbf{R} = a \otimes b \otimes c$ is a 3rd-order tensor defined as $R_{ijk} = a_i b_j c_k$. Additional notations are defined in Table 1 and Appendix F.
Robust tensor power method The method was proposed in [1] and was shown to provably succeed if the input tensor is a noisy perturbation of the sum of $k$ rank-1 tensors whose base vectors are orthogonal. Fix an input tensor $\mathbf{T} \in \mathbb{R}^{n \times n \times n}$. The basic idea is to randomly generate $L$ initial vectors and perform $T$ power update steps: $\hat{u} = \mathbf{T}(I,u,u)/\|\mathbf{T}(I,u,u)\|_2$. The vector that results in the largest eigenvalue $\mathbf{T}(u,u,u)$ is then kept, and subsequent eigenvectors can be obtained via deflation. If implemented naively, the algorithm takes $O(kn^3 LT)$ time to run², requiring $O(n^3)$ storage. In addition, in certain cases when a second-order moment matrix is available, the tensor power method can be carried out on a $k \times k \times k$ whitened tensor [1], thus improving the time complexity by avoiding dependence on the ambient dimension $n$. Apart from the tensor power method, other algorithms such as Alternating Least Squares (ALS, [12, 5]) and Stochastic Gradient Descent (SGD, [14]) have also been applied to tensor CP decomposition.
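For concreteness, the plain (un-sketched) update just described can be written in a few lines of NumPy. This is only an illustrative sketch of the textbook method with parameter names of our own choosing, not the authors' C++ implementation:

```python
import numpy as np

def power_method(T, k, n_init=30, n_iter=30, seed=0):
    """Plain robust tensor power method on a symmetric n x n x n tensor T."""
    rng = np.random.default_rng(seed)
    n = T.shape[0]
    pairs = []
    for _ in range(k):
        best_lam, best_u = -np.inf, None
        for _ in range(n_init):                          # L random restarts
            u = rng.standard_normal(n)
            u /= np.linalg.norm(u)
            for _ in range(n_iter):                      # T power updates
                v = np.einsum('ijk,j,k->i', T, u, u)     # T(I, u, u): O(n^3)
                u = v / np.linalg.norm(v)
            lam = np.einsum('ijk,i,j,k->', T, u, u, u)   # T(u, u, u)
            if lam > best_lam:
                best_lam, best_u = lam, u
        pairs.append((best_lam, best_u))
        # Deflate: subtract the recovered rank-1 component.
        T = T - best_lam * np.einsum('i,j,k->ijk', best_u, best_u, best_u)
    return pairs
```

The two `einsum` contractions are exactly the $O(n^3)$ bottleneck that the sketching machinery below replaces.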
Tensor sketch Tensor sketch was proposed in [23] as a generalization of count sketch [7]. For a tensor $\mathbf{T}$ of dimension $n_1 \times \cdots \times n_p$, random hash functions $h_1, \cdots, h_p : [n] \to [b]$ with $\Pr_{h_j}[h_j(i) = t] = 1/b$ for every $i \in [n], j \in [p], t \in [b]$, and binary Rademacher variables $\sigma_1, \cdots, \sigma_p : [n] \to \{\pm 1\}$, the sketch $s_T : [b] \to \mathbb{R}$ of tensor $\mathbf{T}$ is defined as
$$s_T(t) = \sum_{H(i_1, \cdots, i_p) = t} \sigma_1(i_1) \cdots \sigma_p(i_p)\, T_{i_1, \cdots, i_p}, \qquad (1)$$

¹ Though we mainly focus on 3rd-order tensors in this work, extension to higher-order tensors is easy.
² $L$ is usually set to be a linear function of $k$ and $T$ is logarithmic in $n$; see Theorem 5.1 in [1].
where $H(i_1, \cdots, i_p) = (h_1(i_1) + \cdots + h_p(i_p)) \bmod b$. The corresponding recovery rule is $\hat{T}_{i_1, \cdots, i_p} = \sigma_1(i_1) \cdots \sigma_p(i_p)\, s_T(H(i_1, \cdots, i_p))$. For accurate recovery, $H$ needs to be 2-wise independent, which is achieved by independently selecting $h_1, \cdots, h_p$ from a 2-wise independent hash family [26]. Finally, the estimation can be made more robust by the standard approach of taking $B$ independent sketches of the same tensor and then reporting the median of the $B$ estimates [7].
3 Fast tensor decomposition via sketching
In this section we first introduce an efficient procedure for computing sketches of factored or empirical moment tensors, which appear in a wide variety of applications such as parameter estimation of
latent variable models. We then show how to run tensor power method directly on the sketch with
reduced computational complexity. In addition, when an input tensor is symmetric (i.e., $T_{ijk}$ is the same for all permutations of $i, j, k$) we propose a novel "colliding hash" design, which speeds up the sketch building process. Due to space limits we only consider the robust tensor power method
in the main text. Methods and experiments for sketching based ALS are presented in Appendix C.
To avoid confusions, we emphasize that n is used to denote the dimension of the tensor to be decomposed, which is not necessarily the same as the dimension of the original data tensor. Indeed, once
whitening is applied n could be as small as the intrinsic dimension k of the original data tensor.
3.1 Efficient sketching of empirical moment tensors
Sketching a 3rd-order dense $n$-dimensional tensor via Eq. (1) takes $O(n^3)$ operations, which in general cannot be improved because the input size is $\Theta(n^3)$. However, in practice data tensors are usually structured. One notable example is empirical moment tensors, which arise naturally in parameter estimation problems of latent variable models. More specifically, an empirical moment tensor can be expressed as $\mathbf{T} = \hat{\mathbb{E}}[x^{\otimes 3}] = \frac{1}{N}\sum_{i=1}^{N} x_i^{\otimes 3}$, where $N$ is the total number of training data points and $x_i$ is the $i$th data point. In this section we show that computing sketches of such tensors can be made significantly more efficient than the brute-force implementation via Eq. (1). The main idea is to sketch low-rank components of $\mathbf{T}$ efficiently via FFT, a trick inspired by previous efforts on sketching based matrix multiplication and kernel learning [22, 23].
We consider the more generalized case when an input tensor $\mathbf{T}$ can be written as a weighted sum of known rank-1 components: $\mathbf{T} = \sum_{i=1}^{N} a_i\, u_i \otimes v_i \otimes w_i$, where $a_i$ are scalars and $u_i, v_i, w_i$ are known $n$-dimensional vectors. The key observation is that the sketch of each rank-1 component $\mathbf{T}_i = u_i \otimes v_i \otimes w_i$ can be efficiently computed by FFT. In particular, $s_{T_i}$ can be computed as
$$s_{T_i} = s_{1,u_i} * s_{2,v_i} * s_{3,w_i} = \mathcal{F}^{-1}\big(\mathcal{F}(s_{1,u_i}) \circ \mathcal{F}(s_{2,v_i}) \circ \mathcal{F}(s_{3,w_i})\big), \qquad (2)$$
where $*$ denotes convolution and $\circ$ stands for element-wise vector product; $s_{1,u}(t) = \sum_{h_1(i)=t} \sigma_1(i) u_i$ is the count sketch of $u$, and $s_{2,v}, s_{3,w}$ are defined similarly. $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the Fast Fourier Transform (FFT) and its inverse operator. By applying FFT, we reduce the convolution computation into element-wise product evaluation in the Fourier space. Therefore, $s_{T_i}$ can be computed using $O(n + b\log b)$ operations, where the $O(b\log b)$ term arises from FFT evaluations. Finally, because the sketching operator is linear (i.e., $s(\sum_i a_i \mathbf{T}_i) = \sum_i a_i\, s(\mathbf{T}_i)$), $s_T$ can be computed in $O(N(n + b\log b))$, which is much cheaper than brute force, which takes $O(N n^3)$ time.
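Eq. (2) is easy to verify numerically: the convolution of the three mode-wise count sketches, computed via FFTs, agrees exactly with the brute-force sketch of the rank-1 tensor. A self-contained sketch (variable names are our own):

```python
import numpy as np

def count_sketch_vec(x, h, sigma, b):
    """Mode-wise count sketch: s(t) = sum over h(i) = t of sigma(i) * x_i."""
    s = np.zeros(b)
    np.add.at(s, h, sigma * x)
    return s

def sketch_rank1(u, v, w, h, sigma, b):
    """Eq. (2): sketch of u (x) v (x) w via three FFTs and one inverse FFT."""
    f = [np.fft.fft(count_sketch_vec(x, h[j], sigma[j], b))
         for j, x in enumerate((u, v, w))]
    return np.real(np.fft.ifft(f[0] * f[1] * f[2]))       # O(n + b log b)

# Check against the direct O(n^3) sketch of the rank-1 tensor.
rng = np.random.default_rng(0)
n, b = 20, 64
h = [rng.integers(0, b, size=n) for _ in range(3)]
sigma = [rng.choice([-1.0, 1.0], size=n) for _ in range(3)]
u, v, w = rng.standard_normal((3, n))
direct = np.zeros(b)
for i in range(n):
    for j in range(n):
        for k in range(n):
            t = (h[0][i] + h[1][j] + h[2][k]) % b
            direct[t] += sigma[0][i] * sigma[1][j] * sigma[2][k] * u[i] * v[j] * w[k]
assert np.allclose(direct, sketch_rank1(u, v, w, h, sigma, b))
```

The equality is exact (up to floating point) because a length-$b$ circular convolution of the mode sketches adds bucket indices modulo $b$, exactly as $H$ does.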
3.2 Fast robust tensor power method
We are now ready to present the fast robust tensor power method, the main algorithm of this paper. The computational bottleneck of the original robust tensor power method is the computation of two tensor products: $\mathbf{T}(I,u,u)$ and $\mathbf{T}(u,u,u)$. A naive implementation requires $O(n^3)$ operations. In this section, we show how to speed up computation of these products. We show that given the sketch of an input tensor $\mathbf{T}$, one can approximately compute both $\mathbf{T}(I,u,u)$ and $\mathbf{T}(u,u,u)$ in $O(b\log b + n)$ steps, where $b$ is the hash length.
Before going into details, we explain the key idea behind our fast tensor product computation. For any two tensors $\mathbf{A}, \mathbf{B}$, the inner product $\langle \mathbf{A}, \mathbf{B} \rangle$ can be approximated by⁴
$$\langle \mathbf{A}, \mathbf{B} \rangle \approx \langle s_A, s_B \rangle. \qquad (3)$$

³ $\Re(\cdot)$ denotes the real part of a complex number; $\mathrm{med}(\cdot)$ denotes the median.
⁴ All approximations will be theoretically justified in Section 4 and Appendix E.2.
Algorithm 1 Fast robust tensor power method
1: Input: noisy symmetric tensor $\tilde{\mathbf{T}} = \mathbf{T} + \mathbf{E} \in \mathbb{R}^{n \times n \times n}$; target rank $k$; number of initializations $L$, number of iterations $T$, hash length $b$, number of independent sketches $B$.
2: Initialization: $h_j^{(m)}, \sigma_j^{(m)}$ for $j \in \{1,2,3\}$ and $m \in [B]$; compute sketches $s_{\tilde{T}}^{(m)} \in \mathbb{C}^b$.
3: for $\tau = 1$ to $L$ do
4:   Draw $u_0^{(\tau)}$ uniformly at random from the unit sphere.
5:   for $t = 1$ to $T$ do
6:     For each $m \in [B]$, $j \in \{2,3\}$ compute the sketch of $u_{t-1}^{(\tau)}$ using $h_j^{(m)}, \sigma_j^{(m)}$ via Eq. (1).
7:     Compute $v^{(m)} \approx \tilde{\mathbf{T}}(I, u_{t-1}^{(\tau)}, u_{t-1}^{(\tau)})$ as follows: first evaluate $\tilde{s}^{(m)} = \mathcal{F}^{-1}\big(\mathcal{F}(s_{\tilde{T}}^{(m)}) \circ \overline{\mathcal{F}(s_{2,u}^{(m)})} \circ \overline{\mathcal{F}(s_{3,u}^{(m)})}\big)$. Set $[v^{(m)}]_i \leftarrow \sigma_1^{(m)}(i)\, [\tilde{s}^{(m)}]_{h_1^{(m)}(i)}$ for every $i \in [n]$.
8:     Set $\tilde{v}_i \leftarrow \mathrm{med}\big(\Re(v_i^{(1)}), \cdots, \Re(v_i^{(B)})\big)$³. Update: $u_t^{(\tau)} = \tilde{v}/\|\tilde{v}\|$.
9: Selection: Compute $\lambda_\tau^{(m)} \approx \tilde{\mathbf{T}}(u_T^{(\tau)}, u_T^{(\tau)}, u_T^{(\tau)})$ using $s_{\tilde{T}}^{(m)}$ for $\tau \in [L]$ and $m \in [B]$. Evaluate $\tilde{\lambda}_\tau = \mathrm{med}(\lambda_\tau^{(1)}, \cdots, \lambda_\tau^{(B)})$ and $\tau^* = \mathrm{argmax}_\tau\, \tilde{\lambda}_\tau$. Set $\hat{\lambda} = \tilde{\lambda}_{\tau^*}$ and $\hat{u} = u_T^{(\tau^*)}$.
10: Deflation: For each $m \in [B]$ compute the sketch $s_{\Delta T}^{(m)}$ for the rank-1 tensor $\Delta\mathbf{T} = \hat{\lambda}\, \hat{u}^{\otimes 3}$.
11: Output: the eigenvalue/eigenvector pair $(\hat{\lambda}, \hat{u})$ and sketches of the deflated tensor $\tilde{\mathbf{T}} - \Delta\mathbf{T}$.
Table 2: Computational complexity of sketched and plain tensor power method. $n$ is the tensor dimension; $k$ is the intrinsic tensor rank; $b$ is the sketch length. Per-sketch time complexity is shown.

|                                                      | PLAIN      | SKETCH              | PLAIN+WHITENING  | SKETCH+WHITENING     |
| preprocessing: general tensors                       | -          | $O(n^3)$            | $O(kn^3)$        | $O(n^3)$             |
| preprocessing: factored tensors with $N$ components  | $O(Nn^3)$  | $O(N(n + b\log b))$ | $O(N(nk + k^3))$ | $O(N(nk + b\log b))$ |
| per tensor contraction time                          | $O(n^3)$   | $O(n + b\log b)$    | $O(k^3)$         | $O(k + b\log b)$     |
Eq. (3) immediately results in a fast approximation procedure for $\mathbf{T}(u,u,u)$ because $\mathbf{T}(u,u,u) = \langle \mathbf{T}, \mathbf{X} \rangle$ where $\mathbf{X} = u \otimes u \otimes u$ is a rank-one tensor, whose sketch can be built in $O(n + b\log b)$ time by Eq. (2). Consequently, the product can be approximately computed using $O(n + b\log b)$ operations if the tensor sketch of $\mathbf{T}$ is available. Now consider tensor products of the form $\mathbf{T}(I,u,u)$. The $i$th coordinate of the result can be expressed as $\langle \mathbf{T}, \mathbf{Y}_i \rangle$ where $\mathbf{Y}_i = e_i \otimes u \otimes u$ and $e_i = (0, \cdots, 0, 1, 0, \cdots, 0)$ is the $i$th indicator vector. We can then apply Eq. (3) to approximately compute $\langle \mathbf{T}, \mathbf{Y}_i \rangle$ efficiently. However, this method is not completely satisfactory because it requires sketching $n$ rank-1 tensors ($\mathbf{Y}_1$ through $\mathbf{Y}_n$), which results in $O(n)$ FFT evaluations by Eq. (2). Below we present a proposition that allows us to use only $O(1)$ FFTs to approximate $\mathbf{T}(I,u,u)$.
Proposition 1. $\langle s_T,\, s_{1,e_i} * s_{2,u} * s_{3,u} \rangle = \big\langle \mathcal{F}^{-1}\big(\mathcal{F}(s_T) \circ \overline{\mathcal{F}(s_{2,u})} \circ \overline{\mathcal{F}(s_{3,u})}\big),\, s_{1,e_i} \big\rangle$.
Proposition 1 is proved in Appendix E.1. The main idea is to "shift" all terms not depending on $i$ to the left side of the inner product and eliminate the inverse FFT operation on the right side so that $s_{e_i}$ contains only one nonzero entry. As a result, we can compute $\mathcal{F}^{-1}\big(\mathcal{F}(s_T) \circ \overline{\mathcal{F}(s_{2,u})} \circ \overline{\mathcal{F}(s_{3,u})}\big)$ once and read off each entry of $\mathbf{T}(I,u,u)$ in constant time. In addition, the technique can be further extended to symmetric tensor sketches, with details deferred to Appendix B due to space
limits. When operating on an $n$-dimensional tensor, the algorithm requires $O(kLT(n + Bb\log b))$ running time (excluding the time for building $s_{\tilde{T}}$) and $O(Bb)$ memory, which significantly improves on the $O(kn^3 LT)$ time and $O(n^3)$ space complexity of the brute-force tensor power method. Here $L, T$ are algorithm parameters for the robust tensor power method. Previous analysis shows that $T = O(\log k)$ and $L = \mathrm{poly}(k)$, where $\mathrm{poly}(\cdot)$ is some low-order polynomial function [1].
Finally, Table 2 summarizes the computational complexity of the sketched and plain tensor power methods.
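Under the same hash/sign conventions as the earlier snippets, Proposition 1 suggests the following illustrative implementation: one inverse FFT "decompresses" the sketch, after which every coordinate of $\mathbf{T}(I,u,u)$ is a constant-time read-off. The conjugations implement the cross-correlation form of the proposition; this is a simplified single-sketch version, whereas the full algorithm medians over $B$ independent sketches:

```python
import numpy as np

def tiuu_from_sketch(s_T, u, h, sigma, b):
    """Approximate T(I, u, u) from one tensor sketch, in O(n + b log b) time."""
    s2 = np.zeros(b, dtype=complex)
    np.add.at(s2, h[1], sigma[1] * u)                # count sketch of u, mode 2
    s3 = np.zeros(b, dtype=complex)
    np.add.at(s3, h[2], sigma[2] * u)                # count sketch of u, mode 3
    # One inverse FFT "decompresses" the sketch (cross-correlation with s2 * s3) ...
    dec = np.fft.ifft(np.fft.fft(s_T)
                      * np.conj(np.fft.fft(s2))
                      * np.conj(np.fft.fft(s3)))
    # ... after which coordinate i is a constant-time read-off via h_1 and sigma_1.
    return np.real(sigma[0] * dec[h[0]])

def tuuu_from_sketch(s_T, u, h, sigma, b):
    """Approximate T(u, u, u) as <u, T(I, u, u)>."""
    return float(u @ tiuu_from_sketch(s_T, u, h, sigma, b))
```

Here `s_T` is assumed to have been built with the same per-mode hash and sign tables `h`, `sigma` used inside the function, e.g. via the `sketch_tensor` sketch above.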
3.3 Colliding hash and symmetric tensor sketch
For symmetric input tensors, it is possible to design a new style of tensor sketch that can be built more efficiently. The idea is to design hash functions that deliberately collide symmetric entries, i.e., $(i,j,k)$, $(j,i,k)$, etc. Consequently, we only need to consider entries $T_{ijk}$ with $i \le j \le k$ when building tensor sketches. An intuitive idea is to use the same hash function and Rademacher random variable for each order, that is, $h_1(i) = h_2(i) = h_3(i) =: h(i)$ and $\sigma_1(i) = \sigma_2(i) = \sigma_3(i) =: \sigma(i)$. In this way, all permutations of $(i,j,k)$ will collide with each other. However, such a design has an issue with repeated entries because $\sigma(i)$ can only take $\pm 1$ values. Consider $(i,i,k)$ and $(j,j,k)$ as an example: $\sigma(i)^2\sigma(k) = \sigma(j)^2\sigma(k)$ with probability 1 even if $i \neq j$. On the other hand, we need $\mathbb{E}[\sigma(a)\sigma(b)] = 0$ for any pair of distinct 3-tuples $a$ and $b$.
To address the above-mentioned issue, we extend the Rademacher random variables to the complex domain and consider all roots of $z^m = 1$, that is, $\Omega = \{\omega_j\}_{j=0}^{m-1}$ where $\omega_j = e^{i\frac{2\pi j}{m}}$. Suppose $\sigma(i)$ is a Rademacher random variable with $\Pr[\sigma(i) = \omega_j] = 1/m$. By elementary algebra, $\mathbb{E}[\sigma(i)^p] = 0$ whenever $m$ is relatively prime to $p$ or $m$ can be divided by $p$. Therefore, by setting $m = 4$ we avoid collisions of repeated entries in a 3rd-order tensor. More specifically, the symmetric tensor sketch of a symmetric tensor $\mathbf{T} \in \mathbb{R}^{n \times n \times n}$ can be defined as
$$\bar{s}_T(t) := \sum_{\bar{H}(i,j,k)=t} T_{i,j,k}\, \sigma(i)\sigma(j)\sigma(k), \qquad (4)$$
where $\bar{H}(i,j,k) = (h(i) + h(j) + h(k)) \bmod b$. To recover an entry, we use
$$\hat{T}_{i,j,k} = \tfrac{1}{\kappa} \cdot \overline{\sigma(i)} \cdot \overline{\sigma(j)} \cdot \overline{\sigma(k)} \cdot \bar{s}_T(\bar{H}(i,j,k)), \qquad (5)$$
where $\kappa = 1$ if $i = j = k$; $\kappa = 3$ if $i = j$ or $j = k$ or $i = k$; $\kappa = 6$ otherwise. For higher-order tensors, the coefficients can be computed via the Young tableaux which characterize symmetries under the permutation group. Compared to asymmetric tensor sketches, the hash function $h$ needs to satisfy stronger independence conditions because we are using the same hash function for each order. In our case, $h$ needs to be 6-wise independent to make $\bar{H}$ 2-wise independent. This fact is due to the following proposition, which is proved in Appendix E.1.
Proposition 2. Fix $p$ and $q$. For $h : [n] \to [b]$ define the symmetric mapping $\bar{H} : [n]^p \to [b]$ as $\bar{H}(i_1, \cdots, i_p) = h(i_1) + \cdots + h(i_p)$. If $h$ is $(pq)$-wise independent then $\bar{H}$ is $q$-wise independent.
The symmetric tensor sketch described above can significantly speed up the sketch building process. For a general tensor with $M$ nonzero entries, to build $\bar{s}_T$ one only needs to consider roughly $M/6$ entries (those $T_{ijk} \neq 0$ with $i \le j \le k$). For a rank-1 tensor $u^{\otimes 3}$, only one FFT is needed to build $\mathcal{F}(\bar{s})$; in contrast, to compute Eq. (2) one needs at least 3 FFT evaluations.
Finally, in Appendix B we give details on how to seamlessly combine symmetric hashing and the techniques in previous sections to efficiently construct and decompose a tensor.
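A minimal sketch of Eqs. (4)-(5) with $m = 4$, i.e., signs drawn from $\{1, i, -1, -i\}$. Note that because the build loop visits each sorted triple ($i \le j \le k$) once, it multiplies by the multiplicity constant that Eq. (5) later divides out; the uniform hash table again stands in for the 6-wise independent family the analysis requires:

```python
import numpy as np

def kappa(i, j, k):
    """Multiplicity of a sorted triple: 1, 3 or 6 merged permutations."""
    return 1 if i == j == k else (3 if (i == j or j == k or i == k) else 6)

def symmetric_sketch(T, h, sigma, b):
    """Eq. (4): visit only i <= j <= k, weighting each triple by its multiplicity."""
    n = T.shape[0]
    s = np.zeros(b, dtype=complex)
    for i in range(n):
        for j in range(i, n):
            for k in range(j, n):
                t = (h[i] + h[j] + h[k]) % b
                s[t] += kappa(i, j, k) * T[i, j, k] * sigma[i] * sigma[j] * sigma[k]
    return s

def recover_symmetric(s, h, sigma, i, j, k, b):
    """Eq. (5): conjugate signs undo the encoding; kappa undoes the multiplicity."""
    t = (h[i] + h[j] + h[k]) % b
    return (np.conj(sigma[i] * sigma[j] * sigma[k]) * s[t]).real / kappa(i, j, k)

rng = np.random.default_rng(0)
n, b = 12, 211
h = rng.integers(0, b, size=n)
sigma = rng.choice(np.array([1, 1j, -1, -1j]), size=n)   # 4th roots of unity
A = rng.standard_normal((n, n, n))
T = sum(A.transpose(p) for p in [(0, 1, 2), (0, 2, 1), (1, 0, 2),
                                 (1, 2, 0), (2, 0, 1), (2, 1, 0)]) / 6  # symmetrize
s = symmetric_sketch(T, h, sigma, b)
print(T[1, 3, 5], recover_symmetric(s, h, sigma, 1, 3, 5, b))
```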
4 Error analysis
In this section we provide theoretical analysis on approximation error of both tensor sketch and the
fast sketched robust tensor power method. We mainly focus on symmetric tensor sketches, while
extension to asymmetric settings is trivial. Due to space limits, all proofs are placed in the appendix.
4.1 Tensor sketch concentration bounds
Theorem 1 bounds the approximation error of symmetric tensor sketches when computing $\mathbf{T}(u,u,u)$ and $\mathbf{T}(I,u,u)$. Its proof is deferred to Appendix E.2.
Theorem 1. Fix a symmetric real tensor $\mathbf{T} \in \mathbb{R}^{n \times n \times n}$ and a real vector $u \in \mathbb{R}^n$ with $\|u\|_2 = 1$. Suppose $\varepsilon_{1,T}(u) \in \mathbb{R}$ and $\varepsilon_{2,T}(u) \in \mathbb{R}^n$ are the estimation errors of $\mathbf{T}(u,u,u)$ and $\mathbf{T}(I,u,u)$ using $B$ independent symmetric tensor sketches; that is, $\varepsilon_{1,T}(u) = \hat{\mathbf{T}}(u,u,u) - \mathbf{T}(u,u,u)$ and $\varepsilon_{2,T}(u) = \hat{\mathbf{T}}(I,u,u) - \mathbf{T}(I,u,u)$. If $B = \Omega(\log(1/\delta))$ then with probability $\ge 1 - \delta$ the following error bounds hold:
$$\varepsilon_{1,T}(u) = O(\|\mathbf{T}\|_F/\sqrt{b}); \qquad [\varepsilon_{2,T}(u)]_i = O(\|\mathbf{T}\|_F/\sqrt{b}), \ \forall i \in \{1, \cdots, n\}. \qquad (6)$$
In addition, for any fixed $w \in \mathbb{R}^n$, $\|w\|_2 = 1$, with probability $\ge 1 - \delta$ we have
$$\langle w, \varepsilon_{2,T}(u) \rangle^2 = O(\|\mathbf{T}\|_F^2/b). \qquad (7)$$
4.2 Analysis of the fast tensor power method
We present a theorem analyzing the robust tensor power method with tensor sketch approximations. A more detailed theorem statement along with its proof can be found in Appendix E.3.
Theorem 2. Suppose $\tilde{\mathbf{T}} = \mathbf{T} + \mathbf{E} \in \mathbb{R}^{n \times n \times n}$ where $\mathbf{T} = \sum_{i=1}^{k} \lambda_i v_i^{\otimes 3}$ with an orthonormal basis $\{v_i\}_{i=1}^{k}$, $\lambda_1 > \cdots > \lambda_k > 0$ and $\|\mathbf{E}\| = \epsilon$. Let $\{(\hat{\lambda}_i, \hat{v}_i)\}_{i=1}^{k}$ be the eigenvalue/eigenvector pairs obtained by Algorithm 1. Suppose $\epsilon = O(1/(\lambda_1 n))$, $T = \Omega(\log(n/\delta) + \log(1/\epsilon)\max_i \lambda_i/(\lambda_i - \lambda_{i-1}))$ and $L$ grows linearly with $k$. Assume the randomness of the tensor sketch is independent among tensor product evaluations. If $B = \Omega(\log(n/\delta))$ and $b$ satisfies
$$b = \Omega\!\left(\max\left(\frac{\epsilon^{-2}\,\|\mathbf{T}\|_F^2}{\Delta(\lambda)^2},\ \frac{\epsilon^{-4}\, n^2\, \|\mathbf{T}\|_F^2}{r(\lambda)^2\, \lambda_1^2}\right)\right), \qquad (8)$$
where $\Delta(\lambda) = \min_i(\lambda_i - \lambda_{i-1})$ and $r(\lambda) = \max_{i,\,j>i}(\lambda_i/\lambda_j)$, then with probability $\ge 1 - \delta$ there exists a permutation $\pi$ over $[k]$ such that
$$\|v_{\pi(i)} - \hat{v}_i\|_2 \le \epsilon, \quad |\lambda_{\pi(i)} - \hat{\lambda}_i| \le \lambda_i/2, \quad \forall i \in \{1, \cdots, k\}, \qquad (9)$$
and $\|\mathbf{T} - \sum_{i=1}^{k} \hat{\lambda}_i \hat{v}_i^{\otimes 3}\| \le c\epsilon$ for some constant $c$.

Table 3: Squared residual norm on the top 10 recovered eigenvectors of 1000-dimensional tensors ($\sigma = .01$) and running time (excluding I/O and sketch building time) for plain (exact) and sketched robust tensor power methods. Two vectors are considered mismatched (wrong) if $\|v - \hat{v}\|_2^2 > 0.1$. An extended version is shown as Table 5 in Appendix A.

|           | Residual norm             | No. of wrong vectors | Running time (min.)        |
| log2(b):  | 12   13   14   15   16    | 12  13  14  15  16   | 12    13   14   15    16   |
| B = 20    | .40  .19  .10  .09  .08   | 8   6   3   0   0    | .85   1.6  3.5  7.4   16.6 |
| B = 30    | .26  .10  .09  .08  .07   | 7   5   2   0   0    | 1.3   2.4  5.3  11.3  24.6 |
| B = 40    | .17  .10  .08  .08  .07   | 7   4   0   0   0    | 1.8   3.3  7.3  15.2  33.0 |
| Exact     | .07                       | 0                    | 293.5                      |

Table 4: Negative log-likelihood and running time (min) on the large Wikipedia dataset for 200 and 300 topics.

|          | k = 200                    | k = 300                    |
|          | like.  time  log2 b  iters | like.  time  log2 b  iters |
| Spectral | 7.49   34    12      -     | 7.39   56    13      -     |
| Gibbs    | 6.85   561   -       30    | 6.38   818   -       30    |
| Hybrid   | 6.77   144   12      5     | 6.31   352   13      10    |
Theorem 1 shows that the sketch length $b$ can be set as $o(n^3)$ to provably and approximately decompose a 3rd-order tensor of dimension $n$. Theorem 1, together with the time complexity comparison in Table 2, shows that the sketching based fast tensor decomposition algorithm has better computational complexity than the brute-force implementation. One potential drawback of our analysis is the assumption that sketches are independently built for each tensor product (contraction) evaluation. This is an artifact of our analysis and we conjecture that it can be removed by incorporating recent developments in the differentially private adaptive query framework [9].
5 Experiments
We demonstrate the effectiveness and efficiency of our proposed sketch based tensor power method
on both synthetic tensors and real-world topic modeling problems. Experimental results involving
the fast ALS method are presented in Appendix C.3. All methods are implemented in C++ and
tested on a single machine with 8 Intel X5550@2.67Ghz CPUs and 32GB memory. For synthetic
tensor decomposition we use only a single thread; for fast spectral LDA 8 to 16 threads are used.
5.1 Synthetic tensors
In Table 5 we compare our proposed algorithms with exact decomposition methods on synthetic tensors. Let $n = 1000$ be the dimension of the input tensor. We first generate a random orthonormal basis $\{v_i\}_{i=1}^{n}$ and then set the input tensor $\mathbf{T}$ as $\mathbf{T} = \mathrm{normalize}(\sum_{i=1}^{n} \lambda_i v_i^{\otimes 3}) + \mathbf{E}$, where the eigenvalues $\lambda_i$ satisfy $\lambda_i = 1/i$. The normalization step makes $\|\mathbf{T}\|_F^2 = 1$ before imposing noise. The Gaussian noise tensor $\mathbf{E}$ is symmetric with $E_{ijk} \sim \mathcal{N}(0, \sigma/n^{1.5})$ for $i \le j \le k$ and noise-to-signal level $\sigma$. Due to time constraints, we only compare the recovery error and running time on the top 10 recovered eigenvectors of the full-rank input tensor $\mathbf{T}$. Both $L$ and $T$ are set to 30. Table 3 shows that our proposed algorithms achieve reasonable approximation error within a few minutes, which is much faster than exact methods. A complete version (Table 5) is deferred to Appendix A.
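For reference, an input tensor of this form can be generated as follows. We use a small $n$ by default since a dense 1000-dimensional tensor already occupies 8 GB, we read the second parameter of $\mathcal{N}(0, \sigma/n^{1.5})$ as a standard deviation (so that $\|\mathbf{E}\|_F \approx \sigma$), and we symmetrize the noise by averaging over permutations, which differs slightly from drawing entries only for $i \le j \le k$ as in the text:

```python
import numpy as np

def synthetic_tensor(n=100, sigma=0.01, seed=0):
    """T = normalize(sum_i (1/i) * v_i^(x3)) + symmetric Gaussian noise."""
    rng = np.random.default_rng(seed)
    V = np.linalg.qr(rng.standard_normal((n, n)))[0]     # random orthonormal basis
    lam = 1.0 / np.arange(1, n + 1)
    T = np.einsum('i,ai,bi,ci->abc', lam, V, V, V)
    T /= np.linalg.norm(T)                               # ||T||_F = 1 before noise
    E = rng.standard_normal((n, n, n)) * sigma / n ** 1.5
    E = sum(E.transpose(p) for p in [(0, 1, 2), (0, 2, 1), (1, 0, 2),
                                     (1, 2, 0), (2, 0, 1), (2, 1, 0)]) / 6
    return T + E
```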
5.2 Topic modeling
We implement a fast spectral inference algorithm for Latent Dirichlet Allocation (LDA [3]) by combining tensor sketching with an existing whitening technique for dimensionality reduction. Implementation details are provided in Appendix D.

[Figure 1: Left: negative log-likelihood vs. log hash length for the fast and exact tensor power methods on the Wikipedia dataset (curves for k = 50, 100, 200, with exact baselines for each k; collapsed Gibbs sampling with 100 iterations, 145 mins, shown for reference). Right: negative log-likelihood for collapsed Gibbs sampling, fast LDA, and Gibbs sampling using fast LDA as initialization.]

We compare our proposed fast spectral LDA algorithm
with baseline spectral methods and collapsed Gibbs sampling (using the GibbsLDA++ [25] implementation) on two real-world datasets: Wikipedia and Enron. Dataset details are presented in Appendix A. Only the most frequent $V$ words are kept, and the vocabulary size $V$ is set to 10000. For the robust tensor power method the parameters are set to $L = 50$ and $T = 30$. For ALS we iterate until convergence, or until a maximum number of 1000 iterations is reached. $\alpha_0$ is set to 1.0 and $B$ is set to 30.
Obtained topic models $\mathbf{\Phi} \in \mathbb{R}^{V \times K}$ are evaluated on a held-out dataset consisting of 1000 documents randomly picked out from the training datasets. For each testing document $d$, we fit a topic mixing vector $\hat{h}_d \in \mathbb{R}^K$ by solving the following optimization problem: $\hat{h}_d = \mathrm{argmin}_{\|h\|_1 = 1,\, h \ge 0} \|w_d - \mathbf{\Phi}h\|_2$, where $w_d$ is the empirical word distribution of document $d$. The per-document log-likelihood is then defined as $L_d = \frac{1}{n_d}\sum_{i=1}^{n_d} \ln p(w_{di})$, where $p(w_{di}) = \sum_{k=1}^{K} \hat{\Phi}_{w_{di},k}\, \hat{h}_{d,k}$. Finally, the average $L_d$ over all testing documents is reported.
Figure 1 left shows the held-out negative log-likelihood for fast spectral LDA under different hash lengths $b$. We can see that as $b$ increases, the performance approaches that of the exact tensor power method because the sketching approximation becomes more accurate. On the other hand, Table 6 shows that fast spectral LDA runs much faster than exact tensor decomposition methods while achieving comparable performance on both datasets.
Figure 1 right compares the convergence of collapsed Gibbs sampling with different numbers of iterations and fast spectral LDA with different hash lengths on the Wikipedia dataset. For collapsed Gibbs sampling, we set $\alpha = 50/K$ and $\beta = 0.1$ following [11]. As shown in the figure, fast spectral LDA achieves comparable held-out likelihood while running faster than collapsed Gibbs sampling. We further take the dictionary $\mathbf{\Phi}$ output by fast spectral LDA and use it as the initialization for collapsed Gibbs sampling (the word topic assignments $z$ are obtained by 5-iteration Gibbs sampling, with the dictionary $\mathbf{\Phi}$ fixed). The resulting Gibbs sampler converges much faster: with only 3 iterations it already performs much better than a randomly initialized Gibbs sampler run for 100 iterations, which takes 10x more running time.
We also report the performance of fast spectral LDA and collapsed Gibbs sampling on a larger dataset in Table 4. The dataset was built by crawling 1,085,768 random Wikipedia pages, and a held-out evaluation set was built by randomly picking out 1000 documents from the dataset. The number of topics $k$ is set to 200 or 300, and after getting the topic dictionary $\mathbf{\Phi}$ from fast spectral LDA we use 2-iteration Gibbs sampling to obtain the word topic assignments $z$. Table 4 shows that the hybrid method (i.e., collapsed Gibbs sampling initialized by spectral LDA) achieves the best likelihood performance in a much shorter time, compared to a randomly initialized Gibbs sampler.
6 Conclusion
In this work we proposed a sketching based approach to efficiently compute tensor CP decomposition with provable guarantees. We apply our proposed algorithm on learning latent topics of
unlabeled document collections and achieve significant speed-up compared to vanilla spectral and
collapsed Gibbs sampling methods. Some interesting future directions include further improving
the sample complexity analysis and applying the framework to a broader class of graphical models.
Acknowledgement: Anima Anandkumar is supported in part by the Microsoft Faculty Fellowship
and the Sloan Foundation. Alex Smola is supported in part by a Google Faculty Research Grant.
References
[1] A. Anandkumar, R. Ge, D. Hsu, S. Kakade, and M. Telgarsky. Tensor decompositions for learning latent variable models. Journal of Machine Learning Research, 15:2773-2832, 2014.
[2] S. Bhojanapalli and S. Sanghavi. A new sampling technique for tensors. arXiv:1502.05023, 2015.
[3] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[4] A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E. R. Hruschka Jr, and T. M. Mitchell. Toward an architecture for never-ending language learning. In AAAI, 2010.
[5] J. D. Carroll and J.-J. Chang. Analysis of individual differences in multidimensional scaling via an n-way generalization of "Eckart-Young" decomposition. Psychometrika, 35(3):283-319, 1970.
[6] A. Chaganty and P. Liang. Estimating latent-variable graphical models using moments and likelihoods. In ICML, 2014.
[7] M. Charikar, K. Chen, and M. Farach-Colton. Finding frequent items in data streams. Theoretical Computer Science, 312(1):3-15, 2004.
[8] J. H. Choi and S. Vishwanathan. DFacTo: Distributed factorization of tensors. In NIPS, 2014.
[9] C. Dwork, V. Feldman, M. Hardt, T. Pitassi, O. Reingold, and A. Roth. Preserving statistical validity in adaptive data analysis. In STOC, 2015.
[10] A. S. Field and D. Graupe. Topographic component (parallel factor) analysis of multichannel evoked potentials: practical issues in trilinear spatiotemporal decomposition. Brain Topography, 3(4):407-423, 1991.
[11] T. L. Griffiths and M. Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences, 101(suppl 1):5228-5235, 2004.
[12] R. A. Harshman. Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multi-modal factor analysis. UCLA Working Papers in Phonetics, 16:1-84, 1970.
[13] F. Huang, S. Matusevych, A. Anandkumar, N. Karampatziakis, and P. Mineiro. Distributed latent dirichlet allocation via tensor factorization. In NIPS Optimization Workshop, 2014.
[14] F. Huang, U. N. Niranjan, M. U. Hakeem, and A. Anandkumar. Fast detection of overlapping communities via online tensor methods. arXiv:1309.0787, 2013.
[15] A. Jain. Fundamentals of digital image processing, 1989.
[16] U. Kang, E. Papalexakis, A. Harpale, and C. Faloutsos. GigaTensor: Scaling tensor analysis up by 100 times - algorithms and discoveries. In KDD, 2012.
[17] B. Klimt and Y. Yang. Introducing the Enron corpus. In CEAS, 2004.
[18] T. Kolda and B. Bader. The TOPHITS model for higher-order web link analysis. In Workshop on Link Analysis, Counterterrorism and Security, 2006.
[19] T. Kolda and B. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455-500, 2009.
[20] T. G. Kolda and J. Sun. Scalable tensor decompositions for multi-aspect data mining. In ICDM, 2008.
[21] M. Mørup, L. K. Hansen, C. S. Herrmann, J. Parnas, and S. M. Arnfred. Parallel factor analysis as an exploratory tool for wavelet transformed event-related EEG. NeuroImage, 29(3):938-947, 2006.
[22] R. Pagh. Compressed matrix multiplication. In ITCS, 2012.
[23] N. Pham and R. Pagh. Fast and scalable polynomial kernels via explicit feature maps. In KDD, 2013.
[24] A.-H. Phan, P. Tichavsky, and A. Cichocki. Fast alternating LS algorithms for high order CANDECOMP/PARAFAC tensor factorizations. IEEE Transactions on Signal Processing, 61(19):4834-4846, 2013.
[25] X.-H. Phan and C.-T. Nguyen. GibbsLDA++: A C/C++ implementation of latent dirichlet allocation (LDA), 2007.
[26] M. Pătraşcu and M. Thorup. The power of simple tabulation hashing. Journal of the ACM, 59(3):14, 2012.
[27] C. Tsourakakis. MACH: Fast randomized tensor decompositions. In SDM, 2010.
[28] H.-Y. Tung and A. Smola. Spectral methods for Indian buffet process inference. In NIPS, 2014.
[29] C. Wang, X. Liu, Y. Song, and J. Han. Scalable moment-based inference for latent dirichlet allocation. In ECML/PKDD, 2014.
[30] Y. Wang and J. Zhu. Spectral methods for supervised topic models. In NIPS, 2014.
Teaching Machines to Read and Comprehend
Karl Moritz Hermann† Tomáš Kočiský†‡ Edward Grefenstette†
Lasse Espeholt† Will Kay† Mustafa Suleyman† Phil Blunsom†‡
† Google DeepMind ‡ University of Oxford
{kmh,tkocisky,etg,lespeholt,wkay,mustafasul,pblunsom}@google.com
Abstract
Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions
posed on the contents of documents that they have seen, but until now large scale
training and test datasets have been missing for this type of evaluation. In this
work we define a new methodology that resolves this bottleneck and provides
large scale supervised reading comprehension data. This allows us to develop a
class of attention based deep neural networks that learn to read real documents and
answer complex questions with minimal prior knowledge of language structure.
1 Introduction
Progress on the path from shallow bag-of-words information retrieval algorithms to machines capable of reading and understanding documents has been slow. Traditional approaches to machine
reading and comprehension have been based on either hand engineered grammars [1], or information
extraction methods of detecting predicate argument triples that can later be queried as a relational
database [2]. Supervised machine learning approaches have largely been absent from this space due
to both the lack of large scale training datasets, and the difficulty in structuring statistical models
flexible enough to learn to exploit document structure.
While obtaining supervised natural language reading comprehension data has proved difficult, some
researchers have explored generating synthetic narratives and queries [3, 4]. Such approaches allow
the generation of almost unlimited amounts of supervised data and enable researchers to isolate the
performance of their algorithms on individual simulated phenomena. Work on such data has shown
that neural network based models hold promise for modelling reading comprehension, something
that we will build upon here. Historically, however, many similar approaches in Computational
Linguistics have failed to manage the transition from synthetic data to real environments, as such
closed worlds inevitably fail to capture the complexity, richness, and noise of natural language [5].
In this work we seek to directly address the lack of real natural language training data by introducing a novel approach to building a supervised reading comprehension data set. We observe that
summary and paraphrase sentences, with their associated documents, can be readily converted to
context?query?answer triples using simple entity detection and anonymisation algorithms. Using
this approach we have collected two new corpora of roughly a million news stories with associated
queries from the CNN and Daily Mail websites.
We demonstrate the efficacy of our new corpora by building novel deep learning models for reading
comprehension. These models draw on recent developments for incorporating attention mechanisms
into recurrent neural network architectures [6, 7, 8, 4]. This allows a model to focus on the aspects of
a document that it believes will help it answer a question, and also allows us to visualise its inference
process. We compare these neural models to a range of baselines and heuristic benchmarks based
upon a traditional frame semantic analysis provided by a state-of-the-art natural language processing
(NLP) pipeline. Our results indicate that the neural models achieve a higher accuracy, and do so without any specific encoding of the document or query structure.

Table 1: Corpus statistics. Articles were collected starting in April 2007 for CNN and June 2010 for the Daily Mail, both until the end of April 2015. Validation data is from March, test data from April 2015. Articles of over 2000 tokens and queries whose answer entity did not appear in the context were filtered out.

|                | CNN     |       |       | Daily Mail |        |        |
|                | train   | valid | test  | train      | valid  | test   |
| # months       | 95      | 1     | 1     | 56         | 1      | 1      |
| # documents    | 90,266  | 1,220 | 1,093 | 196,961    | 12,148 | 10,397 |
| # queries      | 380,298 | 3,924 | 3,198 | 879,450    | 64,835 | 53,182 |
| Max # entities | 527     | 187   | 396   | 371        | 232    | 245    |
| Avg # entities | 26.4    | 26.5  | 24.5  | 26.5       | 25.5   | 26.0   |
| Avg # tokens   | 762     | 763   | 716   | 813        | 774    | 780    |
| Vocab size     | 118,497 |       |       | 208,045    |        |        |

Table 2: Percentage of time that the correct answer is contained in the top N most frequent entities in a given document.

| Top N | CNN (cumulative %) | Daily Mail (cumulative %) |
| 1     | 30.5               | 25.6                      |
| 2     | 47.7               | 42.4                      |
| 3     | 58.1               | 53.7                      |
| 5     | 70.6               | 68.1                      |
| 10    | 85.1               | 85.5                      |
2 Supervised training data for reading comprehension
The reading comprehension task naturally lends itself to a formulation as a supervised learning
problem. Specifically we seek to estimate the conditional probability p(a|c, q), where c is a context
document, q a query relating to that document, and a the answer to that query. For a focused
evaluation we wish to be able to exclude additional information, such as world knowledge gained
from co-occurrence statistics, in order to test a model?s core capability to detect and understand the
linguistic relationships between entities in the context document.
Such an approach requires a large training corpus of document-query-answer triples, and until now
such corpora have been limited to hundreds of examples and thus mostly of use only for testing [9].
This limitation has meant that most work in this area has taken the form of unsupervised approaches
which use templates or syntactic/semantic analysers to extract relation tuples from the document to
form a knowledge graph that can be queried.
Here we propose a methodology for creating real-world, large scale supervised training data for
learning reading comprehension models. Inspired by work in summarisation [10, 11], we create two
machine reading corpora by exploiting online newspaper articles and their matching summaries. We
have collected 93k articles from the CNN¹ and 220k articles from the Daily Mail² websites. Both
news providers supplement their articles with a number of bullet points, summarising aspects of the
information contained in the article. Of key importance is that these summary points are abstractive
and do not simply copy sentences from the documents. We construct a corpus of document-query-
answer triples by turning these bullet points into Cloze [12] style questions by replacing one entity
at a time with a placeholder. This results in a combined corpus of roughly 1M data points (Table 1).
Code to replicate our datasets - and to apply this method to other sources - is available online³.
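Schematically, the Cloze conversion looks as follows. The entity list would come from an upstream entity detector, and the strings here are purely illustrative; the released code at the URL above is the authoritative pipeline:

```python
def bullet_to_cloze(bullet, entity_mentions):
    """One Cloze query per detected entity: that mention becomes the placeholder X."""
    for mention in entity_mentions:
        if mention in bullet:
            yield bullet.replace(mention, "X"), mention

bullet = "Producer Oisin Tymon will not press charges against Jeremy Clarkson"
for query, answer in bullet_to_cloze(bullet, ["Oisin Tymon", "Jeremy Clarkson"]):
    print(query, "->", answer)
```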
2.1 Entity replacement and permutation
Note that the focus of this paper is to provide a corpus for evaluating a model?s ability to read
and comprehend a single document, not world knowledge or co-occurrence. To understand that
distinction consider for instance the following Cloze form queries (created from headlines in the
Daily Mail validation set): a) The hi-tech bra that helps you beat breast X; b) Could Saccharin help
beat X ?; c) Can fish oils help fight prostate X ? An ngram language model trained on the Daily Mail
would easily correctly predict that (X = cancer), regardless of the contents of the context document,
simply because this is a very frequently cured entity in the Daily Mail corpus.
¹ www.cnn.com
² www.dailymail.co.uk
³ http://www.github.com/deepmind/rc-data/
Original Version - Context: The BBC producer allegedly struck by Jeremy Clarkson will not press charges against the "Top Gear" host, his lawyer said Friday. Clarkson, who hosted one of the most-watched television shows in the world, was dropped by the BBC Wednesday after an internal investigation by the British broadcaster found he had subjected producer Oisin Tymon "to an unprovoked physical and verbal attack." . . .
Original Version - Query: Producer X will not press charges against Jeremy Clarkson, his lawyer says.
Original Version - Answer: Oisin Tymon

Anonymised Version - Context: the ent381 producer allegedly struck by ent212 will not press charges against the "ent153" host, his lawyer said friday. ent212, who hosted one of the most-watched television shows in the world, was dropped by the ent381 wednesday after an internal investigation by the ent180 broadcaster found he had subjected producer ent193 "to an unprovoked physical and verbal attack." . . .
Anonymised Version - Query: producer X will not press charges against ent212, his lawyer says.
Anonymised Version - Answer: ent193

Table 3: Original and anonymised version of a data point from the Daily Mail validation set. The anonymised entity markers are constantly permuted during training and testing.
To prevent such degenerate solutions and create a focused task we anonymise and randomise our corpora with the following procedure: a) use a coreference system to establish coreferents in each data point; b) replace all entities with abstract entity markers according to coreference; c) randomly permute these entity markers whenever a data point is loaded.
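A minimal sketch of steps (b) and (c), assuming the coreference chains from step (a) are given as sets of token indices (single-token mentions assumed here; the real corpus collapses multi-token mentions into one marker):

```python
import random

def anonymise(tokens, coref_chains):
    """Replace every entity mention with an abstract marker (entN),
    permuting the marker assignment each time the data point is loaded."""
    marker_ids = list(range(len(coref_chains)))
    random.shuffle(marker_ids)          # step (c): fresh permutation per load
    out = list(tokens)
    for marker_id, chain in zip(marker_ids, coref_chains):
        for idx in chain:               # step (b): same marker across the chain
            out[idx] = f"ent{marker_id}"
    return out

tokens = ["Clarkson", "was", "dropped", "by", "the", "BBC", ";", "Clarkson", "objected"]
chains = [{0, 7}, {5}]  # chain 0: both 'Clarkson' mentions; chain 1: 'BBC'
print(anonymise(tokens, chains))
```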
Compare the original and anonymised version of the example in Table 3. Clearly a human reader can
answer both queries correctly. However in the anonymised setup the context document is required
for answering the query, whereas the original version could also be answered by someone with the
requisite background knowledge. Therefore, following this procedure, the only remaining strategy
for answering questions is to do so by exploiting the context presented with each question. Thus
performance on our two corpora truly measures reading comprehension capability. Naturally a
production system would benefit from using all available information sources, such as clues through
language and co-occurrence statistics.
Table 2 gives an indication of the difficulty of the task, showing how frequent the correct answer is
contained in the top N entity markers in a given document. Note that our models don't distinguish
between entity markers and regular words. This makes the task harder and the models more general.
3 Models
So far we have motivated the need for better datasets and tasks to evaluate the capabilities of machine
reading models. We proceed by describing a number of baselines, benchmarks and new models to
evaluate against this paradigm. We define two simple baselines: the majority baseline (maximum frequency) picks the entity most frequently observed in the context document, whereas the exclusive majority (exclusive frequency) chooses the entity most frequently observed in the context but not observed in the query. The idea behind this exclusion is that the placeholder is unlikely to be mentioned twice in a single Cloze form query.
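Both baselines reduce to frequency counts over entity markers; a minimal sketch (function names are ours):

```python
from collections import Counter

def majority(context_entities):
    # maximum frequency: most frequent entity marker in the context
    return Counter(context_entities).most_common(1)[0][0]

def exclusive_majority(context_entities, query_entities):
    # exclusive frequency: most frequent context entity absent from the query
    query = set(query_entities)
    counts = Counter(e for e in context_entities if e not in query)
    return counts.most_common(1)[0][0]

ctx = ["ent1", "ent2", "ent1", "ent3", "ent1", "ent2"]
print(majority(ctx))                        # ent1
print(exclusive_majority(ctx, ["ent1"]))    # ent2
```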
3.1 Symbolic Matching Models
Traditionally, a pipeline of NLP models has been used for attempting question answering, that is
models that make heavy use of linguistic annotation, structured world knowledge and semantic
parsing and similar NLP pipeline outputs. Building on these approaches, we define a number of
NLP-centric models for our machine reading task.
Frame-Semantic Parsing Frame-semantic parsing attempts to identify predicates and their arguments, allowing models access to information about "who did what to whom". Naturally this kind of annotation lends itself to being exploited for question answering. We develop a benchmark that makes use of frame-semantic annotations which we obtained by parsing our corpora with a state-of-the-art frame-semantic parser [13, 14]. As the parser makes extensive use of linguistic information we run these benchmarks on the unanonymised version of our corpora. There is no significant advantage in this as the frame-semantic approach used here does not possess the capability to generalise through a language model beyond exploiting one during the parsing phase. Thus, the key objective
through a language model beyond exploiting one during the parsing phase. Thus, the key objective
of evaluating machine comprehension abilities is maintained. Extracting entity-predicate triples, denoted as (e1, V, e2), from both the query q and context document d, we attempt to resolve queries using a number of rules with an increasing recall/precision trade-off as follows (Table 4).
Strategy              Pattern in q       Pattern in d       Example (Cloze / Context)
1 Exact match         (p, V, y)          (x, V, y)          X loves Suse / Kim loves Suse
2 be.01.V match       (p, be.01.V, y)    (x, be.01.V, y)    X is president / Mike is president
3 Correct frame       (p, V, y)          (x, V, z)          X won Oscar / Tom won Academy Award
4 Permuted frame      (p, V, y)          (y, V, x)          X met Suse / Suse met Tom
5 Matching entity     (p, V, y)          (x, Z, y)          X likes candy / Tom loves candy
6 Back-off strategy   Pick the most frequent entity from the context that doesn't appear in the query
Table 4: Resolution strategies using PropBank triples. x denotes the entity proposed as answer, V is
a fully qualified PropBank frame (e.g. give.01.V). Strategies are ordered by precedence and answers
determined accordingly. This heuristic algorithm was iteratively tuned on the validation data set.
For reasons of clarity, we pretend that all PropBank triples are of the form (e1 , V, e2 ). In practice,
we take the argument numberings of the parser into account and only compare like with like, except
in cases such as the permuted frame rule, where ordering is relaxed. In the case of multiple possible
answers from a single rule, we randomly choose one.
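The sketch below approximates the application of strategies 1, 3 and 4 over (e1, V, e2) triples, under the simplifying assumption that the placeholder always occupies the first argument slot; strategies 2 and 5 and the argument-numbering checks are omitted:

```python
def resolve(query_triples, doc_triples):
    """Ordered rule application over PropBank-style triples.
    Returns a candidate answer entity, or None for the back-off."""
    for _, V, y in query_triples:          # placeholder assumed in slot 1
        for x, V2, y2 in doc_triples:
            if V2 == V and y2 == y:        # strategy 1: exact match
                return x
    for _, V, _y in query_triples:
        for x, V2, _z in doc_triples:
            if V2 == V:                    # strategy 3: correct frame
                return x
    for _, V, y in query_triples:
        for y2, V2, x in doc_triples:
            if V2 == V and y2 == y:        # strategy 4: permuted frame
                return x
    return None                            # strategy 6 handled elsewhere

q = [("X", "win.01.V", "Oscar")]
d = [("Tom", "win.01.V", "Academy Award")]
print(resolve(q, d))                       # Tom (via strategy 3)
```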
Word Distance Benchmark We consider another baseline that relies on word distance measurements. Here, we align the placeholder of the Cloze form question with each possible entity in the context document and calculate a distance measure between the question and the context around the aligned entity. This score is calculated by summing the distances of every word in q to their nearest aligned word in d, where alignment is defined by matching words either directly or as aligned by the coreference system. We tune the maximum penalty per word (m = 8) on the validation data.
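One plausible reading of this score, using direct word matching only and ignoring the coreference-based alignment, is sketched below (names and the exact distance convention are our assumptions):

```python
def word_distance_score(query_tokens, doc_tokens, max_penalty=8):
    """Sum, over query words, the distance from each word to its nearest
    direct match in the document, capped at max_penalty. Lower is better;
    in practice this is computed per candidate placeholder alignment."""
    positions = {}
    for i, tok in enumerate(doc_tokens):
        positions.setdefault(tok, []).append(i)
    score = 0
    for j, tok in enumerate(query_tokens):
        if tok in positions:
            score += min(min(abs(i - j) for i in positions[tok]), max_penalty)
        else:
            score += max_penalty
    return score

q = "X was dropped by the broadcaster".split()
d = "the host was dropped by the BBC on Wednesday".split()
print(word_distance_score(q, d))
```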
3.2 Neural Network Models
Neural networks have successfully been applied to a range of tasks in NLP. This includes classification tasks such as sentiment analysis [15] or POS tagging [16], as well as generative problems such as language modelling or machine translation [17]. We propose three neural models for estimating the probability of word type a from document d answering query q:

p(a|d, q) \propto \exp(W(a) \, g(d, q)),    s.t. a \in V,
where V is the vocabulary4, and W(a) indexes row a of weight matrix W and through a slight
abuse of notation word types double as indexes. Note that we do not privilege entities or variables,
the model must learn to differentiate these in the input sequence. The function g(d, q) returns a
vector embedding of a document and query pair.
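At the shape level this is simply a softmax over vocabulary rows; a minimal sketch with random parameters and hypothetical dimensions:

```python
import numpy as np

def answer_distribution(g, W):
    """p(a|d,q) proportional to exp(W(a) . g(d,q)): a softmax over the
    vocabulary given the joint embedding g and weight matrix W
    (one row per word type)."""
    logits = W @ g                      # one score per word type
    logits -= logits.max()              # numerical stability
    p = np.exp(logits)
    return p / p.sum()

rng = np.random.default_rng(0)
g = rng.normal(size=64)                 # embedding from g(d, q)
W = rng.normal(size=(1000, 64))         # |V| x embedding_dim
p = answer_distribution(g, W)
print(p.shape, round(float(p.sum()), 6))   # (1000,) 1.0
```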
The Deep LSTM Reader Long short-term memory (LSTM, [18]) networks have recently seen
considerable success in tasks such as machine translation and language modelling [17]. When used
for translation, Deep LSTMs [19] have shown a remarkable ability to embed long sequences into
a vector representation which contains enough information to generate a full translation in another
language. Our first neural model for reading comprehension tests the ability of Deep LSTM encoders
to handle significantly longer sequences. We feed our documents one word at a time into a Deep
LSTM encoder; after a delimiter we then also feed the query into the encoder. Alternatively we also
experiment with processing the query then the document. The result is that this model processes
each document query pair as a single long sequence. Given the embedded document and query the
network predicts which token in the document answers the query.
4 The vocabulary includes all the word types in the documents, questions, the entity markers, and the question unknown entity marker.
Figure 1: Document and query embedding models. (a) Attentive Reader. (b) Impatient Reader. (c) A two layer Deep LSTM Reader with the question encoded before the document.
We employ a Deep LSTM cell with skip connections from each input x(t) to every hidden layer,
and from every hidden layer to the output y(t):
x'(t, k) = x(t) || y'(t, k-1),        y(t) = y'(t, 1) || ... || y'(t, K)
i(t, k) = \sigma(W_{kxi} x'(t, k) + W_{khi} h(t-1, k) + W_{kci} c(t-1, k) + b_{ki})
f(t, k) = \sigma(W_{kxf} x(t) + W_{khf} h(t-1, k) + W_{kcf} c(t-1, k) + b_{kf})
c(t, k) = f(t, k) c(t-1, k) + i(t, k) tanh(W_{kxc} x'(t, k) + W_{khc} h(t-1, k) + b_{kc})
o(t, k) = \sigma(W_{kxo} x'(t, k) + W_{kho} h(t-1, k) + W_{kco} c(t, k) + b_{ko})
h(t, k) = o(t, k) tanh(c(t, k))
y'(t, k) = W_{ky} h(t, k) + b_{ky}
where || indicates vector concatenation, h(t, k) is the hidden state for layer k at time t, and i, f, o are the input, forget, and output gates respectively. Thus our Deep LSTM Reader is defined by g^{LSTM}(d, q) = y(|d| + |q|) with input x(t) the concatenation of d and q separated by the delimiter |||.
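A minimal NumPy sketch of one step of this cell follows; relative to the equations above it drops the peephole terms (W_kci, W_kcf, W_kco) and lets every gate read the concatenated input x'(t, k), both simplifications for brevity:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def deep_lstm_step(x, h, c, layers):
    """One time step of a K-layer LSTM with skip connections:
    layer k sees x(t) || y'(t, k-1) and emits y'(t, k)."""
    ys, y_prev = [], np.zeros(0)
    for k, p in enumerate(layers):
        xk = np.concatenate([x, y_prev])           # x'(t, k)
        i = sigmoid(p['Wxi'] @ xk + p['Whi'] @ h[k] + p['bi'])
        f = sigmoid(p['Wxf'] @ xk + p['Whf'] @ h[k] + p['bf'])
        c[k] = f * c[k] + i * np.tanh(p['Wxc'] @ xk + p['Whc'] @ h[k] + p['bc'])
        o = sigmoid(p['Wxo'] @ xk + p['Who'] @ h[k] + p['bo'])
        h[k] = o * np.tanh(c[k])
        y_prev = p['Wy'] @ h[k] + p['by']          # y'(t, k)
        ys.append(y_prev)
    return np.concatenate(ys), h, c               # y(t) = y'(t,1)||...||y'(t,K)

rng = np.random.default_rng(0)
x_dim, h_dim, y_dim, K = 10, 16, 8, 2
layers, in_dim = [], x_dim
for _ in range(K):
    layers.append({n: rng.normal(scale=0.1, size=s) for n, s in [
        ('Wxi', (h_dim, in_dim)), ('Whi', (h_dim, h_dim)), ('bi', h_dim),
        ('Wxf', (h_dim, in_dim)), ('Whf', (h_dim, h_dim)), ('bf', h_dim),
        ('Wxc', (h_dim, in_dim)), ('Whc', (h_dim, h_dim)), ('bc', h_dim),
        ('Wxo', (h_dim, in_dim)), ('Who', (h_dim, h_dim)), ('bo', h_dim),
        ('Wy', (y_dim, h_dim)), ('by', y_dim)]})
    in_dim = x_dim + y_dim                         # next layer also sees y'(t, k-1)
h = [np.zeros(h_dim) for _ in range(K)]
c = [np.zeros(h_dim) for _ in range(K)]
y, h, c = deep_lstm_step(rng.normal(size=x_dim), h, c, layers)
print(y.shape)  # (16,) = K * y_dim
```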
The Attentive Reader The Deep LSTM Reader must propagate dependencies over long distances in order to connect queries to their answers. The fixed width hidden vector forms a bottleneck for this information flow that we propose to circumvent using an attention mechanism inspired by recent results in translation and image recognition [6, 7]. This attention model first encodes the document and the query using separate bidirectional single layer LSTMs [19].

We denote the outputs of the forward and backward LSTMs as \overrightarrow{y}(t) and \overleftarrow{y}(t) respectively. The encoding u of a query of length |q| is formed by the concatenation of the final forward and backward outputs, u = \overrightarrow{y}_q(|q|) || \overleftarrow{y}_q(1).
For the document the composite output for each token at position t is y_d(t) = \overrightarrow{y}_d(t) || \overleftarrow{y}_d(t). The representation r of the document d is formed by a weighted sum of these output vectors. These weights are interpreted as the degree to which the network attends to a particular token in the document when answering the query:
m(t) = tanh(W_{ym} y_d(t) + W_{um} u),
s(t) \propto \exp(w_{ms}^T m(t)),
r = y_d s,
where we are interpreting y_d as a matrix with each column being the composite representation y_d(t) of document token t. The variable s(t) is the normalised attention at token t. Given this attention score the embedding of the document r is computed as the weighted sum of the token embeddings. The model is completed with the definition of the joint document and query embedding via a nonlinear combination:

g^{AR}(d, q) = tanh(W_{rg} r + W_{ug} u).
The Attentive Reader can be viewed as a generalisation of the application of Memory Networks to
question answering [3]. That model employs an attention mechanism at the sentence level where
each sentence is represented by a bag of embeddings. The Attentive Reader employs a finer grained
token level attention mechanism where the tokens are embedded given their entire future and past
context in the input document.
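A compact NumPy sketch of this readout, with arbitrary dimensions and weight names mirroring the equations:

```python
import numpy as np

def attentive_read(yd, u, Wym, Wum, wms, Wrg, Wug):
    """Token-level attention: m(t) = tanh(Wym yd(t) + Wum u),
    s proportional to exp(wms . m(t)), r = sum_t s(t) yd(t),
    g = tanh(Wrg r + Wug u)."""
    m = np.tanh(yd @ Wym.T + Wum @ u)          # (T, a) per-token match vectors
    e = wms @ m.T                              # (T,) unnormalised scores
    s = np.exp(e - e.max()); s /= s.sum()      # normalised attention
    r = yd.T @ s                               # attended document vector
    return np.tanh(Wrg @ r + Wug @ u), s

rng = np.random.default_rng(0)
T, d, a, o = 6, 8, 5, 4                        # tokens, enc dim, attn dim, out dim
yd, u = rng.normal(size=(T, d)), rng.normal(size=d)
g, s = attentive_read(yd, u,
                      rng.normal(size=(a, d)), rng.normal(size=(a, d)),
                      rng.normal(size=a), rng.normal(size=(o, d)),
                      rng.normal(size=(o, d)))
print(g.shape, s.round(2))
```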
The Impatient Reader The Attentive Reader is able to focus on the passages of a context document that are most likely to inform the answer to the query. We can go further by equipping the model with the ability to reread from the document as each query token is read. At each token i of the query q the model computes a document representation vector r(i) using the bidirectional embedding y_q(i) = \overrightarrow{y}_q(i) || \overleftarrow{y}_q(i):

m(i, t) = tanh(W_{dm} y_d(t) + W_{rm} r(i-1) + W_{qm} y_q(i)),    1 <= i <= |q|,
s(i, t) \propto \exp(w_{ms}^T m(i, t)),
r(0) = r_0,    r(i) = y_d^T s(i) + tanh(W_{rr} r(i-1)),    1 <= i <= |q|.
The result is an attention mechanism that allows the model to recurrently accumulate information from the document as it sees each query token, ultimately outputting a final joint document query representation for the answer prediction,

g^{IR}(d, q) = tanh(W_{rg} r(|q|) + W_{qg} u).
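In the same sketch style (zero initial r(0) assumed):

```python
import numpy as np

def impatient_read(yd, yq, W, u):
    """Rereading loop: after each query token i, recompute attention over
    the document and update the recurrent document representation r(i)."""
    r = np.zeros(yd.shape[1])                              # r(0) = r0 = 0 here
    for i in range(yq.shape[0]):
        m = np.tanh(yd @ W['dm'].T + W['rm'] @ r + W['qm'] @ yq[i])
        e = W['ms'] @ m.T
        s = np.exp(e - e.max()); s /= s.sum()              # s(i, .)
        r = yd.T @ s + np.tanh(W['rr'] @ r)                # r(i)
    return np.tanh(W['rg'] @ r + W['qg'] @ u)              # g^IR(d, q)

rng = np.random.default_rng(0)
T, Q, d, dq, a, o = 7, 4, 8, 6, 5, 3
yd, yq, u = rng.normal(size=(T, d)), rng.normal(size=(Q, dq)), rng.normal(size=dq)
W = {'dm': rng.normal(size=(a, d)), 'rm': rng.normal(size=(a, d)),
     'qm': rng.normal(size=(a, dq)), 'ms': rng.normal(size=a),
     'rr': rng.normal(size=(d, d)), 'rg': rng.normal(size=(o, d)),
     'qg': rng.normal(size=(o, dq))}
print(impatient_read(yd, yq, W, u).shape)  # (3,)
```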
4 Empirical Evaluation
Having described a number of models in the previous section, we next evaluate these models on our
reading comprehension corpora. Our hypothesis is that neural models should in principle be well
suited for this task. However, we argued that simple recurrent models such as the LSTM probably
have insufficient expressive power for solving tasks that require complex inference. We expect that
the attention-based models would therefore outperform the pure LSTM-based approaches.
Considering the second dimension of our investigation, the comparison of traditional versus neural
approaches to NLP, we do not have a strong prior favouring one approach over the other. While numerous publications in the past few years have demonstrated neural models outperforming classical
methods, it remains unclear how much of that is a side-effect of the language modelling capabilities
intrinsic to any neural model for NLP. The entity anonymisation and permutation aspect of the task
presented here may end up levelling the playing field in that regard, favouring models capable of
dealing with syntax rather than just semantics.
With these considerations in mind, the experimental part of this paper is designed with a threefold aim. First, we want to establish the difficulty of our machine reading task by applying a wide
range of models to it. Second, we compare the performance of parse-based methods versus that of
neural models. Third, within the group of neural models examined, we want to determine what each
component contributes to the end performance; that is, we want to analyse the extent to which an
LSTM can solve this task, and to what extent various attention mechanisms impact performance.
All model hyperparameters were tuned on the respective validation sets of the two corpora.5 Our
experimental results are in Table 5, with the Attentive and Impatient Readers performing best across
both datasets.
5 For the Deep LSTM Reader, we consider hidden layer sizes [64, 128, 256], depths [1, 2, 4], initial learning rates [1E-3, 5E-4, 1E-4, 5E-5], batch sizes [16, 32] and dropout [0.0, 0.1, 0.2]. We evaluate two types of feeds. In the cqa setup we feed first the context document and subsequently the question into the encoder, while the qca model starts by feeding in the question followed by the context document. We report results on the best model (underlined hyperparameters, qca setup). For the attention models we consider hidden layer sizes [64, 128, 256], single layer, initial learning rates [1E-4, 5E-5, 2.5E-5, 1E-5], batch sizes [8, 16, 32] and dropout [0, 0.1, 0.2, 0.5]. For all models we used asynchronous RmsProp [20] with a momentum of 0.9 and a decay of 0.95. See Appendix A for more details of the experimental setup.
                         CNN              Daily Mail
                         valid    test    valid    test
Maximum frequency        30.5     33.2    25.6     25.5
Exclusive frequency      36.6     39.3    32.7     32.8
Frame-semantic model     36.3     40.2    35.5     35.5
Word distance model      50.5     50.9    56.4     55.5
Deep LSTM Reader         55.0     57.0    63.3     62.2
Uniform Reader           39.0     39.4    34.6     34.4
Attentive Reader         61.6     63.0    70.5     69.0
Impatient Reader         61.8     63.8    69.0     68.0

Table 5: Accuracy of all the models and benchmarks on the CNN and Daily Mail datasets. The Uniform Reader baseline sets all of the m(t) parameters to be equal.
Figure 2: Precision@Recall for the attention
models on the CNN validation data.
Frame-semantic benchmark While the one frame-semantic model proposed in this paper is
clearly a simplification of what could be achieved with annotations from an NLP pipeline, it does
highlight the difficulty of the task when approached from a symbolic NLP perspective.
Two issues stand out when analysing the results in detail. First, the frame-semantic pipeline has a
poor degree of coverage with many relations not being picked up by our PropBank parser as they
do not adhere to the default predicate-argument structure. This effect is exacerbated by the type
of language used in the highlights that form the basis of our datasets. The second issue is that
the frame-semantic approach does not trivially scale to situations where several sentences, and thus
frames, are required to answer a query. This was true for the majority of queries in the dataset.
Word distance benchmark More surprising perhaps is the relatively strong performance of the
word distance benchmark, particularly relative to the frame-semantic benchmark, which we had
expected to perform better. Here, again, the nature of the datasets used can explain aspects of this
result. Where the frame-semantic model suffered due to the language used in the highlights, the word
distance model benefited. Particularly in the case of the Daily Mail dataset, highlights frequently
have significant lexical overlap with passages in the accompanying article, which makes it easy for
the word distance benchmark. For instance the query "Tom Hanks is friends with X's manager, Scooter Brown" has the phrase "... turns out he is good friends with Scooter Brown, manager for Carly Rae Jepson" in the context. The word distance benchmark correctly aligns these two while
the frame-semantic approach fails to pickup the friendship or management relations when parsing
the query. We expect that on other types of machine reading data where questions rather than Cloze
queries are used this particular model would perform significantly worse.
Neural models Within the group of neural models explored here, the results paint a clear picture
with the Impatient and the Attentive Readers outperforming all other models. This is consistent with
our hypothesis that attention is a key ingredient for machine reading and question answering due to
the need to propagate information over long distances. The Deep LSTM Reader performs surprisingly well, once again demonstrating that this simple sequential architecture can do a reasonable
job of learning to abstract long sequences, even when they are up to two thousand tokens in length.
However this model does fail to match the performance of the attention based models, even though
these only use single layer LSTMs.6
The poor results of the Uniform Reader support our hypothesis of the significance of the attention
mechanism in the Attentive model?s performance as the only difference between these models is
that the attention variables are ignored in the Uniform Reader. The precision@recall statistics in
Figure 2 again highlight the strength of the attentive approach.
We can visualise the attention mechanism as a heatmap over a context document to gain further
insight into the models' performance. The highlighted words show which tokens in the document
were attended to by the model. In addition we must also take into account that the vectors at each
6 Memory constraints prevented us from experimenting with deeper Attentive Readers.
Figure 3: Attention heat maps from the Attentive Reader for two correctly answered validation set
queries (the correct answers are ent23 and ent63, respectively). Both examples require significant
lexical generalisation and co-reference resolution in order to be answered correctly by a given model.
token integrate long range contextual information via the bidirectional LSTM encoders. Figure 3
depicts heat maps for two queries that were correctly answered by the Attentive Reader.7 In both
cases confidently arriving at the correct answer requires the model to perform both significant lexical
generalisation, e.g. "killed" → "deceased", and co-reference or anaphora resolution, e.g. "ent119 was killed" → "he was identified." However it is also clear that the model is able to integrate these signals
with rough heuristic indicators such as the proximity of query words to the candidate answer.
5 Conclusion
The supervised paradigm for training machine reading and comprehension models provides a
promising avenue for making progress on the path to building full natural language understanding
systems. We have demonstrated a methodology for obtaining a large number of document-query-answer triples and shown that recurrent and attention based neural networks provide an effective
modelling framework for this task. Our analysis indicates that the Attentive and Impatient Readers are able to propagate and integrate semantic information over long distances. In particular we
believe that the incorporation of an attention mechanism is the key contributor to these results.
The attention mechanism that we have employed is just one instantiation of a very general idea
which can be further exploited. However, the incorporation of world knowledge and multi-document
queries will also require the development of attention and embedding mechanisms whose complexity to query does not scale linearly with the data set size. There are still many queries requiring
complex inference and long range reference resolution that our models are not yet able to answer.
As such our data provides a scalable challenge that should support NLP research into the future. Further, significantly bigger training data sets can be acquired using the techniques we have described,
undoubtedly allowing us to train more expressive and accurate models.
7 Note that these examples were chosen as they were short; the average CNN validation document contained 763 tokens and 27 entities, thus most instances were significantly harder to answer than these examples.
References
[1] Ellen Riloff and Michael Thelen. A rule-based question answering system for reading comprehension tests. In Proceedings of the ANLP/NAACL Workshop on Reading Comprehension Tests As Evaluation for Computer-based Language Understanding Systems.
[2] Hoifung Poon, Janara Christensen, Pedro Domingos, Oren Etzioni, Raphael Hoffmann, Chloe Kiddon, Thomas Lin, Xiao Ling, Mausam, Alan Ritter, Stefan Schoenmackers, Stephen Soderland, Dan Weld, Fei Wu, and Congle Zhang. Machine reading at the University of Washington. In Proceedings of the NAACL HLT 2010 First International Workshop on Formalisms and Methodology for Learning by Reading.
[3] Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. CoRR, abs/1410.3916, 2014.
[4] Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. CoRR, abs/1503.08895, 2015.
[5] Terry Winograd. Understanding Natural Language. Academic Press, Inc., Orlando, FL, USA, 1972.
[6] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014.
[7] Volodymyr Mnih, Nicolas Heess, Alex Graves, and Koray Kavukcuoglu. Recurrent models of visual attention. In Advances in Neural Information Processing Systems 27.
[8] Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural network for image generation. CoRR, abs/1502.04623, 2015.
[9] Matthew Richardson, Christopher J. C. Burges, and Erin Renshaw. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of EMNLP.
[10] Krysta Svore, Lucy Vanderwende, and Christopher Burges. Enhancing single-document summarization by combining RankNet and third-party sources. In Proceedings of EMNLP/CoNLL.
[11] Kristian Woodsend and Mirella Lapata. Automatic generation of story highlights. In Proceedings of ACL, 2010.
[12] Wilson L. Taylor. "Cloze procedure": a new tool for measuring readability. Journalism Quarterly, 30:415-433, 1953.
[13] Dipanjan Das, Desai Chen, André F. T. Martins, Nathan Schneider, and Noah A. Smith. Frame-semantic parsing. Computational Linguistics, 40(1):9-56, 2013.
[14] Karl Moritz Hermann, Dipanjan Das, Jason Weston, and Kuzman Ganchev. Semantic frame identification with distributed word representations. In Proceedings of ACL, June 2014.
[15] Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network for modelling sentences. In Proceedings of ACL, 2014.
[16] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493-2537, November 2011.
[17] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27.
[18] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, November 1997.
[19] Alex Graves. Supervised Sequence Labelling with Recurrent Neural Networks, volume 385 of Studies in Computational Intelligence. Springer, 2012.
[20] T. Tieleman and G. Hinton. Lecture 6.5 - RmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
5,464 | 5,946 | Saliency, Scale and Information:
Towards a Unifying Theory
Neil D.B. Bruce
Department of Computer Science
University of Manitoba
bruce@cs.umanitoba.ca
Shafin Rahman
Department of Computer Science
University of Manitoba
shafin109@gmail.com
Abstract
In this paper we present a definition for visual saliency grounded in information
theory. This proposal is shown to relate to a variety of classic research contributions in scale-space theory, interest point detection, bilateral filtering, and to existing models of visual saliency. Based on the proposed definition of visual saliency,
we demonstrate results competitive with the state-of-the art for both prediction of
human fixations, and segmentation of salient objects. We also characterize different properties of this model including robustness to image transformations, and
extension to a wide range of other data types with 3D mesh models serving as an
example. Finally, we relate this proposal more generally to the role of saliency
computation in visual information processing and draw connections to putative
mechanisms for saliency computation in human vision.
1 Introduction
Many models of visual saliency have been proposed in the last decade with differences in defining
principles and also divergent objectives. The motivation for these models is divided among several
distinct but related problems including human fixation prediction, salient object segmentation, and
more general measures of objectness. Models also vary in intent and range from hypotheses for
saliency computation in human visual cortex to those motivated exclusively by applications in computer vision. At a high level the notion of saliency seems relatively straightforward and characterized
by patterns that stand out from their context according to unique colors, striking patterns, discontinuities in structure, or more generally, figure against ground. While this is a seemingly simplistic
concept, the relative importance of defining principles of a model, and fine grained implementation
details in determining output remains obscured. Given similarities in the motivation for different
models, there is also value in considering how different definitions of saliency relate to each other
while also giving careful consideration to parallels to related concepts in biological and computer
vision.
The characterization sought by models of visual saliency is reminiscent ideas expressed throughout
seminal work in computer vision. For example, early work in scale-space theory includes emphasis
on the importance of extrema in structure expressed across scale-space as an indicator of potentially important image content [1, 2]. Related efforts grounded in information theory that venture
closer to modern notions of saliency include Kadir and Brady [3] and Jagersand's [4] analysis of
interaction between scale and local entropy in defining relevant image content. These concepts have
played a significant role in techniques for affine invariant keypoint matching [5], but have received
less attention in the direct prediction of saliency. Information theoretic models are found in the
literature directly addressing saliency prediction for determining gaze points or proto-objects. A
prominent example of this is the AIM model wherein saliency is based directly on measuring the
self-information of image patterns [6]. Alternative information theoretic definitions have been proposed [7, 8] including numerous models based on measures of redundancy or compressibility that
are strongly related to information theoretic concepts given common roots in communication theory.
In this paper, we present a relatively simple information theoretic definition of saliency that is shown
to have strong ties to a number of classic concepts in the computer vision and visual saliency literature. Beyond a specific model, this also serves to establish formalism for characterizing relationships
between scale, information and saliency. This analysis also hints at the relative importance of fine
grained implementation details in differentiating performance across models that employ disparate,
but strongly related definitions of visual salience. The balance of the paper is structured as follows:
In section 2 we outline the principle for visual saliency computation proposed in this paper defined
by maxima in information scale-space (MISS). In section 3 we demonstrate different characteristics
of the proposed metric, and performance on standard benchmarks. Finally, section 4 summarizes
main points of this paper, and includes discussion of broader implications.
2 Maxima in Information Scale-Space (MISS)
In the following, we present a general definition of saliency that is strongly related to prior work
discussed in section 1. In short, according to our proposal, saliency corresponds to maxima in
information-scale space (MISS). The description of MISS follows, and is accompanied by more
specific discussion of related concepts in computer vision and visual saliency research.
Let us first assume that the saliency of statistics that define a local region of an image are a function
of the rarity (likelihood) of such statistics. We'll further assume without loss of generality that these
local statistics correspond to pixel intensities.
The likelihood of observing a pixel at position p with intensity I_p in an image, based on the global statistics, is given by the frequency of intensity I_p relative to the total number of pixels (i.e. a normalized histogram lookup). This may be expressed as follows: H(I_p) = \sum_{q \in S} \delta(I_q - I_p) / |S| with \delta the Dirac delta function. One may generalize this expression to a non-parametric (kernel) density estimate: H(I_p) = \sum_{q \in S} G_{\sigma_i}(I_p - I_q) where G_{\sigma_i} corresponds to a kernel function (assumed to be Gaussian in this case). This may be viewed as either smoothing the intensity histogram, or applying a density estimate that is more robust to low sample density.1 In practice, the proximity of pixels to
one another is also relevant. Filtering operations applied to images are typically local in their extent,
and the correlation among pixel values inversely proportional to the spatial distance between them.
Adding a local spatial weighting to the likelihood estimate such that nearby pixels have a stronger
influence, the expression is as follows:
H(I_p) = \sum_{q \in S} G_{\sigma_b}(\|p - q\|) \, G_{\sigma_i}(\|I_p - I_q\|)        (1)
This constitutes a locally weighted likelihood estimate of intensity values based on pixels in the
surround.
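Concretely, a minimal NumPy sketch of equation 1 for a single pixel (unnormalised; a constant factor does not affect relative comparisons):

```python
import numpy as np

def local_likelihood(img, p, sigma_b, sigma_i):
    """Equation 1: locally weighted kernel density estimate of the
    intensity at pixel p = (row, col), i.e. the sum of bilateral-filter
    weights centred at p."""
    rows, cols = np.indices(img.shape)
    spatial = np.exp(-((rows - p[0])**2 + (cols - p[1])**2) / (2 * sigma_b**2))
    range_w = np.exp(-(img - img[p])**2 / (2 * sigma_i**2))
    return float(np.sum(spatial * range_w))

img = np.random.default_rng(0).random((32, 32))
print(local_likelihood(img, (16, 16), sigma_b=3.0, sigma_i=0.1))
```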
Having established the expression in equation 1, we shift to discussion of scale-space theory. In traditional scale-space theory the scale-space representation L(x, y; t) is defined by convolution of an image f(x, y) with a Gaussian kernel g(x, y) such that L(x, y; t) = g(., ., t) * f(., .) with t the variance of a Gaussian filter. Scale-space features are often derived from the family of Gaussian derivatives defined by L_{x^m y^n}(., .; t) = \partial_{x^m y^n} g(., ., t) * f(., .) with differential invariants produced
by combining Gaussian derivatives of different orders in a weighted combination. An important
concept in scale-space theory is the notion that scale selection or the size and position of relevant
structure in the data, is related to the scale at which features (e.g. normalized derivatives) assume
a maximum value. This consideration forms the basis for early definitions of saliency which derive
a measure of saliency corresponding to the scale at which local entropy is maximal. This point is
revisited later in this section.
The scale-space representation may also be defined as the solution to the heat equation: \partial I / \partial t = \Delta I = I_{xx} + I_{yy}, which may be rewritten as G[I]_p - I \approx \Delta I where G[I]_p = \int_S G_{\sigma_s} I_q \, dq and S the local

1 Although this example is based on pixel intensities, the same analysis may be applied to statistics of arbitrary dimensionality. For higher dimensional feature vectors, appropriate sampling is especially important.
spatial support. This expression is the solution to the heat equation when \sigma_s = \sqrt{2t}. This corresponds to a diffusion process that is isotropic. There are also a variety of operations in image analysis and filtering that correspond to a more general process of anisotropic diffusion. One prominent example is that proposed by Perona and Malik [9] that implements edge preserving smoothing. A similar process is captured by the Yaroslavsky filter: Y[I]_p = \frac{1}{C(p)} \int_{B \subset S} G_{\sigma_r}(\|I_p - I_q\|) I_q \, dq [10]
with B \subset S reflecting the spatial range of the filter. The difference between these techniques and an
isotropic diffusion process is that relative intensity values among local pixels determine the degree
of diffusion (or weighted local sampling).
The Yaroslavsky filter may be shown to be a special case of the more general bilateral filter, corresponding to a step-function for the spatial weight factor [11]: B[I]_p = \frac{1}{W_p} \sum_{q \in S} G_{\sigma_b}(\|p - q\|) G_{\sigma_i}(\|I_p - I_q\|) I_q with W_p = \sum_{q \in S} G_{\sigma_b}(\|p - q\|) G_{\sigma_i}(\|I_p - I_q\|).
In the same manner that selection of scale-space extrema defined by an isotropic diffusion process
carries value in characterizing relevant image content and scale, we propose to consider scale-space
extrema that carry a relationship to an anisotropic diffusion process.
Note that the normalization term Wp appearing in the equation for the bilateral filter is equivalent
to the expression appearing in equation 1. In contrast to bilateral filtering, we are not interested in
producing a weighted sample of local intensities but we instead consider the sum of the weights
themselves which correspond to a robust estimate of the likelihood of I_p. One may further relate this to an information theoretic quantity of self-information in considering -\log(p(I_p)), the self-information associated with the observation of intensity I_p.
With the above terms defined, Maxima in Information Scale-Space are defined as:
MISS(I_p) = \max_{\sigma_b} \, -\log\Big( \sum_{q \in S} G_{\sigma_b}(\|p - q\|) \, G_{\sigma_i}(\|I_p - I_q\|) \Big)        (2)
Saliency is therefore equated to the local self-information for the scale at which this quantity has its
maximum value (for each pixel location) in a manner akin to scale selection based on normalized
gradients or differential invariants [12]. This also corresponds to scale (and value) selection based
on maxima in the sum of weights that define a local anisotropic diffusion process. In what follows,
we comment further on conceptual connections to related work:
1. Scale space extrema: The definition expressed in equation 2 has a strong relationship to the idea
of selecting extrema corresponding to normalized gradients in scale-space [1] or in curvature-scale
space [13]. In this case, rather than a Gaussian blurred intensity profile scale extrema are evaluated
with respect to local information expressed across scale space.
2. Kadir and Brady: In Kadir and Brady's proposal, interest points or saliency in general is related
to the scale at which entropy is maximal [3]. While entropy and self-information are related, maxima in local entropy alone are insufficient to define salient content. Regions are therefore selected on
the basis of the product of maximal local entropy and magnitude change of the probability density
function. In contrast, the approach employed by MISS relies only on the expression in equation 2,
and does not require additional normalization. It is worth noting that success in matching keypoints
relies on the distinctness of keypoint descriptors which is a notion closely related to saliency.
3. Attention based on Information Maximization (AIM): The quantity expressed in equation
2 is identical to the definition of saliency assumed by the AIM model [6] for a specific choice of
local features, and a fixed scale. The method proposed in equation 2 considers the maximum selfinformation expressed across scale space for each local observation to determine relative saliency.
4. Bilateral filtering: Bilateral filtering produces a weighted sample of local intensity values based
on proximity in space and feature space. The sum of weights in the normalization term provides a
direct estimate of the likelihood of the intensity (or statistics) at the Kernel center, and is directly
related to self-information.
5. Graph Based Saliency and Random Walks: Proposals for visual saliency also include techniques defined by graphs and random walks [14]. There is also common ground between this family
of approaches and those grounded in information theory. Specifically, a random walk or Markov
process defined on a lattice may be seen as a process related to anisotropic diffusion where the transition probabilities between nodes define diffusion on the lattice. For a model such as Graph Based
Visual Saliency (GBVS) [14], a directed edge from node (i, j) to node (p, q) is given a weight
3
w((i, j), (p, q)) = d((i, j)||(p, q))F (i ? p, j ? q) where d is a measure of dissimilarity and F a 2-D
Gaussian profile. In the event that the dissimilarity measure is also defined by a Gaussian function
of intensity values at (i, j) and (p, q), the edge weight defining a transition probability is equivalent
to Wp and the expression in equation 1.
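Before turning to evaluation, a small numerical sketch of equation 2 for a single pixel may be helpful; normalising the spatial kernel per scale, so that likelihoods are comparable across sigma_b, is one reasonable implementation choice among several:

```python
import numpy as np

def miss(img, p, scales, sigma_i):
    """Equation 2: self-information of the bilateral-weight likelihood,
    maximised over the spatial scale sigma_b."""
    rows, cols = np.indices(img.shape)
    d2 = (rows - p[0])**2 + (cols - p[1])**2
    range_w = np.exp(-(img - img[p])**2 / (2 * sigma_i**2))
    best = -np.inf
    for sb in scales:
        spatial = np.exp(-d2 / (2 * sb**2))
        spatial /= spatial.sum()               # normalise so scales are comparable
        likelihood = (spatial * range_w).sum() # weighted likelihood in (0, 1]
        best = max(best, -np.log(likelihood))
    return best

img = np.random.default_rng(1).random((64, 64))
sal = np.array([[miss(img, (r, c), [2, 4, 8, 16], 0.1)
                 for c in range(0, 64, 8)] for r in range(0, 64, 8)])
print(sal.shape)  # coarse 8x8 saliency map
```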
3 Evaluation
In this section we present an array of results that demonstrate the utility and generality of the proposed saliency measure. This includes typical saliency benchmark results for both fixation prediction and object segmentation based on MISS. We also consider the relative invariance of this measure
to image deformations (e.g. viewpoint, lighting) and demonstrate robustness to such deformations.
This is accompanied by demonstration of the value of MISS in a more general sense in assessing
saliency for a broad range of data types, with a demonstration based on 3D point cloud data. Finally,
we also contrast behavior against very recently proposed models of visual saliency that leverage
deep learning, revealing distinct and important facets of the overall problem.
The results that are included follow the framework established in section 2. However, the intensity
value appearing in equations in section 2 is replaced by a 3D vector of RGB values corresponding to
each pixel. ||.|| denotes the L2 norm, and is therefore a Euclidean distance in the RGB colorspace. It
is worth noting that the definition of MISS may be applied to arbitrary features including normalized
gradients, differential invariants or alternative features. The motivation for choosing pixel color
values is to demonstrate that a high level of performance may be achieved on standard benchmarks
using a relatively simple set of features in combination with MISS.
A variety of post-processing steps are commonplace in evaluating saliency models, including topological spatial bias of output, or local Gaussian blur of the saliency map. In some of our results (as
noted) bilateral blurring has been applied to the output saliency map in place of standard Gaussian
blurring. The reasons for this are detailed later on in in this section, but it is worth stating that this
has shown to be advantageous in comparison to the standard of Gaussian blur in our benchmark
results.
Benchmark results are provided for both fixation data and salient object segmentation. For segmentation based evaluation, we apply the methods described by Li et al. [15]. This involves segmentation
using MCG [16], with resulting segments weighted based on the saliency map.2
3.1 MISS versus Scale
In considering scale space extrema, plotting entropy or energy among normalized derivatives across
scale is revealing with respect to characteristic scale and regions of interest [3]. Following this line
of analysis, in Figure 1 we demonstrate variation in information scale-space values as a function
of \sigma_b expressed in pixels. In Figure 1(a) three pixels are labeled corresponding to each of these categories as indicated by colored dots. The plot in Figure 1(b) shows the self-information for all of the selected pixels considering a wide range of scales. Object pixels, edge pixels and non-object pixels tend to produce different characteristic curves across scale in considering -\log(p(I_p)).
3.2 Center bias via local connectivity
Center bias has been much discussed in the saliency literature, and as such, we include results in this
section that apply a different strategy for considering center bias. In particular, in the following center bias appears more directly as a factor that influences the relative weights assigned to a likelihood
estimate defined by local pixels. This effectively means that pixels closer to the center have more
influence in determining estimated likelihoods. One can imagine such an operation having a more
prominent role in a foveated vision system wherein centrally located photoreceptors have a much
greater density than those in the periphery. The first variant of center bias proposed is as follows:

MISS_{CB-1}(I_p) = \max_{\sigma_b} -\log\Big( \sum_{q \in S} G_{\sigma_b}(\|p - q\|) \, G_{\sigma_i}(\|I_p - I_q\|) \, G_{\sigma_{cb}}(\|q - c\|) \Big),

where c is the spatial center of the image and G_{\sigma_{cb}} is a Gaussian function which controls the amount of center bias based on \sigma_{cb}.
2 Note that while the authors originally employed CPMC [17] as a segmentation algorithm, more recent recommendations from the authors prescribe the use of MCG [16].
Figure 1: (a) Sample image with select pixel locations highlighted in color. (b) Self-information of
the corresponding pixel locations as a function of scale.
Figure 2: Input images in (a) and sample output for (b) raw saliency maps (c) with bilateral blur (d)
using CB-1 bias (e) using CB-2 bias (f) object segmentation using MCG+MISS
The second approach includes the center bias control parameters directly within the second Gaussian function.
MISS_{CB-2}(I_p) = \max_{\sigma_b} -\log\Big( \sum_{q \in S} G_{\sigma_b}(\|p - q\|) \, G_{\sigma_i}(\|I_p - I_q\| \cdot (M - \|q - c\|)) \Big),

where M is the maximum possible distance from the center pixel c to any other pixel.
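For concreteness, a sketch of the CB-1 variant extends the previous one with the third Gaussian (the CB-2 variant would instead fold the M - ||q - c|| factor into the colour kernel):

```python
import numpy as np

def miss_cb1(img, p, scales, sigma_i, sigma_cb=5.0):
    """CB-1: G_sigma_cb(||q - c||) down-weights pixels far from the centre c,
    so peripheral evidence contributes less to the likelihood estimate."""
    rows, cols = np.indices(img.shape)
    c = ((img.shape[0] - 1) / 2.0, (img.shape[1] - 1) / 2.0)
    d2_p = (rows - p[0])**2 + (cols - p[1])**2
    d2_c = (rows - c[0])**2 + (cols - c[1])**2
    range_w = np.exp(-(img - img[p])**2 / (2 * sigma_i**2))
    cb_w = np.exp(-d2_c / (2 * sigma_cb**2))
    best = -np.inf
    for sb in scales:
        spatial = np.exp(-d2_p / (2 * sb**2))
        w = spatial * range_w * cb_w
        best = max(best, -np.log(w.sum() / spatial.sum()))
    return best

img = np.random.default_rng(0).random((32, 32))
print(miss_cb1(img, (16, 16), scales=[2, 4, 8], sigma_i=0.1))
```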
3.3 Salient objects and fixations
Evaluation results address two distinct and standard problems in saliency prediction. These are
fixation prediction, and salient object prediction respectively. The evaluation largely follows the
methodology employed by Li et al. [15]. Benchmarking metrics considered are common standards
in saliency model evaluation, and details are found in the supplementary material.
We have compared our results with several saliency and segmentation algorithms ITTI [18], AIM
[6], GBVS [14], DVA [19], SUN [20], SIG [21], AWS [22], FT [23], GC [24], SF [25], PCAS [26],
and across different datasets. Note that for segmentation based tests comparison among saliency
algorithms considers only MCG+GBVS. The reason for this is that this was the highest performing
of all of the saliency algorithms considered by Li et al. [15].
In our results, we exercise a range of parameters to gauge their relative importance. The size of the Gaussian kernel G_{\sigma_b} determines the spatial scale. 25 different kernel sizes are considered in a range from 3x3 to 125x125 pixels with the standard deviation \sigma_b equal to one third of the kernel width. For fixation prediction, only a subset of smaller scales is sufficient to achieve good performance, but the complete set of scales is necessary for segmentation. The Gaussian kernel that defines color distance G_{\sigma_i} is determined by the standard deviation \sigma_i. We tested values for \sigma_i ranging from 0.1 to 10. For post processing standard bilateral filtering (BB), a kernel size of 9 x 9 is used, and center bias results are based on a fixed \sigma_{cb} = 5 for the kernel G_{\sigma_{cb}} for CB-1. For the second alternative method (CB-2) one Gaussian kernel G_{\sigma_i} is used with \sigma_i = 10. All of these settings have also considered different scaling factors applied to the overall image (0.25, 0.5 and 1) and in most cases, results corresponding to the resize factor of 0.25 are best. Scaling down the image implies a shift in the scales spanned in scale space towards lower spatial frequencies.
Table 1: Benchmarking results for fixation prediction

s-AUC    aws     aim     sig     dva    gbvs   sun     itti    miss    miss    miss    miss
                                                               Basic   BB      CB-1    CB-2
bruce    0.7171  0.6973  0.714   0.684  0.67   0.665   0.656   0.68    0.6914  0.625   0.672
cerf     0.7343  0.756   0.7432  0.716  0.706  0.691   0.681   0.7431  0.72    0.621   0.7264
judd     0.8292  0.824   0.812   0.807  0.777  0.806   0.794   0.807   0.809   0.8321  0.8253
imgsal   0.8691  0.854   0.862   0.856  0.83   0.8682  0.851   0.8653  0.8644  0.832   0.845
pascal   0.8111  0.803   0.8072  0.795  0.758  0.8044  0.773   0.802   0.803   0.8043  0.801
Table 2: Benchmarking results for salient object prediction (saliency algorithms)

F-score  aws     aim     sig     dva    gbvs    sun    itti    miss    miss    miss    miss
                                                               Basic   BB      CB-1    CB-2
ft       0.693   0.656   0.652   0.633  0.649   0.638  0.623   0.640   0.6853  0.653   0.7131
imgsal   0.5951  0.536   0.5902  0.491  0.5574  0.438  0.520   0.432   0.521   0.527   0.5823
pascal   0.569   0.5871  0.566   0.529  0.529   0.514  0.5852  0.486   0.508   0.5833  0.5744
Table 3: Benchmarking results for salient object prediction (segmentation algorithms)

F-score  sf      gc      pcas    ft     mcg+gbvs  mcg+miss  mcg+miss  mcg+miss  mcg+miss
                                        [15]      Basic     BB        CB-1      CB-2
ft       0.8533  0.804   0.833   0.709  0.8532    0.8493    0.8454    0.8551    0.839
imgsal   0.494   0.5712  0.6121  0.418  0.5423    0.5354    0.513     0.514     0.521
pascal   0.534   0.582   0.600   0.415  0.6752    0.6674    0.666     0.6791    0.6733
In Figure 2, we show some qualitative results of output corresponding to MISS with different postprocessing variants of center bias weighting for both saliency prediction and object segmentation.
3.4 Lighting and viewpoint invariance
Given the relationship between MISS and models that address the problem of invariant keypoint
selection, it is interesting to consider the relative invariance in saliency output subject to changing
viewpoint, lighting or other imaging conditions. This is especially true given that saliency models
have been shown to typically exhibit a high degree of sensitivity to imaging conditions [27]. This
implies that this analysis is relevant not only to interest point selection, but also to measuring the
relative robustness to small changes in viewpoint, lighting or optics in predicting fixations or salient
targets.
To examine affine invariance, we have used image samples from a classic benchmark [5] which represent changes in zoom+rotation, blur, lighting and viewpoint. In all of these sequences, the first
image is the reference image and the imaging conditions change gradually throughout the sequence.
We have applied the MISS algorithm (without considering any center bias) to all of the full-size
images in those sequences. From the raw saliency output, we have selected keypoints based on nonmaxima suppression with radius = 5 pixels, and threshold = 0.1. For every detected keypoint we
assign a circular region centered at the keypoint. The radius of this circular region is based on the
width of the Gaussian kernel G?b defining the characteristic scale at which self-information achieves
a maximum response. Keypoint regions are compared across images subject to their repeatability
[5]. Repeatability measures the similarity among detected regions across different frames and is a
standard way of gauging the capability to detect common regions across different types of image
deformations. We compare our results with several other region detectors including Harris, Hessian,
MSER, IBR and EBR [5].
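To make the keypoint-extraction step concrete, here is a minimal sketch of the non-maxima suppression described above (NumPy/SciPy; the function name and the tie handling are our assumptions):

    import numpy as np
    from scipy.ndimage import maximum_filter

    def extract_keypoints(saliency, radius=5, threshold=0.1):
        # A pixel is kept if it is the maximum of its (2*radius+1)^2
        # neighborhood and its saliency exceeds the global threshold.
        local_max = maximum_filter(saliency, size=2 * radius + 1)
        mask = (saliency == local_max) & (saliency > threshold)
        return np.argwhere(mask)  # array of (row, col) keypoint coordinates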
Figure 3 demonstrates that the output corresponding to the proposed saliency measure reveals a considerable degree of invariance to affine transformations and changing image characteristics, suggesting robustness for applications in gaze prediction and object selection.
[Figure 3 shows four panels of repeatability (%) curves: (a) bark sequence vs. increasing zoom + rotation, (b) bikes sequence vs. increasing blur, (c) leuven sequence vs. increasing light, (d) wall sequence vs. increasing viewpoint angle, comparing haraff, hesaff, mseraf, ibraff, ebraff and miss.]
Figure 3: A demonstration of invariance to varying image conditions including viewpoint, lighting and blur based on a standard benchmark [5].
3.5 Beyond Images
While the discussion in this paper has focused almost exclusively on image input, it is worth noting that the proposed definition of saliency is sufficiently general that it may be applied to alternative forms of data, including images/videos, 3D models, audio signals or any form of data with locality in space or time. To demonstrate this, we present saliency output based on scale-space information for a 3D mesh model. Given that vertices are sparsely represented in a 3D coordinate space, in contrast to the continuous discretized grid representation present for images, some differences are necessary in how likelihood estimates are derived. In this case, the spatial support is defined according to the k nearest (spatial) neighbors of each vertex. Instead of color values, each vertex belonging to the mesh is characterized by a three-dimensional vector defining a surface normal in the x, y and z directions. Computation is otherwise identical to the process outlined in equation 2. An example of output associated with two different choices of k is shown in Figure 4, corresponding to k = 100 and k = 4000 respectively, for a 3D model with 172,974 vertices. For demonstrative purposes, the output for two individual spatial scales is shown rather than the maximum across scales. Red indicates high levels of saliency, and green low. Values on the mesh are histogram equalized to equate any contrast differences. It is interesting to note that this saliency metric (and output) is very similar to proposals in computer graphics for determining local mesh saliency serving mesh simplification [28]. Note that this method allows determination of a characteristic scale for vertices on the mesh in addition to defining saliency. This may also be useful for inferring the relationship between different parts (e.g. hands vs. fingers).
Figure 4: Saliency for two different scales on a mesh model. Results correspond to a surround based on 100 nearest neighbors (left) and 4000 nearest neighbors (right) respectively.
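As a sketch of how such a mesh variant might look, the following uses a k-nearest-neighbor surround and a diagonal-Gaussian likelihood over surface normals; both are our own simplified stand-ins for the paper's likelihood estimate, not its exact implementation:

    import numpy as np
    from scipy.spatial import cKDTree

    def mesh_saliency(vertices, normals, k=100):
        # Spatial support: the k nearest vertices of each vertex.
        tree = cKDTree(vertices)
        _, idx = tree.query(vertices, k=k)
        saliency = np.empty(len(vertices))
        for i, nbrs in enumerate(idx):
            # Diagonal-Gaussian likelihood of the vertex normal under its
            # surround; saliency is the negative log-likelihood.
            mu = normals[nbrs].mean(axis=0)
            var = normals[nbrs].var(axis=0) + 1e-6
            nll = 0.5 * np.sum((normals[i] - mu) ** 2 / var
                               + np.log(2.0 * np.pi * var))
            saliency[i] = nll
        return saliency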
There is considerable generality in that the measure of saliency assumed is agnostic to the features considered, with a few caveats. Given that our results are based on local color values, this implies a relatively low dimensional feature space on which likelihoods are estimated. However, one can imagine an analogous scenario wherein each image location is characterized by a feature vector (e.g. outputs of a bank of log-Gabor filters), resulting in much higher dimensionality in the statistics. As dimensionality increases in feature space, the finite number of samples within a local spatial or temporal window implies an exponential decline in the sample density for likelihood estimation. This consideration can be addressed by applying an approximation based on marginal statistics (as in [29, 20, 30]), as sketched below. Such an approximation relies on assumptions such as independence, which may be achieved for arbitrary data sets by first encoding raw feature values via stacked (sparse) autoencoders or related feature learning strategies. One might also note that saliency values may be assigned to units across different layers of a hierarchical representation based on such a feature representation.
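A minimal sketch of the marginal approximation, assuming independent feature channels; the per-dimension histogram density estimate here is our own simplification:

    import numpy as np

    def marginal_self_information(features, bins=32):
        # features: (n_samples, n_dims) values from a local window.
        # Assume channel independence and sum per-dimension -log p(x_d),
        # using a simple histogram density estimate per dimension.
        info = np.zeros(features.shape[0])
        for d in range(features.shape[1]):
            hist, edges = np.histogram(features[:, d], bins=bins, density=True)
            idx = np.clip(np.digitize(features[:, d], edges[1:-1]), 0, bins - 1)
            info += -np.log(np.maximum(hist[idx], 1e-12))
        return info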
3.6 Saliency, context and human vision
Solutions to quantifying visual saliency based on deep learning have begun to appear in the literature. This has been made possible in part by efforts to scale up data collection via crowdsourcing in
defining tasks that serve as an approximation of traditional gaze tracking studies [31]. Recent (yet
to be published) methods of this variety show a considerable improvement on some standard benchmarks over traditional models. It is therefore interesting to consider what differences exist between
such approaches, and more traditional approaches premised on measures of local feature contrast.
To this end, we present some examples in Figure 5 where output differs significantly between a
model based on deep learning (SALICON [31]) and one based on feature contrast (MISS).
The importance of this example is in highlighting different aspects of saliency computation that
contribute to the bigger picture. It is evident that models capable of detecting specific objects and
modeling context may perform well on saliency benchmarks. However, it is also evident that
there is some deficit in their capacity to represent saliency defined by strong feature contrast or
according to factors of importance in human visual search behavior. In the same vein, in human
vision, hierarchical feature extraction from edges to complex objects, and local measures for gain
control, normalization and feature contrast play a significant role, all acting in concert. It is therefore
natural to entertain the idea that a comprehensive solution to the problem involves considering both
high-level features of the nature implemented in deep learning models coupled with contrastive
saliency akin to MISS. In practice, the role of salience in a distributed representation in modulating
object and context specific signals presents one promising avenue for addressing this problem.
It has been argued that normalization is a canonical operation in sensory neural information processing. Under the assumption of Generalized Gaussian statistics, it can be shown that divisive
normalization implements an operation equivalent to a log likelihood of a neural response in reference to cells in the surround [30]. The nature of computation assumed by MISS therefore finds a
strong correlate in basic operations that implement feature contrast in human vision, and that pairs
naturally with the structure of computation associated with representing objects and context.
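As a sketch of the operation being referenced, here is a generic divisive-normalization form; the exponent and pooling choices are our assumptions, not the specific model of [30]:

    import numpy as np

    def divisive_normalization(response, surround, p=2.0, eps=1e-6):
        # Divide a unit's response by the pooled activity of its surround.
        # Under Generalized Gaussian surround statistics, -log of such a
        # normalized response relates to its self-information [30].
        pooled = (np.mean(np.abs(surround) ** p) + eps) ** (1.0 / p)
        return response / pooled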
Figure 5: Examples where a deep learning model produces counterintuitive results relative to models
based on feature contrast. Top: Original Image. Middle: SALICON output. Bottom: MISS output.
4 Discussion
In this paper we present a generalized information theoretic characterization of saliency based on
maxima in information scale-space. This definition is shown to be related to a variety of classic
research contributions in scale-space theory, interest point detection, bilateral filtering, and existing
models of visual saliency. Based on a relatively simplistic definition, the proposal is shown to be
competitive against contemporary saliency models for both fixation based and object based saliency
prediction. This also includes a demonstration of the relative robustness to image transformations
and generalization of the proposal to a broad range of data types. Finally, we motivate an important
distinction between contextual and contrast related factors in driving saliency, and draw connections
to associated mechanisms for saliency computation in human vision.
Acknowledgments
The authors acknowledge financial support from the NSERC Canada Discovery Grants program,
University of Manitoba GETS funding, and ONR grant #N00178-14-Q-4583.
References
[1] J.J. Koenderink. The structure of images. Biological Cybernetics, 50(5):363–370, 1984.
[2] T. Lindeberg. Scale-space theory: A basic tool for analyzing structures at different scales. Journal of Applied Statistics, 21(1-2):225–270, 1994.
[3] T. Kadir and M. Brady. Saliency, scale and image description. IJCV, 45(2):83–105, 2001.
[4] M. Jagersand. Saliency maps and attention selection in scale and spatial coordinates: An information theoretic approach. In ICCV 1995, pages 195–202. IEEE, 1995.
[5] K. Mikolajczyk et al. A comparison of affine region detectors. IJCV, 65(1-2):43–72, 2005.
[6] N.D.B. Bruce and J.K. Tsotsos. Saliency based on information maximization. NIPS 2005, pages 155–162, 2005.
[7] M. Toews and W.M. Wells. A mutual-information scale-space for image feature detection and feature-based classification of volumetric brain images. In CVPR Workshops, pages 111–116. IEEE, 2010.
[8] A. Borji, D. Sihite, and L. Itti. Quantitative analysis of human-model agreement in visual saliency modeling: A comparative study. IEEE TIP, 22(1):55–69, 2013.
[9] P. Perona, T. Shiota, and J. Malik. Anisotropic diffusion. In Geometry-Driven Diffusion in Computer Vision, pages 73–92. Springer, 1994.
[10] A. Buades, B. Coll, and J.-M. Morel. A non-local algorithm for image denoising. In CVPR 2005, volume 2, pages 60–65. IEEE, 2005.
[11] S. Paris and F. Durand. A fast approximation of the bilateral filter using a signal processing approach. In ECCV 2006, pages 568–580. Springer, 2006.
[12] L.M.J. Florack, B.M. ter Haar Romeny, J.J. Koenderink, and M.A. Viergever. General intensity transformations and differential invariants. Journal of Mathematical Imaging and Vision, 4(2):171–187, 1994.
[13] F. Mokhtarian and R. Suomela. Robust image corner detection through curvature scale space. IEEE T PAMI, 20(12):1376–1381, 1998.
[14] J. Harel, C. Koch, and P. Perona. Graph-based visual saliency. NIPS 2006, 19:545, 2007.
[15] Y. Li, X. Hou, C. Koch, J.M. Rehg, and A.L. Yuille. The secrets of salient object segmentation. CVPR 2014, pages 280–287, 2014.
[16] P. Arbeláez, J. Pont-Tuset, J. Barron, F. Marques, and J. Malik. Multiscale combinatorial grouping. CVPR 2014, pages 328–335, 2014.
[17] J. Carreira and C. Sminchisescu. CPMC: Automatic object segmentation using constrained parametric min-cuts. IEEE TPAMI, 34(7):1312–1328, 2012.
[18] L. Itti, C. Koch, and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE T PAMI, 20(11):1254–1259, 1998.
[19] X. Hou and L. Zhang. Dynamic visual attention: Searching for coding length increments. NIPS 2008, pages 681–688, 2009.
[20] L. Zhang, M.H. Tong, T.K. Marks, H. Shan, and G.W. Cottrell. SUN: A Bayesian framework for saliency using natural statistics. Journal of Vision, 8(7), 2008.
[21] X. Hou, J. Harel, and C. Koch. Image signature: Highlighting sparse salient regions. IEEE TPAMI, 34(1):194–201, 2012.
[22] A. Garcia-Diaz, V. Leborán, X.R. Fdez-Vidal, and X.M. Pardo. On the relationship between optical variability, visual saliency, and eye fixations: A computational approach. Journal of Vision, 12(6), 2012.
[23] R. Achanta, S. Hemami, F. Estrada, and S. Süsstrunk. Frequency-tuned salient region detection. CVPR 2009 Workshops, pages 1597–1604, 2009.
[24] M.-M. Cheng, N.J. Mitra, X. Huang, P.H.S. Torr, and S.-M. Hu. Global contrast based salient region detection. IEEE TPAMI, 37(3):569–582, 2015.
[25] F. Perazzi, P. Krahenbuhl, Y. Pritch, and A. Hornung. Saliency filters: Contrast based filtering for salient region detection. CVPR 2012, pages 733–740, 2012.
[26] R. Margolin, A. Tal, and L. Zelnik-Manor. What makes a patch distinct? CVPR 2013, pages 1139–1146, 2013.
[27] A. Andreopoulos and J.K. Tsotsos. On sensor bias in experimental methods for comparing interest-point, saliency, and recognition algorithms. IEEE TPAMI, 34(1):110–126, 2012.
[28] C.-H. Lee, A. Varshney, and D.W. Jacobs. Mesh saliency. ACM SIGGRAPH 2005, pages 659–666, 2005.
[29] N.D.B. Bruce and J.K. Tsotsos. Saliency, attention, and visual search: An information theoretic approach. Journal of Vision, 9(3):5, 2009.
[30] D. Gao and N. Vasconcelos. Decision-theoretic saliency: Computational principles, biological plausibility, and implications for neurophysiology and psychophysics. Neural Computation, 21(1):239–271, 2009.
[31] M. Jiang et al. SALICON: Saliency in context. CVPR 2015, pages 1072–1080, 2015.
Semi-Supervised Learning with Ladder Networks
Antti Rasmus and Harri Valpola
The Curious AI Company, Finland
Mikko Honkala
Nokia Labs, Finland
Mathias Berglund and Tapani Raiko
Aalto University, Finland & The Curious AI Company, Finland
Abstract
We combine supervised learning with unsupervised learning in deep neural networks. The proposed model is trained to simultaneously minimize the sum of supervised and unsupervised cost functions by backpropagation, avoiding the need
for layer-wise pre-training. Our work builds on top of the Ladder network proposed by Valpola [1] which we extend by combining the model with supervision. We show that the resulting model reaches state-of-the-art performance in
semi-supervised MNIST and CIFAR-10 classification in addition to permutation-invariant MNIST classification with all labels.
1 Introduction
In this paper, we introduce an unsupervised learning method that fits well with supervised learning.
Combining an auxiliary task to help train a neural network was proposed by Suddarth and Kergosien
[2]. There are multiple choices for the unsupervised task, for example reconstruction of the inputs
at every level of the model [e.g., 3] or classification of each input sample into its own class [4].
Although some methods have been able to simultaneously apply both supervised and unsupervised
learning [3, 5], often these unsupervised auxiliary tasks are only applied as pre-training, followed
by normal supervised learning [e.g., 6]. In complex tasks there is often much more structure in
the inputs than can be represented, and unsupervised learning cannot, by definition, know what
will be useful for the task at hand. Consider, for instance, the autoencoder approach applied to
natural images: an auxiliary decoder network tries to reconstruct the original input from the internal
representation. The autoencoder will try to preserve all the details needed for reconstructing the
image at pixel level, even though classification is typically invariant to all kinds of transformations
which do not preserve pixel values.
Our approach follows Valpola [1] who proposed a Ladder network where the auxiliary task is to
denoise representations at every level of the model. The model structure is an autoencoder with
skip connections from the encoder to decoder and the learning task is similar to that in denoising
autoencoders but applied at every layer, not just the inputs. The skip connections relieve the pressure
to represent details at the higher layers of the model because, through the skip connections, the
decoder can recover any details discarded by the encoder. Previously the Ladder network has only
been demonstrated in unsupervised learning [1, 7] but we now combine it with supervised learning.
The key aspects of the approach are as follows:
Compatibility with supervised methods. The unsupervised part focuses on relevant details found
by supervised learning. Furthermore, it can be added to existing feedforward neural networks, for
example multi-layer perceptrons (MLPs) or convolutional neural networks (CNNs).
Scalability due to local learning. In addition to supervised learning target at the top layer, the
model has local unsupervised learning targets on every layer making it suitable for very deep neural
networks. We demonstrate this with two deep supervised network architectures.
Computational efficiency. The encoder part of the model corresponds to normal supervised learning. Adding a decoder, as proposed in this paper, approximately triples the computation during training but not necessarily the training time since the same result can be achieved faster due to better
utilization of available information. Overall, computation per update scales similarly to whichever
supervised learning approach is used, with a small multiplicative factor.
As explained in Section 2, the skip connections and layer-wise unsupervised targets effectively turn
autoencoders into hierarchical latent variable models which are known to be well suited for semisupervised learning. Indeed, we obtain state-of-the-art results in semi-supervised learning in the
MNIST, permutation invariant MNIST and CIFAR-10 classification tasks (Section 4). However,
the improvements are not limited to semi-supervised settings: for the permutation invariant MNIST
task, we also achieve a new record with the normal full-labeled setting. For a longer version of this
paper with more complete descriptions, please see [8].
2 Derivation and justification
Latent variable models are an attractive approach to semi-supervised learning because they can combine supervised and unsupervised learning in a principled way. The only difference is whether the
class labels are observed or not. This approach was taken, for instance, by Goodfellow et al. [5] with
their multi-prediction deep Boltzmann machine. A particularly attractive property of hierarchical latent variable models is that they can, in general, leave the details for the lower levels to represent,
allowing higher levels to focus on more invariant, abstract features that turn out to be relevant for
the task at hand.
The training process of latent variable models can typically be split into inference and learning, that
is, finding the posterior probability of the unobserved latent variables and then updating the underlying probability model to better fit the observations. For instance, in the expectation-maximization
(EM) algorithm, the E-step corresponds to finding the expectation of the latent variables over the
posterior distribution assuming the model fixed and M-step then maximizes the underlying probability model assuming the expectation fixed.
The main problem with latent variable models is how to make inference and learning efficient. Suppose there are layers l of latent variables z(l) . Latent variable models often represent the probability
distribution of all the variables explicitly as a product of terms, such as p(z(l) | z(l+1) ) in directed
graphical models. The inference process and model updates are then derived from Bayes' rule, typically as some kind of approximation. Often the inference is iterative as it is generally impossible to
solve the resulting equations in a closed form as a function of the observed variables.
There is a close connection between denoising and probabilistic modeling. On the one hand, given a probabilistic model, you can compute the optimal denoising. Say you want to reconstruct a latent z using a prior p(z) and an observation z̃ = z + noise. We first compute the posterior distribution p(z | z̃), and use its center of gravity as the reconstruction ẑ. One can show that this minimizes the expected denoising cost (ẑ − z)². On the other hand, given a denoising function, one can draw samples from the corresponding distribution by creating a Markov chain that alternates between corruption and denoising [9].
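As a concrete special case of this posterior-mean view (a standard Gaussian result, not taken from the paper), the optimal denoiser for a scalar Gaussian prior under additive Gaussian noise is a simple shrinkage:

    import numpy as np

    def optimal_denoiser(z_tilde, prior_var=1.0, noise_var=0.25):
        # Posterior mean E[z | z_tilde] for z ~ N(0, prior_var) and
        # z_tilde = z + N(0, noise_var): shrink toward the prior mean 0.
        return z_tilde * prior_var / (prior_var + noise_var)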
Valpola [1] proposed the Ladder network where the inference process itself can be learned by using the principle of denoising which has been used in supervised learning [10], denoising autoencoders (dAE) [11] and denoising source separation (DSS) [12] for complementary tasks. In a dAE, an autoencoder is trained to reconstruct the original observation x from a corrupted version x̃. Learning is based simply on minimizing the norm of the difference of the original x and its reconstruction x̂ from the corrupted x̃; that is, the cost is ‖x̂ − x‖².
While dAEs are normally only trained to denoise the observations, the DSS framework is based on the idea of using denoising functions ẑ = g(z) of latent variables z to train a mapping z = f(x) which models the likelihood of the latent variables as a function of the observations. The cost function is identical to that used in a dAE except that latent variables z replace the observations x,
[Figure 1 shows two panels: a plot of an optimal denoising function (corrupted input on the x axis, clean target on the y axis) and a schematic of the two-layer Ladder network with clean encoder path x → z(1) → z(2) → y, corrupted encoder path x → z̃(1) → z̃(2) → ỹ with injected noise N(0, σ²), decoder path z̃(2) → ẑ(2) → ẑ(1) → x̂ through denoising functions g(l), and layer-wise costs C_d(l).]
Figure 1: Left: A depiction of an optimal denoising function for a bimodal distribution. The input for the function is the corrupted value (x axis) and the target is the clean value (y axis). The denoising function moves values towards higher probabilities as shown by the green arrows. Right: A conceptual illustration of the Ladder network when L = 2. The feedforward path (x → z(1) → z(2) → y) shares the mappings f(l) with the corrupted feedforward path, or encoder (x → z̃(1) → z̃(2) → ỹ). The decoder (z̃(l) → ẑ(l) → x̂) consists of denoising functions g(l) and has cost functions C_d(l) on each layer trying to minimize the difference between ẑ(l) and z(l). The output ỹ of the corrupted encoder can also be trained to match available labels t(n).
that is, the cost is ‖ẑ − z‖². The only thing to keep in mind is that z needs to be normalized somehow, as otherwise the model has a trivial solution at z = ẑ = constant. In a dAE, this cannot happen as the model cannot change the input x.
Figure 1 (left) depicts the optimal denoising function ẑ = g(z̃) for a one-dimensional bimodal distribution which could be the distribution of a latent variable inside a larger model. The shape of the denoising function depends on the distribution of z and the properties of the corruption noise. With no noise at all, the optimal denoising function would be the identity function. In general, the denoising function pushes the values towards higher probabilities as shown by the green arrows.
Figure 1 (right) shows the structure of the Ladder network. Every layer contributes to the cost function a term C_d(l) = ‖z(l) − ẑ(l)‖² which trains the layers above (both encoder and decoder) to learn the denoising function ẑ(l) = g(l)(z̃(l), ẑ(l+1)) which maps the corrupted z̃(l) onto the denoised estimate ẑ(l). As the estimate ẑ(l) incorporates all prior knowledge about z, the same cost function term also trains the encoder layers below to find cleaner features which better match the prior expectation.
Since the cost function needs both the clean z(l) and corrupted z̃(l), during training the encoder is run twice: a clean pass for z(l) and a corrupted pass for z̃(l). Another feature which differentiates the Ladder network from regular dAEs is that each layer has a skip connection between the encoder and decoder. This feature mimics the inference structure of latent variable models and makes it possible for the higher levels of the network to leave some of the details for lower levels to represent. Rasmus et al. [7] showed that such skip connections allow dAEs to focus on abstract invariant features on the higher levels, making the Ladder network a good fit with supervised learning that can select which information is relevant for the task at hand.
One way to picture the Ladder network is to consider it as a collection of nested denoising autoencoders which share parts of the denoising machinery between each other. From the viewpoint of the autoencoder at layer l, the representations on the higher layers can be treated as hidden neurons. In other words, there is no particular reason why ẑ(l+i) produced by the decoder should resemble the corresponding representations z(l+i) produced by the encoder. It is only the cost function C_d(l+i) that ties these together and forces the inference to proceed in a reverse order in the decoder. This sharing helps a deep denoising autoencoder to learn the denoising process as it splits the task into meaningful sub-tasks of denoising intermediate representations.
Algorithm 1 Calculation of the output y and cost function C of the Ladder network

    Require: x(n)
    # Corrupted encoder and classifier
    h̃(0) ← z̃(0) ← x(n) + noise
    for l = 1 to L do
        z̃(l) ← batchnorm(W(l) h̃(l−1)) + noise
        h̃(l) ← activation(γ(l) ⊙ (z̃(l) + β(l)))
    end for
    P(ỹ | x) ← h̃(L)

    # Clean encoder (for denoising targets)
    h(0) ← z(0) ← x(n)
    for l = 1 to L do
        z_pre(l) ← W(l) h(l−1)
        μ(l) ← batchmean(z_pre(l))
        σ(l) ← batchstd(z_pre(l))
        z(l) ← batchnorm(z_pre(l))
        h(l) ← activation(γ(l) ⊙ (z(l) + β(l)))
    end for

    # Final classification:
    P(y | x) ← h(L)

    # Decoder and denoising
    for l = L to 0 do
        if l = L then
            u(L) ← batchnorm(h̃(L))
        else
            u(l) ← batchnorm(V(l+1) ẑ(l+1))
        end if
        ∀i: ẑ_i(l) ← g(z̃_i(l), u_i(l))
        ∀i: ẑ_i,BN(l) ← (ẑ_i(l) − μ_i(l)) / σ_i(l)
    end for

    # Cost function C for training:
    C ← 0
    if t(n) then
        C ← −log P(ỹ = t(n) | x(n))
    end if
    C ← C + Σ_{l=0}^{L} λ_l ‖z(l) − ẑ_BN(l)‖²
3 Implementation of the Model
We implement the Ladder network for fully connected MLP networks and for convolutional networks. We used standard rectifier networks with batch normalization applied to each preactivation.
The feedforward pass of the full Ladder network is listed in Algorithm 1.
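For readers who want the control flow at a glance, here is a condensed NumPy sketch of the two encoder passes and the decoder cost from Algorithm 1. The helper names, weight shapes, and the ReLU/softmax choices are our assumptions; the authors' released Theano/Blocks implementation (linked in Section 4) is the authoritative version.

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def softmax(x):
        e = np.exp(x - x.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    def batchnorm(z, eps=1e-6):
        # Per-feature batch normalization without learned scale/shift.
        return (z - z.mean(axis=0)) / (z.std(axis=0) + eps)

    def ladder_forward(x, W, V, gamma, beta, lam, g, noise=0.3, rng=np.random):
        # One pass of Algorithm 1. W[l], V[l]: encoder/decoder weights;
        # gamma, beta: per-layer scale/shift; lam: denoising cost weights;
        # g[l](z_tilde, u): denoising function for layer l.
        L = len(W)

        def encoder(inp, corrupt):
            h = inp + (noise * rng.standard_normal(inp.shape) if corrupt else 0.0)
            zs, mus, sds = [h], [None], [None]
            for l in range(1, L + 1):
                pre = h @ W[l - 1]
                mus.append(pre.mean(axis=0))
                sds.append(pre.std(axis=0) + 1e-6)
                z = (pre - mus[l]) / sds[l]
                if corrupt:
                    z = z + noise * rng.standard_normal(z.shape)
                act = softmax if l == L else relu
                h = act(gamma[l - 1] * (z + beta[l - 1]))
                zs.append(z)
            return zs, mus, sds, h

        zt, _, _, y_tilde = encoder(x, corrupt=True)   # corrupted pass
        zc, mu, sd, _ = encoder(x, corrupt=False)      # clean pass (targets)

        # Decoder: denoise top-down, accumulating the unsupervised cost.
        cost, u = 0.0, batchnorm(y_tilde)
        for l in range(L, -1, -1):
            z_hat = g[l](zt[l], u)
            z_hat_bn = z_hat if l == 0 else (z_hat - mu[l]) / sd[l]
            cost += lam[l] * np.mean(np.sum((zc[l] - z_hat_bn) ** 2, axis=1))
            if l > 0:
                u = batchnorm(z_hat @ V[l - 1])
        # The supervised term -log P(y_tilde = t(n) | x(n)) is added
        # separately when a label t(n) is available.
        return y_tilde, cost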
In the decoder, we parametrize the denoising function such that it supports denoising of conditionally independent Gaussian latent variables, conditioned on the activations ẑ(l+1) of the layer above. The denoising function g is therefore coupled into components

    ẑ_i(l) = g_i(z̃_i(l), u_i(l)) = (z̃_i(l) − μ_i(u_i(l))) υ_i(u_i(l)) + μ_i(u_i(l)),

where u_i(l) propagates information from ẑ(l+1) by u(l) = batchnorm(V(l+1) ẑ(l+1)). The functions μ_i(u_i) and υ_i(u_i) are modeled as expressive nonlinearities: μ_i(u_i) = a_{1,i} sigmoid(a_{2,i} u_i + a_{3,i}) + a_{4,i} u_i + a_{5,i}, with the form of the nonlinearity similar for υ_i(u_i). The decoder has thus 10 unit-wise parameters a, compared to the two parameters (γ and β [13]) in the encoder.
It is worth noting that a simple special case of the decoder is a model where λ_l = 0 when l < L. This corresponds to a denoising cost only on the top layer and means that most of the decoder can be omitted. This model, which we call the Γ-model due to the shape of the graph, is useful as it can easily be plugged into any feedforward network without decoder implementation.
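A minimal sketch of the per-unit modulation above in NumPy (the parameter packing into a single array is our own convention):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def g(z_tilde, u, a):
        # a: shape (10, n_units); five parameters each for mu(u) and v(u).
        mu = a[0] * sigmoid(a[1] * u + a[2]) + a[3] * u + a[4]
        v = a[5] * sigmoid(a[6] * u + a[7]) + a[8] * u + a[9]
        # Modulated denoising: shift by mu, scale by v, shift back.
        return (z_tilde - mu) * v + mu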
Further implementation details of the model can be found in the supplementary material or Ref. [8].
4 Experiments
We ran experiments both with the MNIST and CIFAR-10 datasets, where we attached the decoder
both to fully-connected MLP networks and to convolutional neural networks. We also compared the
performance of the simpler Γ-model (Sec. 3) to the full Ladder network.
With convolutional networks, our focus was exclusively on semi-supervised learning. We make
claims neither about the optimality nor the statistical significance of the supervised baseline results.
We used the Adam optimization algorithm [14]. The initial learning rate was 0.002 and it was
decreased linearly to zero during a final annealing phase. The minibatch size was 100. The source
code for all the experiments is available at https://github.com/arasmus/ladder.
Table 1: A collection of previously reported MNIST test errors in the permutation invariant setting followed by the results with the Ladder network. * = SVM. Standard deviation in parentheses.

    Test error % with # of used labels           100              1000             All
    Semi-sup. Embedding [15]                     16.86            5.73             1.5
    Transductive SVM [from 15]                   16.81            5.38             1.40*
    MTC [16]                                     12.03            3.64             0.81
    Pseudo-label [17]                            10.49            3.46
    AtlasRBF [18]                                8.10 (± 0.95)    3.68 (± 0.12)    1.31
    DGN [19]                                     3.33 (± 0.14)    2.40 (± 0.02)    0.96
    DBM, Dropout [20]                                                              0.79
    Adversarial [21]                                                               0.78
    Virtual Adversarial [22]                     2.12             1.32             0.64 (± 0.03)
    Baseline: MLP, BN, Gaussian noise            21.74 (± 1.77)   5.70 (± 0.20)    0.80 (± 0.03)
    Γ-model (Ladder with only top-level cost)    3.06 (± 1.44)    1.53 (± 0.10)    0.78 (± 0.03)
    Ladder, only bottom-level cost               1.09 (± 0.32)    0.90 (± 0.05)    0.59 (± 0.03)
    Ladder, full                                 1.06 (± 0.37)    0.84 (± 0.08)    0.57 (± 0.02)

4.1 MNIST dataset
For evaluating semi-supervised learning, we randomly split the 60 000 training samples into a 10 000-sample validation set and used M = 50 000 samples as the training set. From the training set, we randomly chose N = 100, 1000, or all labels for the supervised cost.1 All the samples were used
for the decoder which does not need the labels. The validation set was used for evaluating the model
structure and hyperparameters. We also balanced the classes to ensure that no particular class was
over-represented. We repeated the training 10 times varying the random seed for the splits.
After optimizing the hyperparameters, we performed the final test runs using all the M = 60 000
training samples with 10 different random initializations of the weight matrices and data splits. We
trained all the models for 100 epochs followed by 50 epochs of annealing.
4.1.1 Fully-connected MLP
A useful test for general learning algorithms is the permutation invariant MNIST classification task.
We chose the layer sizes of the baseline model to be 784-1000-500-250-250-250-10.
The hyperparameters we tuned for each model are the noise level that is added to the inputs and
to each layer, and denoising cost multipliers (l) . We also ran the supervised baseline model with
various noise levels. For models with just one cost multiplier, we optimized them with a search
grid {…, 0.1, 0.2, 0.5, 1, 2, 5, 10, …}. Ladder networks with a cost function on all layers have a
much larger search space and we explored it much more sparsely. For the complete set of selected
denoising cost multipliers and other hyperparameters, please refer to the code.
The results presented in Table 1 show that the proposed method outperforms all the previously
reported results. Encouraged by the good results, we also tested with N = 50 labels and got a test
error of 1.62 % (± 0.65 %).
The simple Γ-model also performed surprisingly well, particularly for N = 1000 labels. With
N = 100 labels, all models sometimes failed to converge properly. With bottom level or full cost
in Ladder, around 5 % of runs result in a test error of over 2 %. In order to be able to estimate the
average test error reliably in the presence of such random outliers, we ran 40 instead of 10 test runs
with random initializations.
1 In all the experiments, we were careful not to optimize any parameters, hyperparameters, or model choices
based on the results on the held-out test samples. As is customary, we used 10 000 labeled validation samples
even for those settings where we only used 100 labeled samples for training. Obviously this is not something
that could be done in a real case with just 100 labeled samples. However, MNIST classification is such an easy
task even in the permutation invariant case that 100 labeled samples there correspond to a far greater number
of labeled samples in many other datasets.
Table 2: CNN results for MNIST

    Test error without data augmentation % with # of used labels    100              all
    EmbedCNN [15]                                                   7.75
    SWWAE [24]                                                      9.17             0.71
    Baseline: Conv-Small, supervised only                           6.43 (± 0.84)    0.36
    Conv-FC                                                         0.99 (± 0.15)
    Conv-Small, Γ-model                                             0.89 (± 0.50)

4.1.2 Convolutional networks
We tested two convolutional networks for the general MNIST classification task and focused on the
100-label case. The first network was a straight-forward extension of the fully-connected network
tested in the permutation invariant case. We turned the first fully connected layer into a convolution
with 26-by-26 filters, resulting in a 3-by-3 spatial map of 1000 features. Each of the 9 spatial locations was processed independently by a network with the same structure as in the previous section,
finally resulting in a 3-by-3 spatial map of 10 features. These were pooled with a global meanpooling layer. We used the same hyperparameters that were optimal for the permutation invariant
task. In Table 2, this model is referred to as Conv-FC.
With the second network, which was inspired by ConvPool-CNN-C from Springenberg et al. [23],
we only tested the -model. The exact architecture of this network is detailed in the supplementary
material or Ref. [8]. It is referred to as Conv-Small since it is a smaller version of the network used
for CIFAR-10 dataset.
The results in Table 2 confirm that even the single convolution on the bottom level improves the
results over the fully connected network. More convolutions improve the Γ-model significantly although the variance is still high. The Ladder network with denoising targets on every level converges
much more reliably. Taken together, these results suggest that combining the generalization ability
of convolutional networks2 and efficient unsupervised learning of the full Ladder network would
have resulted in even better performance but this was left for future work.
4.2 Convolutional networks on CIFAR-10
The CIFAR-10 dataset consists of small 32-by-32 RGB images from 10 classes. There are 50 000
labeled samples for training and 10 000 for testing. We decided to test the simple Γ-model with
the convolutional architecture ConvPool-CNN-C by Springenberg et al. [23]. The main differences
to ConvPool-CNN-C are the use of Gaussian noise instead of dropout and the convolutional per-channel batch normalization following Ioffe and Szegedy [25]. For a more detailed description of
the model, please refer to model Conv-Large in the supplementary material.
The hyperparameters (noise level, denoising cost multipliers and number of epochs) for all models
were optimized using M = 40 000 samples for training and the remaining 10 000 samples for
validation. After the best hyperparameters were selected, the final model was trained with these
settings on all the M = 50 000 samples. All experiments were run with with 4 different random
initializations of the weight matrices and data splits. We applied global contrast normalization and
whitening following Goodfellow et al. [26], but no data augmentation was used.
The results are shown in Table 3. The supervised reference was obtained with a model closer to the
original ConvPool-CNN-C in the sense that dropout rather than additive Gaussian noise was used
for regularization.3 We spent some time in tuning the regularization of our fully supervised baseline
model for N = 4 000 labels and indeed, its results exceed the previous state of the art. This tuning
was important to make sure that the improvement offered by the denoising target of the Γ-model is
2 In general, convolutional networks excel in the MNIST classification task. The performance of the fully
supervised Conv-Small with all labels is in line with the literature and is provided as a rough reference only
(only one run, no attempts to optimize, not available in the code package).
3 Same caveats hold for this fully supervised reference result for all labels as with MNIST: only one run, no
attempts to optimize, not available in the code package.
Table 3: Test results for CNN on CIFAR-10 dataset without data augmentation

    Test error % with # of used labels        4 000            All
    All-Convolutional ConvPool-CNN-C [23]                      9.31
    Spike-and-Slab Sparse Coding [27]         31.9
    Baseline: Conv-Large, supervised only     23.33 (± 0.61)   9.27
    Conv-Large, Γ-model                       20.40 (± 0.47)
not a sign of a poorly regularized baseline model. Although the improvement is not as dramatic as
with MNIST experiments, it came with a very simple addition to standard supervised training.
5 Related Work
Early works in semi-supervised learning [28, 29] proposed an approach where inputs x are first
assigned to clusters, and each cluster has its class label. Unlabeled data would affect the shapes and
sizes of the clusters, and thus alter the classification result. Label propagation methods [30] estimate
P (y | x), but adjust probabilistic labels q(y(n)) based on the assumption that nearest neighbors are
likely to have the same label. Weston et al. [15] explored deep versions of label propagation.
There is an interesting connection between our Γ-model and the contractive cost used by Rifai et al. [16]: a linear denoising function ẑ_i(L) = a_i z̃_i(L) + b_i, where a_i and b_i are parameters, turns the denoising cost into a stochastic estimate of the contractive cost. In other words, our Γ-model seems to combine clustering and label propagation with regularization by contractive cost.
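A quick sanity check of this connection (our own expansion, not from the paper): writing z̃ = z + ε with ε ~ N(0, σ²), the expected cost of the linear denoiser is

    \mathbb{E}\,(\hat{z} - z)^2
      = \mathbb{E}\,\bigl((a - 1)z + b + a\varepsilon\bigr)^2
      = \mathbb{E}\,\bigl((a - 1)z + b\bigr)^2 + a^2 \sigma^2 ,

so the a²σ² term penalizes the gain a of the mapping under input corruption, which is the role a contractive penalty plays.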
Recently Miyato et al. [22] achieved impressive results with a regularization method that is similar
to the idea of contractive cost. They required the output of the network to change as little as possible
close to the input samples. As this requires no labels, they were able to use unlabeled samples for
regularization.
The Multi-prediction deep Boltzmann machine (MP-DBM) [5] is a way to train a DBM with backpropagation through variational inference. The targets of the inference include both supervised
targets (classification) and unsupervised targets (reconstruction of missing inputs) that are used in
training simultaneously. The connections through the inference network are somewhat analogous to
our lateral connections. Specifically, there are inference paths from observed inputs to reconstructed
inputs that do not go all the way up to the highest layers. Compared to our approach, MP-DBM requires an iterative inference with some initialization for the hidden activations, whereas in our case,
the inference is a simple single-pass feedforward procedure.
Kingma et al. [19] proposed deep generative models for semi-supervised learning, based on variational autoencoders. Their models can be trained with the variational EM algorithm, stochastic
gradient variational Bayes, or stochastic backpropagation. Compared with the Ladder network, an
interesting point is that the variational autoencoder computes the posterior estimate of the latent
variables with the encoder alone while the Ladder network uses the decoder too to compute an implicit posterior approximation (the encoder provides the likelihood part which gets combined with the
prior).
Zeiler et al. [31] train deep convolutional autoencoders in a manner comparable to ours. They define
max-pooling operations in the encoder to feed the max function upwards to the next layer, while the
argmax function is fed laterally to the decoder. The network is trained one layer at a time using a
cost function that includes a pixel-level reconstruction error, and a regularization term to promote
sparsity. Zhao et al. [24] use a similar structure and call it the stacked what-where autoencoder
(SWWAE). Their network is trained simultaneously to minimize a combination of the supervised
cost and reconstruction errors on each level, just like ours.
6 Discussion
We showed how a simultaneous unsupervised learning task improves CNN and MLP networks
reaching the state-of-the-art in various semi-supervised learning tasks. Particularly the performance
obtained with very small numbers of labels is much better than previous published results which
shows that the method is capable of making good use of unsupervised learning. However, the same
model also achieves state-of-the-art results and a significant improvement over the baseline model
with full labels in permutation invariant MNIST classification which suggests that the unsupervised
task does not disturb supervised learning.
The proposed model is simple and easy to implement with many existing feedforward architectures,
as the training is based on backpropagation from a simple cost function. It is quick to train and the
convergence is fast, thanks to batch normalization.
Not surprisingly, the largest improvements in performance were observed in models which have a
large number of parameters relative to the number of available labeled samples. With CIFAR-10,
we started with a model which was originally developed for a fully supervised task. This has the
benefit of building on existing experience but it may well be that the best results will be obtained
with models which have far more parameters than fully supervised approaches could handle.
An obvious future line of research will therefore be to study what kind of encoders and decoders are
best suited for the Ladder network. In this work, we made very little modifications to the encoders
whose structure has been optimized for supervised learning and we designed the parametrization of
the vertical mappings of the decoder to mirror the encoder: the flow of information is just reversed.
There is nothing preventing the decoder to have a different structure than the encoder.
An interesting future line of research will be the extension of the Ladder networks to the temporal domain. While there exist datasets with millions of labeled samples for still images, it is prohibitively
costly to label thousands of hours of video streams. The Ladder networks can be scaled up easily
and therefore offer an attractive approach for semi-supervised learning in such large-scale problems.
Acknowledgements
We have received comments and help from a number of colleagues who would all deserve to be
mentioned but we wish to thank especially Yann LeCun, Diederik Kingma, Aaron Courville, Ian
Goodfellow, S?ren S?nderby, Jim Fan and Hugo Larochelle for their helpful comments and suggestions. The software for the simulations for this paper was based on Theano [32] and Blocks [33].
We also acknowledge the computational resources provided by the Aalto Science-IT project. The
Academy of Finland has supported Tapani Raiko.
References
[1] Harri Valpola. From neural PCA to deep unsupervised learning. In Adv. in Independent Component Analysis and Learning Machines, pages 143–171. Elsevier, 2015. arXiv:1411.7783.
[2] Steven C. Suddarth and Y.L. Kergosien. Rule-injection hints as a means of improving network performance and learning time. In Proceedings of the EURASIP Workshop 1990 on Neural Networks, pages 120–129. Springer, 1990.
[3] Marc'Aurelio Ranzato and Martin Szummer. Semi-supervised learning of compact document representations with deep networks. In Proc. of ICML 2008, pages 792–799. ACM, 2008.
[4] Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with convolutional neural networks. In Advances in Neural Information Processing Systems 27 (NIPS 2014), pages 766–774, 2014.
[5] Ian Goodfellow, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Multi-prediction deep Boltzmann machines. In Advances in Neural Information Processing Systems 26 (NIPS 2013), pages 548–556, 2013.
[6] Geoffrey E. Hinton and Ruslan R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[7] Antti Rasmus, Tapani Raiko, and Harri Valpola. Denoising autoencoder with modulated lateral connections learns invariant representations of natural images. arXiv:1412.7210, 2015.
[8] Antti Rasmus, Harri Valpola, Mikko Honkala, Mathias Berglund, and Tapani Raiko. Semi-supervised learning with ladder networks. arXiv preprint arXiv:1507.02672, 2015.
[9] Yoshua Bengio, Li Yao, Guillaume Alain, and Pascal Vincent. Generalized denoising auto-encoders as generative models. In Advances in Neural Information Processing Systems 26 (NIPS 2013), pages 899–907, 2013.
[10] Jocelyn Sietsma and Robert J.F. Dow. Creating artificial neural networks that generalize. Neural Networks, 4(1):67–79, 1991.
[11] Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. JMLR, 11:3371–3408, 2010.
[12] Jaakko Särelä and Harri Valpola. Denoising source separation. JMLR, 6:233–272, 2005.
[13] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML), pages 448–456, 2015.
[14] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In the International Conference on Learning Representations (ICLR 2015), San Diego, 2015. arXiv:1412.6980.
[15] Jason Weston, Frédéric Ratle, Hossein Mobahi, and Ronan Collobert. Deep learning via semi-supervised embedding. In Neural Networks: Tricks of the Trade, pages 639–655. Springer, 2012.
[16] Salah Rifai, Yann N. Dauphin, Pascal Vincent, Yoshua Bengio, and Xavier Muller. The manifold tangent classifier. In Advances in Neural Information Processing Systems 24 (NIPS 2011), pages 2294–2302, 2011.
[17] Dong-Hyun Lee. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on Challenges in Representation Learning, ICML 2013, 2013.
[18] Nikolaos Pitelis, Chris Russell, and Lourdes Agapito. Semi-supervised learning using an unsupervised atlas. In Machine Learning and Knowledge Discovery in Databases (ECML PKDD 2014), pages 565–580. Springer, 2014.
[19] Diederik P. Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems 27 (NIPS 2014), pages 3581–3589, 2014.
[20] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 15(1):1929–1958, 2014.
[21] Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In the International Conference on Learning Representations (ICLR 2015), 2015. arXiv:1412.6572.
[22] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. Distributional smoothing by virtual adversarial examples. arXiv:1507.00677, 2015.
[23] Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin A. Riedmiller. Striving for simplicity: The all convolutional net. arXiv:1412.6806, 2014.
[24] Junbo Zhao, Michael Mathieu, Ross Goroshin, and Yann LeCun. Stacked what-where auto-encoders. 2015. arXiv:1506.02351.
[25] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv:1502.03167, 2015.
[26] Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. In Proc. of ICML 2013, 2013.
[27] Ian Goodfellow, Yoshua Bengio, and Aaron C. Courville. Large-scale feature learning with spike-and-slab sparse coding. In Proc. of ICML 2012, pages 1439–1446, 2012.
[28] G. McLachlan. Iterative reclassification procedure for constructing an asymptotically optimal rule of allocation in discriminant analysis. J. American Statistical Association, 70:365–369, 1975.
[29] D. Titterington, A. Smith, and U. Makov. Statistical analysis of finite mixture distributions. In Wiley Series in Probability and Mathematical Statistics. Wiley, 1985.
[30] Martin Szummer and Tommi Jaakkola. Partially labeled classification with Markov random walks. Advances in Neural Information Processing Systems 15 (NIPS 2002), 14:945–952, 2003.
[31] Matthew D. Zeiler, Graham W. Taylor, and Rob Fergus. Adaptive deconvolutional networks for mid and high level feature learning. In ICCV 2011, pages 2018–2025. IEEE, 2011.
[32] Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
[33] Bart van Merriënboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio. Blocks and fuel: Frameworks for deep learning. CoRR, abs/1506.00619, 2015. URL http://arxiv.org/abs/1506.00619.
5,466 | 5,948 | Enforcing balance allows local supervised learning in
spiking recurrent networks
Sophie Deneve
Group For Neural Theory, ENS Paris
Rue d'Ulm, 29, Paris, France
sophie.deneve@ens.fr
Ralph Bourdoukan
Group For Neural Theory, ENS Paris
Rue d'Ulm, 29, Paris, France
ralph.bourdoukan@gmail.com
Abstract
To predict sensory inputs or control motor trajectories, the brain must constantly learn temporal dynamics based on error feedback. However, it remains
unclear how such supervised learning is implemented in biological neural networks. Learning in recurrent spiking networks is notoriously difficult because local changes in connectivity may have an unpredictable effect on the global dynamics. The most commonly used learning rules, such as temporal back-propagation,
are not local and thus not biologically plausible. Furthermore, reproducing the
Poisson-like statistics of neural responses requires the use of networks with balanced excitation and inhibition. Such balance is easily destroyed during learning.
Using a top-down approach, we show how networks of integrate-and-fire neurons can learn arbitrary linear dynamical systems by feeding back their error as
a feed-forward input. The network uses two types of recurrent connections: fast
and slow. The fast connections learn to balance excitation and inhibition using a
voltage-based plasticity rule. The slow connections are trained to minimize the
error feedback using a current-based Hebbian learning rule. Importantly, the balance maintained by fast connections is crucial to ensure that global error signals
are available locally in each neuron, in turn resulting in a local learning rule for
the slow connections. This demonstrates that spiking networks can learn complex
dynamics using purely local learning rules, using E/I balance as the key rather
than an additional constraint. The resulting network implements a given function
within the predictive coding scheme, with minimal dimensions and activity.
The brain constantly predicts relevant sensory inputs or motor trajectories. For example, there is
evidence that neural circuits mimic the dynamics of motor effectors using internal models [1]. If the
dynamics of the predicted sensory and motor variables change in time, these models may become
false [2] and therefore need to be readjusted through learning based on error feedback.
From a modeling perspective, supervised learning in recurrent networks faces many challenges.
Earlier models have succeeded in learning useful functions at the cost of non local learning rules
that are biologically implausible [3, 4]. More recent models based on reservoir computing [5-7]
transfer the learning from the recurrent network (with now "random", fixed weights) to the readout
weights. Using this simple scheme, the network can learn to generate complex patterns. However,
the majority of these models use abstract rate units and are yet to be translated into more realistic
spiking networks. Moreover, to provide a sufficiently large reservoir, the recurrent network needs
to be large, balanced and have a rich and high dimensional dynamics. This typically generates far
more activity than strictly required, a redundancy that can be seen as inefficient.
On the other hand, supervised learning models involving spiking neurons have essentially concentrated on the learning of precise spike sequences [8-10]. With some exceptions [10, 11] these models
use feed-forward architectures [12]. In a balanced recurrent network with asynchronous, irregular
and highly variable spike trains, such as those found in cortex, the activity has been shown to be
chaotic [13, 14]. This leads to spike timing being intrinsically unreliable, rendering a representation
of the trajectory by precise spike sequences problematic. Moreover, many configurations of spike
times may achieve the same goal [15].
Here we derive two local learning rules that drive a network of leaky integrate-and-fire (LIF) neurons into implementing a desired linear dynamical system. The network is trained to minimize the
objective $\|x(t) - \hat{x}(t)\|^2 + H(r)$, where $\hat{x}(t)$ is the output of the network decoded from the spikes,
$x(t)$ is the desired output, and $H(r)$ is a cost associated with firing (penalizing unnecessary activity, and thus enforcing efficiency). The dynamical system is linear, $\dot{x} = Ax + c$, with $A$ being
a constant matrix and $c$ a time-varying command signal. We first study the learning of an autoencoder, i.e., a network where the desired output is fed to the network as a feedforward input. The
autoencoder learns to represent its inputs as precisely as possible in an unsupervised fashion. After
learning, each unit represents the encoding error made by the entire network. We then show that
the network can learn more complex computations if slower recurrent connections are added to the
autoencoder. Thus, it receives the command $c$ along with an error signal and learns to generate the
output $\hat{x}$ with the desired temporal dynamics. Despite the spike-based nature of the representation
and of the plasticity rules, the learning does not enforce precise spike timing trajectories but, on the
contrary, enforces irregular and highly variable spike trains.
1 Learning a balance: global becomes local
Using a predictive coding strategy [15-17], we build a network that learns to efficiently represent its
inputs while expending the least amount of spikes. To introduce the learning rules and explain how
they work, we start by describing the optimized network (after learning).
Let us first consider a set of unconnected integrate-and-fire neurons receiving shared input signals
x = (x_i) through feedforward connections $F = (F_{ji})$. We assume that the network performs predictive coding, i.e., it subtracts from each of these input signals an estimate $\hat{x}$ obtained by decoding the
output spike trains (fig 1A). Specifically, $\hat{x}_i = \sum_j D_{ij} r_j$, where $D = (D_{ij})$ are the decoding weights
and $r = (r_j)$ are the filtered spike trains, which obey $\dot{r}_j = -\lambda r_j + o_j$ with $o_j(t) = \sum_k \delta(t - t_j^k)$
being the spike train of neuron $j$ and $t_j^k$ the times of its spikes. Note that such an autoencoder
automatically maintains an accurate representation, because it responds to any encoding error larger
than the firing threshold by increasing its response and in turn decreasing the error. It is also efficient, because neurons respond only when input and decoded signals differ. The autoencoder can be
equivalently implemented by lateral connections, rather than feedback targeting the inputs (fig 1A).
These lateral connections combine the feedforward connections and the decoding weights and they
subtract from the feedforward inputs received by each neuron. The membrane potential dynamics
in this recurrent network are described by:
$\dot{V} = -\lambda V + Fs + Wo \qquad (1)$
where $V$ is the vector of membrane potentials of the population, $s = \dot{x} + \lambda x$ is the effective
input to the population, $W = -FD$ is the connectivity matrix, and $o$ is the population spike vector. Neuron $i$ has threshold $T_i = \|F_i\|^2/2$ [15]. When the input channels are independent and the
feed-forward weights are distributed uniformly on a sphere, the optimal decoding weights $D$ are
equal to the encoding weights $F$, and hence the optimal recurrent connectivity is $W = -FF^T$ [17].
In the following we assume that this is always the case and we choose the feedforward weights
accordingly.
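To make these dynamics concrete, below is a minimal NumPy sketch of the optimized autoencoder (equation 1) with Euler integration. The step size, network size, input signal, and the one-spike-per-step convention are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, dt, lam = 20, 2, 1e-3, 50.0           # neurons, signal dims, time step, decoder leak

F = rng.standard_normal((N, M))
F /= np.linalg.norm(F, axis=1, keepdims=True)   # feedforward weights on the unit sphere
W = -F @ F.T                                    # optimal fast connectivity W = -F F^T
T = 0.5 * np.sum(F**2, axis=1)                  # thresholds T_i = ||F_i||^2 / 2

V = np.zeros(N)                                 # membrane potentials
r = np.zeros(N)                                 # filtered spike trains
x = np.array([1.0, -0.5])                       # a constant input signal (so dx/dt = 0)

for step in range(20000):
    s = lam * x                                 # effective input s = dx/dt + lam * x
    o = np.zeros(N)
    i = np.argmax(V - T)                        # at most one spike per step (a common convention)
    if V[i] > T[i]:
        o[i] = 1.0
    V += dt * (-lam * V + F @ s) + W @ o        # eq. (1); fast synapses act instantaneously
    r += dt * (-lam * r) + o                    # decoder dynamics: r' = -lam * r + o

x_hat = F.T @ r                                 # decoded estimate (D = F here)
print(np.round(x_hat, 2))                       # should lie close to x
```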
In this auto-encoding scheme having a precise representation of the inputs is equivalent to maintaining a precise balance between excitation and inhibition. In fact, the membrane potential of a
neuron is the projection of the global error of the network on the neuron's feedforward weight
($V_i = F_i(x - \hat{x})$ [15]). If the output of the network matches the input, the recurrent term in the
membrane potential, $F_i \hat{x}$, should precisely cancel the feedforward term $F_i x$. Therefore, in order to
learn the connectivity matrix W, we tackle the problem through balance, which is its physiological
characterization. The learning rule that we derive achieves efficient coding by enforcing a precise
balance at a single neuron level. The learning rule makes the network converge to a state where each
presynaptic spike cancels the recent charge that was accumulated by the postsynaptic neuron (Fig
1B). This accumulation of charge is naturally represented by the postsynaptic membrane potential
Vi , which jumps upon the arrival of a presynaptic spike by a magnitude given by the recurrent weight
[Figure 1 graphic omitted: panels A-E; see caption below.]
Figure 1: A: a network performing predictive coding. Top panel: a set of unconnected leaky
integrate-and-fire neurons receiving the error between a signal and their own decoded spike trains.
Bottom panel: the previous architecture is equivalent to the recurrent network with lateral connections equal to the product of the encoding and the decoding weights. B: illustration of the learning
of an inhibitory weight. The trace of the membrane potential of a postsynaptic neuron is shown in
blue and red. The blue lines correspond to changes due to the integration of the feedforward input,
and the red to changes caused by the integration of spikes from neurons in the population. The black
line represents the resting potential of the neuron. In the right panel the presynaptic spike perfectly
cancels the accumulated feedforward current during a cycle and therefore there is no learning. In the
left panel the inhibitory weight is too strong and thus creates imbalance in the membrane potential;
therefore, it is depressed by learning. C: learning in a 20 neuron network. Top panels: the two
dimensions of the input (blue lines) and the output (red lines) before (left) and after (right) learning.
Bottom panels: raster plots of the spikes in the population. D: left panel: after learning each neuron
receives a local estimate of the output of the network through lateral connections (red arrows). right
panel: scatter plot of the output of the network projected on the feedforward weights of the neurons
versus the recurrent input they receive. E: the evolution of the mean error between the recurrent
weights of the network and the optimal recurrent weights $-FF^T$ using the rule defined by equation
2 (black line) and the rule in [16] (gray line). Note that our rule is different from [16] because it
operates on a a finer time-scale and reaches the optimal balanced state with more than one order
of magnitude faster. This speed-up is important because, as we will see below, some computations
require a very fast restoration of this balance.
Wij due to the instantaneous nature of recurrent synapses. Because the two charges should cancel
each other, the greedy learning rule is proportional to the sum of both quantities:
$\Delta W_{ij} \propto -(V_i + \mu W_{ij}) \qquad (2)$
where $V_i$ is the membrane potential of the postsynaptic neuron, $W_{ij}$ is the recurrent weight from
neuron $j$ to neuron $i$, and the factor $\mu$ controls the overall magnitude of lateral weights and, therefore,
the total spike count in the population. More importantly, $\mu$ regularizes the cost penalizing the total
spike count (i.e., $H(r) = \mu \sum_i r_i$, where $\mu$ is the effective linear cost [15]). The
example of an inhibitory synapse Wij < 0 is illustrated in figure 1B. If neuron i is too hyperpolarized
upon the arrival of a presynaptic spike from neuron j, i.e., if the inhibitory weight Wij is smaller
than $-V_i/\mu$, the absolute weight of the synapse (the amplitude of the IPSP) is decreased. The
opposite occurs if the membrane is too depolarized. The synaptic weights thus converge when the
two quantities balance each other on average, $W_{ij} = -\langle V_i \rangle_{t_j}/\mu$, where $t_j$ are the spike times of the
presynaptic neuron j.
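Inside such a simulation loop, the fast-weight update of equation 2 can be applied at presynaptic spike times; the sketch below continues the notation of the previous snippet, with the learning rate eta_f and the value of mu as assumptions.

```python
# Fast plasticity (eq. 2): when neuron j fires, each weight W[i, j] moves by
# dW_ij ∝ -(V_i + mu * W_ij), so W_ij converges to -<V_i>_{t_j} / mu on average.
eta_f, mu = 0.01, 0.51
for j in np.flatnonzero(o):                 # presynaptic neurons that just spiked
    W[:, j] -= eta_f * (V + mu * W[:, j])
```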
Fig 1C shows the learning in a 20-neuron network receiving random input signals. For illustration
purposes the weights are initialized with very small values. Before learning, the lack of lateral
connectivity causes neurons to fire synchronously and regularly. After learning, spike trains are
sparse, irregular and asynchronous, despite the quasi absence of noise in the network. Even though
the firing rates decrease globally, the quality of the input representation drastically improves over
the course of learning. Moreover, the convergence of recurrent weights to their optimal values is
typically quick and monotonic (Fig 1E).
By enforcing balance, the learning rule establishes an efficient and reliable communication between
neurons. Because $V = Fx - FF^T r = F(x - \hat{x})$, every neuron has access, through its recurrent
input, to the network's global coding error projected on its feedforward weight (Fig 1D). This local
representation of the network's global performance is crucial in the supervised learning scheme we
describe in the following sections.
2 Generating temporal dynamics within the network
While in the previous section we presented a novel rule that drives a spiking network into efficiently
representing its inputs, we are generally interested in networks that perform more complex computations. It has been shown already that a network having two synaptic time scales can implement an
arbitrary linear dynamical system [15]. We briefly summarize this approach in this section.
[Figure 2 graphic omitted: panels A-E; see caption below.]
Figure 2: The construction of a recurrent network that implements a linear dynamical system.
In the autoencoder presented above, the effective input to the network is $s = \dot{x} + \lambda x$ (Fig 2A). We
assume that $x$ follows linear dynamics $\dot{x} = Ax + c$, where $A$ is a constant matrix and $c(t)$ is a time-varying command. Thus, the input can be expanded to $s = Ax + c + \lambda x = (A + \lambda I)x + c$ (Fig
2B). Because the output of the network $\hat{x}$ approximates $x$ very precisely, they can be interchanged.
According to this self-consistency argument, the external input term $(A + \lambda I)x$ is replaced by
$(A + \lambda I)\hat{x}$, which only depends on the activity of the network (Fig 2C). This replacement amounts
to including a global loop that adds the term $(A + \lambda I)\hat{x}$ to the source input (Fig 2D). As in the
autoencoder, this can be achieved using recurrent connections of the form $F(A + \lambda I)F^T$ (Fig
2E). Note that this recurrent input is the filtered spike train r, not the raw spikes o. As a result, these
new connections have slower dynamics than the connections presented in the first section. This
motivates us to characterize connections as fast and slow depending on their underlying dynamics.
The dynamics of the membrane potentials are now described by:
$\dot{V} = -\lambda_V V + Fc + W_s r + W_f o \qquad (3)$
where $\lambda_V$ is the leak in the membrane potential; it is different from the leak $\lambda$ in the decoder. It is
clear from the previous construction that the slow connectivity $W_s = F(A + \lambda I)F^T$ is involved
in generating the temporal dynamics of x. Owing to the slow connections, the network is able to
generate autonomously the temporal dynamics of the output and thus, only needs the command
c as an external input. For example, if A = 0 (i.e. the network implements a pure integrator),
$W_s = \lambda FF^T$ compensates for the leak in the decoder by generating a positive feedback term that
prevents the activity from decaying. On the other hand, the fast connectivity matrix $W_f = -FF^T$,
trained with the unsupervised, voltage-based rule presented previously, plays the same role as in the
autoencoder; it ensures that the global output and the global coding error of the network are available
locally to each neuron.
3 Teaching the network to implement a desired dynamical system
Our aim is to develop a supervised learning scheme where a network learns to generate a desired
output using an error feedback as well as a local learning rule. The learning rule targets the slow
recurrent connections responsible for the generation of the temporal dynamics in the output, as seen
in the previous section. Instead of deriving directly the learning rule for the recurrent connections,
we first derive a learning rule for the matrix A of the linear dynamical system using simple results
from control theory, and then we translate the learning to the recurrent network.
3.1 Learning a linear dynamical system online
Consider the linear dynamical system $\dot{\hat{x}} = M\hat{x} + c$, where $M$ is a matrix. We derive an online
learning rule for the coefficients of the matrix $M$ such that the output $\hat{x}$ becomes, after learning,
equal to the desired output $x$. The latter undergoes the dynamics $\dot{x} = Ax + c$. Therefore, we define
$e = x - \hat{x}$ as the error vector between the actual and the desired output. This error is fed to the
mistuned system in order to correct and "guide" its behavior (Fig 3A). Thus, the dynamics of the
system with this feedback are $\dot{\hat{x}} = M\hat{x} + c + K(x - \hat{x})$, where $K$ is a scalar implementing the gain
of the loop. The previous equation can be rewritten in the following form:
$\dot{\hat{x}} = (M - KI)\hat{x} + c + Kx \qquad (4)$
where $I$ is the identity matrix. If we assume that the spectra of the signals are bounded, it is straightforward to show, via a Laplace transform, that $\hat{x} \to x$ when $K \to +\infty$. The larger the gain of the
feedback, the smaller the error. Intuitively, if K is large, very small errors are immediately detected
and therefore, corrected by the system. Nevertheless our aim is not to correct the dynamical system
forever, but to teach it to generate the desired output itself without the error feedback. Thus, the
matrix M needs to be modified over time. To derive the learning rule for the matrix M, we operate
a gradient descent on the loss function $L = e^T e = \|x - \hat{x}\|^2$ with respect to the components of the
matrix. The component $M_{ij}$ is updated proportionally to the gradient of $L$,
$\Delta M_{ij} = -\frac{\partial L}{\partial M_{ij}} = \left(\frac{\partial \hat{x}}{\partial M_{ij}}\right)^T e \qquad (5)$
To evaluate the term $\partial \hat{x}/\partial M_{ij}$, we solve equation 4 in the simple case where the inputs $c$ are constant. If we assume that $K$ is much larger than the eigenvalues of $M$, the gradient $\partial \hat{x}/\partial M_{ij}$ is
approximated by $E_{ij}\hat{x}$, where $E_{ij}$ is a matrix of zeros except for component $ij$, which is one. This
leads to the very simple learning rule $\Delta M_{ij} \propto \hat{x}_j e_i$, which we can write in matrix form as:
$\Delta M \propto e\hat{x}^T \qquad (6)$
The learning rule is simply the outer product of the output and the error. To derive the learning rule
we assume constant or slowly varying input. In practice, however, learning can be achieved also
using fast varying inputs (Fig 3).
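As a sanity check of equations 4 and 6, the sketch below trains the state matrix M of a two-dimensional system under strong error feedback; the feedback gain, learning rate, integration step, and command statistics are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.0, 1.0], [-1.0, -0.1]])      # target dynamics: a damped 2D oscillator
M = 0.1 * rng.standard_normal((2, 2))         # mistuned system to be trained
K, dt, eta = 100.0, 1e-3, 0.05                # feedback gain, time step, learning rate

x = np.zeros(2)
x_hat = np.zeros(2)
c = np.zeros(2)
for step in range(500000):
    if step % 2000 == 0:                      # piecewise-constant random command
        c = 0.5 * rng.standard_normal(2)
    x += dt * (A @ x + c)                     # desired dynamics
    e = x - x_hat                             # error feedback
    x_hat += dt * (M @ x_hat + c + K * e)     # guided system (eq. 4)
    M += eta * dt * np.outer(e, x_hat)        # gradient rule dM ∝ e x_hat^T (eq. 6)

print(np.round(M, 2))                         # should approach A as learning proceeds
```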
3.2 Learning rule for the slow connections
In the previous section we derived a simple learning rule for the state matrix M of a linear dynamical
system, driving it into a desired regime. We translate this learning scheme to the recurrent network
described in section 2. To do this, two things have to be determined. First, we have to define the
form of the error feedback in the recurrent network case. Second, we need to adapt the learning
rule of the matrix of the underlying dynamical system to the slow weights of the recurrent neural
network.
In the previous learning scheme the error is fed to the dynamical system as an additional input. Since
the input/decoding weight vector of a neuron $F_i$ defines the direction that is relevant for its "action"
space, the neuron should only receive the errors that are in this direction. Thus, the error vector is
projected on the feedforward weights vector of a neuron before being fed to it. The feedback weights
matrix is then simply equal to the feedforward weights matrix F (Fig 3A). Accordingly, equation 3
becomes:
$\dot{V} = -\lambda_V V + Fc + W_s r + W_f o + KFe \qquad (7)$
In the autoencoder, the membrane potential of a neuron represents the auto-coding error made by
the entire network along the direction of the neuron?s feedforward weights. With the addition of the
dynamic error feedback and the slow connections, the membrane potentials now represent the error
between obtained and desired network output trajectories.
To translate the learning rule of the dynamical system into a rule for the recurrent network, we assume that any modification of the recurrent weights directly reflects a modification in the underlying
dynamical system. This is achieved if the updates $\Delta W_s$ of the slow connectivity matrix are of the
form $F(\Delta M)F^T$. This ensures that the network always implements a linear dynamical system and
guarantees that the analysis is consistent. The learning rule of the slow connections $W_s$ is obtained
by replacing $\Delta M$ by its expression according to equation 6 in $F(\Delta M)F^T$:
$\Delta W_s \propto (Fe)(F\hat{x})^T \qquad (8)$
According to this learning rule, the weight update between two neurons, $\Delta W_{ij}^s$, is proportional to
the error feedback $F_i e$ received as a current by the postsynaptic neuron $i$ and to $F_j \hat{x}$, the output of
the network projected on the feedforward weight of the presynaptic neuron j. The latter quantity is
available to the presynaptic neuron through its inward fast recurrent connections, as shown for the
autoencoder in Fig 1D.
One might object that the previous learning rule is not biologically plausible because it involves
currents present separately in the pre- and post-synaptic neurons. Indeed, the presynaptic term may
not be available to the synapse. However, as shown in the supplementary information of [15], the
filtered spike train $r_j$ of the presynaptic neuron is approximately proportional to $\lfloor F_j \hat{x} \rfloor_+$, a rectified
version of the presynaptic term in the previous learning rule. By replacing $F_j \hat{x}$ by $r_j$ in equation
8 we obtain the following biologically plausible learning rule:
$\Delta W_{ij}^s = E_i r_j \qquad (9)$
where $E_i = F_i e$ is the total error current received by the postsynaptic neuron.
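In code, the slow-weight update of equation 9 is a single outer product between the per-neuron error currents and the filtered spike trains; a sketch continuing the earlier notation (eta_s and the initialization of W_s are assumptions):

```python
# Slow plasticity (eq. 9): dW_s[i, j] = E_i * r_j, with E = F @ e the error
# currents injected into the neurons. W_s is assumed initialized (e.g., to zeros).
eta_s = 1e-3
E = F @ e                        # E_i = F_i . e, the error current into neuron i
W_s += eta_s * np.outer(E, r)    # Hebbian in (error current) x (presynaptic rate)
```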
3.3 Learning the underlying dynamical system while maintaining balance
For the previous analysis to hold, the fast connectivity Wf should be learned simultaneously with
the slow connections using the learning rule defined by equation 2. As shown in the first section,
the learning of the fast connections establishes a detailed balance on the level of the neuron and
guarantees that the output of the network is available to each neuron through the term $F_j \hat{x}$. The
latter is the presynaptic term in the learning rule of equation 8. Despite not being involved in the
dynamics per se, these fast connections are crucial in order to learn any temporal dynamics. In other
words, learning a detailed balance is a pre-requirement to learn dynamics with local plasticity rules
in a spiking network. The plasticity of the fast connections restores very quickly any perturbation to
the balance caused by the learning of the slow connections.
3.4 Simulation
As a toy example, we simulated a 20-neuron network learning a 2D damped oscillator using a feedback gain K = 100. The network is initialized with weak fast connections and weak slow connections. The learning is driven by smoothed Gaussian noise as the command c. Note that in the initial
state, because of the absence of fast recurrent connections, the output of the network does not depend linearly on the input because membrane potentials are hyperpolarized (Fig 3B). The network's
output is quickly linearized through the learning of the fast connections (equation 2) by enforcing a
[Figure 3 graphic omitted: panels A-D; see caption below.]
Figure 3: Learning temporal dynamics in a recurrent network. A, Top panel: the linear dynamical
system characterized by the state matrix M, receives feedback signaling the difference between its
actual output and a desired output. Bottom panel: a recurrent network displaying slow and fast
connections is equivalent to the top architecture if the error feedback is fed into the network through
the feedforward matrix F. B: a 20 neuron network learns using equations 9 and 2. Left panel: the
evolution of the error between the desired and the actual output during learning. The black and
grey arrows represent instances where the time course of the membrane potential is shown in the
next plot. Right panel: the time course of the membrane potential of one neuron at two different
instances during learning. The gray line corresponds to the initial state while the black line is a few
iterations after. C: scatter plots of the learned versus the predicted weights at the end of learning for
fast (top panel) and slow (bottom panel) connections. D, top panels: the output of the network (red)
and the desired output (blue), before (left) and after (right) learning. The black solid line on the top
shows the impulse command that drives the network. Bottom panels: raster plots before and after
learning. In the left raster plot there is no spiking activity after the first 50 ms.
balance on the membrane potential (Fig 3B): initial membrane potentials exhibit large fluctuations
which reduce drastically after a few iterations (Fig 3B). On a slower time scale the slow connections learn to minimize the prediction error using the learning rule of equation 9. The error between
the output of the network and the desired output decreases drastically (Fig 3B). To compute this
error, different instances of the connectivity matrices were sampled during learning. The network
was then re-simulated using these instances while fixing K = 0 in order to measure the performance
in the absence of feedback. At the end of learning the slow and fast connections converge to their
predicted values $W_s = F(A + \lambda I)F^T$ and $W_f = -FF^T$ (Fig 3C). The presence of the feedback
is no longer required for the network to have the right dynamics (i.e. we set K = 0 and obtain the
desired output (Fig 3D and 3B). The output of the network is very accurate (representing the state
x with a precision of the order of the contribution of a single spike), parsimonious (i.e. it does not
spend more spikes than needed to represent the dynamical state with this level of accuracy) and the
spike trains are asynchronous and irregular. Note that because the slow connections are very weak
in the initial state, spiking activity decays quickly after the end of the command impulse due to the
absence of slow recurrent excitation (Fig 3D).
Simulation parameters. Figure 1: $\lambda = 0.05$, $\mu = 0.51$, learning rate: 0.01. Figure 3: $\lambda = 50$,
$\lambda_V = 1$, $\mu = 0.52$, $K = 100$, learning rate of the fast connections: 0.03, learning rate of the slow
connections: 0.15.
4 Discussion
Using a top-down approach we derived a pair of spike-based and current-based plasticity rules that
enable precise supervised learning in a recurrent network of LIF neurons. The essence of this approach is that every neuron is a precise computational unit that represents the network error in a
subspace of dimension 1 in the output space. The precise and distributed nature of this code
allows the derivation of local learning rules from global objectives.
To compute collectively, the neurons need to communicate to each other about their contributions to
the output of the network. The fast connections are trained in an unsupervised fashion using a spikebased rule to optimize this communication. It establishes this efficient communication by enforcing
a detailed balance between excitation and inhibition. The slow connections however are trained to
minimize the error between the actual output of the network and a target dynamical system. They
produce currents with long temporal correlations implementing the temporal dynamics of the underlying linear dynamical system. The plasticity rule for the slow connections is simply proportional
to an error feedback injected as a current in the postsynaptic neuron, and to a quantity akin to the
firing rate of the presynaptic neuron. To guide the behavior of the network during learning, the error
feedback must be strong and specific. Such strength and specialization is in agreement with data
on climbing fibers in the cerebellum [18?20], who are believed to bring information about errors
during motor learning [21]. However, in this model, the specificity of the error signals are defined
by a weight matrix through which the errors are fed to the neurons. Learning these weights is still
under investigation. We believe that they could be learned using a covariance-based rule.
Our approach is substantially different form usual supervised learning paradigms in spiking networks since it does not target the spike times explicitly. However, observing spike times may be
misleading since there are many combinations that can produce the same output [15, 16]. Thus, in
this framework, variability in spiking is not a lack of precision but is the consequence of the redundancy in the representation. Neurons having similar decoding weights may have their spike times
interchanged while the global representation is conserved. What is important is the cooperation
between the neurons and the precise spike timing relative to the population. For example, using independent Poisson neurons with instantaneous firing rates identical to the predictive coding network
drastically degrades the quality of the representation [15].
Our approach is also different from liquid computing in the sense that the network is small, structured, and fires only when needed. In addition, in these studies the feedback error used in the
learning rule has no clear physiological correlate, while here it is concretely injected as a current in
the neurons. This current is used simultaneously to drive the learning rule and to guide the dynamics
of the neuron in the short term. However, it is still unclear what the mechanisms are that could
implement such a current dependent learning rule in biological neurons.
An obvious limitation of our framework is that it is currently restricted to linear dynamical systems.
One possibility to overcome this limitation would be to introduce non-linearities in the decoder,
which would translate into specific non-linearities and structures in the dendrites. A similar strategy
has been employed recently to combine the approach of predictive coding and FORCE learning [7]
using two compartment LIF neurons [22]. We are currently exploring less constraining forms of
synaptic non-linearities, with the ultimate goal of being able to learn arbitrary dynamics in spiking
networks using purely local plasticity rules.
Acknowledgments
This work was supported by ANR-10-LABX-0087 IEC, ANR-10-IDEX-0001-02 PSL, ERC grant
FP7-PREDISPIKE and the James McDonnell Foundation Award - Human Cognition.
References
[1] Kawato, M. (1999). Internal models for motor control and trajectory planning. Current opinion
in neurobiology, 9(6), 718-727.
[2] Lackner, J. R., & Dizio, P. (1998). Gravitoinertial force background level affects adaptation to
Coriolis force perturbations of reaching movements. Journal of Neurophysiology, 80(2), 546-553.
[3] Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1988). Learning representations by backpropagating errors. Cognitive modeling, 5.
[4] Williams, R. J., & Zipser, D. (1989). A learning algorithm for continually running fully recurrent neural networks. Neural computation, 1(2), 270-280.
[5] Jaeger, H. (2001). The echo state approach to analysing and training recurrent neural networks, with an erratum note. Bonn, Germany: German National Research Center for Information
Technology GMD Technical Report, 148, 34.
[6] Maass, W., Natschläger, T., & Markram, H. (2002). Real-time computing without stable states:
A new framework for neural computation based on perturbations. Neural computation, 14(11),
2531-2560.
[7] Sussillo, D., & Abbott, L. F. (2009). Generating coherent patterns of activity from chaotic
neural networks. Neuron, 63(4), 544-557.
[8] Legenstein, R., Naeger, C., & Maass, W. (2005). What can a neuron learn with spike-timing-dependent plasticity? Neural Computation, 17(11), 2337-2382.
[9] Pfister, J., Toyoizumi, T., Barber, D., & Gerstner, W. (2006). Optimal spike-timing-dependent
plasticity for precise action potential firing in supervised learning. Neural computation, 18(6),
1318-1348.
[10] Ponulak, F., & Kasinski, A. (2010). Supervised learning in spiking neural networks with ReSuMe: sequence learning, classification, and spike shifting. Neural Computation, 22(2), 467-510.
[11] Memmesheimer, R. M., Rubin, R., Ölveczky, B. P., & Sompolinsky, H. (2014). Learning precisely timed spikes. Neuron, 82(4), 925-938.
[12] Gütig, R., & Sompolinsky, H. (2006). The tempotron: a neuron that learns spike timing-based
decisions. Nature neuroscience, 9(3), 420-428.
[13] van Vreeswijk, C., & Sompolinsky, H. (1996). Chaos in neuronal networks with balanced
excitatory and inhibitory activity. Science, 274(5293), 1724-1726.
[14] Brunel, N. (2000). Dynamics of networks of randomly connected excitatory and inhibitory
spiking neurons. Journal of Physiology-Paris, 94(5), 445-463.
[15] Boerlin, M., Machens, C. K., & Denève, S. (2013). Predictive coding of dynamical variables
in balanced spiking networks. PLoS computational biology, 9(11), e1003258.
[16] Bourdoukan, R., Barrett, D., Machens, C. K., & Denève, S. (2012). Learning optimal spike-based representations. In Advances in Neural Information Processing Systems (pp. 2285-2293).
[17] Vertechi, P., Brendel, W., & Machens, C. K. (2014). Unsupervised learning of an efficient
short-term memory network. In Advances in Neural Information Processing Systems (pp. 3653-3661).
[18] Watanabe, M., & Kano, M. (2011). Climbing fiber synapse elimination in cerebellar Purkinje
cells. European Journal of Neuroscience, 34(10), 1697-1710.
[19] Chen, C., Kano, M., Abeliovich, A., Chen, L., Bao, S., Kim, J. J., ... & Tonegawa, S. (1995).
Impaired motor coordination correlates with persistent multiple climbing fiber innervation in
PKC mutant mice. Cell, 83(7), 1233-1242.
[20] Eccles, J. C., Llinas, R., & Sasaki, K. (1966). The excitatory synaptic action of climbing fibres
on the Purkinje cells of the cerebellum. The Journal of Physiology, 182(2), 268-296.
[21] Knudsen, E. I. (1994). Supervised Learning in the Brain. The Journal of Neuroscience 14(7),
3985-3997.
[22] Thalmeier, D., Uhlmann, M., Kappen, H. J., & Memmesheimer, R. Learning universal computations with spikes. Under review.
5,467 | 5,949 | Semi-supervised Sequence Learning
Andrew M. Dai
Google Inc.
adai@google.com
Quoc V. Le
Google Inc.
qvl@google.com
Abstract
We present two approaches to use unlabeled data to improve Sequence Learning
with recurrent networks. The first approach is to predict what comes next in a
sequence, which is a language model in NLP. The second approach is to use a
sequence autoencoder, which reads the input sequence into a vector and predicts
the input sequence again. These two algorithms can be used as a ?pretraining?
algorithm for a later supervised sequence learning algorithm. In other words, the
parameters obtained from the pretraining step can then be used as a starting point
for other supervised training models. In our experiments, we find that long short-term
memory recurrent networks, after being pretrained with the two approaches, become more stable to train and generalize better. With pretraining, we were able to
achieve strong performance in many classification tasks, such as text classification
with IMDB, DBpedia or image recognition in CIFAR-10.
1 Introduction
Recurrent neural networks (RNNs) are powerful tools for modeling sequential data, yet training
them by back-propagation through time [37, 27] can be difficult [9]. For that reason, RNNs have
rarely been used for natural language processing tasks such as text classification despite their ability
to preserve word ordering.
On a variety of document classification tasks, we find that it is possible to train an LSTM [10] RNN
to achieve good performance with careful tuning of hyperparameters. We also find that a simple
pretraining step can significantly stabilize the training of LSTMs. A simple pretraining method is
to use a recurrent language model as a starting point of the supervised network. A slightly better
method is to use a sequence autoencoder, which uses a RNN to read a long input sequence into
a single vector. This vector will then be used to reconstruct the original sequence. The weights
obtained from pretraining can then be used as an initialization for the standard LSTM RNNs. We
believe that this semi-supervised approach [1] is superior to other unsupervised sequence learning
methods, e.g., Paragraph Vectors [19], because it can allow for easy fine-tuning.
In our experiments with document classification tasks with 20 Newsgroups [17] and DBpedia [20],
and sentiment analysis with IMDB [22] and Rotten Tomatoes [26], LSTMs pretrained by recurrent
language models or sequence autoencoders are usually better than LSTMs initialized randomly.
Another important result from our experiments is that it is possible to use unlabeled data from related tasks to improve the generalization of a subsequent supervised model. For example, using
unlabeled data from Amazon reviews to pretrain the sequence autoencoders can improve classification accuracy on Rotten Tomatoes from 79.0% to 83.3%, equivalent to adding substantially
more labeled data. This evidence supports the thesis that it is possible to use unsupervised learning
with more unlabeled data to improve supervised learning. With sequence autoencoders, and outside
unlabeled data, LSTMs are able to match or surpass previously reported results.
Our semi-supervised learning approach is related to Skip-Thought vectors [14], with two differences.
The first difference is that Skip-Thought is a harder objective, because it predicts adjacent sentences.
The second is that Skip-Thought is a pure unsupervised learning algorithm, without fine-tuning.
2 Sequence autoencoders and recurrent language models
Our approach to sequence autoencoding is inspired by the work in sequence to sequence learning
(also known as seq2seq) by Sutskever et al. [32], which has been successfully used for machine
translation [21, 11], text parsing [33], image captioning [35], video analysis [31], speech recognition [4] and conversational modeling [28, 34]. Key to their approach is the use of a recurrent
network as an encoder to read in an input sequence into a hidden state, which is the input to a
decoder recurrent network that predicts the output sequence.
The sequence autoencoder is similar to the above concept, except that it is an unsupervised learning
model. The objective is to reconstruct the input sequence itself. That means we replace the output
sequence in the seq2seq framework with the input sequence. In our sequence autoencoders, the
weights for the decoder network and the encoder network are the same (see Figure 1).
Figure 1: The sequence autoencoder for the sequence ?WXYZ?. The sequence autoencoder uses
a recurrent network to read the input sequence in to the hidden state, which can then be used to
reconstruct the original sequence.
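A minimal PyTorch sketch of a sequence autoencoder with tied encoder/decoder weights is given below. The vocabulary size, dimensions, and the shifted teacher-forced decoder input are illustrative assumptions rather than details reported in the paper.

```python
import torch
import torch.nn as nn

class SequenceAutoencoder(nn.Module):
    def __init__(self, vocab_size=10000, dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)   # shared by encoder and decoder
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, tokens):                             # tokens: (batch, time)
        emb = self.embed(tokens)
        _, state = self.lstm(emb)                          # encoder pass: keep the final state only
        start = torch.zeros_like(emb[:, :1])               # zero vector as start-of-sequence input
        dec_in = torch.cat([start, emb[:, :-1]], dim=1)    # predict token t from token t-1
        dec_out, _ = self.lstm(dec_in, state)              # decoder pass with the SAME weights
        return self.out(dec_out)                           # logits for reconstructing `tokens`

model = SequenceAutoencoder()
tokens = torch.randint(0, 10000, (4, 30))                  # a toy batch of token ids
logits = model(tokens)
loss = nn.functional.cross_entropy(logits.reshape(-1, 10000), tokens.reshape(-1))
```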
We find that the weights obtained from the sequence autoencoder can be used as an initialization
of another supervised network, one which tries to classify the sequence. We hypothesize that this
is because the network can already memorize the input sequence. This, together with the fact that the
gradients have shortcuts, is our hypothesis for why the sequence autoencoder is a good and stable
approach for initializing recurrent networks.
A significant property of the sequence autoencoder is that it is unsupervised, and thus can be trained
with large quantities of unlabeled data to improve its quality. Our result is that additional unlabeled
data can improve the generalization ability of recurrent networks. This is especially useful for tasks
that have limited labeled data.
We also find that recurrent language models [2, 24] can be used as a pretraining method for LSTMs.
This is equivalent to removing the encoder part of the sequence autoencoder in Figure 1. Our
experimental results show that this approach works better than LSTMs with random initialization.
3 Overview of baselines
In our experiments, we use LSTM recurrent networks [10] because they are generally better than
RNNs. Our LSTM implementation is standard and has input gates, forget gates, and output gates [6,
7, 8]. We compare this basic LSTM against a LSTM initialized with the sequence autoencoder
method. When the LSTM is initialized with a sequence autoencoder, the method is called SA-LSTM
in our experiments. When LSTM is initialized with a language model, the method is called LMLSTM. We also compare our method to other baselines, e.g., bag-of-words methods or paragraph
vectors, previously reported on the same datasets.
In most of our experiments our output layer predicts the document label from the LSTM output
at the last timestep. We also experiment with the approach of putting the label at every timestep
and linearly increasing the weights of the prediction objectives from 0 to 1 [25]. This way we can
inject gradients to earlier steps in the recurrent networks. We call this approach linear label gain.
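A sketch of what such a linearly weighted per-timestep objective could look like; the schedule (weights ramping from 0 to 1) comes from the text, while the function signature and everything else is assumed:

```python
import torch
import torch.nn.functional as nnf

def linear_label_gain_loss(logits, label):
    """logits: (time, num_classes) per-timestep predictions for one document;
    label: integer class index, repeated as the target at every timestep."""
    T = logits.size(0)
    w = torch.linspace(0.0, 1.0, T)                          # 0 at the first step, 1 at the last
    targets = torch.full((T,), int(label), dtype=torch.long)
    per_step = nnf.cross_entropy(logits, targets, reduction="none")
    return (w * per_step).sum() / w.sum()
```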
Lastly, we also experiment with the method of jointly training the supervised learning task with the
sequence autoencoder and call this method joint training.
4 Experiments
In our experiments with LSTMs, we follow the basic recipes as described in [7, 32] by clipping the
cell outputs and gradients. The benchmarks of focus are text understanding tasks, with all datasets
being publicly available. The tasks are sentiment analysis (IMDB and Rotten Tomatoes) and text
classification (20 Newsgroups and DBpedia). Commonly used methods on these datasets, such as
bag-of-words or n-grams, typically ignore long-range ordering information (e.g., modifiers and their
objects may be separated by many unrelated words); so one would expect recurrent methods which
preserve ordering information to perform well. Nevertheless, due to the difficulty in optimizing
these networks, recurrent models are not the method of choice for document classification.
In our experiments with the sequence autoencoder, we train it to reproduce the full document after
reading all the input words. In other words, we do not perform any truncation or windowing. We
add an end of sentence marker to the end of each input sequence and train the network to start
reproducing the sequence after that marker. To speed up performance and reduce GPU memory
usage, we perform truncated backpropagation up to 400 timesteps from the end of the sequence. We
preprocess the text so that punctuation is treated as separate tokens and we ignore any non-English
characters and words in the DBpedia text. We also remove words that only appear once in each
dataset and do not perform any term weighting or stemming.
After training the recurrent language model or the sequence autoencoder for roughly 500K steps
with a batch size of 128, we use both the word embedding parameters and the LSTM weights to
initialize the LSTM for the supervised task. We then train on that task while fine tuning both the
embedding parameters and the weights and use early stopping when the validation error starts to
increase. We choose the dropout parameters based on a validation set.
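In PyTorch terms, this initialization amounts to copying the pretrained embedding and LSTM parameters into the supervised model before fine-tuning. A sketch, where the classifier class, attribute names, and checkpoint path are all hypothetical:

```python
# Assumes a classifier with the same `embed` and `lstm` modules as the
# pretrained autoencoder, plus a label head on the last LSTM output.
clf = SentimentClassifier(vocab_size=10000, dim=512, num_classes=2)   # hypothetical class
pretrained = torch.load("sa_lstm_pretrain.pt")                        # hypothetical checkpoint
clf.embed.load_state_dict(pretrained.embed.state_dict())              # copy word embeddings
clf.lstm.load_state_dict(pretrained.lstm.state_dict())                # copy LSTM weights
# Fine-tune all of clf's parameters on the labeled task, with early stopping
# on a validation set as described above.
```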
Using SA-LSTMs, we are able to match or surpass reported results for all datasets. It is important
to emphasize that previous best results come from various different methods. So it is significant
that one method achieves strong results for all datasets, presumably because such a method can be
used as a general model for any similar task. A summary of results in the experiments is shown in
Table 1. More details of the experiments are as follows.
Table 1: A summary of the error rates of SA-LSTMs and previous best reported results.

Dataset | SA-LSTM | Previous best result
IMDB | 7.24% | 7.42%
Rotten Tomatoes | 16.7% | 18.5%
20 Newsgroups | 15.6% | 17.1%
DBpedia | 1.19% | 1.74%

4.1 Sentiment analysis experiments with IMDB
In this first set of experiments, we benchmark our methods on the IMDB movie sentiment dataset,
proposed by Maas et al. [22].1 There are 25,000 labeled and 50,000 unlabeled documents in the
training set and 25,000 in the test set. We use 15% of the labeled training documents as a validation
set. The average length of each document is 241 words and the maximum length of a document is
2,526 words. The previous baselines are bag-of-words, ConvNets [13] or Paragraph Vectors [19].
Since the documents are long, one might expect that it is difficult for recurrent networks to learn. We
however find that with tuning, it is possible to train LSTM recurrent networks to fit the training set.
For example, if we set the size of the hidden state to be 512 units and truncate the backprop to 400 steps,
an LSTM can do fairly well. With random embedding dimension dropout [38] and random word
dropout (not published previously), we are able to reach performance of around 86.5% accuracy in
the test set, which is approximately 5% worse than most baselines.
1 http://ai.Stanford.edu/amaas/data/sentiment/index.html
Fundamentally, the main problem with this approach is that it is unstable: if we were to increase the
number of hidden units or to increase the number of backprop steps, the training breaks down very
quickly: the objective function explodes even with careful tuning of the gradient clipping. This is
because LSTMs are sensitive to the hyperparameters for long documents. In contrast, we find that
the SA-LSTM works better and is more stable. If we use the sequence autoencoders, changing the
size of the hidden state or the number of backprop steps hardly affects the training of LSTMs. This
is important because the models become more practical to train.
Using sequence autoencoders, we overcome the optimization instability in LSTMs in such a way
that it is fast and easy to achieve perfect classification on the training set. To avoid overfitting, we
again use input dimension dropout, with the dropout rate chosen on a validation set. We find that
dropping out 80% of the input embedding dimensions works well for this dataset. The results of
our experiments are shown in Table 2 together with previous baselines. We also add an additional
baseline where we initialize a LSTM with word2vec embeddings on the training set.
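Both regularizers can be sketched as follows, assuming PyTorch. Whether word dropout replaces tokens with an <unk> symbol or zeroes their embeddings is not specified above, so the token-replacement variant here is an assumption:

```python
import torch

def embedding_dim_dropout(emb, rate=0.8):
    """Zero whole embedding dimensions, shared across all timesteps.

    emb: (batch, time, dim); rate=0.8 is the setting reported to work on IMDB.
    """
    keep = (torch.rand(emb.size(0), 1, emb.size(2), device=emb.device) > rate).float()
    return emb * keep / (1.0 - rate)      # inverted-dropout rescaling

def word_dropout(tokens, unk_id, rate=0.5):
    """Randomly replace input words by <unk>; one way to realize word dropout."""
    mask = torch.rand(tokens.shape, device=tokens.device) < rate
    return torch.where(mask, torch.full_like(tokens, unk_id), tokens)
```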
Table 2: Performance of models on the IMDB sentiment classification task.

Model | Test error rate
LSTM with tuning and dropout | 13.50%
LSTM initialized with word2vec embeddings | 10.00%
LM-LSTM (see Section 2) | 7.64%
SA-LSTM (see Figure 1) | 7.24%
SA-LSTM with linear gain (see Section 3) | 9.17%
SA-LSTM with joint training (see Section 3) | 14.70%
Full+Unlabeled+BoW [22] | 11.11%
WRRBM + BoW (bnc) [22] | 10.77%
NBSVM-bi (Naïve Bayes SVM with bigrams) [36] | 8.78%
seq2-bown-CNN (ConvNet with dynamic pooling) [12] | 7.67%
Paragraph Vectors [19] | 7.42%
The results confirm that SA-LSTM with input embedding dropout can be as good as previous best
results on this dataset. In contrast, LSTMs without sequence autoencoders have trouble optimizing the objective because of long-range dependencies in the documents.
Using language modeling (LM-LSTM) as an initialization works well, achieving 8.98%, but less
well compared to the SA-LSTM. This is perhaps because language modeling is a short-term objective, so that the hidden state only captures the ability to predict the next few words.
In the above table, we use 1,024 units for memory cells, 512 units for the input embedding layer in
the LM-LSTM and SA-LSTM. We also use a hidden layer of 30 units with dropout of 50% between the
last hidden state and the classifier. We continue to use these settings in the following experiments.
In Table 3, we present some examples from the IMDB dataset that are correctly classified by SA-LSTM but not by a bigram NBSVM model. These examples often have long-term dependencies or
have sarcasm that is difficult to detect by solely looking at short phrases.
4.2 Sentiment analysis experiments with Rotten Tomatoes and the positive effects of additional unlabeled data
The success on the IMDB dataset convinces us to test our methods on another sentiment analysis
task to see if similar gains can be obtained. The benchmark of focus in this experiment is the Rotten
Tomatoes dataset [26].2 The dataset has 10,662 documents, which are randomly split into 80% for
training, 10% for validation and 10% for test. The average length of each document is 22 words and
the maximum length is 52 words. Thus compared to IMDB, this dataset is smaller both in terms of
the number of documents and the number of words per document.
2 http://www.cs.cornell.edu/people/pabo/movie-review-data/
Table 3: IMDB sentiment classification examples that are correctly classified by SA-LSTM and incorrectly by NBSVM-bi.

Text | Sentiment
Looking for a REAL super bad movie? If you wanna have great fun, don't hesitate and check this one! Ferrigno is incredibly bad but is also the best of this mediocrity. | Negative
A professional production with quality actors that simply never touched the heart or the funny bone no matter how hard it tried. The quality cast, stark setting and excellent cinemetography made you hope for Fargo or High Plains Drifter but sorry, the soup had no seasoning...or meat for that matter. A 3 (of 10) for effort. | Negative
The screen-play is very bad, but there are some action sequences that i really liked. I think the image is good, better than other romanian movies. I liked also how the actors did their jobs. | Negative
Our first observation is that it is easier to train LSTMs on this dataset than on the IMDB dataset
and the gaps between LSTMs, LM-LSTMs and SA-LSTMs are smaller than before. This is because
movie reviews in Rotten Tomatoes are sentences whereas reviews in IMDB are paragraphs.
As this dataset is small, our methods tend to severely overfit the training set. Combining SA-LSTMs with 95% input embedding and 50% word dropout improves generalization; tuning the SA-LSTM further on the validation set yields a 19.3% error rate on the test set.
To better the performance, we add unlabeled data from the IMDB dataset in the previous experiment
and Amazon movie reviews [23] to the autoencoder training stage.3 We also run a control experiment
where we use the pretrained word vectors trained by word2vec from Google News.
Table 4: Performance of models on the Rotten Tomatoes sentiment classification task.

Model | Test error rate
LSTM with tuning and dropout | 20.3%
LM-LSTM | 21.9%
LSTM with linear gain | 22.2%
SA-LSTM | 19.3%
LSTM with word vectors from word2vec Google News | 20.5%
SA-LSTM with unlabeled data from IMDB | 18.6%
SA-LSTM with unlabeled data from Amazon reviews | 16.7%
MV-RNN [29] | 21.0%
NBSVM-bi [36] | 20.6%
CNN-rand [13] | 23.5%
CNN-non-static (ConvNet with word vectors from word2vec Google News) [13] | 18.5%
The results for this set of experiments are shown in Table 4. Our observation is that if we use the
word vectors from word2vec, there is only a small gain of 0.5%. This is perhaps because the recurrent weights play an important role in our model and are not initialized properly in this experiment.
However, if we use IMDB to pretrain the sequence autoencoders, the error decreases from 20.5%
to 18.6%, nearly a 2% gain in accuracy; if we use Amazon reviews, a larger unlabeled dataset (7.9
million movie reviews), to pretrain the sequence autoencoders, the error goes down to 16.7% which
is another 2% gain in accuracy.
3 The dataset is available at http://snap.stanford.edu/data/web-Amazon.html, which has 34 million general product reviews, but we only use 7.9 million movie reviews in our experiments.
This brings us to the question of how well this method of using unlabeled data fares compared to
adding more labeled data. As argued by Socher et al. [30], one reason why these methods are not yet perfect is the lack of labeled training data; they proposed to use more labeled data by labeling an additional 215,154 phrases created by the Stanford Parser. The use of more labeled data allowed
their method to achieve around 15% error in the test set, an improvement of approximately 5% over
older methods with less labeled data.
We compare our method to their reported results [30] on sentence-level classification. As our method
does not have access to valuable labeled data, one might expect that our method is severely disadvantaged and should not perform on the same level. However, with unlabeled data and sequence
autoencoders, we are able to obtain 16.7%, ranking second amongst many other methods that have
access to a much larger corpus of labeled data. The fact that unlabeled data can compensate for the
lack of labeled data is very significant as unlabeled data are much cheaper than labeled data. The
results are shown in Table 5.
Table 5: More unlabeled data vs. more labeled data. Performance of SA-LSTM with additional unlabeled data and previous models with additional labeled data on the Rotten Tomatoes task.

Model | Test error rate
LSTM initialized with word2vec embeddings trained on Amazon reviews | 21.7%
SA-LSTM with unlabeled data from Amazon reviews | 16.7%
NB [30] | 18.2%
SVM [30] | 20.6%
BiNB [30] | 16.9%
VecAvg [30] | 19.9%
RNN [30] | 17.6%
MV-RNN [30] | 17.1%
RNTN [30] | 14.6%
4.3 Text classification experiments with 20 newsgroups
The experiments so far have been done on datasets where the number of tokens in a document is
relatively small, a few hundred words. Our question becomes whether it is possible to use SA-LSTMs for tasks that have a substantial number of words, such as web articles or emails and where
the content consists of many different topics.
For that purpose, we carry out the next experiments on the 20 newsgroups dataset [17].4 There are
11,293 documents in the training set and 7,528 in the test set. We use 15% of the training documents
as a validation set. Each document is an email with an average length of 267 words and a maximum
length of 11,925 words. Attachments, PGP keys, duplicates and empty messages are removed. As
the newsgroup documents are long, it was previously considered improbable for recurrent networks
to learn anything from the dataset. The best methods are often simple bag-of-words.
We repeat the same experiments with LSTMs and SA-LSTMs on this dataset. Similar to observations made in previous experiments, SA-LSTMs are generally more stable to train than LSTMs.
To improve generalization of the models, we again use input embedding dropout and word dropout
chosen on the validation set. With 70% input embedding dropout and 75% word dropout, SA-LSTM
achieves 15.6% test set error, which is much better than previous classifiers on this dataset. Results
are shown in Table 6.
4.4 Character-level document classification experiments with DBpedia
In this set of experiments, we turn our attention to another challenging task of categorizing
Wikipedia pages by reading character-by-character inputs. The dataset of attention is the DBpedia
dataset [20], which was also used to benchmark convolutional neural nets in Zhang and LeCun [39].
4 http://qwone.com/~jason/20Newsgroups/
Table 6: Performance of models on the 20 newsgroups classification task.

Model | Test error rate
LSTM | 18.0%
LM-LSTM | 15.3%
LSTM with linear gain | 71.6%
SA-LSTM | 15.6%
Hybrid Class RBM [18] | 23.8%
RBM-MLP [5] | 20.5%
SVM + Bag-of-words [3] | 17.1%
Naïve Bayes [3] | 19.0%
Note that unlike other datasets in Zhang and LeCun [39], DBpedia has no duplication or tainting
issues, so we assume that their experimental results are valid on this dataset. DBpedia is a crowdsourced effort to extract information from Wikipedia and categorize it into an ontology.
For this experiment, we follow the same procedure suggested in Zhang and LeCun [39]. The task is
to classify DBpedia abstracts into one of 14 categories after reading the character-by-character input.
The dataset is split into 560,000 training examples and 70,000 test examples. A DBpedia document
has an average of 300 characters while the maximum length of all documents is 13,467 characters.
As this dataset is large, overfitting is not an issue and thus we do not perform any dropout on the
input or recurrent layers. For this dataset, we use a two-layered LSTM, where each layer has 512 hidden units and the input embedding has 128 units.
Table 7: Performance of models on the DBpedia character-level classification task.

Model | Test error rate
LSTM | 13.64%
LM-LSTM | 1.50%
LSTM with linear gain | 1.32%
SA-LSTM | 2.34%
SA-LSTM with linear gain | 1.23%
SA-LSTM with 3 layers and linear gain | 1.19%
SA-LSTM (word-level) | 1.40%
Bag-of-words | 3.57%
Small ConvNet | 1.98%
Large ConvNet | 1.73%
In this dataset, we find that the linear label gain as described in Section 3 is an effective mechanism to
inject gradients to earlier steps in LSTMs. This linear gain method works well and achieves 1.32%
test set error, which is better than SA-LSTM. Combining SA-LSTM and the linear gain method
achieves 1.19% test set error, a significant improvement from the results of convolutional networks
as shown in Table 7.
4.5 Object classification experiments with CIFAR-10
In these experiments, we attempt to see if our pre-training methods extend to non-textual data. To
do this, we train a LSTM to read the CIFAR-10 image dataset row-by-row (where the input at
each timestep is an entire row of pixels) and output the class of the image at the end. We use the
same method as in [16] to perform data augmentation. We also trained a LSTM to do next row
prediction given the current row (we denote this as LM-LSTM) and a LSTM to predict the image
by rows after reading all its rows (SA-LSTM). We then fine-tune these on the classification task.
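A small sketch of the input preparation, assuming PyTorch; the helper name is ours:

```python
import torch

def rows_as_sequence(images):
    """images: (batch, 3, 32, 32) CIFAR-10 batch.

    Each timestep is one full pixel row across all channels, so the LSTM
    sees 32 steps of 96 features per image.
    """
    b = images.size(0)
    return images.permute(0, 2, 1, 3).reshape(b, 32, 3 * 32)
```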
We present the results in Table 8. While we do not achieve the results attained by state of the
art convolutional networks, our 2-layer pretrained LM-LSTM is able to exceed the results of the baseline convolutional DBN model [15] despite not using any convolutions, and outperforms the non-pretrained LSTM.
Table 8: Performance of models on the CIFAR-10 object classification task.

Model | Test error rate
1-layer LSTM | 25.0%
1-layer LM-LSTM | 23.1%
1-layer SA-LSTM | 25.1%
2-layer LSTM | 26.0%
2-layer LM-LSTM | 18.7%
2-layer SA-LSTM | 26.0%
Convolution DBNs [15] | 21.1%

5 Discussion
In this paper, we found that it is possible to use LSTM recurrent networks for NLP tasks such as
document classification. We also find that a language model or a sequence autoencoder can help
stabilize the learning in recurrent networks. On five benchmarks that we tried, LSTMs can become
a general classifier that reaches or surpasses the performance levels of all previous baselines.
Acknowledgements: We thank Oriol Vinyals, Ilya Sutskever, Greg Corrado, Vijay Vasudevan,
Manjunath Kudlur, Rajat Monga, Matthieu Devin, and the Google Brain team for their help.
References
[1] R. K. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks
and unlabeled data. J. Mach. Learn. Res., 6:1817–1853, December 2005.
[2] Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin. A neural probabilistic language model. In
JMLR, 2003.
[3] A. Cardoso-Cachopo. Datasets for single-label text categorization. http://web.ist.utl.pt/acardoso/datasets/, 2015. [Online; accessed 25-May-2015].
[4] William Chan, Navdeep Jaitly, Quoc V Le, and Oriol Vinyals. Listen, attend and spell. arXiv
preprint arXiv:1508.01211, 2015.
[5] Y. Dauphin and Y. Bengio. Stochastic ratio matching of RBMs for sparse high-dimensional
inputs. In NIPS, 2013.
[6] F. A. Gers, J. Schmidhuber, and F. Cummins. Learning to forget: Continual prediction with
LSTM. Neural Computation, 2000.
[7] A. Graves. Generating sequences with recurrent neural networks. In Arxiv, 2013.
[8] K. Greff, R. K. Srivastava, J. Koutník, B. R. Steunebrink, and J. Schmidhuber. LSTM: A search
space odyssey. In ICML, 2015.
[9] S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber. Gradient flow in recurrent nets: the
difficulty of learning long-term dependencies. A Field Guide to Dynamical Recurrent Neural
Networks, 2001.
[10] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997.
[11] S. Jean, K. Cho, R. Memisevic, and Y. Bengio. On using very large target vocabulary for neural
machine translation. In ICML, 2014.
[12] R. Johnson and T. Zhang. Effective use of word order for text categorization with convolutional
neural networks. In NAACL, 2014.
[13] Y. Kim. Convolutional neural networks for sentence classification, 2014.
[14] R. Kiros, Y. Zhu, R. Salakhutdinov, R. S. Zemel, A. Torralba, R. Urtasun, and S. Fidler. Skip-thought vectors. In NIPS, 2015.
[15] A. Krizhevsky. Convolutional deep belief networks on CIFAR-10. Technical report, University
of Toronto, 2010.
[16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional
neural networks. In NIPS, 2012.
[17] K. Lang. Newsweeder: Learning to filter netnews. In ICML, 1995.
[18] H. Larochelle, M. Mandel, R. Pascanu, and Y. Bengio. Learning algorithms for the classification restricted Boltzmann machine. JMLR, 2012.
[19] Q. V. Le and T. Mikolov. Distributed representations of sentences and documents. In ICML,
2014.
[20] J. Lehmann, R. Isele, M. Jakob, A. Jentzsch, D. Kontokostas, P. N. Mendes, S. Hellmann,
M. Morsey, P. van Kleef, S. Auer, et al. DBpedia – a large-scale, multilingual knowledge base
extracted from wikipedia. Semantic Web, 2014.
[21] T. Luong, I. Sutskever, Q. V. Le, O. Vinyals, and W. Zaremba. Addressing the rare word
problem in neural machine translation. arXiv preprint arXiv:1410.8206, 2014.
[22] A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts. Learning word vectors
for sentiment analysis. In ACL, 2011.
[23] J. McAuley and J. Leskovec. Hidden factors and hidden topics: understanding rating dimensions with review text. In RecSys, pages 165–172. ACM, 2013.
[24] T. Mikolov, M. Karafiát, L. Burget, J. Černocký, and S. Khudanpur. Recurrent neural network
based language model. In INTERSPEECH, 2010.
[25] J. Y. H. Ng, M. J. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici.
Beyond short snippets: Deep networks for video classification. In CVPR, 2015.
[26] B. Pang and L. Lee. Seeing stars: Exploiting class relationships for sentiment categorization
with respect to rating scales. In ACL, 2005.
[27] D. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating
errors. Nature, 1986.
[28] L. Shang, Z. Lu, and H. Li. Neural responding machine for short-text conversation. In EMNLP,
2015.
[29] R. Socher, B. Huval, C. D. Manning, and A. Y. Ng. Semantic compositionality through recursive matrix-vector spaces. In EMNLP, 2012.
[30] R. Socher, A. Perelygin, J. Y. Wu, J. Chuang, C. D. Manning, A. Y. Ng, and C. Potts. Recursive
deep models for semantic compositionality over a sentiment treebank. In EMNLP, 2013.
[31] N. Srivastava, E. Mansimov, and R. Salakhutdinov. Unsupervised learning of video representations using LSTMs. In ICML, 2015.
[32] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks.
In NIPS, 2014.
[33] O. Vinyals, L. Kaiser, T. Koo, S. Petrov, I. Sutskever, and G. Hinton. Grammar as a foreign
language. In NIPS, 2015.
[34] O. Vinyals and Q. V. Le. A neural conversational model. In ICML Deep Learning Workshop,
2015.
[35] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption
generator. In CVPR, 2014.
[36] S. I. Wang and C. D. Manning. Baselines and bigrams: Simple, good sentiment and topic
classification. In ACL, 2012.
[37] P. J. Werbos. Beyond regression: New tools for prediction and analysis in the behavioral
sciences. PhD thesis, Harvard, 1974.
[38] W. Zaremba, I. Sutskever, and O. Vinyals. Recurrent neural network regularization. arXiv
preprint arXiv:1409.2329, 2014.
[39] X. Zhang and Y. LeCun. Character-level convolutional networks for text classification. In
NIPS, 2015.
Learning Spatio-Temporal Planning from
a Dynamic Programming Teacher:
Feed-Forward Neurocontrol for Moving
Obstacle Avoidance
Gerald Fahner *
Department of Neuroinformatics
University of Bonn
Romerstr. 164
W-5300 Bonn 1, Germany
Rolf Eckmiller
Department of Neuroinformatics
University of Bonn
Romerstr. 164
W-5300 Bonn 1, Germany
Abstract
Within a simple test-bed, application of feed-forward neurocontrol
for short-term planning of robot trajectories in a dynamic environment is studied. The action network is embedded in a sensorymotoric system architecture that contains a separate world model.
It is continuously fed with short-term predicted spatio-temporal
obstacle trajectories, and receives robot state feedback. The action net allows for external switching between alternative planning tasks. It generates goal-directed motor actions - subject to
the robot's kinematic and dynamic constraints - such that collisions with moving obstacles are avoided. Using supervised learning, we distribute examples of the optimal planner mapping over
a structure-level adapted parsimonious higher order network. The
training database is generated by a Dynamic Programming algorithm. Extensive simulations reveal that the local planner mapping is highly nonlinear, but can be effectively and sparsely represented by the chosen powerful net model. Excellent generalization
occurs for unseen obstacle configurations. We also discuss the limitations of feed-forward neurocontrol for growing planning horizons.
*Tel.: (228)-550-364, FAX: (228)-550-425, e-mail: gerald@nero.uni-bonn.de
1 INTRODUCTION
Global planning of goal directed trajectories subject to cluttered spatio-temporal,
state-dependent constraints - as in the kinodynamic path planning problem (Donald, 1989) considered here - is a difficult task, probably best suited for systems with
embedded sequential behavior; theoretical insights indicate that the related problem of connectedness is of unbounded order (Minsky, 1969). However, considering
practical situations, there is a lack of globally disposable constraints at planning
time, due to partially unmodelled environments. The question then arises, to what
extent feed-forward neurocontrol may be effective for local planning horizons.
In this paper, we put aside problems of credit assignment, and world model identification. We focus on the complexity of representing a local version of the generic
kinodynamic path planning problem by a feed-forward net. We investigate the
capacity of sparse distributed planner representations to generalize from example
plans.
2 ENVIRONMENT AND ROBOT MODELS
2.1 ENVIRONMENT
The world around the robot is a two-dimensional scene, occupied by obstacles all moving in parallel to the y-axis, with randomly chosen discretized x-positions and with a continuous velocity spectrum. The environment's state is given by a list reporting position (x_i, y_i) ∈ (X, Y), X ∈ {0, ..., 8}, Y = [y^-, y^+], and velocity (0, v_i); v_i ∈ [v^-, v^+] of each obstacle i. The environment dynamics is given by

y_i(t + 1) = y_i(t) + v_i .    (1)
Obstacles are inserted at random positions, and with random velocities, into some
region distant from the robot's workspace. At each time step, the obstacle positions are updated according to eqn. (1), so that they will cross the robot's workspace
some time.
2.2 ROBOT
We consider a point-like robot of unit mass, which is confined to move within some
interval along the x-axis. Its state is denoted by (x_r, ẋ_r) ∈ (X, Ẋ); Ẋ = {-1, 0, 1}. At each time step, a motor command u ∈ Ẋ = {-1, 0, 1} is applied to the robot. The robot dynamics is given by

ẋ_r(t + 1) = ẋ_r(t) + u(t)
x_r(t + 1) = x_r(t) + ẋ_r(t + 1) .    (2)
Notice that the set of admissible motor commands depends on the present robot
state. With these settings, the robot faces a fluctuating number of obstacles crossing
its baseline, similar to the situation of a pedestrian who wants to cross a busy street
(Figure 1).
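A minimal simulation step for eqns. (1) and (2), in Python; the spawn location and the velocity bounds are illustrative values, not taken from the text:

```python
import random

def step_obstacles(obstacles):
    """Advance each obstacle (x, y, v) by eqn (1): y(t+1) = y(t) + v."""
    return [(x, y + v, v) for (x, y, v) in obstacles]

def step_robot(x_r, xdot_r, u):
    """Apply eqn (2): the command changes the speed, the speed moves the robot."""
    xdot_next = xdot_r + u
    assert xdot_next in (-1, 0, 1), "command not admissible in this state"
    return x_r + xdot_next, xdot_next

def spawn_obstacle(y_start=-10.0, v_bounds=(0.5, 1.5)):
    """Insert an obstacle at a random discrete x with a random velocity;
    the numeric bounds here are assumptions for illustration."""
    return (random.randrange(9), y_start, random.uniform(*v_bounds))
```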
Figure 1: Obstacles Crossing the Robot's Workspace (dynamic obstacles, robot, goal)
3 SYSTEM ARCHITECTURE AND FUNCTIONALITY
Adequate modeling of the perception-action cycle is of decisive importance for the
design of intelligent reactive systems. We partition the overall system into two
modules: an active Perception Module (PM) with built-in capabilities for short-term
environment forecasts, and a subsequent Action Module (AM) for motor command
generation (Figure 2). Either module may be represented by a 'classical' algorithm,
or by a neural net. PM is fed with a sensory data stream reporting the observed
Figure 2: Sensory-Motoric System Architecture (the long-term goal, sensory information, and robot state feed the Perception Module; its internal representation feeds the Action Module, which emits the motor command)
dynamic scene of time-varying obstacle positions. From this, it assembles a spatio-
temporal internal representation of near-future obstacle trajectories. At each time
step t, it actualizes the incidence function

occupancy(x, k) = 1, if (x = x_i and -s < y_i(t + k) < s) for any obstacle i;
occupancy(x, k) = -1, otherwise,

where s is some safety margin accounting for the y-extension of obstacles. The incidence function is defined on a spatio-temporal cone-shaped cell array, based at the actual robot position:

|x - x_r(t)| ≤ k ;  k = 1, ..., HORIZON .    (3)
The opening angle of this cone-shaped region is given by the robot's speed limit
(here: one cell per time step). Only those cells that can potentially be reached by
the robot within the local prediction-/planning horizon are thus represented by PM
(see Figure 3).

Figure 3: Space-Time Representation with Solution Path Indicated

The functionality of AM is to map the current PM representation to an appropriate robot motor command, taking into account the present robot state,
and paying regard to the currently specified long-term goal. Firstly, we realize
the optimal AM by the Dynamic Programming (DP) algorithm (Bellman, 1957).
Secondly, we use supervised learning to distribute optimal planning examples over
a neural network.
4 DYNAMIC PROGRAMMING SOLUTION
Given PM's internal representation at time t, the present robot state, and some
specification of the desired long-term goal, DP determines a sequence of motor
commands minimizing some cost functional. Here we use
cost{u(t), ..., u(t + HORIZON)} = Σ_{k=0}^{HORIZON} [ (x_r(t + k) - x_0)² + c · u(t + k)² ] ,    (4)
with x_r(t + k) given by the dynamics eqns. (2) (see solution path in Figure 3). By x_0 we denote the desired robot position or long-term goal. Deviations from this position are punished by higher costs, just as are costly accelerations. Obstacle collisions are excluded by restricting search to admissible cells (x, ẋ, t + k)_admissible in phase-space-time (obeying occupancy(x, t + k) = -1). Training targets for time t are constituted by the optimal present motor actions u_opt(t), for which the minimum
is attained in eqn.( 4). For cases with degenerated optimal solutions, we consistently
break symmetry, in order to obtain a deterministic target mapping.
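The DP teacher can be sketched as a recursion over admissible phase-space-time cells, in Python. The weight c, the handling of the k = 0 cost term, and the tie-breaking by a fixed action order are our assumptions:

```python
from functools import lru_cache

def dp_teacher(x0, xdot0, occupancy, x_goal, horizon=3, c=0.1):
    """Return (minimal cost, first motor command) under cost functional (4),
    searching only admissible cells; occupancy(x, k) is PM's incidence function."""
    @lru_cache(maxsize=None)
    def best(x, xdot, k):
        if k > horizon:
            return (0.0, None)
        result = (float("inf"), None)
        for u in (-1, 0, 1):
            xdot_n = xdot + u                        # eqn (2): speed update
            if xdot_n not in (-1, 0, 1):             # dynamic constraint on speed
                continue
            x_n = x + xdot_n                         # eqn (2): position update
            if not 0 <= x_n <= 8 or occupancy(x_n, k) == 1:
                continue                             # off the track or colliding
            step = (x_n - x_goal) ** 2 + c * u * u   # one summand of eqn (4)
            total = step + best(x_n, xdot_n, k + 1)[0]
            if total < result[0]:                    # ties broken by fixed order
                result = (total, u)
        return result
    return best(x0, xdot0, 1)
```

Calling dp_teacher(x_r, ẋ_r, occupancy, x_goal) yields the minimal cost together with the optimal present action used as the training target; a return of infinity marks a doomed state with no collision-free path.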
5 NEURAL ACTION MODEL
For neural motor command generation, we use a single layer of structure-adapted
parsimonious Higher Order Neurons (parsiHONs) (Fahner, 1992a, b), computing outputs y_i ∈ [0, 1]; i = 1, 2, 3. Target values for each single neuron are given by y_i^des = 1 if motor action i is the optimal one, otherwise y_i^des = 0. As input, each neuron receives a bit-vector x = x_1, ..., x_N ∈ {-1, 1}^N, whose components specify
the values of PM's incidence function, the binary encoded robot state, and some
task bits encoding the long-term goal. Using batch training, we maximize the log-likelihood criterion for each neuron independently. For recall, the motor command
is obtained by a winner-takes-all decision: the index of the most active neuron yields
the motor action applied.
Generally, atoms for nonlinear interactions within a bipolar-input HON are modelled by input monomials of the form
η_α = Π_{i=1}^{N} x_i^{α_i} ;   α = α_1 ... α_N ∈ Ω = {0, 1}^N .    (5)

Here, the i-th bit of α is understood as the exponent of x_i. It is well known that the
complete set of monomials forms a basis for Boolean functions expansions (Karpovski, 1976). Combinatorial growth of the number of terms with increasing input
dimension renders allocation of the complete basis impractical in our case. Moreover, an action model employing excessive numbers of basis functions would overfit
training data, thus preventing generalization.
We therefore use a structural adaptation algorithm, as discussed in detail in (Fahner, 1992a, b), for automatic identification and inclusion of a sparse set of relevant
nonlinearities present in the problem. In effect, this algorithm performs a guided
stochastic search exploring the space of nonlinear interactions by means of an intertwined process of weight adaptation, and competition between nonlinear terms.
The parsiHON model restricts the number of terms used, not their orders: instead
of the exponential size set {η_α : α ∈ Ω}, just a small subset {η_β : β ∈ S ⊂ Ω} of terms is used within a parsimonious higher order function expansion

y^est(x) = f [ Σ_{β ∈ S} w_β η_β(x) ] ;   w_β ∈ ℝ .    (6)

Here, f denotes the usual sigmoid transfer function.
parsiHONs with high degrees of sparsity were effectively trained and emerged robust
generalization for difficult nonlinear classification benchmarks (Fahner, 1992a, b).
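A forward pass through eqns. (5) and (6) for a single parsiHON can be sketched as follows, in Python; the sparse term set and the weights are arbitrary illustrative values:

```python
import numpy as np

def parsihon_output(x, terms, weights):
    """x: bipolar vector in {-1, +1}^N; terms: index tuples beta naming the
    inputs whose product forms the monomial eta_beta (a sparse subset S);
    weights: one real weight per term."""
    etas = np.array([np.prod(x[list(beta)]) for beta in terms])  # eqn (5)
    return 1.0 / (1.0 + np.exp(-weights @ etas))                 # eqn (6)

# a tiny neuron: a 0th-order (bias-like) term, one 1st- and one 3rd-order term
x = np.array([1, -1, -1, 1])
y = parsihon_output(x, terms=[(), (0,), (1, 2, 3)],
                    weights=np.array([0.2, -0.5, 1.0]))
```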
6 SIMULATION RESULTS
We performed extensive simulations to evaluate the neural action network's capabilities to generalize from learned optimal planning examples. The planner was trained
with respect to two alternative long-term goals: x_0 = 0, or x_0 = 8. Firstly, optimal DP planner actions were assembled over about 6,000 time steps of the simulated environment (fairly crowded with moving obstacles), for both long-term goals. At each time step, optimal motor commands were computed for all 9 × 3 = 27 available robot states. From this bunch of situations we excluded those where no collision-free path existed within the planning horizon considered (HORIZON = 3). A total of 115,000 admissible training situations were left, out of the 6,000 × 27 = 162,000 generated. Thus, out of the full spectrum of robot states which were checked every time step, just about 19 states were not doomed to collide, on average. These findings corroborate the difficulty of the chosen task.
Many repetitions are present in these accumulated patterns, reflecting the statistics
of the simulated environment. We collapsed the original training set by removing repeated patterns, providing the learner with more information per pattern: a
working data base containing about 20,000 different patterns was left.
Input to the neural action net consisted of a bit-vector of length N = 21, where
3 + 5 + 7 bits encode PM's internal representation (cone size in Figure 3), 6 bits
encode the robot's state, and a single task bit reports the desired goal. For training, we delimited single neuron learning to a maximum of 1000 epochs. In most
cases, this was sufficient for successful training set classification for any of the three neurons (y_i < .8 for y_i^des = 0, and y_i > .8 for y_i^des = 1; i = 1, 2, 3). But even if
some training patterns were misclassified by individual motor neurons, additional
robustness stemming from the winner-takes-all decision rescued fault-free recall of
the voting community.

Figure 4: Generalization Behavior (percent of errors on the test set vs. size of the training set ×1000, for parsiHONs with 83 to 110 terms)

To test generalization of the neural action model, we partitioned the data base into two parts, one containing training patterns, the other containing new test patterns, not present in the training set. Several runs were
performed with parsiHONs of sizes between 83 and 110 terms. Results for varying
training set sizes are depicted in Figure 4. Test error decreases with increasing
training set size, and falls as low as about one percent for about 12,000 training
patterns. It continues to decrease for larger training sets. These findings corroborate
that the trained architectures emerge sensible robust generalization.
To get some insight into the complexity of the mapping, we counted the number
of terms which carry a given order. The resulting distribution has its maximum at
order 3, exhibits many terms of orders 4 and higher, and finally decreases to zero for
orders exceeding 10 (Figure 5). This indicates that the planner mapping considered
is highly nonlinear.
Figure 5: Distribution of Orders (relative frequency of terms vs. order, averaged over several networks)
7 DISCUSSION AND CONCLUSIONS
Sparse representation of planner mappings is desirable when representation of complete policy look-up tables becomes impracticable (Bellman's "curse of dimensionality"), or when computation of plans becomes expensive or conflicting with real-time
requirements. For these reasons, it is urgent to investigate the capacity of neurocontrol for effective distributed representation and for robust generalization of planner
mappings.
Here, we focused on a new type of shallow feed-forward action network for the local
kinodynamic trajectory planning problem. An advantage of feed-forward nets
is their low-latency recall, which is an important requirement for systems acting in
rapidly changing environments. However, from theoretical considerations concerning the related problem of connectedness with its inherent serial character (Minsky,
1969), the planning problem under focus is expected to be hard for feed-forward
nets. Even for rather local planning horizons, complex and nonlinear planner mappings must be expected. Using a powerful new neuron model that identifies the
relevant nonlinearities inherent in the problem, we determined extremely parsimonious architectures for representation of the planner mapping. This indicates that
some compact set of important features determines the optimal plan. The adapted
networks emerged excellent generalization.
We encourage use of feed-forward nets for difficult local planning tasks, if care is
taken that the models support effective representation of high-order nonlinearities.
For growing planning horizons, it is expected that feed-forward neurocontrol will
run into limitations (Werbos, 1992). The simple test-bed presented here would allow for insertion and testing also of other net models and system designs, including
recurrent networks.
Acknowledgements
This work was supported by the Federal Ministry of Research and Technology (BMFT project SENROB), grant 01 IN 105 AID.
References
E. B. Baum, F. Wilczek (1987). Supervised Learning of Probability Distributions
by Neural Networks. In D. Anderson (Ed.), Neural Information Processing Systems,
52-61. Denver, CO: American Institute of Physics.
R. E. Bellman (1957). Dynamic Programming. Princeton University Press.
B. Donald (1989). Near-Optimal Kinodynamic Planning for Robots With Coupled
Dynamic Bounds, Proc. IEEE Int. Conf. on Robotics and Automation.
G. Fahner, N. Goerke, R. Eckmiller (1992).
Structural Adaptation of Boolean
Higher Order Neurons: Superior Classification with Parsimonious Topologies, Proc.
ICANN, Brighton, UK.
G. Fahner, R. Eckmiller. Structural Adaptation of Parsimonious Higher Order
Classifiers, subm. to Neural Networks.
M. G. Karpovski (1976). Finite Orthogonal Series in the Design of Digital Devices.
New York: John Wiley & Sons.
M. Minsky, S. A. Papert (1969). Perceptrons. Cambridge: The MIT Press.
P. Werbos (1992). Approximate Dynamic Programming for Real-Time Control and
Neural Modeling. In D. White, D. Sofge (eds.) Handbook of Intelligent Control,
493-525. New York: Van Nostrand.
Skip-Thought Vectors
Ryan Kiros 1 , Yukun Zhu 1 , Ruslan Salakhutdinov 1,2 , Richard S. Zemel 1,2
Antonio Torralba 3 , Raquel Urtasun 1 , Sanja Fidler 1
University of Toronto 1
Canadian Institute for Advanced Research 2
Massachusetts Institute of Technology 3
Abstract
We describe an approach for unsupervised learning of a generic, distributed sentence encoder. Using the continuity of text from books, we train an encoderdecoder model that tries to reconstruct the surrounding sentences of an encoded
passage. Sentences that share semantic and syntactic properties are thus mapped
to similar vector representations. We next introduce a simple vocabulary expansion method to encode words that were not seen as part of training, allowing us
to expand our vocabulary to a million words. After training our model, we extract and evaluate our vectors with linear models on 8 tasks: semantic relatedness,
paraphrase detection, image-sentence ranking, question-type classification and 4
benchmark sentiment and subjectivity datasets. The end result is an off-the-shelf
encoder that can produce highly generic sentence representations that are robust
and perform well in practice.
1 Introduction
Developing learning algorithms for distributed compositional semantics of words has been a longstanding open problem at the intersection of language understanding and machine learning. In recent
years, several approaches have been developed for learning composition operators that map word
vectors to sentence vectors including recursive networks [1], recurrent networks [2], convolutional
networks [3, 4] and recursive-convolutional methods [5, 6] among others. All of these methods
produce sentence representations that are passed to a supervised task and depend on a class label in
order to backpropagate through the composition weights. Consequently, these methods learn highquality sentence representations but are tuned only for their respective task. The paragraph vector
of [7] is an alternative to the above models in that it can learn unsupervised sentence representations
by introducing a distributed sentence indicator as part of a neural language model. The downside is that
at test time, inference needs to be performed to compute a new vector.
In this paper we abstract away from the composition methods themselves and consider an alternative loss function that can be applied with any composition operator. We consider the following
question: is there a task and a corresponding loss that will allow us to learn highly generic sentence
representations? We give evidence for this by proposing a model for learning high-quality sentence
vectors without a particular supervised task in mind. Using word vector learning as inspiration, we
propose an objective function that abstracts the skip-gram model of [8] to the sentence level. That
is, instead of using a word to predict its surrounding context, we instead encode a sentence to predict
the sentences around it. Thus, any composition operator can be substituted as a sentence encoder
and only the objective function becomes modified. Figure 1 illustrates the model. We call our model
skip-thoughts and vectors induced by our model are called skip-thought vectors.
Our model depends on having a training corpus of contiguous text. We chose to use a large collection
of novels, namely the BookCorpus dataset [9] for training our models. These are free books written
by yet unpublished authors. The dataset has books in 16 different genres, e.g., Romance (2,865
books), Fantasy (1,479), Science fiction (786), Teen (430), etc. Table 1 highlights the summary
statistics of the book corpus. Along with narratives, books contain dialogue, emotion and a wide
range of interaction between characters. Furthermore, with a large enough collection the training
set is not biased towards any particular domain or application. Table 2 shows nearest neighbours
Figure 1: The skip-thoughts model. Given a tuple (si−1, si, si+1) of contiguous sentences, with si the i-th sentence of a book, the sentence si is encoded and tries to reconstruct the previous sentence si−1 and next sentence si+1. In this example, the input is the sentence triplet I got back home. I could see the cat on the steps. This was strange. Unattached arrows are connected to the encoder output. Colors indicate which components share parameters. ⟨eos⟩ is the end of sentence token.
# of books | # of sentences | # of words | # of unique words | mean # of words per sentence
11,038 | 74,004,228 | 984,846,357 | 1,316,420 | 13

Table 1: Summary statistics of the BookCorpus dataset [9]. We use this corpus to train our model.
of sentences from a model trained on the BookCorpus dataset. These results show that skip-thought
vectors learn to accurately capture semantics and syntax of the sentences they encode.
We evaluate our vectors in a newly proposed setting: after learning skip-thoughts, freeze the model
and use the encoder as a generic feature extractor for arbitrary tasks. In our experiments we consider 8 tasks: semantic-relatedness, paraphrase detection, image-sentence ranking and 5 standard
classification benchmarks. In these experiments, we extract skip-thought vectors and train linear
models to evaluate the representations directly, without any additional fine-tuning. As it turns out,
skip-thoughts yield generic representations that perform robustly across all tasks considered.
One difficulty that arises with such an experimental setup is being able to construct a large enough
word vocabulary to encode arbitrary sentences. For example, a sentence from a Wikipedia article
might contain nouns that are highly unlikely to appear in our book vocabulary. We solve this problem
by learning a mapping that transfers word representations from one model to another. Using pretrained word2vec representations learned with a continuous bag-of-words model [8], we learn a
linear mapping from a word in word2vec space to a word in the encoder?s vocabulary space. The
mapping is learned using all words that are shared between vocabularies. After training, any word
that appears in word2vec can then get a vector in the encoder word embedding space.
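A minimal sketch of this expansion, assuming NumPy; solving the mapping by ordinary least squares is one natural reading of "learning a linear mapping", and the function names are ours:

```python
import numpy as np

def fit_expansion(word2vec, encoder_emb, shared_words):
    """Least-squares matrix W mapping word2vec space into the encoder's
    word embedding space, fit on the words shared by both vocabularies."""
    X = np.stack([word2vec[w] for w in shared_words])     # (n, d_w2v)
    Y = np.stack([encoder_emb[w] for w in shared_words])  # (n, d_enc)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)             # solve X W ~= Y
    return W

def expanded_vector(word2vec, W, word):
    """Encoder-space vector for any word that has a word2vec vector."""
    return word2vec[word] @ W
```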
2 Approach
2.1 Inducing skip-thought vectors
We treat skip-thoughts in the framework of encoder-decoder models 1 . That is, an encoder maps
words to a sentence vector and a decoder is used to generate the surrounding sentences. Encoderdecoder models have gained a lot of traction for neural machine translation. In this setting, an
encoder is used to map e.g. an English sentence into a vector. The decoder then conditions on this
vector to generate a translation for the source English sentence. Several choices of encoder-decoder
pairs have been explored, including ConvNet-RNN [10], RNN-RNN [11] and LSTM-LSTM [12].
The source sentence representation can also dynamically change through the use of an attention
mechanism [13] to take into account only the relevant words for translation at any given time. In our
model, we use an RNN encoder with GRU [14] activations and an RNN decoder with a conditional
GRU. This model combination is nearly identical to the RNN encoder-decoder of [11] used in neural
machine translation. GRU has been shown to perform as well as LSTM [2] on sequence modelling
tasks [14] while being conceptually simpler. GRU units have only 2 gates and do not require the use
of a cell. While we use RNNs for our model, any encoder and decoder can be used so long as we
can backpropagate through it.
Assume we are given a sentence tuple (si−1, si, si+1). Let w_i^t denote the t-th word for sentence si and let x_i^t denote its word embedding. We describe the model in three parts: the encoder, decoder
and objective function.
Encoder. Let w_i^1, ..., w_i^N be the words in sentence si where N is the number of words in the sentence. At each time step, the encoder produces a hidden state h_i^t which can be interpreted as the representation of the sequence w_i^1, ..., w_i^t. The hidden state h_i^N thus represents the full sentence.
¹ A preliminary version of our model was developed in the context of a computer vision application [9].
Query and nearest sentence
he ran his hand inside his coat , double-checking that the unopened letter was still there .
he slipped his hand between his coat and his shirt , where the folded copies lay in a brown envelope .
im sure youll have a glamorous evening , she said , giving an exaggerated wink .
im really glad you came to the party tonight , he said , turning to her .
although she could tell he had n't been too invested in any of their other chitchat , he seemed genuinely curious about this .
although he had n't been following her career with a microscope , he 'd definitely taken notice of her appearances .
an annoying buzz started to ring in my ears , becoming louder and louder as my vision began to swim .
a weighty pressure landed on my lungs and my vision blurred at the edges , threatening my consciousness altogether .
if he had a weapon , he could maybe take out their last imp , and then beat up errol and vanessa .
if he could ram them from behind , send them sailing over the far side of the levee , he had a chance of stopping them .
then , with a stroke of luck , they saw the pair head together towards the portaloos .
then , from out back of the house , they heard a horse scream probably in answer to a pair of sharp spurs digging deep into its flanks .
" i 'll take care of it , " goodman said , taking the phonebook .
" i 'll do that , " julia said , coming in .
he finished rolling up scrolls and , placing them to one side , began the more urgent task of finding ale and tankards .
he righted the table , set the candle on a piece of broken plate , and reached for his flint , steel , and tinder .
Table 2: In each example, the first sentence is a query while the second sentence is its nearest
neighbour. Nearest neighbours were scored by cosine similarity from a random sample of 500,000
sentences from our corpus.
To encode a sentence, we iterate the following sequence of equations (dropping the subscript i):
r^t = σ(W_r x^t + U_r h^{t−1})          (1)
z^t = σ(W_z x^t + U_z h^{t−1})          (2)
h̄^t = tanh(W x^t + U(r^t ⊙ h^{t−1}))          (3)
h^t = (1 − z^t) ⊙ h^{t−1} + z^t ⊙ h̄^t          (4)
where h̄^t is the proposed state update at time t, z^t is the update gate, r^t is the reset gate and ⊙ denotes a component-wise product. Both update gates take values between zero and one.
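For concreteness, a minimal NumPy sketch of one encoder step implementing Equations (1)-(4); the weight names mirror the equations and the shapes are left as assumptions:

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def gru_step(x_t, h_prev, Wr, Ur, Wz, Uz, W, U):
        """One GRU encoder step, following Equations (1)-(4)."""
        r = sigmoid(Wr @ x_t + Ur @ h_prev)            # reset gate, Eq. (1)
        z = sigmoid(Wz @ x_t + Uz @ h_prev)            # update gate, Eq. (2)
        h_bar = np.tanh(W @ x_t + U @ (r * h_prev))    # proposed update, Eq. (3)
        return (1.0 - z) * h_prev + z * h_bar          # new hidden state, Eq. (4)

Iterating this step over the word embeddings x^1, ..., x^N of a sentence yields the final hidden state, which serves as the sentence representation.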
Decoder. The decoder is a neural language model which conditions on the encoder output h_i. The computation is similar to that of the encoder except we introduce matrices C_z, C_r and C that are used to bias the update gate, reset gate and hidden state computation by the sentence vector. One decoder is used for the next sentence s_{i+1} while a second decoder is used for the previous sentence s_{i−1}. Separate parameters are used for each decoder with the exception of the vocabulary matrix V, which is the weight matrix connecting the decoder's hidden state for computing a distribution over words. In what follows we describe the decoder for the next sentence s_{i+1} although an analogous computation is used for the previous sentence s_{i−1}. Let h_{i+1}^t denote the hidden state of the decoder at time t. Decoding involves iterating through the following sequence of equations (dropping the subscript i + 1):
r^t = σ(W_r^d x^{t−1} + U_r^d h^{t−1} + C_r h_i)          (5)
z^t = σ(W_z^d x^{t−1} + U_z^d h^{t−1} + C_z h_i)          (6)
h̄^t = tanh(W^d x^{t−1} + U^d(r^t ⊙ h^{t−1}) + C h_i)          (7)
h_{i+1}^t = (1 − z^t) ⊙ h^{t−1} + z^t ⊙ h̄^t          (8)
Given h_{i+1}^t, the probability of word w_{i+1}^t given the previous t − 1 words and the encoder vector is

P(w_{i+1}^t | w_{i+1}^{<t}, h_i) ∝ exp(v_{w_{i+1}^t} h_{i+1}^t)          (9)

where v_{w_{i+1}^t} denotes the row of V corresponding to the word w_{i+1}^t. An analogous computation is performed for the previous sentence s_{i−1}.
Objective. Given a tuple (s_{i−1}, s_i, s_{i+1}), the objective optimized is the sum of the log-probabilities for the forward and backward sentences conditioned on the encoder representation:

Σ_t log P(w_{i+1}^t | w_{i+1}^{<t}, h_i) + Σ_t log P(w_{i−1}^t | w_{i−1}^{<t}, h_i)          (10)

The total objective is the above summed over all such training tuples.
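A short sketch of the per-sentence term in Equations (9)-(10), assuming the decoder hidden states have already been computed via Equations (5)-(8); all names here are illustrative:

    import numpy as np

    def log_softmax(logits):
        logits = logits - logits.max()
        return logits - np.log(np.exp(logits).sum())

    def sentence_log_prob(dec_states, word_ids, V):
        """Sum over t of log P(w^t | w^{<t}, h_i), as in Eq. (9)."""
        return sum(log_softmax(V @ h_t)[w_t]
                   for h_t, w_t in zip(dec_states, word_ids))

    # Per-tuple objective (Eq. 10): forward + backward log-probabilities.
    # objective = (sentence_log_prob(next_states, next_ids, V)
    #              + sentence_log_prob(prev_states, prev_ids, V))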
2.2 Vocabulary expansion
We now describe how to expand our encoder?s vocabulary to words it has not seen during training.
Suppose we have a model that was trained to induce word representations, such as word2vec. Let
V_w2v denote the word embedding space of these word representations and let V_rnn denote the RNN word embedding space. We assume the vocabulary of V_w2v is much larger than that of V_rnn. Our goal is to construct a mapping f : V_w2v → V_rnn parameterized by a matrix W such that v′ = Wv for v ∈ V_w2v and v′ ∈ V_rnn. Inspired by [15], which learned linear mappings between translation word spaces, we solve an un-regularized L2 linear regression loss for the matrix W. Thus, any word from V_w2v can now be mapped into V_rnn for encoding sentences.
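A minimal sketch of this expansion step (ordinary least squares over the shared vocabulary; variable names and the row-vector convention are assumptions):

    import numpy as np

    def learn_vocab_expansion(X_w2v, Y_rnn):
        """Fit W minimizing ||X_w2v W - Y_rnn||^2, where corresponding rows
        are word2vec and RNN embeddings of the same shared word."""
        W, *_ = np.linalg.lstsq(X_w2v, Y_rnn, rcond=None)
        return W

    # Any word2vec vector v (even for a word unseen during training) maps
    # into the encoder space as v @ W for use when encoding new sentences.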
3 Experiments
In our experiments, we evaluate the capability of our encoder as a generic feature extractor after
training on the BookCorpus dataset. Our experimentation setup on each task is as follows:
- Using the learned encoder as a feature extractor, extract skip-thought vectors for all sentences.
- If the task involves computing scores between pairs of sentences, compute component-wise features between pairs. This is described in more detail specifically for each experiment.
- Train a linear classifier on top of the extracted features, with no additional fine-tuning or backpropagation through the skip-thoughts model.
We restrict ourselves to linear classifiers for two reasons. The first is to directly evaluate the representation quality of the computed vectors. It is possible that additional performance gains can be
made throughout our experiments with non-linear models, but this falls outside the scope of our goal. Furthermore, it allows us to better analyze the strengths and weaknesses of the learned representations.
The second reason is that reproducibility now becomes very straightforward.
3.1 Details of training
To induce skip-thought vectors, we train two separate models on our book corpus. One is a unidirectional encoder with 2400 dimensions, which we subsequently refer to as uni-skip. The other is
a bidirectional model with forward and backward encoders of 1200 dimensions each. This model
contains two encoders with different parameters: one encoder is given the sentence in correct order, while the other is given the sentence in reverse. The outputs are then concatenated to form a
2400 dimensional vector. We refer to this model as bi-skip. For training, we initialize all recurrent
matrices with orthogonal initialization [16]. Non-recurrent weights are initialized from a uniform
distribution in [-0.1,0.1]. Mini-batches of size 128 are used and gradients are clipped if the norm of
the parameter vector exceeds 10. We used the Adam algorithm [17] for optimization. Both models were trained for roughly two weeks. As an additional experiment, we also report experimental
results using a combined model, consisting of the concatenation of the vectors from uni-skip and
bi-skip, resulting in a 4800 dimensional vector. We refer to this model throughout as combine-skip.
After our models are trained, we then employ vocabulary expansion to map word embeddings into
the RNN encoder space. The publicly available CBOW word vectors are used for this purpose². The skip-thought models are trained with a vocabulary size of 20,000 words. After removing
multiple word examples from the CBOW model, this results in a vocabulary size of 930,911 words.
Thus even though our skip-thoughts model was trained with only 20,000 words, after vocabulary
expansion we can now successfully encode 930,911 possible words.
Since our goal is to evaluate skip-thoughts as a general feature extractor, we keep text pre-processing
to a minimum. When encoding new sentences, no additional preprocessing is done other than basic
tokenization. This is done to test the robustness of our vectors. As an additional baseline, we also
consider the mean of the word vectors learned from the uni-skip model. We refer to this baseline as
bow. This is to determine the effectiveness of a standard baseline trained on the BookCorpus.
3.2 Semantic relatedness
Our first experiment is on the SemEval 2014 Task 1: semantic relatedness SICK dataset [30]. Given
two sentences, our goal is to produce a score of how semantically related these sentences are, based
on human generated scores. Each score is the average of 10 different human annotators. Scores
take values between 1 and 5. A score of 1 indicates that the sentence pair is not at all related, while
² http://code.google.com/p/word2vec/
Method                        r        ρ        MSE
Illinois-LH [18]              0.7993   0.7538   0.3692
UNAL-NLP [19]                 0.8070   0.7489   0.3550
Meaning Factory [20]          0.8268   0.7721   0.3224
ECNU [21]                     0.8414   —        —
Mean vectors [22]             0.7577   0.6738   0.4557
DT-RNN [23]                   0.7923   0.7319   0.3822
SDT-RNN [23]                  0.7900   0.7304   0.3848
LSTM [22]                     0.8528   0.7911   0.2831
Bidirectional LSTM [22]       0.8567   0.7966   0.2736
Dependency Tree-LSTM [22]     0.8676   0.8083   0.2532
bow                           0.7823   0.7235   0.3975
uni-skip                      0.8477   0.7780   0.2872
bi-skip                       0.8405   0.7696   0.2995
combine-skip                  0.8584   0.7916   0.2687
combine-skip+COCO             0.8655   0.7995   0.2561

Method                        Acc      F1
feats [24]                    73.2     —
RAE+DP [24]                   72.6     —
RAE+feats [24]                74.2     —
RAE+DP+feats [24]             76.8     83.6
FHS [25]                      75.0     82.7
PE [26]                       76.1     82.7
WDDP [27]                     75.6     83.0
MTMETRICS [28]                77.4     84.1
TF-KLD [29]                   80.4     86.0
bow                           67.8     80.3
uni-skip                      73.0     81.9
bi-skip                       71.2     81.2
combine-skip                  73.0     82.0
combine-skip + feats          75.8     83.0

Table 3: Left: Test set results on the SICK semantic relatedness subtask. The evaluation metrics are Pearson's r, Spearman's ρ, and mean squared error. The first group of results are SemEval 2014 submissions, while the second group are results reported by [22]. Right: Test set results on the Microsoft Paraphrase Corpus. The evaluation metrics are classification accuracy and F1 score. Top: recursive autoencoder variants. Middle: the best published results on this dataset.
a score of 5 indicates they are highly related. The dataset comes with a predefined split of 4500
training pairs, 500 development pairs and 4927 testing pairs. All sentences are derived from existing
image and video annotation datasets. The evaluation metrics are Pearson's r, Spearman's ρ, and
mean squared error.
Given the difficulty of this task, many existing systems employ a large amount of feature engineering
and additional resources. Thus, we test how well our learned representations fair against heavily engineered pipelines. Recently, [22] showed that learning representations with LSTM or Tree-LSTM
for the task at hand is able to outperform these existing systems. We take this one step further
and see how well our vectors learned from a completely different task are able to capture semantic
relatedness when only a linear model is used on top to predict scores.
To represent a sentence pair, we use two features. Given two skip-thought vectors u and v, we compute their component-wise product u ⊙ v and their absolute difference |u − v| and concatenate them together. These two features were also used by [22]. To predict a score, we use the same setup as [22]. Let r^T = [1, . . . , 5] be an integer vector from 1 to 5. We compute a distribution p as a function of prediction scores y given by p_i = y − ⌊y⌋ if i = ⌊y⌋ + 1, p_i = ⌊y⌋ − y + 1 if i = ⌊y⌋, and 0 otherwise. These then become our targets for a logistic regression classifier. At test time, given new sentence pairs we first compute targets p̂ and then compute the related score as r^T p̂.
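A small sketch of the feature construction and target distribution described above (hypothetical helper names):

    import numpy as np

    def pair_features(u, v):
        """Component-wise product and absolute difference, concatenated."""
        return np.concatenate([u * v, np.abs(u - v)])

    def score_to_target(y, K=5):
        """Target distribution p over integer scores 1..K with expectation y."""
        p = np.zeros(K)
        f = int(np.floor(y))
        if f < K:
            p[f] = y - f              # p_i for i = floor(y) + 1 (0-indexed: f)
        p[f - 1] += f - y + 1.0       # p_i for i = floor(y)
        return p

    # At test time: relatedness = np.arange(1, K + 1) @ p_hat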
As an additional comparison, we also explored appending features derived from an image-sentence
embedding model trained on COCO (see section 3.4). Given vectors u and v, we obtain vectors u′ and v′ from the learned linear embedding model and compute features u′ ⊙ v′ and |u′ − v′|. These
are then concatenated to the existing features.
Table 3 (left) presents our results. First, we observe that our models are able to outperform all
previous systems from the SemEval 2014 competition. It highlights that skip-thought vectors learn
representations that are well suited for semantic relatedness. Our results are comparable to LSTMs
whose representations are trained from scratch on this task. Only the dependency tree-LSTM of [22]
performs better than our results. We note that the dependency tree-LSTM relies on parsers whose
training data is very expensive to collect and does not exist for all languages. We also observe
using features learned from an image-sentence embedding model on COCO gives an additional
performance boost, resulting in a model that performs on par with the dependency tree-LSTM. To
get a feel for the model outputs, Table 4 shows example cases of test set pairs. Our model is able to
accurately predict relatedness on many challenging cases. On some examples, it fails to pick up on
small distinctions that drastically change a sentence meaning, such as tricks on a motorcycle versus
tricking a person on a motorcycle.
3.3 Paraphrase detection
The next task we consider is paraphrase detection on the Microsoft Research Paraphrase Corpus [31]. On this task, two sentences are given and one must predict whether or not they are
Sentence 1                                           Sentence 2                                               GT    pred
A little girl is looking at a woman in costume       A young girl is looking at a woman in costume            4.7   4.5
A little girl is looking at a woman in costume       The little girl is looking at a man in costume           3.8   4.0
A little girl is looking at a woman in costume       A little girl in costume looks like a woman              2.9   3.5
A sea turtle is hunting for fish                     A sea turtle is hunting for food                         4.5   4.5
A sea turtle is not hunting for fish                 A sea turtle is hunting for fish                         3.4   3.8
A man is driving a car                               The car is being driven by a man                         5     4.9
There is no man driving the car                      A man is driving a car                                   3.6   3.5
A large duck is flying over a rocky stream           A duck, which is large, is flying over a rocky stream    4.8   4.9
A large duck is flying over a rocky stream           A large stream is full of rocks, ducks and flies         2.7   3.1
A person is performing acrobatics on a motorcycle    A person is performing tricks on a motorcycle            4.3   4.4
A person is performing tricks on a motorcycle        The performer is tricking a person on a motorcycle       2.6   4.4
Someone is pouring ingredients into a pot            Someone is adding ingredients to a pot                   4.4   4.0
Nobody is pouring ingredients into a pot             Someone is pouring ingredients into a pot                3.5   4.2
Someone is pouring ingredients into a pot            A man is removing vegetables from a pot                  2.4   3.6
Table 4: Example predictions from the SICK test set. GT is the ground truth relatedness, scored
between 1 and 5. The last few results show examples where slight changes in sentence structure
result in large changes in relatedness which our model was unable to score correctly.
COCO Retrieval
                      Image Annotation                  Image Search
Model                 R@1    R@5    R@10   Med r        R@1    R@5    R@10   Med r
Random Ranking        0.1    0.6    1.1    631          0.1    0.5    1.0    500
DVSA [32]             38.4   69.6   80.5   1            27.4   60.2   74.8   3
GMM+HGLMM [33]        39.4   67.9   80.9   2            25.1   59.8   76.6   4
m-RNN [34]            41.0   73.0   83.5   2            29.0   42.2   77.0   3
bow                   33.6   65.8   79.7   3            24.4   57.1   73.5   4
uni-skip              30.6   64.5   79.8   3            22.7   56.4   71.7   4
bi-skip               32.7   67.3   79.6   3            24.2   57.1   73.2   4
combine-skip          33.8   67.7   82.1   3            25.9   60.0   74.6   4
Table 5: COCO test-set results for image-sentence retrieval experiments. R@K is Recall@K (high
is good). Med r is the median rank (low is good).
paraphrases. The training set consists of 4076 sentence pairs (2753 which are positive) and the
test set has 1725 pairs (1147 are positive). We compute a vector representing the pair of sentences
in the same way as on the SICK dataset, using the component-wise product u ⊙ v and their absolute difference |u − v| which are then concatenated together. We then train logistic regression on top to
predict whether the sentences are paraphrases. Cross-validation is used for tuning the L2 penalty.
As in the semantic relatedness task, paraphrase detection has largely been dominated by extensive
feature engineering, or a combination of feature engineering with semantic spaces. We report experiments in two settings: one using the features as above and the other incorporating basic statistics
between sentence pairs, the same features used by [24]. These are referred to as feats in our results.
We isolate the results and baselines used in [24] as well as the top published results on this task.
Table 3 (right) presents our results, from which we can observe the following: (1) skip-thoughts
alone outperform recursive nets with dynamic pooling when no hand-crafted features are used, (2)
when other features are used, recursive nets with dynamic pooling works better, and (3) when skipthoughts are combined with basic pairwise statistics, it becomes competitive with the state-of-the-art
which incorporate much more complicated features and hand-engineering. This is a promising result
as many of the sentence pairs have very fine-grained details that signal if they are paraphrases.
3.4 Image-sentence ranking
We next consider the task of retrieving images and their sentence descriptions. For this experiment,
we use the Microsoft COCO dataset [35] which is the largest publicly available dataset of images
with high-quality sentence descriptions. Each image is annotated with 5 captions, each from different annotators. Following previous work, we consider two tasks: image annotation and image
search. For image annotation, an image is presented and sentences are ranked based on how well
they describe the query image. The image search task is the reverse: given a caption, we retrieve
images that are a good fit to the query. The training set comes with over 80,000 images each with 5
captions. For development and testing we use the same splits as [32]. The development and test sets
each contain 1000 images and 5000 captions. Evaluation is performed using Recall@K, namely the
mean number of images for which the correct caption is ranked within the top-K retrieved results
(and vice-versa for sentences). We also report the median rank of the closest ground truth result
from the ranked list.
The best performing results on image-sentence ranking have all used RNNs for encoding sentences,
where the sentence representation is learned jointly. Recently, [33] showed that by using Fisher
vectors for representing sentences, linear CCA can be applied to obtain performance that is as strong
as using RNNs for this task. Thus the method of [33] is a strong baseline to compare our sentence
representations with. For our experiments, we represent images using 4096-dimensional OxfordNet
features from their 19-layer model [36]. For sentences, we simply extract skip-thought vectors for
each caption. The training objective we use is a pairwise ranking loss that has been previously
used by many other methods. The only difference is the scores are computed using only linear
transformations of image and sentence inputs. The loss is given by:
Σ_x Σ_k max{0, α − s(Ux, Vy) + s(Ux, Vy_k)} + Σ_y Σ_k max{0, α − s(Vy, Ux) + s(Vy, Ux_k)},

where x is an image vector, y is the skip-thought vector for the ground-truth sentence, y_k are vectors for contrastive (incorrect) sentences and s(·, ·) is the image-sentence score. Cosine similarity is used for scoring. The model parameters are {U, V} where U is the image embedding matrix and V is the sentence embedding matrix. In our experiments, we use a 1000 dimensional embedding, margin α = 0.2 and k = 50 contrastive terms. We trained for 15 epochs and saved our model
anytime the performance improved on the development set.
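A condensed sketch of this ranking objective for a single positive pair (function and variable names are illustrative):

    import numpy as np

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    def pair_ranking_loss(U, V, x, y, xs_contrast, ys_contrast, alpha=0.2):
        """Bidirectional max-margin ranking loss from the equation above."""
        ux, vy = U @ x, V @ y
        loss = sum(max(0.0, alpha - cosine(ux, vy) + cosine(ux, V @ yk))
                   for yk in ys_contrast)
        loss += sum(max(0.0, alpha - cosine(vy, ux) + cosine(vy, U @ xk))
                    for xk in xs_contrast)
        return loss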
Table 5 illustrates our results on this task. Using skip-thought vectors for sentences, we get performance that is on par with both [32] and [33] except for R@1 on image annotation, where other methods perform much better. Our results indicate that skip-thought vectors are representative enough
to capture image descriptions without having to learn their representations from scratch. Combined
with the results of [33], it also highlights that simple, scalable embedding techniques perform very
well provided that high-quality image and sentence vectors are available.
3.5 Classification benchmarks
For our final quantitative experiments, we report results on several classification benchmarks which
are commonly used for evaluating sentence representation learning methods.
We use 5 datasets: movie review sentiment (MR), customer product reviews (CR), subjectivity/objectivity classification (SUBJ), opinion polarity (MPQA) and question-type classification
(TREC). On all datasets, we simply extract skip-thought vectors and train a logistic regression classifier on top. 10-fold cross-validation is used for evaluation on the first 4 datasets, while TREC has
a pre-defined train/test split. We tune the L2 penalty using cross-validation (and thus use a nested
cross-validation for the first 4 datasets).
Method                  MR     CR     SUBJ   MPQA   TREC
NB-SVM [37]             79.4   81.8   93.2   86.3   —
MNB [37]                79.0   80.0   93.6   86.3   —
cBoW [6]                77.2   79.9   91.3   86.4   87.3
GrConv [6]              76.3   81.3   89.5   84.5   88.4
RNN [6]                 77.2   82.3   93.7   90.1   90.2
BRNN [6]                82.3   82.6   94.2   90.3   91.0
CNN [4]                 81.5   85.0   93.4   89.6   93.6
AdaSent [6]             83.1   86.3   95.5   93.3   92.4
Paragraph-vector [7]    74.8   78.1   90.5   74.2   91.8
bow                     75.0   80.4   91.2   87.0   84.8
uni-skip                75.5   79.3   92.1   86.9   91.4
bi-skip                 73.9   77.9   92.5   83.3   89.4
combine-skip            76.5   80.1   93.6   87.1   92.2
combine-skip + NB       80.4   81.3   93.6   87.5   —
Table 6: Classification accuracies on several standard benchmarks. Results are grouped as follows: (a) bag-of-words models; (b) supervised compositional models; (c) Paragraph Vector (unsupervised learning of sentence representations); (d) ours. Best results overall are bold while best results outside of group (b) are underlined.

On these tasks, properly tuned bag-of-words models have been shown to perform exceptionally well. In particular, the NB-SVM of [37] is a fast and robust performer on these tasks. Skip-thought vectors potentially give an alternative to these baselines, being just as fast and easy to use. For an additional comparison, we also see to what effect augmenting skip-thoughts with bigram Naive Bayes (NB) features improves performance³. Table 6 presents our results. On most tasks, skip-thoughts performs about as well as the bag-of-words baselines but fails to improve over methods whose sentence representations are learned directly for the task at hand. This indicates that for tasks like sentiment classification, tuning the representations, even on small datasets, is likely to perform better than learning a generic unsupervised
³ We use the code available at https://github.com/mesnilgr/nbsvm
(a) TREC        (b) SUBJ        (c) SICK
Figure 2: t-SNE embeddings of skip-thought vectors on different datasets. Points are colored based
on their labels (question type for TREC, subjectivity/objectivity for SUBJ). On the SICK dataset,
each point represents a sentence pair and points are colored on a gradient based on their relatedness
labels. Results best seen in electronic form.
sentence vector on much bigger datasets. Finally, we observe that the skip-thoughts-NB combination is effective, particularly on MR. This results in a very strong new baseline for text classification:
combine skip-thoughts with bag-of-words and train a linear model.
3.6 Visualizing skip-thoughts
As a final experiment, we applied t-SNE [38] to skip-thought vectors extracted from TREC, SUBJ
and SICK datasets and the visualizations are shown in Figure 2. For the SICK visualization, each
point represents a sentence pair, computed using the concatenation of component-wise and absolute
difference of features. Even without the use of relatedness labels, skip-thought vectors learn to
accurately capture this property.
4 Conclusion
We evaluated the effectiveness of skip-thought vectors as an off-the-shelf sentence representation
with linear classifiers across 8 tasks. Many of the methods we compare against were only evaluated
on 1 task. The fact that skip-thought vectors perform well on all tasks considered highlights the
robustness of our representations.
We believe our model for learning skip-thought vectors only scratches the surface of possible objectives. Many variations have yet to be explored, including (a) deep encoders and decoders, (b) larger
context windows, (c) encoding and decoding paragraphs, (d) other encoders, such as convnets. It is
likely the case that more exploration of this space will result in even higher quality representations.
Acknowledgments
We thank Geoffrey Hinton for suggesting the name skip-thoughts. We also thank Felix Hill, Kelvin
Xu, Kyunghyun Cho and Ilya Sutskever for valuable comments and discussion. This work was
supported by NSERC, Samsung, CIFAR, Google and ONR Grant N00014-14-1-0232.
References
[1] Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and
Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In
EMNLP, 2013.
[2] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
[3] Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network for modelling
sentences. ACL, 2014.
[4] Yoon Kim. Convolutional neural networks for sentence classification. EMNLP, 2014.
[5] Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of
neural machine translation: Encoder-decoder approaches. SSST-8, 2014.
[6] Han Zhao, Zhengdong Lu, and Pascal Poupart. Self-adaptive hierarchical sentence model. IJCAI, 2015.
[7] Quoc V Le and Tomas Mikolov. Distributed representations of sentences and documents. ICML, 2014.
[8] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations
in vector space. ICLR, 2013.
[9] Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and
Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and
reading books. In ICCV, 2015.
[10] Nal Kalchbrenner and Phil Blunsom. Recurrent continuous translation models. In EMNLP, pages 1700–1709, 2013.
[11] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua
Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation.
EMNLP, 2014.
[12] Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. Sequence to sequence learning with neural networks. In
NIPS, 2014.
[13] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning
to align and translate. ICLR, 2015.
[14] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated
recurrent neural networks on sequence modeling. NIPS Deep Learning Workshop, 2014.
[15] Tomas Mikolov, Quoc V Le, and Ilya Sutskever. Exploiting similarities among languages for machine
translation. arXiv preprint arXiv:1309.4168, 2013.
[16] Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of
learning in deep linear neural networks. ICLR, 2014.
[17] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015.
[18] Alice Lai and Julia Hockenmaier. Illinois-lh: A denotational and distributional approach to semantics.
SemEval 2014, 2014.
[19] Sergio Jimenez, George Duenas, Julia Baquero, Alexander Gelbukh, Av Juan Dios Bátiz, and Av Mendizábal. Unal-nlp: Combining soft cardinality features for semantic textual similarity, relatedness and
entailment. SemEval 2014, 2014.
[20] Johannes Bjerva, Johan Bos, Rob van der Goot, and Malvina Nissim. The meaning factory: Formal
semantics for recognizing textual entailment and determining semantic similarity. SemEval 2014, page
642, 2014.
[21] Jiang Zhao, Tian Tian Zhu, and Man Lan. Ecnu: One stone two birds: Ensemble of heterogenous measures for semantic relatedness and textual entailment. SemEval 2014, 2014.
[22] Kai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic representations from
tree-structured long short-term memory networks. ACL, 2015.
[23] Richard Socher, Andrej Karpathy, Quoc V Le, Christopher D Manning, and Andrew Y Ng. Grounded
compositional semantics for finding and describing images with sentences. TACL, 2014.
[24] Richard Socher, Eric H Huang, Jeffrey Pennin, Christopher D Manning, and Andrew Y Ng. Dynamic
pooling and unfolding recursive autoencoders for paraphrase detection. In NIPS, 2011.
[25] Andrew Finch, Young-Sook Hwang, and Eiichiro Sumita. Using machine translation evaluation techniques to determine sentence-level semantic equivalence. In IWP, 2005.
[26] Dipanjan Das and Noah A Smith. Paraphrase identification as probabilistic quasi-synchronous recognition. In ACL, 2009.
[27] Stephen Wan, Mark Dras, Robert Dale, and Cécile Paris. Using dependency-based features to take the
"para-farce" out of paraphrase. In Proceedings of the Australasian Language Technology Workshop, 2006.
[28] Nitin Madnani, Joel Tetreault, and Martin Chodorow. Re-examining machine translation metrics for
paraphrase identification. In NAACL, 2012.
[29] Yangfeng Ji and Jacob Eisenstein. Discriminative improvements to distributional sentence similarity. In
EMNLP, pages 891–896, 2013.
[30] Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. Semeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. SemEval-2014, 2014.
[31] Bill Dolan, Chris Quirk, and Chris Brockett. Unsupervised construction of large paraphrase corpora:
Exploiting massively parallel news sources. In Proceedings of the 20th international conference on Computational Linguistics, 2004.
[32] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In
CVPR, 2015.
[33] Benjamin Klein, Guy Lev, Gil Sadeh, and Lior Wolf. Associating neural word embeddings with deep
image representations using fisher vectors. In CVPR, 2015.
[34] Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, and Alan Yuille. Deep captioning with multimodal recurrent
neural networks (m-rnn). ICLR, 2015.
[35] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, pages 740–755. 2014.
[36] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. ICLR, 2015.
[37] Sida Wang and Christopher D Manning. Baselines and bigrams: Simple, good sentiment and topic classification. In ACL, 2012.
[38] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. JMLR, 2008.
5,470 | 5,951 | Learning to Linearize Under Uncertainty
Ross Goroshin*¹   Michael Mathieu*¹   Yann LeCun¹,²
¹ Dept. of Computer Science, Courant Institute of Mathematical Sciences, New York, NY
² Facebook AI Research, New York, NY
{goroshin,mathieu,yann}@cs.nyu.edu
Abstract
Training deep feature hierarchies to solve supervised learning tasks has achieved
state of the art performance on many problems in computer vision. However, a
principled way in which to train such hierarchies in the unsupervised setting has
remained elusive. In this work we suggest a new architecture and loss for training
deep feature hierarchies that linearize the transformations observed in unlabeled
natural video sequences. This is done by training a generative model to predict
video frames. We also address the problem of inherent uncertainty in prediction
by introducing latent variables that are non-deterministic functions of the input
into the network architecture.
1 Introduction
The recent success of deep feature learning in the supervised setting has inspired renewed interest
in feature learning in weakly supervised and unsupervised settings. Recent findings in computer
vision problems have shown that the representations learned for one task can be readily transferred
to others [10], which naturally leads to the question: does there exist a generically useful feature
representation, and if so what principles can be exploited to learn it?
Recently there has been a flurry of work on learning features from video using varying degrees of
supervision [14][12][13]. Temporal coherence in video can be considered as a form of weak supervision that can be exploited for feature learning. More precisely, if we assume that data occupies
some low dimensional "manifold" in a high dimensional space, then videos can be considered as
one-dimensional trajectories on this manifold parametrized by time. Many unsupervised learning
algorithms can be viewed as various parameterizations (implicit or explicit) of the data manifold
[1]. For instance, sparse coding implicitly assumes a locally linear model of the data manifold [9].
In this work, we assume that deep convolutional networks are good parametric models for natural
data. Parameterizations of the data manifold can be learned by training these networks to linearize
short temporal trajectories, thereby implicitly learning a local parametrization.
In this work we cast the linearization objective as a frame prediction problem. As in many other
unsupervised learning schemes, this necessitates a generative model. Several recent works have also
trained deep networks for the task of frame prediction [12][14][13]. However, unlike other works
that focus on prediction as a final objective, in this work prediction is regarded as a proxy for learning representations. We introduce a loss and architecture that addresses two main problems in frame
prediction: (1) minimizing L2 error between the predicted and actual frame leads to unrealistically
blurry predictions, which potentially compromises the learned representation, and (2) copying the
most recent frame to the input seems to be a hard-to-escape trap of the objective function, which
results in the network learning little more than the identity function. We argue that the source of
blur partially stems from the inherent unpredictability of natural data; in cases where multiple valid
predictions are plausible, a deterministic network will learn to average between all the plausible predictions. To address the first problem we introduce a set of latent variables that are non-deterministic
* Equal contribution
functions of the input, which are used to explain the unpredictable aspects of natural videos. The
second problem is addressed by introducing an architecture that explicitly formulates the prediction
in the linearized feature space.
The paper is organized as follows. Section 2 reviews relevant prior work. Section 3 introduces the
basic architecture used for learning linearized representations. Subsection 3.1 introduces "phase-pooling", an operator that facilitates linearization by inducing a topology on the feature space. Subsection 3.2 introduces a latent variable formulation as a means of learning to linearize under uncertainty. Section 4 presents experimental results on relatively simple datasets to illustrate the main
ideas of our work. Finally, Section 5 offers directions for future research.
2 Prior Work
This work was heavily inspired by the philosophy revived by Hinton et al. [5], which introduced
"capsule" units. In that work, an equivariant representation is learned by the capsules when the
true latent states were provided to the network as implicit targets. Our work allows us to move
to a more unsupervised setting in which the true latent states are not only unknown, but represent
completely arbitrary qualities. This was made possible with two assumptions: (1) that temporally
adjacent samples also correspond to neighbors in the latent space, (2) predictions of future samples
can be formulated as linear operations in the latent space. In theory, the representation learned
by our method is very similar to the representation learned by the "capsules"; this representation has a locally stable "what" component and a locally linear, or equivariant "where" component.
Theoretical properties of linearizing features were studied in [3].
Several recent works propose schemes for learning representations from video which use varying
degrees of supervision[12][14][13][4]. For instance, [13] assumes that the pre-trained network from
[7] is already available and training consists of learning to mimic this network. Similarly, [14]
learns a representation by receiving supervision from a tracker. This work is more closely related to
fully unsupervised approaches for learning representations from video such as [4][6][2][15][8]. It
is most related to [12] which also trains a decoder to explicitly predict video frames. Our proposed
architecture was inspired by those presented in in [11] and [16].
3 Learning Linearized Representations
Our goal is to obtain a representation of each input sequence that varies linearly in time by transforming each frame individually. Furthermore, we assume that this transformation can be learned
by a deep, feed forward network referred to as the encoder, denoted by the function FW . Denote
the code for frame x^t by z^t = F_W(x^t). Assume that the dataset is parameterized by a temporal index t so it is described by the sequence X = {..., x^{t−1}, x^t, x^{t+1}, ...} with a corresponding feature sequence produced by the encoder Z = {..., z^{t−1}, z^t, z^{t+1}, ...}. Thus our goal is to train F_W to
produce a sequence Z whose average local curvature is smaller than sequence X. A scale invariant
local measure of curvature is the cosine distance between the two vectors formed by three temporally adjacent samples. However, minimizing the curvature directly can result in the trivial solutions:
z^t = ct ∀t and z^t = c ∀t. These solutions are trivial because they are virtually uninformative with
respect to the input xt and therefore cannot be a meaningful representation of the input. To avoid this
solution, we also minimize the prediction error in the input space. The predicted frame is generated in two steps: (i) linear extrapolation in code space to obtain a predicted code ẑ^{t+1} = a[z^t z^{t−1}]^T, followed by (ii) a decoding with G_W, which generates the predicted frame x̂^{t+1} = G_W(ẑ^{t+1}). For example, if a = [2, −1] the predicted code ẑ^{t+1} corresponds to a constant speed linear extrapolation of z^t and z^{t−1}. The L2 prediction error is minimized by jointly training the encoder and decoder
networks. Note that minimizing prediction error alone will not necessarily lead to low curvature
trajectories in Z since the decoder is unconstrained; the decoder may learn a many to one mapping
which maps different codes to the same output image without forcing them to be equal. To prevent
this, we add an explicit curvature penalty to the loss, corresponding to the cosine distance between
(z^t − z^{t−1}) and (z^{t+1} − z^t). The complete loss to minimize is:

L = (1/2) ||G_W(a[z^t z^{t−1}]^T) − x^{t+1}||_2^2 − λ · (z^t − z^{t−1})^T (z^{t+1} − z^t) / (||z^t − z^{t−1}|| ||z^{t+1} − z^t||)          (1)
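As a sketch, the loss of Equation 1 for a single triplet (NumPy; the prediction x̂^{t+1} is assumed to come from the decoder):

    import numpy as np

    def linearization_loss(z_prev, z_cur, z_next, x_pred, x_next, lam=1.0):
        """Eq. 1: L2 prediction error minus a cosine-similarity curvature bonus."""
        d1, d2 = z_cur - z_prev, z_next - z_cur
        cos = (d1 @ d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))
        return 0.5 * np.sum((x_pred - x_next) ** 2) - lam * cos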
Figure 1: (a) A video generated by translating a Gaussian intensity bump over a three pixel array
(x,y,z), (b) the corresponding manifold parametrized by time in three dimensional space
Figure 2: The basic linear prediction architecture with shared weight encoders
This feature learning scheme can be implemented using an autoencoder-like network with shared
encoder weights.
3.1 Phase Pooling
Thus far we have assumed a generic architecture for FW and GW . We now consider custom architectures and operators that are particularly suitable for the task of linearization. To motivate the
definition of these operators, consider a video generated by translating a Gaussian "intensity bump"
over a three pixel region at constant speed. The video corresponds to a one dimensional manifold in
three dimensional space, i.e. a curve parameterized by time (see Figure 1). Next, assume that some
convolutional feature detector fires only when centered on the bump. Applying the max-pooling
operator to the activations of the detector in this three-pixel region signifies the presence of the feature somewhere in this region (i.e. the "what"). Applying the argmax operator over the region returns the position (i.e. the "where") with respect to some local coordinate frame defined over the
pooling region. This position variable varies linearly as the bump translates, and thus parameterizes
the curve in Figure 1b. These two channels, namely the what and the where, can also be regarded
as generalized magnitude m and phase p, corresponding to a factorized representation: the magnitude represents the active set of parameters, while the phase represents the set of local coordinates
in this active set. We refer to the operator that outputs both the max and argmax channels as the
"phase-pooling" operator.
In this example, spatial pooling was used to linearize the translation of a fixed feature. More generally, the phase-pooling operator can locally linearize arbitrary transformations if pooling is performed not only spatially, but also across features in some topology.
In order to be able to back-propagate through p, we define a soft version of the max and argmax
operators within each pool group. For simplicity, assume that the encoder has a fully convolutional
architecture which outputs a set of feature maps, possibly of a different resolution than the input.
Although we can define an arbitrary topology in feature space, for now assume that we have the
familiar three-dimensional spatial feature map representation where each activation is a function
z(f, x, y), where x and y correspond to the spatial location, and f is the feature map index. Assuming that the feature activations are positive, we define our soft "max-pooling" operator for the k-th neighborhood N_k as:
m_k = Σ_{N_k} z(f, x, y) · e^{βz(f,x,y)} / Σ_{N_k} e^{βz(f′,x′,y′)} ≈ max_{N_k} z(f, x, y),          (2)
where β ≥ 0. Note that the fraction in the sum is a softmax operation (parametrized by β), which is positive and sums to one in each pooling region. The larger the β, the closer it is to a unimodal distribution and therefore the better m_k approximates the max operation. On the other hand, if β = 0, Equation 2 reduces to average-pooling. Finally, note that m_k is simply the expected value of z (restricted to N_k) under the softmax distribution.
Assuming that the activation pattern within each neighborhood is approximately unimodal, we can
define a soft versions of the argmax operator. The vector pk approximates the local coordinates
in the feature topology at which the max activation value occurred. Assuming that pooling is done
volumetrically, that is, spatially and across features, pk will have three components. In general, the
number of components in pk is equal to the dimension of the topology of our feature space induced
by the pooling neighborhood. The dimensionality of pk can also be interpreted as the maximal
intrinsic dimension of the data. If we define a local standard coordinate system in each pooling
volume to be bounded between -1 and +1, the soft "argmax-pooling" operator is defined by the vector-valued sum:

p_k = Σ_{N_k} [f, x, y]^T · e^{βz(f,x,y)} / Σ_{N_k} e^{βz(f′,x′,y′)} ≈ arg max_{N_k} z(f, x, y),          (3)
where the indices f, x, y take values from -1 to 1 in equal increments over the pooling region. Again, we observe that p_k is simply the expected value of [f, x, y]^T under the softmax distribution.
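A minimal sketch of both operators over one pooling neighborhood (Equations 2 and 3); the shapes and the coordinate convention here are assumptions:

    import numpy as np

    def phase_pool(z, beta=5.0):
        """Soft max (Eq. 2) and soft argmax (Eq. 3) over a neighborhood.
        z: positive activations of shape (F, X, Y)."""
        w = np.exp(beta * z)
        w /= w.sum()                                  # softmax over the region
        m = (w * z).sum()                             # magnitude, ~ max of z
        axes = [np.linspace(-1.0, 1.0, n) if n > 1 else np.zeros(1)
                for n in z.shape]                     # local coords in [-1, 1]
        f, x, y = np.meshgrid(*axes, indexing="ij")
        p = np.array([(w * f).sum(), (w * x).sum(), (w * y).sum()])
        return m, p                                   # phase p ~ argmax coords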
The phase-pooling operator acts on the output of the encoder, therefore it can simply be considered
as the last encoding step. Correspondingly we define an "un-pooling" operation as the first step of the
decoder, which produces reconstructed activation maps by placing the magnitudes m at appropriate
locations given by the phases p.
Because the phase-pooling operator produces both magnitude and phase signals for each of the two
input frames, it remains to define the predicted magnitude and phase of the third frame. In general,
this linear extrapolation operator can be learned, however "hard-coding" this operator allows us to
place implicit priors on the magnitude and phase channels. The predicted magnitude and phase are
defined as follows:
m^{t+1} = (m^t + m^{t−1}) / 2          (4)
p^{t+1} = 2p^t − p^{t−1}          (5)
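In code, the hard-coded prediction of Equations (4)-(5) is simply:

    def predict_pooled(m_prev, m_cur, p_prev, p_cur):
        """Stable magnitude (Eq. 4) and linearly extrapolated phase (Eq. 5)."""
        return 0.5 * (m_cur + m_prev), 2.0 * p_cur - p_prev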
Predicting the magnitude as the mean of the past imposes an implicit stability prior on m, i.e. the
temporal sequence corresponding to the m channel should be stable between adjacent frames. The
linear extrapolation of the phase variable imposes an implicit linear prior on p. Thus such an architecture produces a factorized representation composed of a locally stable m and locally linearly
varying p. When phase-pooling is used curvature regularization is only applied to the p variables.
The full prediction architecture is shown in Figure 2.
3.2 Addressing Uncertainty
Natural video can be inherently unpredictable; objects enter and leave the field of view, and out of
plane rotations can also introduce previously invisible content. In this case, the prediction should
correspond to the most likely outcome that can be learned by training on similar video. However, if
multiple outcomes are present in the training set then minimizing the L2 distance to these multiple
outcomes induces the network to predict the average outcome. In practice, this phenomena results in
blurry predictions and may lead the encoder to learn a less discriminative representation of the input.
To address this inherent unpredictability we introduce latent variables δ to the prediction architecture
that are not deterministic functions of the input. These variables can be adjusted using the target
4
x^{t+1} in order to minimize the prediction L2 error. The interpretation of these variables is that they explain all aspects of the prediction that are not captured by the encoder. For example, δ can be used to switch between multiple, equally likely predictions. It is important to control the capacity of δ to prevent it from explaining the entire prediction on its own. Therefore δ is restricted to act only as a correction term in the code space output by the encoder. To further restrict the capacity of δ we enforce that dim(δ) ≪ dim(z). More specifically, the δ-corrected code is defined as:
$$\tilde{z}_\delta^{t+1} \;=\; z^t + (W_1\,\delta) \odot a\,[z^t \;\; z^{t-1}]^T \qquad (6)$$

where W_1 is a trainable matrix of size dim(δ) × dim(z), and ⊙ denotes the component-wise product. During training, δ is inferred (using gradient descent) for each training sample by minimizing the loss in Equation 7. The corresponding adjusted z̃_δ^{t+1} is then used for back-propagation through W and W_1. At test time δ can be selected via sampling, assuming its distribution on the training set has been previously estimated.
$$L \;=\; \min_{\delta}\; \big\| G_W(\tilde{z}_\delta^{t+1}) - x^{t+1} \big\|_2^2 \;-\; \lambda\, \frac{(z^t - z^{t-1})^T (z^{t+1} - z^t)}{\|z^t - z^{t-1}\|\,\|z^{t+1} - z^t\|} \qquad (7)$$
The following algorithm details how the above loss is minimized using stochastic gradient descent:
Algorithm 1 Minibatch stochastic gradient descent training for prediction with uncertainty. The number of δ-gradient descent steps (k) is treated as a hyper-parameter.

for number of training epochs do
    Sample a mini-batch of temporal triplets {x^{t-1}, x^t, x^{t+1}}
    Set δ_0 = 0
    Forward propagate x^{t-1}, x^t through the network and obtain the codes z^{t-1}, z^t and the prediction x̂_0^{t+1}
    for i = 1 to k do
        Compute the L2 prediction error
        Back propagate the error through the decoder to compute the gradient ∂L/∂δ_{i-1}
        Update δ_i = δ_{i-1} - η ∂L/∂δ_{i-1}
        Compute z̃_{δ_i}^{t+1} = z^t + (W_1 δ_i) ⊙ a [z^t  z^{t-1}]^T
        Compute x̂_i^{t+1} = G_W(z̃_{δ_i}^{t+1})
    end for
    Back propagate the full encoder/predictor loss from Equation 7 using δ_k, and update the weight matrices W and W_1
end for
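As one concrete rendering of Algorithm 1, the PyTorch-style sketch below runs the inner δ-inference loop for a single mini-batch; the encoder, decoder, correction map a, optimizer, and all hyper-parameter values are placeholders assumed for illustration, and the codes are detached during the inner loop so that only δ is updated there.

import torch

def train_step(encoder, decoder, W1, a, batch, opt, k=5, eta=0.1, lam=0.1):
    """One mini-batch step of Algorithm 1 (delta inferred by k gradient steps)."""
    x_prev, x_curr, x_next = batch
    z_prev, z_curr = encoder(x_prev), encoder(x_curr)
    zc, zp, W1d = z_curr.detach(), z_prev.detach(), W1.detach()
    delta = torch.zeros(x_curr.size(0), W1.size(0), requires_grad=True)
    for _ in range(k):                                  # inner delta inference
        z_tilde = zc + (delta @ W1d) * a(zc, zp)        # Eq. 6
        err = ((decoder(z_tilde) - x_next) ** 2).sum()  # L2 prediction error
        grad, = torch.autograd.grad(err, delta)
        delta = (delta - eta * grad).detach().requires_grad_(True)
    delta = delta.detach()                              # delta_k is now fixed
    z_tilde = z_curr + (delta @ W1) * a(z_curr, z_prev)
    d1, d2 = z_curr - z_prev, z_tilde - z_curr          # curvature (cosine) term
    cos = (d1 * d2).sum(1) / (d1.norm(dim=1) * d2.norm(dim=1) + 1e-8)
    loss = ((decoder(z_tilde) - x_next) ** 2).sum() - lam * cos.sum()  # Eq. 7
    opt.zero_grad(); loss.backward(); opt.step()        # update W and W1
    return loss.item()

Here opt is assumed to hold the parameters of the encoder, the decoder, and W1, so that the final back-propagation realizes the outer update of Algorithm 1.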
When phase pooling is used, we allow δ to affect only the phase variables in Equation 5; this further encourages the magnitude to be stable and places all the uncertainty in the phase.
4 Experiments
The following experiments evaluate the proposed feature learning architecture and loss. In the first set of experiments we train a shallow architecture on natural data and visualize the learned features in order to gain a basic intuition. In the second set of experiments we train a deep architecture on simulated movies generated from the NORB dataset. By generating frames from interpolated and extrapolated points in code space we show that a linearized representation of the input is learned. Finally, we explore the role of uncertainty by training on only partially predictable sequences; we show that our latent variable formulation can account for this uncertainty, enabling the encoder to learn a linearized representation even in this setting.
4.1 Shallow Architecture Trained on Natural Data
To gain an intuition for the features learned by a phase-pooling architecture let us consider an encoder architecture comprised of the following stages: convolutional filter bank, rectifying point-wise
nonlinearity, and phase-pooling. The decoder architecture is comprised of an un-pooling stage followed by a convolutional filter bank. This architecture was trained on simulated 32 × 32 movie
Table 1: Summary of architectures

Shallow Architecture 1
  Encoder:    Conv+ReLU 64 × 9 × 9; Phase Pool 4
  Prediction: Average Mag.; Linear Extrap. Phase
  Decoder:    Conv 64 × 9 × 9

Shallow Architecture 2
  Encoder:    Conv+ReLU 64 × 9 × 9; Phase Pool 4, stride 2
  Prediction: Average Mag.; Linear Extrap. Phase
  Decoder:    Conv 64 × 9 × 9

Deep Architecture 1
  Encoder:    Conv+ReLU 16 × 9 × 9; Conv+ReLU 32 × 9 × 9; FC+ReLU 8192 × 4096
  Prediction: None
  Decoder:    FC+ReLU 8192 × 8192; Reshape 32 × 16 × 16; SpatialPadding 8 × 8; Conv+ReLU 16 × 9 × 9; SpatialPadding 8 × 8; Conv 1 × 9 × 9

Deep Architecture 2
  Encoder:    Conv+ReLU 16 × 9 × 9; Conv+ReLU 32 × 9 × 9; FC+ReLU 8192 × 4096
  Prediction: Linear Extrapolation
  Decoder:    FC+ReLU 4096 × 8192; Reshape 32 × 16 × 16; SpatialPadding 8 × 8; Conv+ReLU 16 × 9 × 9; SpatialPadding 8 × 8; Conv 1 × 9 × 9

Deep Architecture 3
  Encoder:    Conv+ReLU 16 × 9 × 9; Conv+ReLU 32 × 9 × 9; FC+ReLU 8192 × 4096; Reshape 64 × 8 × 8; Phase Pool 8 × 8
  Prediction: Average Mag.; Linear Extrap. Phase
  Decoder:    Unpool 8 × 8; FC+ReLU 4096 × 8192; Reshape 32 × 16 × 16; SpatialPadding 8 × 8; Conv+ReLU 16 × 9 × 9; SpatialPadding 8 × 8; Conv 1 × 9 × 9
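For concreteness, the convolutional trunk shared by the deep architectures in Table 1 can be written as a few PyTorch layers; the padding choices (none) are an assumption consistent with the listed sizes, since 32 × 32 inputs reduce to 32 × 16 × 16 = 8192 features.

import torch.nn as nn

# Siamese trunk of the deep architectures (applied to each input frame).
# Input: one 1 x 32 x 32 NORB frame; output: a 4096-dimensional code.
deep_encoder = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=9),   # Conv+ReLU 16 x 9 x 9  -> 16 x 24 x 24
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=9),  # Conv+ReLU 32 x 9 x 9  -> 32 x 16 x 16
    nn.ReLU(),
    nn.Flatten(),                      # 32 * 16 * 16 = 8192 features
    nn.Linear(8192, 4096),             # FC+ReLU 8192 x 4096
    nn.ReLU(),
)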
frames taken from YouTube videos [4]. Each frame triplet is generated by transforming still frames
with a sequence of three rigid transformations (translation, scale, rotation). More specifically, let A_τ be a random rigid transformation parameterized by τ, and let x denote a still image reshaped into a column vector; the generated triplet of frames is given by {f_1 = A_{τ=1/3} x, f_2 = A_{τ=2/3} x, f_3 = A_{τ=1} x} (a small generation sketch is given at the end of this subsection). Two variants of this architecture were trained; their full architectures are summarized in
the first two lines of Table 1. In Shallow Architecture 1, phase pooling is performed spatially in
non-overlapping groups of 4 × 4 and across features in a one-dimensional topology consisting of non-overlapping groups of four. Each of the 16 pool-groups produces a code consisting of a scalar m and a three-component p = [p_f, p_x, p_y]^T (corresponding to two spatial and one feature dimensions); thus the encoder architecture produces a code of size 16 × 4 × 8 × 8 for each frame. The corresponding filters whose activations were pooled together are laid out horizontally in groups of four in Figure 3(a). Note that each group learns to exhibit a strong ordering corresponding to the linearized variable p_f. Because global rigid transformations can be locally well approximated by translations, the features learn to parameterize local translations. In effect the network learns to linearize the input by tracking common features in the video sequence. Unlike the spatial phase variables, p_f can linearize sub-pixel translations. Next, the architecture described in column 2 of Table 1 was trained
on natural movie patches with the natural motion present in the real videos. The architecture differs only in that pooling across features is done with overlap (groups of 4, stride of 2). The resulting
decoder filters are displayed in Figure 3 (b). Note that pooling with overlap introduces smoother
transitions between the pool groups. Although some groups still capture translations, more complex
transformations are learned from natural movies.
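A minimal sketch of the triplet generation described in this subsection, using scipy's image tools; the transformation ranges are assumptions, and scaling is omitted for brevity although the paper also uses it.

import numpy as np
from scipy.ndimage import rotate, shift

def make_triplet(image, max_angle=20.0, max_shift=3.0, rng=None):
    """Generate {f1, f2, f3} = {A_{1/3} x, A_{2/3} x, A_1 x} from a still
    frame by applying increasing fractions of one random rigid transform."""
    rng = rng or np.random.default_rng()
    angle = rng.uniform(-max_angle, max_angle)
    dx, dy = rng.uniform(-max_shift, max_shift, size=2)
    frames = []
    for tau in (1.0 / 3.0, 2.0 / 3.0, 1.0):
        f = rotate(image, tau * angle, reshape=False, mode='nearest')
        f = shift(f, (tau * dy, tau * dx), mode='nearest')
        frames.append(f)
    return frames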
4.2 Deep Architecture Trained on NORB
In the next set of experiments we trained deep feature hierarchies that have the capacity to linearize
a richer class of transformations. To evaluate the properties of the learned features in a controlled
setting, the networks were trained on simulated videos generated using the NORB dataset rescaled to 32 × 32 to reduce training time. The simulated videos are generated by tracing constant speed
trajectories with random starting points in the two-dimensional latent space of pitch and azimuth rotations. In other words, the models are trained on triplets of frames ordered by their rotation angles.
As before, presented with two frames as input, the models are trained to predict the third frame.
Recall that prediction is merely a proxy for learning linearized feature representations. One way to
evaluate the linearization properties of the learned features is to linearly interpolate (or extrapolate)
Figure 3: Decoder filters learned by shallow phase-pooling architectures. (a) Shallow Architecture 1; (b) Shallow Architecture 2.
Figure 4: (a) Test samples input to the network. (b) Linear interpolation in code space learned by our Siamese-encoder network.
new codes and visualize the corresponding images via forward propagation through the decoder.
This simultaneously tests the encoder's capability to linearize the input and the decoder's (generative) capability to synthesize images from the linearized codes. In order to perform these tests we
must have an explicit code representation, which is not always available. For instance, consider a
simple scheme in which a generic deep network is trained to predict the third frame from the concatenated input of two previous frames. Such a network does not even provide an explicit feature
representation for evaluation. A simple baseline architecture that affords this type of evaluation is a Siamese encoder followed by a decoder; this exactly corresponds to our proposed architecture with the linear prediction layer removed. Such an architecture is equivalent to learning the weights of the
linear prediction layer of the model shown in Figure 2. In the following experiment we evaluate
the effects of: (1) fixing v.s. learning the linear prediction operator, (2) including the phase pooling
operation, (3) including explicit curvature regularization (second term in Equation 1).
Let us first consider Deep Architecture 1 summarized in Table 1. In this architecture a Siamese
encoder produces a code of size 4096 for each frame. The codes corresponding to the two frames
are concatenated together and propagated to the decoder. In this architecture the first linear layer of
the decoder can be interpreted as a learned linear prediction layer. Figure 4a shows three frames from
the test set corresponding to temporal indices 1,2, and 3, respectively. Figure 4b shows the generated
frames corresponding to interpolated codes at temporal indices: 0, 0.5, 1, 1.5, 2, 2.5, 3. The images
were generated by propagating the corresponding codes through the decoder. Codes corresponding
to non-integer temporal indices were obtained by linearly interpolating in code space.
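The interpolation test itself is a few lines once an encoder/decoder pair is available; a sketch under the assumption that encoder and decoder are callables mapping images to codes and back.

def interpolate_codes(encoder, decoder, x1, x2,
                      taus=(0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0)):
    """Decode linearly interpolated/extrapolated codes between two frames.

    The codes of x1 and x2 are placed at virtual temporal indices 1 and 2,
    so tau = 3 extrapolates one step beyond the inputs and tau = 0 one
    step before them.
    """
    z1, z2 = encoder(x1), encoder(x2)
    return [decoder(z1 + (tau - 1.0) * (z2 - z1)) for tau in taus]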
Deep Architecture 2 differs from Deep Architecture 1 in that it generates the predicted code via
a fixed linear extrapolation in code space. The extrapolated code is then fed to the decoder that
generates the predicted image. Note that the fully connected stage of the decoder has half as many
free parameters compared to the previous architecture. This architecture is further restricted by
propagating only the predicted code to the decoder. For instance, unlike in Deep Architecture 1, the
decoder cannot copy any of the input frames to the output. The generated images corresponding
to this architecture are shown in Figure 5a. These images more closely resemble images from
the dataset. Furthermore, Deep Architecture 2 achieves a lower L2 prediction error than Deep
Architecture 1.
Figure 5: Linear interpolation in code space learned by our model. (a) No phase-pooling, no curvature regularization. (b) With phase-pooling and curvature regularization. Interpolation results obtained by minimizing (c) Equation 1 and (d) Equation 7, trained with only partially predictable simulated video.
Finally, Deep Architecture 3 uses phase-pooling in the encoder, and "un-pooling" in the decoder.
This architecture makes use of phase-pooling in a two-dimensional feature space arranged on an 8 × 8 grid. The pooling is done in a single group over all the fully-connected features, producing a feature vector of dimension 192 (64 × 3) compared to 4096 in previous architectures. Nevertheless
this architecture achieves the best overall L2 prediction error and generates the most visually realistic
images (Figure 5b).

In this subsection we compare the representation learned by minimizing the loss
in Equation 1 to Equation 7. Uncertainty is simulated by generating triplet sequences where the third
frame is skipped randomly with equal probability, determined by Bernoulli variable s. For example, the sequences corresponding to models with rotation angles 0°, 20°, 40° and 0°, 20°, 60° are equally likely. Minimizing Equation 1 with Deep Architecture 3 results in the images displayed in Figure
5c. The interpolations are blurred due to the averaging effect discussed in Subsection 3.2. On the
other hand minimizing Equation 7 (Figure 5d) partially recovers the sharpness of Figure 5b. For this
experiment, we used a three-dimensional, real-valued δ. Moreover, training a linear predictor to infer the binary variable s from δ (after training) results in a 94% test set accuracy. This suggests that δ does indeed capture the uncertainty in the data.
5 Discussion
In this work we have proposed a new loss and architecture for learning locally linearized features from video. We have also proposed a method that introduces latent variables that are non-deterministic functions of the input for coping with inherent uncertainty in video. In future work we will suggest methods for "stacking" these architectures that will linearize more complex features over longer temporal scales.
Acknowledgments
We thank Jonathan Tompson, Joan Bruna, and David Eigen for many insightful discussions. We
also gratefully acknowledge NVIDIA Corporation for the donation of a Tesla K40 GPU used for
this research.
References
[1] Yoshua Bengio, Aaron C. Courville, and Pascal Vincent. Representation learning: A review and new perspectives. Technical report, University of Montreal, 2012.
[2] Charles F. Cadieu and Bruno A. Olshausen. Learning intermediate-level representations of form and motion from natural movies. Neural Computation, 2012.
[3] Taco S Cohen and Max Welling. Transformation properties of learned visual representations. arXiv preprint arXiv:1412.7659, 2014.
[4] Ross Goroshin, Joan Bruna, Jonathan Tompson, David Eigen, and Yann LeCun. Unsupervised learning of spatiotemporally coherent metrics. arXiv preprint arXiv:1412.6056, 2014.
[5] Geoffrey E Hinton, Alex Krizhevsky, and Sida D Wang. Transforming auto-encoders. In Artificial Neural Networks and Machine Learning, ICANN 2011, pages 44-51. Springer, 2011.
[6] Christoph Kayser, Wolfgang Einhäuser, Olaf Dümmer, Peter König, and Konrad Körding. Extracting slow subspaces from natural videos leads to complex cells. In ICANN 2001, 2001.
[7] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, volume 1, page 4, 2012.
[8] Hossein Mobahi, Ronan Collobert, and Jason Weston. Deep learning from temporal coherence in video. In ICML, 2009.
[9] Bruno A Olshausen and David J Field. Sparse coding of sensory inputs. Current Opinion in Neurobiology, 14(4):481-487, 2004.
[10] Maxime Oquab, Leon Bottou, Ivan Laptev, and Josef Sivic. Learning and transferring mid-level image representations using convolutional neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1717-1724. IEEE, 2014.
[11] M Ranzato, Fu Jie Huang, Y-L Boureau, and Yann LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In Computer Vision and Pattern Recognition, 2007. CVPR '07. IEEE Conference on, pages 1-8. IEEE, 2007.
[12] MarcAurelio Ranzato, Arthur Szlam, Joan Bruna, Michael Mathieu, Ronan Collobert, and Sumit Chopra. Video (language) modeling: a baseline for generative models of natural videos. arXiv preprint arXiv:1412.6604, 2014.
[13] Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Anticipating the future by watching unlabeled video. arXiv preprint arXiv:1504.08023, 2015.
[14] Xiaolong Wang and Abhinav Gupta. Unsupervised learning of visual representations using videos. arXiv preprint arXiv:1505.00687, 2015.
[15] Laurenz Wiskott and Terrence J. Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural Computation, 2002.
[16] Matthew D Zeiler, Dilip Krishnan, Graham W Taylor, and Robert Fergus. Deconvolutional networks. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 2528-2535. IEEE, 2010.
Synaptic Sampling: A Bayesian Approach to
Neural Network Plasticity and Rewiring
David Kappel1
Stefan Habenschuss1
Robert Legenstein
Wolfgang Maass
Institute for Theoretical Computer Science
Graz University of Technology
A-8010 Graz, Austria
[kappel, habenschuss, legi, maass]@igi.tugraz.at
Abstract
We reexamine in this article the conceptual and mathematical framework for understanding the organization of plasticity in spiking neural networks. We propose
that inherent stochasticity enables synaptic plasticity to carry out probabilistic inference by sampling from a posterior distribution of synaptic parameters. This
view provides a viable alternative to existing models that propose convergence of
synaptic weights to maximum likelihood parameters. It explains how priors on
weight distributions and connection probabilities can be merged optimally with
learned experience. In simulations we show that our model for synaptic plasticity
allows spiking neural networks to compensate continuously for unforeseen disturbances. Furthermore it provides a normative mathematical framework to better
understand the permanent variability and rewiring observed in brain networks.
1 Introduction
In the 19th century, Helmholtz proposed that perception could be understood as unconscious inference [1]. This insight has recently (re)gained considerable attention in models of Bayesian inference
in neural networks [2]. The hallmark of this theory is the assumption that the activity z of neuronal
networks can be viewed as an internal model for hidden variables in the outside world that give rise
to sensory experiences x. This hidden state z is usually assumed to be represented by the activity of
neurons in the network. A network N of stochastically firing neurons is modeled in this framework
by a probability distribution p_N(x, z | θ) that describes the probabilistic relationships between a set of N inputs x = (x^1, ..., x^N) and corresponding network responses z = (z^1, ..., z^N), where θ denotes the vector of network parameters that shape this distribution, e.g., via synaptic weights and network connectivity. The likelihood p_N(x | θ) = Σ_z p_N(x, z | θ) of the actually occurring inputs x
under the resulting internal model can then be viewed as a measure for the agreement between this
internal model (which carries out "predictive coding" [3]) and its environment (which generates x). The goal of network learning is usually described in this probabilistic generative framework as finding parameter values θ* that maximize this agreement, or equivalently the likelihood of the inputs x (maximum likelihood learning): θ* = arg max_θ p_N(x | θ). Locally optimal estimates of θ* can be determined by gradient ascent on the data likelihood p_N(x | θ), which led to many previous models of network plasticity [4, 5, 6]. While these models learn point estimates of locally optimal parameters θ*, theoretical considerations for artificial neural networks suggest that it is advantageous to
learn full posterior distributions p*(θ) over parameters. This full Bayesian treatment of learning
allows to integrate structural parameter priors in a Bayes-optimal way and promises better generalization of the acquired knowledge to new inputs [7, 8]. The problem how such posterior distributions
could be learned by brain networks has been highlighted in [2] as an important future challenge in
computational neuroscience.
¹ these authors contributed equally
Figure 1: Illustration of synaptic sampling for two parameters θ = {θ₁, θ₂} of a neural network N. A: 3D plot of an example likelihood function. For a fixed set of inputs x it assigns a probability density (amplitude on z-axis) to each parameter setting θ. The likelihood function is defined by the underlying neural network N. B: Example for a prior that prefers small values for θ. C: The posterior that results as product of the prior (B) and the likelihood (A). D: A single trajectory of synaptic sampling from the posterior (C), starting at the black dot. The parameter vector θ fluctuates between different solutions; the visited values cluster near local optima (red triangles). E: Cartoon illustrating the dynamic forces (plasticity rule (2)) that enable the network to sample from the posterior distribution p*(θ|x) in (D).
Here we introduce a possible solution to this problem. We present a new theoretical framework
for analyzing and understanding local plasticity mechanisms of networks of neurons as stochastic
processes that generate specific distributions p*(θ) of network parameters θ over which these parameters fluctuate. We call this new theoretical framework synaptic sampling. We use it here to
analyze and model unsupervised learning and rewiring in spiking neural networks. In Section 3
we show that the synaptic sampling hypothesis also provides a unified framework for structural and
synaptic plasticity, which are both integrated here into a single learning rule. This model captures
salient features of the permanent rewiring and fluctuation of synaptic efficacies observed in the cortex [9, 10]. In computer simulations, we demonstrate another advantage of the synaptic sampling
framework: It endows neural circuits with an inherent robustness against perturbations [11].
2 Learning a posterior distribution through stochastic synaptic plasticity
In our learning framework we assume that not only a neural network N as described above, but also a
prior p_S(θ) for its parameters θ = (θ_1, ..., θ_M) are given. This prior p_S can encode both structural constraints (such as sparse connectivity) and structural rules (e.g., a heavy-tailed distribution of synaptic weights). Then the goal of network learning becomes:

learn the posterior distribution:
$$p^*(\theta\,|\,x) \;=\; \frac{1}{Z}\, p_S(\theta)\, p_N(x\,|\,\theta)\,, \qquad (1)$$
with normalizing constant Z. A key insight (see Fig. 1 for an illustration) is that stochastic local
plasticity rules for the parameters θ_i enable a network to achieve the learning goal (1): The distribution of network parameters θ will converge after a while to the posterior distribution (1), and produce samples from it, if each network parameter θ_i obeys the dynamics

$$d\theta_i \;=\; \left( b(\theta_i)\,\frac{\partial}{\partial\theta_i}\log p_S(\theta) \;+\; b(\theta_i)\,\frac{\partial}{\partial\theta_i}\log p_N(x\,|\,\theta) \;+\; T\,b'(\theta_i) \right) dt \;+\; \sqrt{2\,T\,b(\theta_i)}\;dW_i\,, \qquad (2)$$

for i = 1, ..., M and b'(θ_i) = ∂/∂θ_i b(θ_i). The stochastic term dW_i describes infinitesimal stochastic increments and decrements of a Wiener process W_i, where process increments over time t − s are normally distributed with zero mean and variance t − s, i.e. W_i^t − W_i^s ~ NORMAL(0, t − s) [12]. The dynamics (2) extend previous models of Bayesian learning via sampling [13, 14] by including a temperature T > 0 and a sampling-speed parameter b(θ_i) > 0 that can depend on the current value
of θ_i without changing the stationary distribution. For example, the sampling speed of a synaptic weight can be slowed down if it reaches very high or very low values.
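Numerically, the dynamics (2) can be integrated with a standard Euler-Maruyama scheme; the sketch below assumes callables for the two log-density gradients and for b and its derivative, and the step size dt is an illustrative choice.

import numpy as np

def synaptic_sampling_step(theta, grad_log_prior, grad_log_lik,
                           b, b_prime, T=1.0, dt=1e-4, rng=None):
    """One Euler-Maruyama step of the SDE in Eq. (2) for all parameters."""
    rng = rng or np.random.default_rng()
    drift = (b(theta) * grad_log_prior(theta)
             + b(theta) * grad_log_lik(theta)
             + T * b_prime(theta))
    noise = np.sqrt(2.0 * T * b(theta) * dt) * rng.standard_normal(theta.shape)
    return theta + drift * dt + noise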
The temperature parameter T can be used to scale the diffusion term (i.e., the noise). The resulting stationary distribution of θ is proportional to p*(θ)^{1/T}, so that the dynamics of the stochastic process can be described by the energy landscape (1/T) log p*(θ). For high values of T this energy landscape is flattened, i.e., the main modes of p*(θ) become less pronounced. For T = 1 we arrive at the learning goal (1). For T → 0 the dynamics of θ approaches a deterministic process and converges to the next local maximum of p*(θ). Thus the learning process approximates for low values of T maximum a posteriori (MAP) inference [8]. The result is formalized in the following theorem:
Theorem 1. Let p(x, θ) be a strictly positive, continuous probability distribution over continuous or discrete states x and continuous parameters θ = (θ_1, ..., θ_M), twice continuously differentiable with respect to θ. Let b(θ) be a strictly positive, twice continuously differentiable function. Then the set of stochastic differential equations (2) leaves the distribution p*(θ) invariant:

$$p^*(\theta) \;=\; \frac{1}{Z'}\, p^*(\theta\,|\,x)^{\frac{1}{T}}\,, \qquad (3)$$

with $Z' = \int p^*(\theta\,|\,x)^{\frac{1}{T}}\, d\theta$. Furthermore, p*(θ) is the unique stationary distribution of (2).
Proof: First, note that the first two terms in the drift term of Eq. (2) can be written as
$$b(\theta_i)\,\frac{\partial}{\partial\theta_i}\log p_S(\theta) \;+\; b(\theta_i)\,\frac{\partial}{\partial\theta_i}\log p_N(x\,|\,\theta) \;=\; b(\theta_i)\,\frac{\partial}{\partial\theta_i}\log p(\theta_i\,|\,x, \theta_{\setminus i})\,,$$

where θ_\i denotes the vector of parameters excluding parameter θ_i. Hence, the dynamics (2) can be written in terms of an Itô stochastic differential equation with drift A_i(θ) and diffusion B_i(θ):

$$d\theta_i \;=\; \underbrace{\left( b(\theta_i)\,\frac{\partial}{\partial\theta_i}\log p(\theta_i\,|\,x, \theta_{\setminus i}) + T\,b'(\theta_i) \right)}_{\text{drift: } A_i(\theta)} dt \;+\; \underbrace{\sqrt{2\,T\,b(\theta_i)}}_{\text{diffusion: } B_i(\theta)}\, dW_i\,. \qquad (4)$$
This describes the stochastic dynamics of each parameter over time. For the stationary distribution we are interested in the dynamics of the distribution of parameters. Eq. (4) translates into the following Fokker-Planck equation, which determines the temporal dynamics of the distribution p_FP(θ, t) over network parameters θ at time t (see [12]),

$$\frac{d}{dt}\, p_{FP}(\theta, t) \;=\; \sum_i -\frac{\partial}{\partial\theta_i}\Big( A_i(\theta)\, p_{FP}(\theta, t) \Big) \;+\; \frac{\partial^2}{\partial\theta_i^2}\Big( \tfrac{1}{2}\, B_i(\theta)^2\, p_{FP}(\theta, t) \Big)\,. \qquad (5)$$

Plugging in the presumed stationary distribution p*(θ) on the right hand side of Eq. (5), one obtains

$$\frac{d}{dt}\, p_{FP}(\theta, t) \;=\; \sum_i -\frac{\partial}{\partial\theta_i}\Big( A_i(\theta)\, p^*(\theta) \Big) + \frac{\partial^2}{\partial\theta_i^2}\Big( T\, b(\theta_i)\, p^*(\theta) \Big)$$
$$=\; \sum_i -\frac{\partial}{\partial\theta_i}\left( b(\theta_i)\, p^*(\theta)\, \frac{\partial}{\partial\theta_i}\log p(\theta_i\,|\,x, \theta_{\setminus i}) \right) + \frac{\partial}{\partial\theta_i}\left( T\, b(\theta_i)\, p^*(\theta)\, \frac{\partial}{\partial\theta_i}\log p^*(\theta) \right),$$

which by inserting for p*(θ) the assumed stationary distribution (3) becomes

$$\frac{d}{dt}\, p_{FP}(\theta, t) \;=\; \sum_i -\frac{\partial}{\partial\theta_i}\left( b(\theta_i)\, p^*(\theta)\, \frac{\partial}{\partial\theta_i}\log p(\theta_i\,|\,x, \theta_{\setminus i}) \right) + \frac{\partial}{\partial\theta_i}\left( b(\theta_i)\, p^*(\theta)\, \frac{\partial}{\partial\theta_i}\Big( \log p(\theta_{\setminus i}\,|\,x) + \log p(\theta_i\,|\,x, \theta_{\setminus i}) \Big) \right) \;=\; \sum_i 0 \;=\; 0\,.$$
This proves that p*(θ) is a stationary distribution of the parameter sampling dynamics (4). Under the assumption that b(θ_i) is strictly positive, this stationary distribution is also unique. If the matrix of diffusion coefficients is invertible, and the potential conditions are satisfied (see Section 3.7.2 in [12] for details), the stationary distribution can be obtained (uniquely) by simple integration. Since the matrix of diffusion coefficients B is diagonal in our model (B = diag(B_1(θ), ..., B_M(θ))), B is trivially invertible since all elements, i.e. all B_i(θ), are positive. Convergence and uniqueness of the stationary distribution follow then for strictly positive b(θ_i) (see Section 5.3.3 in [12]).
2.1 Online synaptic sampling
For sequences of N inputs x = (x^1, ..., x^N), the weight update rule (2) depends on all inputs, such that synapses have to keep track of the whole set of all network inputs for the exact dynamics (batch learning). In an online scenario, we assume that only the current network input x^n is available. According to the dynamics (2), synaptic plasticity rules have to compute the log likelihood derivative ∂/∂θ_i log p_N(x | θ). We assume that every Δx time units a different input x^n is presented to the network and that the inputs x^1, ..., x^N are visited repeatedly in a fixed regular order. Under the assumption that the input patterns are statistically independent, the likelihood p_N(x | θ) becomes

$$p_N(x\,|\,\theta) \;=\; p_N(x^1, \ldots, x^N\,|\,\theta) \;=\; \prod_{n=1}^{N} p_N(x^n\,|\,\theta)\,, \qquad (6)$$
i.e., each network input x^n can be explained as being drawn individually from p_N(x^n | θ), independently from other inputs. The derivative of the log likelihood in (2) is then given by ∂/∂θ_i log p_N(x | θ) = Σ_{n=1}^N ∂/∂θ_i log p_N(x^n | θ). This "batch" dynamics does not map readily onto a network implementation because the weight update requires at any time knowledge of all inputs
x^1, ..., x^N. We provide here an online approximation for small sampling speeds. To obtain an online learning rule, we consider the parameter dynamics

$$d\theta_i \;=\; \left( b(\theta_i)\,\frac{\partial}{\partial\theta_i}\log p_S(\theta) \;+\; N\,b(\theta_i)\,\frac{\partial}{\partial\theta_i}\log p_N(x^n\,|\,\theta) \;+\; T\,b'(\theta_i) \right) dt \;+\; \sqrt{2\,T\,b(\theta_i)}\;dW_i\,. \qquad (7)$$
As in the batch learning setting, we assume that each input x^n is presented for a time interval of Δx. Although convergence to the correct posterior distribution cannot be guaranteed theoretically for this online rule, we show that it is a reasonable approximation to the batch rule. Integrating the parameter changes (7) over one full presentation of the data x, i.e., starting from t = 0 with some initial parameter values θ^0 up to time t = N Δx, we obtain for slow sampling speeds (N Δx b(θ_i) ≪ 1)

$$\theta_i^{N\Delta x} - \theta_i^0 \;\approx\; N\,\Delta x \left( b(\theta_i^0)\,\frac{\partial}{\partial\theta_i}\log p_S(\theta^0) \;+\; b(\theta_i^0) \sum_{n=1}^{N} \frac{\partial}{\partial\theta_i}\log p_N(x^n\,|\,\theta^0) \;+\; T\,b'(\theta_i^0) \right) \;+\; \sqrt{2\,T\,b(\theta_i^0)}\,\big( W_i^{N\Delta x} - W_i^0 \big)\,. \qquad (8)$$
This is also what one obtains when integrating the batch rule (2) for N Δx time units (for slow b(θ_i)). Hence, for slow enough b(θ_i), (7) is a good approximation of optimal weight sampling.
In the presence of hidden variables z, maximum likelihood learning cannot be applied directly, since
the state of the hidden variables is not known from the observed data. The expectation maximization
algorithm [8] can be used to overcome this problem. We adopt this approach here. In the online
setting, when pattern x^n is applied to the network, it responds with network state z^n according to p_N(z^n | x^n, θ), where the current network parameters are used in this inference process. The parameters are updated in parallel according to the dynamics (8) for the given values of x^n and z^n.
3 Synaptic sampling for network rewiring
In this section we present a simple model to describe permanent network rewiring using the dynamics (2). Experimental studies have provided a wealth of information about the stochastic rewiring in
the brain (see e.g. [9, 10]). They demonstrate that the volume of a substantial fraction of dendritic
spines varies continuously over time, and that all the time new spines and synaptic connections are
formed and existing ones are eliminated. We show that these experimental data on spine motility
can be understood as special cases of synaptic sampling. To arrive at a concrete model we use the
following assumption about dynamic network rewiring:
1. In accordance with experimental studies [10], we require that spine sizes have a multiplicative dynamics, i.e., that the amount of change within some given time window is proportional to the current size of the spine.
2. We assume here for simplicity that there is a single parameter θ_i for each potential synaptic connection i.
The second requirement can be met by encoding the state of the synapse in an abstract form that represents synaptic connectivity and synaptic efficacy in a single parameter θ_i. We define that negative values of θ_i represent a current disconnection and positive values represent a functional synaptic connection (we focus on excitatory connections). The distance of the current value of θ_i from zero indicates how likely it is that the synapse will soon reconnect (for negative values) or withdraw (for positive values). In addition the synaptic parameter θ_i encodes for positive values the synaptic efficacy w_i, i.e., the resulting EPSP amplitudes, by a simple mapping w_i = f(θ_i).

The first assumption, which requires multiplicative synaptic dynamics, supports an exponential function f in our model, in accordance with previous models of spine motility [10]. Thus, we assume in the following that the efficacy w_i of synapse i is given by

$$w_i \;=\; \exp(\theta_i - \theta_0)\,. \qquad (9)$$
Note that for a large enough offset θ_0, negative parameter values θ_i (which model a non-functional synaptic connection) are automatically mapped onto a tiny region close to zero in the w-space, so that retracted spines have essentially zero synaptic efficacy. In addition we use a Gaussian prior p_S(θ_i) = NORMAL(θ_i | µ, σ), with mean µ and variance σ² over synaptic parameters. In the simulations we used µ = 0.5, σ = 1 and θ_0 = 3. A prior of this form allows to include a simple regularization mechanism in the learning scheme, which prefers sparse solutions (i.e. solutions with small parameters) [8]. Together with the exponential mapping (9) this prior induces a heavy-tailed prior distribution over synaptic weights w_i. The network therefore learns solutions where only the most relevant synapses are much larger than zero.

The general rule for online synaptic sampling (7) for the exponential mapping w_i = exp(θ_i − θ_0) and the Gaussian prior becomes (for constant small learning rate b ≪ 1 and unit temperature T = 1)
$$d\theta_i \;=\; b \left( \frac{1}{\sigma^2}(\mu - \theta_i) \;+\; N\,w_i\,\frac{\partial}{\partial w_i}\log p_N(x^n\,|\,w) \right) dt \;+\; \sqrt{2\,b}\;dW_i\,. \qquad (10)$$

In Eq. (10) the multiplicative synaptic dynamics becomes explicit. The gradient ∂/∂w_i log p_N(x^n | w),
i.e., the activity-dependent contribution to synaptic plasticity, is weighted by w_i. Hence, for negative values of θ_i (non-functional synaptic connection), the activities of the pre- and post-synaptic neurons have negligible impact on the dynamics of the synapse. Assuming a large enough θ_0, retracted synapses therefore evolve solely according to the prior p_S(θ) and the random fluctuations dW_i. For large values of θ_i the opposite is the case. The influence of the prior ∂/∂θ_i log p_S(θ) and the Wiener process dW_i become negligible, and the dynamics is dominated by the activity-dependent likelihood term.
If the activity-dependent second term in Eq. (10) (that tries to maximize the likelihood) is small (e.g., because θ_i is small or parameters are near a mode of the likelihood) then Eq. (10) implements an Ornstein-Uhlenbeck process. This prediction of our model is consistent with a previous analysis which showed that an Ornstein-Uhlenbeck process is a viable model for synaptic spine motility [10].
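The rewiring rule (10) together with the exponential mapping (9) is compact enough to state directly as code; the likelihood-gradient callable and the numerical constants other than µ, σ, and θ_0 are assumptions.

import numpy as np

def rewiring_step(theta, grad_log_lik_w, N, mu=0.5, sigma=1.0,
                  theta0=3.0, b=1e-5, dt=1.0, rng=None):
    """One step of Eq. (10); w = exp(theta - theta0) as in Eq. (9).

    grad_log_lik_w(w) returns d/dw log p_N(x^n | w) for the current input.
    For negative theta the synapse is retracted: w is then close to zero,
    so the dynamics are dominated by the prior and the Wiener noise.
    """
    rng = rng or np.random.default_rng()
    w = np.exp(theta - theta0)
    drift = b * ((mu - theta) / sigma ** 2 + N * w * grad_log_lik_w(w))
    noise = np.sqrt(2.0 * b * dt) * rng.standard_normal(theta.shape)
    return theta + drift * dt + noise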
3.1 Spiking network model
Through the use of parameters θ which determine both synaptic connectivity and synaptic weights, the synaptic sampling framework provides a unified model for structural and synaptic plasticity. Eq. (10) describes the stochastic dynamics of the synaptic parameters θ_i. In this section we analyze
the resulting rewiring dynamics and structural plasticity by applying the synaptic sampling framework to networks of spiking neurons. Here, we used winner-take-all (WTA) networks to learn a
simple sensory integration task and show that learning with synaptic sampling in such networks is
inherently robust to perturbations.
For the WTA we adapted the model described in detail in [15]. Briefly, the WTA neurons were
modeled as stochastic spike response neurons with a firing rate that depends exponentially on the
membrane voltage [16, 17]. The membrane potential uk (t) of neuron k at time t is given by
$$u_k(t) \;=\; \sum_i w_{ki}\, x_i(t) \;+\; \beta_k(t)\,, \qquad (11)$$

where x_i(t) denotes the (unweighted) input from input neuron i, w_ki denotes the efficacy of the synapse from input neuron i, and β_k(t) denotes a homeostatic adaptation current (see below). The
input x_i(t) models the (additive) excitatory postsynaptic current from neuron i. In our simulations we used a double-exponential kernel with time constants τ_m = 20 ms and τ_s = 2 ms [18]. The instantaneous firing rate ρ_k(t) of network neuron k depends exponentially on the membrane potential and is subject to divisive lateral inhibition I_lat(t) (described below): ρ_k(t) = (ρ_net / I_lat(t)) exp(u_k(t)), where ρ_net = 100 Hz scales the firing rate of neurons [16]. Spike trains were then drawn from independent Poisson processes with instantaneous rate ρ_k(t) for each neuron. Divisive inhibition [19] between the K neurons in the WTA network was implemented in an idealized form [6], I_lat(t) = Σ_{l=1}^K exp(u_l(t)). In addition, each output spike caused a slow depressing current, giving rise to the adaptation current β_k(t). This implements a slow homeostatic mechanism that regulates the output rate of individual neurons (see [20] for details).
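The instantaneous WTA rates amount to a divisively normalized softmax of the membrane potentials; a sketch with ρ_net = 100 Hz from the text, where the potentials u are assumed to be precomputed from Eq. (11).

import numpy as np

def wta_rates(u, rho_net=100.0):
    """rho_k(t) = (rho_net / I_lat(t)) * exp(u_k(t)), I_lat = sum_l exp(u_l)."""
    e = np.exp(u - u.max())        # shift cancels in the ratio; avoids overflow
    return rho_net * e / e.sum()

def sample_spikes(u, dt=1e-3, rng=None):
    """Draw one Poisson spike-count vector for a time step of dt seconds."""
    rng = rng or np.random.default_rng()
    return rng.poisson(wta_rates(u) * dt)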
The WTA network defined above implicitly defines a generative model [21]. Inputs x^n are assumed to be generated in dependence on the value of a hidden multinomial random variable h^n that can take on K possible values 1, ..., K. Each neuron k in the WTA circuit corresponds to one value k of this hidden variable. One obtains the probability of an input vector for a given hidden cause as p_N(x^n | h^n = k, w) = Π_i POISSON(x_i^n | α e^{w_ki}), with a scaling parameter α > 0. In other words, the synaptic weight w_ki encodes (in log-space) the firing rate of input neuron i, given that the hidden cause is k. The network implements inference in this generative model, i.e., for a given input x^n, the firing rate of network neuron z_k is proportional to the posterior probability p(h^n = k | x^n, w) of the corresponding hidden cause. Online maximum likelihood learning is realized through the synaptic
update rule (see [21]), which realizes here the second term of Eq. (10)
$$\frac{\partial}{\partial w_{ki}}\log p_N(x^n\,|\,w) \;\approx\; S_k(t)\,\big( x_i(t) - \alpha\, e^{w_{ki}} \big)\,, \qquad (12)$$

where S_k(t) denotes the spike train of the k-th neuron and x_i(t) denotes the weight-normalized value of the sum of EPSPs from presynaptic neuron i at time t in response to pattern x^n.
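At a postsynaptic spike the likelihood gradient (12) can be plugged directly into a discrete weight update; a sketch in which the learning rate and the EPSP-trace computation are assumed.

import numpy as np

def spike_triggered_update(w_k, x_trace, alpha=1.0, lr=1e-3):
    """Weight update for neuron k at one of its output spikes (Eq. 12).

    w_k:     current weight vector of neuron k.
    x_trace: weight-normalized EPSP sums x_i(t) from each input neuron i.
    At a spike S_k(t) = 1, so the gradient is x_i(t) - alpha * exp(w_ki).
    """
    return w_k + lr * (x_trace - alpha * np.exp(w_k))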
3.2 Simulation results
Here, we consider a network that allows us to study the self-organization of connections between
hidden neurons. Additional details to this experiment and further analyses of the synaptic sampling
model can be found in [22].
The architecture of the network is illustrated in Fig. 2A. It consists of eight WTA circuits with
arbitrary excitatory synaptic connections between neurons within the same or different ones of these
WTA circuits. Two populations of ?auditory? and ?visual? input neurons xA and xV project onto
corresponding populations zA and zV of hidden neurons (each consisting of four WTA circuits with
K = 10 neurons, see lower panel of Fig. 2A). The hidden neuron populations receive exclusively
auditory (zA , 770 neurons) or visual inputs (zV , 784 neurons) and in addition, arbitrary lateral
excitatory connections between all hidden neurons are allowed. This network models multi-modal
sensory integration and association in a simplified manner [15].
Biological neural networks are astonishingly robust against perturbations and lesions [11]. To investigate the inherent compensation capability of synaptic sampling we applied two lesions to the
network within a learning session of 8 hours (of equivalent biological time). The network was
trained by repeatedly drawing random instances of spoken and written digits of the same type (digit
1 or 2 taken from MNIST and 7 utterances of speaker 1 from TI 46) and simultaneously presenting
Poisson spiking representations of these input patterns to the network. Fig. 2A shows example firing
rates for one spoken/written input pair. Input spikes were randomly drawn according to these rates.
Firing rates of visual input neurons were kept fixed throughout the duration of the auditory stimulus.
In the first lesion we removed all neurons (16 out of 40) that became tuned for digit 2 in the preceding learning. The reconstruction performance of the network was measured through the capability
of a linear readout neuron, which received input only from zV . During these test trials only the
auditory stimulus was presented (the remaining 3 utterances of speaker 1 were used as test set) and
visual input neurons were clamped to 1Hz background noise. The lesion significantly impaired the
performance of the network in stimulus reconstruction, but it was able to recover from the lesion
after about one hour of continuing network plasticity (see Fig. 2C).
In the second lesion all synaptic connections between hidden neurons that were present after recovery from the first lesion were removed and not allowed to regrow (2936 synapses in total). After
Figure 2: Inherent compensation for network perturbations. A: Illustration of the network architecture: A recurrent spiking neural network received simultaneously spoken and handwritten
spiking representations of the same digit. B: First three PCA components of the temporal evolution
of a subset of the network parameters θ. After each lesion the network parameters migrate to a new
manifold. C: The generative reconstruction performance of the "visual" neurons z_V for the test
case when only an auditory stimulus is presented was tracked throughout the whole learning session
(colors of learning phases as in (B)). After each lesion the performance strongly degrades, but reliably recovers. Learning with zero temperature (dashed yellow) or with approximate HMM learning
[15] (dashed purple) performed significantly worse. Insets at the top show the synaptic weights of
neurons in zV at 4 time points projected back into the input space. Network diagrams in the middle
show ongoing network rewiring for synaptic connections between the hidden neurons. Each arrow
indicates a functional connection between two neurons (only 1% randomly drawn subset shown).
The neuron whose parameters are tracked in (C) is highlighted in red. Numbers under the network
diagrams show the total number of functional connections between hidden neurons at the time point.
about two hours of continuing synaptic sampling 294 new synaptic connections between hidden
neurons emerged. These connections made it again possible to infer the auditory stimulus from the
activity of the remaining 24 hidden neurons in the population zV (in the absence of input from the
population xV ). The classification performance was around 75% (see bottom of Fig. 2C).
In Fig. 2B we track the temporal evolution of a subset θ' of network parameters (35 parameters θ_i associated with the potential synaptic connections of the neuron marked in red in the middle of Fig. 2C from or to other hidden neurons, excluding those that were removed at lesion 2 and not allowed to regrow). The first three PCA components of this 35-dimensional parameter vector are shown. The vector θ' fluctuates first within one region of the parameter space while probing
different solutions to the learning problem, e.g., high probability regions of the posterior distribution
(blue trace). Each lesion induced a fast switch to a different region (red, green), accompanied by
a recovery of the visual stimulus reconstruction performance (see Fig. 2C). The network therefore
compensates for perturbations by exploring new parameter spaces.
Without the noise and the prior the same performance could not be reached for this experiment.
Fig. 2C shows the result for the approximate HMM learning [15], which is a deterministic learning
approach (without a prior). Using this approach the network was able to learn representations of the
handwritten and spoken digits. However, these representation and the associations between them
were not as distinctive as for synaptic sampling and the classification performance was significantly
worse (only first learning phase shown). We also evaluated this experiment with a deterministic
version of synaptic sampling (T = 0). Here, the stochasticity inherent to the WTA circuit was
sufficient to overcome the first lesion. However, the performance was worse in the last learning phase
(after removing all active lateral synapses). In this situation, the random exploration of the parameter
space that is inherent to synaptic sampling significantly enhanced the speed of the recovery.
4 Discussion
We have shown that stochasticity may provide an important function for network plasticity. It enables networks to sample parameters from the posterior distribution that represents attractive combinations of structural constraints and rules (such as sparse connectivity and heavy-tailed distributions
of synaptic weights) and a good fit to empirical evidence (e.g., sensory inputs). The resulting rules
for synaptic plasticity contain a prior distribution over parameters. Potential functional benefits of
priors (on emergent selectivity of neurons) have recently been demonstrated in [23] for a restricted
Boltzmann machine.
The mathematical framework that we have presented provides a normative model for evaluating
empirically found stochastic dynamics of network parameters, and for relating specific properties of
this "noise" to functional aspects of network learning. Some systematic dependencies of changes
in synaptic weights (for the same pairing of pre- and postsynaptic activity) on their current values
had already been reported in [24, 25, 26]. These can be modeled as the impact of priors in our
framework.
Models of learning via sampling from a posterior distribution have been previously studied in machine learning [13, 14] and the underlying theoretical principles are well known in physics (see e.g.
Section 5.3 of [27]). The theoretical framework provided in this paper extends these previous models for learning by introducing the temperature parameter T and by allowing to control the sampling
speed in dependence of the current parameter setting through b(?i ). Furthermore, our model combines for the first time automatic rewiring in neural networks with Bayesian inference via sampling.
The functional consequences of these mechanism are further explored in [22].
The postulate that networks should learn posterior distributions of parameters, rather than maximum
likelihood values, had been proposed for artificial neural networks [7, 8], since such organization
of learning promises better generalization capability to new examples. The open problem of how
such posterior distributions could be learned by networks of neurons in the brain, in a way that is
consistent with experimental data, has been highlighted in [2] as a key challenge for computational
neuroscience. We have presented here a model, whose primary innovation is to view experimentally
found trial-to-trial variability and ongoing fluctuations of parameters no longer as a nuisance, but as
a functionally important component of the organization of network learning. This model may lead to
a better understanding of such noise and seeming imperfections in the brain. It might also provide an
important step towards developing algorithms for upcoming new technologies implementing analog
spiking hardware, which employ noise and variability as a computational resource [28, 29].
Acknowledgments
Written under partial support of the European Union project #604102 The Human Brain Project
(HBP) and CHIST-ERA ERA-Net (Project FWF #I753-N23, PNEUMA).
We would like to thank Seth Grant, Christopher Harvey, Jason MacLean and Simon Rumpel for
helpful comments.
References
[1] Hatfield G. Perception as unconscious inference. In: Perception and the Physical World: Psychological and Philosophical Issues in Perception. Wiley; 2002. p. 115-143.
[2] Pouget A, Beck JM, Ma WJ, Latham PE. Probabilistic brains: knowns and unknowns. Nature Neuroscience. 2013;16(9):1170-1178.
[3] Winkler I, Denham S, Mill R, Böhm TM, Bendixen A. Multistability in auditory stream segregation: a predictive coding view. Phil Trans R Soc B: Biol Sci. 2012;367(1591):1001-1012.
[4] Brea J, Senn W, Pfister JP. Sequence learning with hidden units in spiking neural networks. In: NIPS. vol. 24; 2011. p. 1422-1430.
[5] Rezende DJ, Gerstner W. Stochastic variational learning in recurrent spiking networks. Front Comput Neur. 2014;8:38.
[6] Nessler B, Pfeiffer M, Maass W. STDP enables spiking neurons to detect hidden causes of their inputs. In: NIPS. vol. 22; 2009. p. 1357-1365.
[7] MacKay DJ. Bayesian interpolation. Neural Computation. 1992;4(3):415-447.
[8] Bishop CM. Pattern Recognition and Machine Learning. New York: Springer; 2006.
[9] Holtmaat AJ, Trachtenberg JT, Wilbrecht L, Shepherd GM, Zhang X, Knott GW, et al. Transient and persistent dendritic spines in the neocortex in vivo. Neuron. 2005;45:279-291.
[10] Löwenstein Y, Kuras A, Rumpel S. Multiplicative dynamics underlie the emergence of the log-normal distribution of spine sizes in the neocortex in vivo. J Neurosci. 2011;31(26):9481-9488.
[11] Marder E. Variability, compensation and modulation in neurons and circuits. PNAS. 2011;108(3):15542-15548.
[12] Gardiner CW. Handbook of Stochastic Methods. 3rd ed. Springer; 2004.
[13] Welling M, Teh YW. Bayesian learning via stochastic gradient Langevin dynamics. In: Proceedings of the 28th International Conference on Machine Learning (ICML-11); 2011. p. 681-688.
[14] Sato I, Nakagawa H. Approximation analysis of stochastic gradient Langevin dynamics by using Fokker-Planck equation and Ito process. In: NIPS; 2014. p. 982-990.
[15] Kappel D, Nessler B, Maass W. STDP installs in winner-take-all circuits an online approximation to hidden Markov model learning. PLoS Comp Biol. 2014;10(3):e1003511.
[16] Jolivet R, Rauch A, Lüscher H, Gerstner W. Predicting spike timing of neocortical pyramidal neurons by simple threshold models. J Comp Neurosci. 2006;21:35-49.
[17] Mensi S, Naud R, Gerstner W. From stochastic nonlinear integrate-and-fire to generalized linear models. In: NIPS. vol. 24; 2011. p. 1377-1385.
[18] Gerstner W, Kistler WM. Spiking Neuron Models. Cambridge University Press; 2002.
[19] Carandini M. From circuits to behavior: a bridge too far? Nature Neurosci. 2012;15(4):507-509.
[20] Habenschuss S, Bill J, Nessler B. Homeostatic plasticity in Bayesian spiking networks as Expectation Maximization with posterior constraints. In: NIPS. vol. 25; 2012. p. 782-790.
[21] Habenschuss S, Puhr H, Maass W. Emergence of optimal decoding of population codes through STDP. Neural Computation. 2013;25:1-37.
[22] Kappel D, Habenschuss S, Legenstein R, Maass W. Network plasticity as Bayesian inference. PLoS Comp Biol. 2015;in press.
[23] Xiong H, Szedmak S, Rodríguez-Sánchez A, Piater J. Towards sparsity and selectivity: Bayesian learning of restricted Boltzmann machine for early visual features. In: ICANN; 2014. p. 419-426.
[24] Bi GQ, Poo MM. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J Neurosci. 1998;18(24):10464-10472.
[25] Sjöström PJ, Turrigiano GG, Nelson SB. Rate, timing, and cooperativity jointly determine cortical synaptic plasticity. Neuron. 2001;32(6):1149-1164.
[26] Montgomery JM, Pavlidis P, Madison DV. Pair recordings reveal all-silent synaptic connections and the postsynaptic expression of long-term potentiation. Neuron. 2001;29(3):691-701.
[27] Kennedy AD. The Hybrid Monte Carlo algorithm on parallel computers. Parallel Computing. 1999;25(10):1311-1339.
[28] Schemmel J, Gruebl A, Meier K, Mueller E. Implementing synaptic plasticity in a VLSI spiking neural network model. In: IJCNN; 2006. p. 1-6.
[29] Bill J, Legenstein R. A compound memristive synapse model for statistical learning through STDP in spiking neural networks. Frontiers in Neuroscience. 2014;8.
9
| 5952 |@word trial:3 illustrating:1 middle:2 briefly:1 version:1 advantageous:1 kura:1 open:1 simulation:5 carry:2 initial:1 efficacy:6 exclusively:1 tuned:1 existing:2 current:12 written:5 readily:1 additive:1 plasticity:21 shape:1 enables:3 plot:1 update:3 stationary:10 generative:4 leaf:1 provides:5 zhang:1 mathematical:3 become:2 differential:2 viable:2 pairing:1 persistent:1 consists:1 combine:1 manner:1 introduce:1 theoretically:1 acquired:1 presumed:1 spine:10 behavior:1 multi:1 brain:7 automatically:1 n23:1 brea:1 jm:2 window:1 becomes:5 provided:2 project:4 underlying:2 wki:4 circuit:9 panel:1 what:1 cm:1 unified:2 finding:1 spoken:4 temporal:3 every:1 legi:1 ti:1 uk:3 control:1 normally:1 unit:4 grant:1 underlie:1 planck:1 positive:8 negligible:2 understood:2 local:4 accordance:2 xv:2 timing:3 consequence:1 era:2 encoding:1 analyzing:1 firing:8 fluctuation:3 solely:1 interpolation:1 black:1 might:1 twice:2 modulation:1 studied:1 bi:7 statistically:1 obeys:1 pavlidis:1 unique:2 acknowledgment:1 union:1 implement:3 digit:5 empirical:1 significantly:4 pre:2 integrating:2 regular:1 word:1 suggest:1 onto:3 cannot:2 close:1 influence:1 applying:1 nessler:3 equivalent:1 deterministic:3 map:2 demonstrated:1 phil:1 bill:2 poo:1 attention:1 starting:2 independently:1 duration:1 wit:1 formalized:1 simplicity:1 assigns:1 recovery:3 pouget:1 insight:2 rule:14 ormal:2 century:1 population:6 increment:2 updated:1 unconscious:2 enhanced:1 gm:1 cultured:1 exact:1 hypothesis:1 agreement:2 element:1 helmholtz:1 recognition:1 observed:3 bottom:1 reexamine:1 capture:1 rodrguez:1 graz:2 region:4 readout:1 wj:1 plo:2 removed:3 substantial:1 environment:1 hatfield:1 dynamic:28 trained:1 depend:1 predictive:2 distinctive:1 triangle:1 seth:1 emergent:1 represented:1 train:2 fast:1 describe:1 monte:1 artificial:2 cooperativity:1 outside:1 whose:2 fluctuates:2 larger:1 emerged:1 drawing:1 compensates:1 winkler:1 highlighted:3 emergence:2 jointly:1 online:9 advantage:1 differentiable:2 sequence:2 net:3 turrigiano:1 propose:2 rewiring:11 reconstruction:4 product:1 epsp:1 adaptation:2 gq:1 inserting:1 relevant:1 translate:1 achieve:1 pronounced:1 convergence:3 cluster:1 optimum:1 p:10 requirement:1 produce:1 double:1 impaired:1 converges:1 recurrent:2 measured:1 received:2 b0:3 eq:8 epsps:1 implemented:1 soc:1 met:1 merged:1 correct:1 stochastic:18 exploration:1 human:1 enable:2 oisson:1 transient:1 kistler:1 implementing:2 explains:1 require:1 potentiation:1 generalization:2 dendritic:2 biological:2 strictly:4 exploring:1 frontier:1 mm:1 around:1 stdp:4 exp:4 normal:1 mapping:3 adopt:1 early:1 uniqueness:1 realizes:1 visited:2 bridge:1 individually:1 weighted:1 stefan:1 imperfection:1 gaussian:2 rather:1 pn:24 fluctuate:1 voltage:1 encode:1 rezende:1 focus:1 likelihood:17 indicates:2 detect:1 posteriori:1 inference:9 helpful:1 dependent:3 i0:1 sb:1 integrated:1 hidden:22 vlsi:1 interested:1 arg:1 classification:2 issue:1 fokkerplanck:1 integration:3 special:1 mackay:1 ilat:3 sampling:30 cartoon:1 eliminated:1 represents:2 unsupervised:1 icml:1 future:1 stimulus:6 inherent:6 employ:1 randomly:2 simultaneously:2 individual:1 beck:1 phase:3 consisting:1 fire:1 organization:4 investigate:1 uscher:1 xni:1 partial:1 experience:2 continuing:2 re:1 theoretical:6 chist:1 psychological:1 instance:1 maximization:2 introducing:1 subset:3 ohm:1 front:1 optimally:1 reported:1 too:1 dependency:1 varies:1 density:1 international:1 probabilistic:4 systematic:1 physic:1 decoding:1 invertible:2 together:1 continuously:4 
unforeseen:1 concrete:1 connectivity:5 again:1 postulate:1 satisfied:1 denham:1 hn:3 worse:3 stochastically:1 derivative:2 potential:6 accompanied:1 coding:2 seeming:1 coefficient:2 permanent:3 caused:1 igi:1 depends:3 stream:1 ornstein:2 multiplicative:4 view:3 try:1 idealized:1 wolfgang:1 analyze:2 ewki:2 red:4 reached:1 bayes:1 recover:1 parallel:3 capability:3 wm:1 simon:1 vivo:2 contribution:1 om:1 formed:1 purple:1 wiener:2 variance:2 became:1 landscape:2 yellow:1 bayesian:10 handwritten:2 pfp:6 carlo:1 trajectory:1 comp:3 kennedy:1 za:2 synapsis:5 reach:1 synaptic:67 ed:1 infinitesimal:1 against:2 energy:2 proof:1 associated:1 recovers:1 auditory:7 treatment:1 carandini:1 mensi:1 austria:1 knowledge:2 color:1 amplitude:2 actually:1 back:1 dt:7 response:3 modal:1 synapse:6 depressing:1 evaluated:1 strongly:1 furthermore:3 xa:1 hand:1 christopher:1 nonlinear:1 defines:1 mode:2 aj:1 reveal:1 normalized:1 contain:1 evolution:2 hence:3 wi0:1 regularization:1 maass:6 i2:1 illustrated:1 gw:1 attractive:1 motility:3 during:1 self:1 uniquely:1 nuisance:1 speaker:2 m:2 generalized:1 hippocampal:1 gg:1 presenting:1 neocortical:1 demonstrate:2 latham:1 temperature:5 hallmark:1 variational:1 consideration:1 instantaneous:2 recently:2 functional:8 spiking:16 empirically:1 multinomial:1 regulates:1 tracked:2 performed:1 winner:2 exponentially:2 volume:1 extend:1 association:2 approximates:1 relating:1 functionally:1 analog:1 jp:1 cambridge:1 ai:4 automatic:1 knowns:1 trivially:1 physical:1 session:2 rd:1 stochasticity:3 had:2 dot:1 dj:2 cortex:1 longer:1 inhibition:2 posterior:17 showed:1 scenario:1 selectivity:2 compound:1 harvey:1 additional:1 preceding:1 converge:1 maximize:2 determine:2 dashed:2 full:3 pnas:1 habenschuss:4 infer:1 schemmel:1 compensate:1 long:1 post:1 equally:1 plugging:1 impact:2 prediction:1 essentially:1 expectation:2 poisson:2 represent:2 uhlenbeck:2 kernel:1 cell:1 receive:1 addition:4 background:1 interval:1 wealth:1 diagram:2 jason:1 pyramidal:1 ascent:1 comment:1 subject:1 hz:2 induced:1 shepherd:1 sanchez:1 recording:1 call:1 fwf:1 structural:7 near:2 presence:1 enough:3 switch:1 fit:1 architecture:2 opposite:1 silent:1 andreas:1 tm:1 expression:1 pca:2 rauch:1 ul:1 york:1 cause:4 prefers:2 repeatedly:2 migrate:1 withdraw:1 yw:1 johannes:1 amount:1 neocortex:2 locally:2 induces:1 hardware:1 generate:1 senn:1 neuroscience:4 track:2 blue:1 discrete:1 promise:2 vol:4 zv:6 key:2 salient:1 four:1 threshold:1 memristive:1 drawn:4 changing:1 pj:1 diffusion:5 kept:1 fraction:1 sum:1 arrive:2 throughout:2 reasonable:1 extends:1 legenstein:3 scaling:1 guaranteed:1 activity:8 sato:1 adapted:1 marder:1 gardiner:1 constraint:3 strength:1 ijcnn:1 encodes:2 dominated:1 generates:1 aspect:1 speed:6 developing:1 according:5 neur:1 combination:1 membrane:3 describes:4 postsynaptic:4 wi:11 wta:10 modification:1 pneuma:1 slowed:1 invariant:1 explained:1 restricted:2 dv:1 taken:1 equation:4 resource:1 previously:1 segregation:1 montgomery:1 mechanism:4 available:1 multistability:1 eight:1 xiong:1 alternative:1 robustness:1 batch:5 denotes:7 remaining:2 include:1 top:1 tugraz:1 madison:1 giving:1 prof:1 hbp:1 upcoming:1 naud:1 already:1 realized:1 spike:7 degrades:1 primary:1 dependence:3 diagonal:1 responds:1 gradient:4 win:1 kth:1 distance:1 thank:1 mapped:1 lateral:3 sci:1 hmm:2 cw:1 nelson:1 manifold:1 presynaptic:1 assuming:1 code:1 modeled:3 relationship:1 illustration:3 innovation:1 equivalently:1 robert:1 trace:1 negative:4 rise:2 implementation:1 reliably:1 boltzmann:2 unknown:1 
contributed:1 allowing:1 teh:1 neuron:55 knott:1 markov:1 compensation:3 situation:1 langevin:2 variability:4 excluding:2 perturbation:5 homeostatic:3 arbitrary:2 drift:3 david:1 pair:2 connection:18 philosophical:1 learned:3 hour:3 jolivet:1 nip:5 trans:1 able:2 usually:2 perception:4 pattern:5 below:2 sparsity:1 challenge:2 max:1 including:1 green:1 force:1 disturbance:1 endows:1 predicting:1 hybrid:1 pfeiffer:1 scheme:1 technology:2 axis:1 utterance:2 piater:1 szedmak:1 prior:18 understanding:3 evolve:1 proportional:3 integrate:2 astonishingly:1 sufficient:1 consistent:2 article:1 principle:1 tiny:1 heavy:3 excitatory:4 last:1 soon:1 ad:1 side:1 ostr:1 understand:1 disconnection:1 institute:1 sparse:3 distributed:1 benefit:1 overcome:2 xn:21 world:2 evaluating:1 unweighted:1 cortical:1 sensory:4 author:1 made:1 dwi:7 projected:1 simplified:1 bm:1 far:1 welling:1 sj:1 approximate:2 obtains:3 implicitly:1 keep:1 retracted:2 active:1 handbook:1 conceptual:1 assumed:3 xi:5 continuous:3 tailed:3 sk:2 learn:6 zk:1 robust:2 nature:2 inherently:1 gerstner:4 european:1 diag:1 pk:1 main:1 icann:1 neurosci:4 decrement:1 whole:2 noise:6 arrow:1 allowed:3 lesion:12 x1:5 neuronal:1 fig:10 slow:5 probing:1 wiley:1 explicit:1 exponential:4 comput:1 clamped:1 pe:1 ito:1 learns:1 down:1 theorem:2 removing:1 specific:2 bishop:1 inset:1 jt:1 normative:2 offset:1 explored:1 normalizing:1 evidence:1 mnist:1 gained:1 flattened:1 occurring:1 led:1 mill:1 likely:1 visual:7 springer:2 fokker:1 corresponds:1 determines:1 ma:1 viewed:2 goal:4 presentation:1 marked:1 towards:2 absence:1 considerable:1 change:3 experimentally:1 determined:1 nakagawa:1 total:2 pfister:1 experimental:4 divisive:2 maclean:1 internal:3 support:2 ongoing:2 biol:3 |
Natural Neural Networks
Guillaume Desjardins, Karen Simonyan, Razvan Pascanu, Koray Kavukcuoglu
{gdesjardins,simonyan,razp,korayk}@google.com
Google DeepMind, London
Abstract
We introduce Natural Neural Networks, a novel family of algorithms that speed up
convergence by adapting their internal representation during training to improve
conditioning of the Fisher matrix. In particular, we show a specific example that
employs a simple and efficient reparametrization of the neural network weights by
implicitly whitening the representation obtained at each layer, while preserving
the feed-forward computation of the network. Such networks can be trained efficiently via the proposed Projected Natural Gradient Descent algorithm (PRONG),
which amortizes the cost of these reparametrizations over many parameter updates and is closely related to the Mirror Descent online learning algorithm. We
highlight the benefits of our method on both unsupervised and supervised learning tasks, and showcase its scalability by training on the large-scale ImageNet
Challenge dataset.
1 Introduction
Deep networks have proven extremely successful across a broad range of applications. While their
deep and complex structure affords them a rich modeling capacity, it also creates complex dependencies between the parameters which can make learning difficult via first order stochastic gradient
descent (SGD). As long as SGD remains the workhorse of deep learning, our ability to extract highlevel representations from data may be hindered by difficult optimization, as evidenced by the boost
in performance offered by batch normalization (BN) [7] on the Inception architecture [25].
Though its adoption remains limited, the natural gradient [1] appears ideally suited to these difficult
optimization issues. By following the direction of steepest descent on the probabilistic manifold,
the natural gradient can make constant progress over the course of optimization, as measured by the
Kullback-Leibler (KL) divergence between consecutive iterates. Utilizing the proper distance measure ensures that the natural gradient is invariant to the parametrization of the model. Unfortunately,
its application has been limited due to its high computational cost. Natural gradient descent (NGD)
typically requires an estimate of the Fisher Information Matrix (FIM) which is square in the number
of parameters, and worse, it requires computing its inverse. Truncated Newton methods can avoid
explicitly forming the FIM in memory [12, 15], but they require an expensive iterative procedure to
compute the inverse. Such computations can be wasteful as they do not take into account the highly
structured nature of deep models.
Inspired by recent work on model reparametrizations [17, 13], our approach starts with a simple question: can we devise a neural network architecture whose Fisher is constrained to be
identity? This is an important question, as SGD and NGD would be equivalent in the resulting
model. The main contribution of this paper is in providing a simple, theoretically justified network
reparametrization which approximates via first-order gradient descent, a block-diagonal natural gradient update over layers. Our method is computationally efficient due to the local nature of the
reparametrization, based on whitening, and the amortized nature of the algorithm. Our second contribution is in unifying many heuristics commonly used for training neural networks, under the roof
of the natural gradient, while highlighting an important connection between model reparametrizations and Mirror Descent [3]. Finally, we showcase the efficiency and the scalability of our method
across a broad range of experiments, scaling our method from standard deep auto-encoders to large
convolutional models on ImageNet [20], trained across multiple GPUs. This is to our knowledge the
first time a (non-diagonal) natural gradient algorithm is scaled to problems of this magnitude.
2 The Natural Gradient
This section provides the necessary background and derives a particular form of the FIM whose
structure will be key to our efficient approximation. While we tailor the development of our method
to the classification setting, our approach generalizes to regression and density estimation.
2.1 Overview
We consider the problem of fitting the parameters θ ∈ ℝ^N of a model p(y | x; θ) to an empirical
distribution π(x, y) under the log-loss. We denote by x ∈ X the observation vector and y ∈ Y its
associated label. Concretely, this stochastic optimization problem aims to solve:

θ* ∈ argmin_θ E_{(x,y)∼π}[ −log p(y | x, θ) ].    (1)
Defining the per-example loss as ℓ(x, y), Stochastic Gradient Descent (SGD) performs the above
minimization by iteratively following the direction of steepest descent, given by the column vector
∇ = E_π[dℓ/dθ]. Parameters are updated using the rule θ^(t+1) ← θ^(t) − η^(t) ∇^(t), where η is a
learning rate. An equivalent proximal form of gradient descent [4] reveals the precise nature of η:

θ^(t+1) = argmin_θ { ⟨θ, ∇⟩ + (1 / (2η^(t))) ‖θ − θ^(t)‖₂² }    (2)
Namely, each iterate θ^(t+1) is the solution to an auxiliary optimization problem, where η controls
the distance between consecutive iterates, using an L2 distance. In contrast, the natural gradient
relies on the KL-divergence between iterates, a more appropriate distance measure for probability
distributions. Its metric is determined by the Fisher Information matrix,
F_θ = E_{x∼π}{ E_{y∼p(y|x,θ)}[ (∂ log p / ∂θ) (∂ log p / ∂θ)ᵀ ] },    (3)
i.e. the covariance of the gradients of the model log-probabilities wrt. its parameters. The natural
gradient direction is then obtained as ∇_N = F_θ^{−1} ∇. See [15, 14] for a recent overview of the topic.
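As a concrete illustration of Eq. 3 and of the direction ∇_N = F_θ^{−1} ∇, the sketch below estimates the Fisher of a small softmax-regression model by averaging over y ∼ p(y | x, θ), then solves for the natural gradient. This is a toy of our own (the model, sizes and every name are illustrative assumptions), not the paper's implementation.

import numpy as np

rng = np.random.RandomState(0)
D, K, N = 5, 3, 200                          # input dim, classes, examples (toy sizes)
X = rng.randn(N, D)
theta = rng.randn(D, K) * 0.1                # model parameters

def log_probs(X, theta):
    z = X @ theta
    z = z - z.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def grad_logp(x, y, theta):
    # d log p(y|x,theta) / d theta for softmax regression, flattened
    p = np.exp(log_probs(x[None, :], theta))[0]
    return np.outer(x, np.eye(K)[y] - p).ravel()

# Fisher (Eq. 3): E_x E_{y ~ p(y|x,theta)} [ g g^T ]
F = np.zeros((D * K, D * K))
for x in X:
    p = np.exp(log_probs(x[None, :], theta))[0]
    for y in range(K):
        g = grad_logp(x, y, theta)
        F += p[y] * np.outer(g, g)
F /= N

# natural gradient of the empirical log-loss
y_data = rng.randint(K, size=N)
grad = -np.mean([grad_logp(x, y, theta) for x, y in zip(X, y_data)], axis=0)
nat_grad = np.linalg.solve(F + 1e-6 * np.eye(D * K), grad)
print(nat_grad.shape)                        # (15,)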
2.2 Fisher Information Matrix for MLPs
We start by deriving the precise form of the Fisher for a canonical multi-layer perceptron (MLP)
composed of L layers. We consider the following deep network for binary classification, though our
approach generalizes to an arbitrary number of output classes.
p(y = 1 | x) ≜ h_L = f_L(W_L h_{L−1} + b_L)
        ⋮
h_1 = f_1(W_1 x + b_1)    (4)
The parameters of the MLP, denoted θ = {W_1, b_1, . . . , W_L, b_L}, are the weights W_i ∈ ℝ^{N_i × N_{i−1}}
connecting layers i and i−1, and the biases b_i ∈ ℝ^{N_i}. f_i is an element-wise non-linear function.
Let us define δ_i to be the backpropagated gradient through the i-th non-linearity. We ignore the
off block-diagonal components of the Fisher matrix and focus on the block F_{W_i}, corresponding to
interactions between parameters of layer i. This block takes the form:
F_{W_i} = E_{x∼π, y∼p}[ vec(δ_i h_{i−1}ᵀ) vec(δ_i h_{i−1}ᵀ)ᵀ ],
where vec(X) is the vectorization function yielding a column vector from the rows of matrix X.
Assuming that δ_i and the activations h_{i−1} are independent random variables, we can write:

F_{W_i}(km, ln) ≈ E_{x∼π, y∼p}[ δ_i(k) δ_i(l) ] · E_π[ h_{i−1}(m) h_{i−1}(n) ],    (5)
Figure 1: (a) A 2-layer natural neural network. (b) Illustration of the projections involved in PRONG.
where X(i, j) is the element at row i and column j of matrix X and x(i) is the i-th element of vector
x. F_{W_i}(km, ln) is the entry in the Fisher capturing interactions between parameters W_i(k, m)
and W_i(l, n). Our hypothesis, verified experimentally in Sec. 4.1, is that we can greatly improve
conditioning of the Fisher by enforcing that E_π[h_i h_iᵀ] = I for all layers of the network, despite
ignoring possible correlations in the δ's and the off block-diagonal terms of the Fisher.
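Eq. 5 states that, under the independence assumption, the layer's Fisher block factorizes into a Kronecker product of a covariance over the backpropagated gradients δ and one over the activations h. Below is a quick numerical check on synthetic, independent δ and h; it is our own illustration with arbitrary names, not code from the paper.

import numpy as np

rng = np.random.RandomState(1)
N, n_out, n_in = 5000, 4, 3
delta = rng.randn(N, n_out)                  # stand-in backpropagated gradients
h = rng.randn(N, n_in)                       # stand-in previous-layer activations

# exact block: E[ vec(delta h^T) vec(delta h^T)^T ]
vecs = np.einsum('nk,nm->nkm', delta, h).reshape(N, -1)
F_exact = vecs.T @ vecs / N

# factored block (Eq. 5): E[delta delta^T] kron E[h h^T]
F_kron = np.kron(delta.T @ delta / N, h.T @ h / N)

print(np.abs(F_exact - F_kron).max())        # small when delta and h are independent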
3 Projected Natural Gradient Descent
This section introduces Whitened Neural Networks (WNN), which perform approximate whitening
of their internal representations. We begin by presenting a novel whitened neural layer, with the
assumption that the network statistics μ_i(θ) = E[h_i] and Σ_i(θ) = E[h_i h_iᵀ] are fixed. We then show
how these layers can be adapted to efficiently track population statistics over the course of training.
The resulting learning algorithm is referred to as Projected Natural Gradient Descent (PRONG). We
highlight an interesting connection between PRONG and Mirror Descent in Section 3.3.
3.1 A Whitened Neural Layer
The building block of WNN is the following neural layer,

h_i = f_i( V_i U_{i−1} (h_{i−1} − c_{i−1}) + d_i ).    (6)
Compared to Eq. 4, we have introduced an explicit centering parameter c_{i−1} ∈ ℝ^{N_{i−1}}, equal to
μ_{i−1}, which ensures that the input to the dot product has zero mean in expectation. This is analogous to the centering reparametrization for Deep Boltzmann Machines [13]. The weight matrix
U_{i−1} ∈ ℝ^{N_{i−1} × N_{i−1}} is a per-layer PCA-whitening matrix whose rows are obtained from an eigendecomposition of Σ_{i−1}:

Σ_i = Û_i diag(λ_i) Û_iᵀ   ⟹   U_i = diag(λ_i + ε)^{−1/2} Û_iᵀ.    (7)
The hyper-parameter ε is a regularization term controlling the maximal multiplier on the learning
rate, or equivalently the size of the trust region. The parameters V_i ∈ ℝ^{N_i × N_{i−1}} and d_i ∈ ℝ^{N_i} are
analogous to the canonical parameters of a neural network as introduced in Eq. 4, though operate
in the space of whitened unit activations U_i(h_i − c_i). This layer can be stacked to form a deep
neural network having L layers, with model parameters θ = {V_1, d_1, . . . , V_L, d_L} and whitening
coefficients φ = {U_0, c_0, . . . , U_{L−1}, c_{L−1}}, as depicted in Fig. 1a.
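Eq. 7 in code: the whitening matrix is built from an eigendecomposition of the activation covariance, with ε bounding the inverse eigenvalues. A minimal numpy sketch under our own naming, with a check that whitened activations have (approximately) identity covariance:

import numpy as np

def whitening_matrix(Sigma, eps=1e-4):
    # Sigma = U_hat diag(lam) U_hat^T  =>  U = diag(lam + eps)^(-1/2) U_hat^T
    lam, U_hat = np.linalg.eigh(Sigma)
    return np.diag(1.0 / np.sqrt(lam + eps)) @ U_hat.T

rng = np.random.RandomState(2)
H = rng.randn(10000, 6) @ rng.randn(6, 6)        # correlated toy activations
c = H.mean(axis=0)                               # centering parameter c_i
U = whitening_matrix(np.cov(H - c, rowvar=False))

white = (H - c) @ U.T
print(np.round(np.cov(white, rowvar=False), 2))  # ~ identity, up to eps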
Though the above layer might appear over-parametrized at first glance, we crucially do not learn
the whitening coefficients via loss minimization, but instead estimate them directly from the model
statistics. These coefficients are thus constants from the point of view of the optimizer and simply
serve to improve conditioning of the Fisher with respect to the parameters θ, denoted F_θ. Indeed,
using the same derivation that led to Eq. 5, we can see that the block-diagonal terms of F_θ now
involve terms E[(U_i h_i)(U_i h_i)ᵀ], which equal identity by construction.
3.2 Updating the Whitening Coefficients
As the whitened model parameters θ evolve during training, so do the statistics μ_i and Σ_i. For our
model to remain well conditioned, the whitening coefficients must be updated at regular intervals,
Algorithm 1 Projected Natural Gradient Descent
1: Input: training set D, initial parameters θ.
2: Hyper-parameters: reparam. frequency T, number of samples N_s, regularization term ε.
3: U_i ← I; c_i ← 0; t ← 0
4: repeat
5:     if mod(t, T) = 0 then                                                  ▹ amortize cost of lines [6-11]
6:         for all layers i do
7:             Compute canonical parameters W_i = V_i U_{i−1}; b_i = d_i − W_i c_{i−1}.   ▹ proj. P⁻¹(θ)
8:             Estimate μ_i and Σ_i, using N_s samples from D.
9:             Update c_i from μ_i and U_i from eigen decomp. of Σ_i + εI.                ▹ update φ
10:            Update parameters V_i ← W_i U_{i−1}^{−1}; d_i ← b_i + V_i U_{i−1} c_{i−1}.  ▹ proj. P(θ)
11:        end for
12:    end if
13:    Perform SGD update wrt. θ using samples from D.
14:    t ← t + 1
15: until convergence
while taking care not to interfere with the convergence properties of gradient descent. This can be
achieved by coupling updates to φ with corresponding updates to θ such that the overall function
implemented by the MLP remains unchanged, e.g. by preserving the product V_i U_{i−1} before and
after each update to the whitening coefficients (with an analogous constraint on the biases).
Unfortunately, while estimating the mean μ_i and diag(Σ_i) could be performed online over a minibatch of samples as in the recent Batch Normalization scheme [7], estimating the full covariance
matrix will undoubtedly require a larger number of samples. While statistics could be accumulated
online via an exponential moving average as in RMSprop [27] or K-FAC [8], the cost of the eigendecomposition required for computing the whitening matrix U_i remains cubic in the layer size.
In the simplest instantiation of our method, we exploit the smoothness of gradient descent by simply
amortizing the cost of these operations over T consecutive updates. SGD updates in the whitened
model will be closely aligned to NGD immediately following the reparametrization. The quality
of this approximation will degrade over time, until the subsequent reparametrization. The resulting
algorithm is shown in the pseudo-code of Algorithm 1. We can improve upon this basic amortization scheme by updating the whitened parameters θ using a per-batch diagonal natural gradient update, whose statistics are computed online. In our framework, this can be implemented
via the reparametrization W_i = V_i D_{i−1} U_{i−1}, where D_{i−1} is a diagonal matrix updated such that
V[D_{i−1} U_{i−1} h_{i−1}] = 1, for each minibatch. Updates to D_{i−1} can be compensated for exactly and
cheaply by scaling the rows of U_{i−1} and columns of V_i accordingly. A simpler implementation of
this idea is to combine PRONG with batch normalization, which we denote as PRONG+.
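To make the amortized scheme of Algorithm 1 concrete, here is a self-contained sketch for a single linear layer on a toy regression problem. The data, model, step sizes and all names are hypothetical scaffolding of ours; the point is the interplay between plain SGD steps on (V, d) and the periodic, function-preserving updates of (U, c).

import numpy as np

rng = np.random.RandomState(3)
X = rng.randn(2000, 8)
Y = X @ rng.randn(8, 2)                           # toy regression targets
V = rng.randn(8, 2) * 0.1; d = np.zeros(2)        # whitened parameters (theta)
U = np.eye(8); c = np.zeros(8)                    # whitening coefficients
T, Ns, eps, lr, batch = 200, 500, 1e-4, 0.05, 64

def forward(Xb):                                  # linear layer: (x - c) U^T V + d
    return (Xb - c) @ U.T @ V + d

for t in range(2000):
    if t % T == 0:                                # amortized reparametrization
        W = U.T @ V                               # canonical params: proj. P^-1
        b = d - c @ W
        S = X[rng.choice(len(X), Ns)]             # estimate mu, Sigma from Ns samples
        c = S.mean(axis=0)
        lam, Uh = np.linalg.eigh(np.cov(S - c, rowvar=False))
        U = np.diag(1.0 / np.sqrt(lam + eps)) @ Uh.T
        V = np.linalg.inv(U.T) @ W                # back to whitened space: proj. P
        d = b + c @ W
    idx = rng.choice(len(X), batch)               # plain SGD step on (V, d)
    err = forward(X[idx]) - Y[idx]
    V -= lr * ((X[idx] - c) @ U.T).T @ err / batch
    d -= lr * err.mean(axis=0)

print(np.mean((forward(X) - Y) ** 2))             # should be close to 0

Because W and b are recovered before the statistics are re-estimated and (V, d) are recomputed afterwards, the function computed by the layer is unchanged at each reparametrization, as required.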
3.3 Duality and Mirror Descent
There is an inherent duality between the parameters θ of our whitened neural layer and the parameters of a canonical model. Indeed, there exist linear projections P(θ) and P⁻¹(θ), which map
from canonical parameters to whitened parameters θ, and vice-versa. P(θ) corresponds to line
10 of Algorithm 1, while P⁻¹(θ) corresponds to line 7. This duality reveals a
close connection between PRONG and Mirror Descent [3].
Mirror Descent (MD) is an online learning algorithm which generalizes the proximal form of gradient descent to the class of Bregman divergences B_ψ(q, p), where ψ is a
strictly convex and differentiable function. Replacing the L2 distance by B_ψ, mirror descent solves
the proximal problem of Eq. 2 by applying first-order updates in a dual space and then projecting
back onto the primal space. Defining μ = ∇_θ ψ(θ) and θ = ∇_μ ψ*(μ), with ψ* the convex
conjugate of ψ, the mirror descent updates are given by:
μ^(t+1) = ∇_θ ψ(θ^(t)) − η^(t) ∇_θ ℓ(θ^(t))    (8)
θ^(t+1) = ∇_μ ψ*( μ^(t+1) )    (9)
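A small numerical sanity check of the claim that follows: mirror descent with the quadratic ψ(θ) = ½ θᵀ F θ, for which ∇ψ(θ) = Fθ and ∇ψ*(μ) = F⁻¹μ, reproduces the natural gradient step. The fixed SPD matrix standing in for the Fisher, and all names, are our own toy choices.

import numpy as np

rng = np.random.RandomState(4)
A = rng.randn(3, 3)
F = A @ A.T + np.eye(3)                            # fixed SPD stand-in for the Fisher
theta = rng.randn(3)
grad = rng.randn(3)                                # gradient at theta
eta = 0.1

# mirror descent with psi(theta) = 0.5 theta^T F theta (Eqs. 8-9)
mu = F @ theta - eta * grad                        # dual step
theta_md = np.linalg.solve(F, mu)                  # back to primal: F^{-1} mu

theta_ng = theta - eta * np.linalg.solve(F, grad)  # natural gradient step
print(np.allclose(theta_md, theta_ng))             # True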
Figure 2: Fisher matrix for a small MLP (a) before and (b) after the first reparametrization. Best viewed in
colour. (c) Condition number of the FIM during training, relative to the initial conditioning. All models were
initialized such that the initial conditioning was the same, and learning rates were adjusted such that they reach
roughly the same training error in the given time.
It is well known [26, 18] that the natural gradient is a special case of MD, where the distance
generating function¹ is chosen to be ψ(θ) = ½ θᵀ F θ.
The mirror updates are somewhat unintuitive however. Why is the gradient ∇_θ applied to the dual
space if it has been computed in the space of parameters θ? This is where PRONG relates to MD. It
is trivial to show that using the function φ(θ) = ½ θᵀ F^{1/2} θ, instead of the previously defined ψ(θ),
enables us to directly update the dual parameters using ∇_μ ℓ, the gradient computed directly in the
dual space. Indeed, the resulting updates can be shown to implement the natural gradient and are
thus equivalent to the updates of Eq. 9 with the appropriate choice of ψ(θ):
μ^(t+1) = ∇_θ φ(θ^(t)) − η^(t) ∇_μ ℓ = F^{1/2} θ^(t) − η^(t) F^{−1/2} E_π[ dℓ/dθ ]
θ^(t+1) = ∇_μ φ*( μ^(t+1) )   ⟹   θ^(t+1) = θ^(t) − η^(t) F^{−1} E_π[ dℓ/dθ ]    (10)
The operators ∇_θ φ and ∇_μ φ* correspond to the projections P(θ) and P⁻¹(θ) used by PRONG
to map from the canonical neural parameters θ to those of the whitened layers. As illustrated
in Fig. 1b, the advantage of this whitened form of MD is that one may amortize the cost of the
projections over several updates, as gradients can be computed directly in the dual parameter space.
3.4 Related Work
This work extends the recent contributions of [17] in formalizing many commonly used heuristics
for training MLPs: the importance of zero-mean activations and gradients [10, 21], as well as the
importance of normalized variances in the forward and backward passes [10, 21, 6]. More recently,
Vatanen et al. [28] extended their previous work [17] by introducing a multiplicative constant γ_i
to the centered non-linearity. In contrast, we introduce a full whitening matrix U_i and focus on
whitening the feedforward network activations, instead of normalizing a geometric mean over units
and gradient variances.
The recently introduced batch normalization (BN) scheme [7] quite closely resembles a diagonal
version of PRONG, the main difference being that BN normalizes the variance of activations before
the non-linearity, as opposed to normalizing the latent activations by looking at the full covariance.
Furthermore, BN implements normalization by modifying the feed-forward computations thus requiring the method to backpropagate through the normalization operator. A diagonal version of
PRONG also bears an interesting resemblance to RMSprop [27, 5], in that both normalization terms
involve the square root of the FIM. An important distinction however is that PRONG applies this
update in the whitened parameter space, thus preserving the natural gradient interpretation.
¹ As the Fisher and thus ψ depend on the parameters θ^(t), these should be indexed with a time superscript,
which we drop for clarity.
Figure 3: Optimizing a deep auto-encoder on MNIST. (a) Impact of eigenvalue regularization term ε. (b)
Impact of amortization period T showing that initialization with the whitening reparametrization is important
for achieving faster learning and better error rate. (c) Training error vs number of updates. (d) Training error
vs cpu-time. Plots (c-d) show that PRONG achieves better error rate both in number of updates and wall clock.
K-FAC [8] is closely related to PRONG and was developed concurrently to our method. It targets
the same layer-wise block-diagonal of the Fisher, approximating each block as in Eq. 5. Unlike
our method however, K-FAC does not approximate the covariance of backpropagated gradients as
the identity, and further estimates the required statistics using exponential moving averages (unlike our approach based on amortization). Similar techniques can be found in the preconditioning
of the Kaldi speech recognition toolkit [16]. By modeling the Fisher matrix as the covariance of
a sparsely connected Gaussian graphical model, FANG [19] represents a general formalism for
exploiting model structure to efficiently compute the natural gradient. One application to neural
networks [8] is in decorrelating gradients across neighbouring layers.
A similar algorithm to PRONG was later found in [23], where it appeared simply as a thought
experiment, but with no amortization or recourse for efficiently computing F.
4 Experiments
We begin with a set of diagnostic experiments which highlight the effectiveness of our method at
improving conditioning. We also illustrate the impact of the hyper-parameters T and ?, controlling
the frequency of the reparametrization and the size of the trust region. Section 4.2 evaluates PRONG
on unsupervised learning problems, where models are both deep and fully connected. Section 4.3
then moves onto large convolutional models for image classification. Experimental details such as
model architecture or hyper-parameter configurations can be found in the supplemental material.
4.1 Introspective Experiments
Conditioning. To provide a better understanding of the approximation made by PRONG, we train
a small 3-layer MLP with tanh non-linearities, on a downsampled version of MNIST (10x10) [11].
The model size was chosen in order for the full Fisher to be tractable. Fig. 2(a-b) shows the FIM
of the middle hidden layers before and after whitening the model activations (we took the absolute
value of the entries to improve visibility). Fig. 2c depicts the evolution of the condition number
of the FIM during training, measured as a percentage of its initial value (before the first whitening
reparametrization in the case of PRONG). We present such curves for SGD, RMSprop, batch normalization and PRONG. The results clearly show that the reparametrization performed by PRONG
improves conditioning (reduction of more than 95%). These observations confirm our initial assumption, namely that we can improve conditioning of the block diagonal Fisher by whitening
activations alone.
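As a rough, self-contained stand-in for this experiment, one can whiten correlated activations and compare the condition number of the activation factor E[hhᵀ] of the block Fisher (Eq. 5) before and after. The toy data below are our own and unrelated to the MNIST setup:

import numpy as np

rng = np.random.RandomState(5)
H = rng.randn(5000, 10) @ rng.randn(10, 10)            # correlated activations
H = H - H.mean(axis=0)
Sigma = H.T @ H / len(H)
print("before:", np.linalg.cond(Sigma))

lam, Uh = np.linalg.eigh(Sigma)
U = np.diag(1.0 / np.sqrt(lam + 1e-4)) @ Uh.T
Hw = H @ U.T
print("after: ", np.linalg.cond(Hw.T @ Hw / len(H)))   # close to 1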
Sensitivity of Hyper-Parameters. Figures 3a-3b highlight the effect of the eigenvalue regularization term ε and the reparametrization interval T. The experiments were performed on the best
Figure 4: Classification error on CIFAR-10 (a-b) and ImageNet (c-d). On CIFAR-10, PRONG achieves better
test error and converges faster. On ImageNet, PRONG+ achieves comparable validation error while maintaining a faster convergence rate.
performing auto-encoder of Section 4.2 on the MNIST dataset. Figures 3a-3b plot the reconstruction
error on the training set for various values of ε and T. As ε determines a maximum multiplier on the
learning rate, learning becomes extremely sensitive when this learning rate is high². For smaller step
sizes however, lowering ε can yield significant speedups, often converging faster than simply using a
larger learning rate. This confirms the importance of the manifold curvature for optimization (lower
ε allows for different directions to be scaled drastically differently according to their corresponding
curvature). Fig 3b compares the impact of T for models having a proper whitened initialization
(solid lines), to models being initialized with a standard "fan-in" initialization (dashed lines) [10].
These results are quite surprising in showing the effectiveness of the whitening reparametrization
as a simple initialization scheme. That being said, performance can degrade due to ill conditioning
when T becomes excessively large (T = 10⁵).
4.2 Unsupervised Learning
Following Martens [12], we compare PRONG on the task of minimizing reconstruction error of a
dense 8-layer auto-encoder on the MNIST dataset. Reconstruction error with respect to updates and
wallclock time are shown in Fig. 3 (c,d). We can see that PRONG significantly outperforms the
baseline methods, by up to an order of magnitude in number of updates. With respect to wallclock,
our method significantly outperforms the baselines in terms of time taken to reach a certain error
threshold, despite the fact that the runtime per epoch for PRONG was 3.2x that of SGD, compared
to batch normalization (2.3x SGD) and RMSprop (9x SGD). Note that these timing numbers reflect
performance under the optimal choice of hyper-parameters, which in the case of batch normalization
yielded a batch size of 256, compared to 128 for all other methods. Further breaking down the
performance, 34% of the runtime of PRONG was spent performing the whitening reparametrization,
compared to 4% for estimating the per layer means and covariances. This confirms that amortization
is paramount to the success of our method.3
4.3 Supervised Learning
We now evaluate our method for training deep supervised convolutional networks for object recognition. Following [7], we perform whitening across feature maps only: that is we treat pixels in a
given feature map as independent samples. This allows us to implement the whitened neural layer
as a sequence of two convolutions, where the first is by a 1x1 whitening filter. PRONG is compared
to SGD, RMSprop and batch normalization, with each algorithm being accelerated via momentum.
Results are presented on CIFAR-10 [9] and the ImageNet Challenge (ILSVRC12) datasets [20]. In
both cases, learning rates were decreased using a ?waterfall? annealing schedule, which divided the
learning rate by 10 when the validation error failed to improve after a set number of evaluations.
² Unstable combinations of learning rates and ε are omitted for clarity.
³ We note that our whitening implementation is not optimized, as it does not take advantage of GPU acceleration. Runtime is therefore expected to improve as we move the eigen-decompositions to GPU.
CIFAR-10 We now evaluate PRONG on CIFAR-10, using a deep convolutional model inspired
by the VGG architecture [22]. The model was trained on 24 × 24 random crops with random
horizontal reflections. Model selection was performed on a held-out validation set of 5k examples.
Results are shown in Fig. 4. With respect to training error, PRONG and BN seem to offer similar
speedups compared to SGD with momentum. Our hypothesis is that the benefits of PRONG are more
pronounced for densely connected networks, where the number of units per layer is typically larger
than the number of maps used in convolutional networks. Interestingly, PRONG generalized better,
achieving 7.32% test error vs. 8.22% for batch normalization. This reflects the findings of [15],
which showed how NGD can leverage unlabeled data for better generalization: the ?unlabeled? data
here comes from the extra crops and reflections observed when estimating the whitening matrices.
ImageNet Challenge Dataset Our final set of experiments aims to show the scalability of our
method. We applied our natural gradient algorithm to the large-scale ILSVRC12 dataset (1.3M images labelled into 1000 categories) using the Inception architecture [7]. In order to scale to problems
of this size, we parallelized our training loop so as to split the processing of a single minibatch (of
size 256) across multiple GPUs. Note that PRONG can scale well in this setting, as the estimation
of the mean and covariance parameters of each layer is also embarassingly parallel. Eight GPUs
were used for computing gradients and estimating model statistics, though the eigen decomposition
required for whitening was itself not parallelized in the current implementation. Given the difficulty
of the task, we employed the enhanced version of the algorithm (PRONG+), as simple periodic
whitening of the model proved to be unstable. Figure 4 (c-d) shows that batch normalisation and
PRONG+ converge to approximately the same top-1 validation error (28.6% vs 28.9% respectively)
for similar cpu-time. In comparison, SGD achieved a validation error of 32.1%. PRONG+ however
exhibits much faster convergence initially: after 10⁵ updates it obtains around 36% error compared
to 46% for BN alone. We stress that the ImageNet results are somewhat preliminary. While our
top-1 error is higher than reported in [7] (25.2%), we used a much less extensive data augmentation
pipeline. We are only beginning to explore what natural gradient methods may achieve on these
large scale optimization problems and are encouraged by these initial findings.
5 Discussion
We began this paper by asking whether convergence speed could be improved by simple model
reparametrizations, driven by the structure of the Fisher matrix. From a theoretical and experimental
perspective, we have shown that Whitened Neural Networks can achieve this via a simple, scalable
and efficient whitening reparametrization. They are however one of several possible instantiations
of the concept of Natural Neural Networks. In a previous incarnation of the idea, we exploited a
similar reparametrization to include whitening of backpropagated gradients4 . We favor the simpler
approach presented in this paper, as we generally found the alternative less stable for deep networks.
This may be due to the difficulty in estimating gradient covariances in lower layers, a problem which
seems to mirror the famous vanishing gradient problem [17].
Maintaining whitened activations may also offer additional benefits from the point of view of model
compression and generalization. By virtue of whitening, the projection U_i h_i forms an ordered representation, having least and most significant bits. The sharp roll-off in the eigenspectrum of Σ_i
may explain why deep networks are amenable to compression [2]. Similarly, one could envision
spectral versions of Dropout [24] where the dropout probability is a function of the eigenvalues.
Alternative ways of orthogonalizing the representation at each layer should also be explored, via alternate decompositions of ?i , or perhaps by exploiting the connection between linear auto-encoders
and PCA. We also plan on pursuing the connection with Mirror Descent and further bridging the
gap between deep learning and methods from online convex optimization.
Acknowledgments
We are extremely grateful to Shakir Mohamed for invaluable discussions and feedback in the preparation of
this manuscript. We also thank Philip Thomas, Volodymyr Mnih, Raia Hadsell, Sergey Ioffe and Shane Legg
for feedback on the paper.
⁴ The weight matrix can be parametrized as W_i = R_iᵀ V_i U_{i−1}, with R_i the whitening matrix for δ_i.
References
[1] Shun-ichi Amari. Natural gradient works efficiently in learning. Neural Computation, 1998.
[2] Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In NIPS. 2014.
[3] Amir Beck and Marc Teboulle. Mirror descent and nonlinear projected subgradient methods for convex
optimization. Oper. Res. Lett., 2003.
[4] P. L. Combettes and J.-C. Pesquet. Proximal Splitting Methods in Signal Processing. ArXiv e-prints,
December 2009.
[5] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and
stochastic optimization. In JMLR. 2011.
[6] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural
networks. In AISTATS, May 2010.
[7] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing
internal covariate shift. ICML, 2015.
[8] James Martens and Roger Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In ICML, June 2015.
[9] Alex Krizhevsky. Learning multiple layers of features from tiny images. Master?s thesis, University of
Toronto, 2009.
[10] Yann LeCun, Léon Bottou, Genevieve B. Orr, and Klaus-Robert Müller. Efficient backprop. In Neural
Networks, Tricks of the Trade, Lecture Notes in Computer Science LNCS 1524. Springer Verlag, 1998.
[11] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, pages 2278-2324, 1998.
[12] James Martens. Deep learning via Hessian-free optimization. In ICML, June 2010.
[13] K.-R. Müller and G. Montavon. Deep Boltzmann machines and the centering trick. In K.-R. Müller,
G. Montavon, and G. B. Orr, editors, Neural Networks: Tricks of the Trade. Springer, 2013.
[14] Yann Ollivier. Riemannian metrics for neural networks. arXiv, abs/1303.0818, 2013.
[15] Razvan Pascanu and Yoshua Bengio. Revisiting natural gradient for deep networks. In ICLR, 2014.
[16] Daniel Povey, Xiaohui Zhang, and Sanjeev Khudanpur. Parallel training of deep neural networks with
natural gradient and parameter averaging. ICLR workshop, 2015.
[17] T. Raiko, H. Valpola, and Y. LeCun. Deep learning made easier by linear transformations in perceptrons.
In AISTATS, 2012.
[18] G. Raskutti and S. Mukherjee. The Information Geometry of Mirror Descent. arXiv, October 2013.
[19] Roger B. Grosse and Ruslan Salakhutdinov. Scaling up natural gradient by sparsely factorizing the inverse
Fisher matrix. In ICML, June 2015.
[20] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang,
Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large
Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 2015.
[21] Nicol N. Schraudolph. Accelerated gradient descent by factor-centering decomposition. Technical Report
IDSIA-33-98, Istituto Dalle Molle di Studi sull'Intelligenza Artificiale, 1998.
[22] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In
International Conference on Learning Representations, 2015.
[23] Jascha Sohl-Dickstein. The natural gradient by analogy to signal whitening, and recipes and tricks for its
use. arXiv, 2012.
[24] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout:
A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 2014.
[25] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru
Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv, 2014.
[26] Philip S Thomas, William C Dabney, Stephen Giguere, and Sridhar Mahadevan. Projected natural actorcritic. In Advances in Neural Information Processing Systems 26. 2013.
[27] Tijmen Tieleman and Geoffrey Hinton. Rmsprop: Divide the gradient by a running average of its recent
magnitude. coursera: Neural networks for machine learning. 2012.
[28] Tommi Vatanen, Tapani Raiko, Harri Valpola, and Yann LeCun. Pushing stochastic gradient towards
second-order methods: backpropagation learning with transformations in nonlinearities. ICONIP, 2013.
Convolutional Networks on Graphs
for Learning Molecular Fingerprints
David Duvenaud*, Dougal Maclaurin*, Jorge Aguilera-Iparraguirre
Rafael Gómez-Bombarelli, Timothy Hirzel, Alán Aspuru-Guzik, Ryan P. Adams
Harvard University
Abstract
We introduce a convolutional neural network that operates directly on graphs.
These networks allow end-to-end learning of prediction pipelines whose inputs
are graphs of arbitrary size and shape. The architecture we present generalizes
standard molecular feature extraction methods based on circular fingerprints. We
show that these data-driven features are more interpretable, and have better predictive performance on a variety of tasks.
1 Introduction
Recent work in materials design used neural networks to predict the properties of novel molecules
by generalizing from examples. One difficulty with this task is that the input to the predictor, a
molecule, can be of arbitrary size and shape. Currently, most machine learning pipelines can only
handle inputs of a fixed size. The current state of the art is to use off-the-shelf fingerprint software
to compute fixed-dimensional feature vectors, and use those features as inputs to a fully-connected
deep neural network or other standard machine learning method. This formula was followed by
[28, 3, 19]. During training, the molecular fingerprint vectors were treated as fixed.
In this paper, we replace the bottom layer of this stack, the function that computes molecular
fingerprint vectors, with a differentiable neural network whose input is a graph representing the
original molecule. In this graph, vertices represent individual atoms and edges represent bonds. The
lower layers of this network are convolutional in the sense that the same local filter is applied to each
atom and its neighborhood. After several such layers, a global pooling step combines features from
all the atoms in the molecule.
These neural graph fingerprints offer several advantages over fixed fingerprints:
• Predictive performance. By using data adapting to the task at hand, machine-optimized
fingerprints can provide substantially better predictive performance than fixed fingerprints.
We show that neural graph fingerprints match or beat the predictive performance of standard fingerprints on solubility, drug efficacy, and organic photovoltaic efficiency datasets.
• Parsimony. Fixed fingerprints must be extremely large to encode all possible substructures
without overlap. For example, [28] used a fingerprint vector of size 43,000, after having
removed rarely-occurring features. Differentiable fingerprints can be optimized to encode
only relevant features, reducing downstream computation and regularization requirements.
• Interpretability. Standard fingerprints encode each possible fragment completely distinctly, with no notion of similarity between fragments. In contrast, each feature of a neural
graph fingerprint can be activated by similar but distinct molecular fragments, making the
feature representation more meaningful.
* Equal contribution.
Figure 1: Left: A visual representation of the computational graph of both standard circular fingerprints and neural graph fingerprints. First, a graph is constructed matching the topology of the
molecule being fingerprinted, in which nodes represent atoms, and edges represent bonds. At each
layer, information flows between neighbors in the graph. Finally, each node in the graph turns on
one bit in the fixed-length fingerprint vector. Right: A more detailed sketch including the bond
information used in each operation.
2 Circular fingerprints
The state of the art in molecular fingerprints are extended-connectivity circular fingerprints
(ECFP) [21]. Circular fingerprints [6] are a refinement of the Morgan algorithm [17], designed
to encode which substructures are present in a molecule in a way that is invariant to atom-relabeling.
Circular fingerprints generate each layer?s features by applying a fixed hash function to the concatenated features of the neighborhood in the previous layer. The results of these hashes are then treated
as integer indices, where a 1 is written to the fingerprint vector at the index given by the feature
vector at each node in the graph. Figure 1(left) shows a sketch of this computational architecture.
Ignoring collisions, each index of the fingerprint denotes the presence of a particular substructure.
The size of the substructures represented by each index depends on the depth of the network. Thus
the number of layers is referred to as the ?radius? of the fingerprints.
Circular fingerprints are analogous to convolutional networks in that they apply the same operation
locally everywhere, and combine information in a global pooling step.
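A minimal sketch of the circular-fingerprint computation just described (compare Algorithm 1 below). We use Python's built-in hash in place of the fixed hash of real ECFP software (note it is salted per process, so bits differ across runs), and placeholder atom features; this is an illustration under those assumptions, not the ECFP specification.

def circular_fingerprint(atoms, bonds, radius=2, size=1024):
    """atoms: list of feature tuples; bonds: dict atom index -> neighbor indices."""
    fp = [0] * size
    feats = list(atoms)                            # r_a <- g(a)
    for _ in range(radius):
        new_feats = []
        for a, fa in enumerate(feats):
            neighborhood = (fa,) + tuple(sorted(feats[n] for n in bonds[a]))
            h = hash(neighborhood)                 # fixed hash of the neighborhood
            fp[h % size] = 1                       # write 1 at the hashed index
            new_feats.append(h)
        feats = new_feats
    return fp

# toy "molecule": a triangle of three atoms
atoms = [("C",), ("C",), ("O",)]
bonds = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(sum(circular_fingerprint(atoms, bonds)))     # number of bits set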
3 Creating a differentiable fingerprint
The space of possible network architectures is large. In the spirit of starting from a known-good configuration, we designed a differentiable generalization of circular fingerprints. This section describes
our replacement of each discrete operation in circular fingerprints with a differentiable analog.
Hashing The purpose of the hash functions applied at each layer of circular fingerprints is to
combine information about each atom and its neighboring substructures. This ensures that any
change in a fragment, no matter how small, will lead to a different fingerprint index being activated.
We replace the hash operation with a single layer of a neural network. Using a smooth function
allows the activations to be similar when the local molecular structure varies in unimportant ways.
Indexing Circular fingerprints use an indexing operation to combine all the nodes? feature vectors
into a single fingerprint of the whole molecule. Each node sets a single bit of the fingerprint to one,
at an index determined by the hash of its feature vector. This pooling-like operation converts an
arbitrary-sized graph into a fixed-sized vector. For small molecules and a large fingerprint length,
the fingerprints are always sparse. We use the softmax operation as a differentiable analog of
indexing. In essence, each atom is asked to classify itself as belonging to a single category. The sum
of all these classification label vectors produces the final fingerprint. This operation is analogous to
the pooling operation in standard convolutional neural networks.
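To illustrate the contrast between the hard write and its soft analog, here is a small NumPy sketch with hypothetical toy activations: hard indexing sets one bit per atom, whereas the softmax version lets each atom spread a unit of mass over the fingerprint, and the per-atom distributions are summed.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

atom_activations = np.random.randn(3, 8)   # 3 atoms, fingerprint length 8

# Hard indexing: each atom writes a single 1 at its argmax position.
hard_fp = np.zeros(8)
hard_fp[[a.argmax() for a in atom_activations]] = 1

# Soft indexing: each atom contributes a distribution; each row sums to 1,
# so the fingerprint entries sum to the number of atoms.
soft_fp = sum(softmax(a) for a in atom_activations)
print(hard_fp, soft_fp.round(3))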
Algorithm 1 Circular fingerprints
1: Input: molecule, radius R, fingerprint length S
2: Initialize: fingerprint vector f ← 0_S
3: for each atom a in molecule
4:   r_a ← g(a)                      ▷ lookup atom features
5: for L = 1 to R                    ▷ for each layer
6:   for each atom a in molecule
7:     r_1 ... r_N = neighbors(a)
8:     v ← [r_a, r_1, ..., r_N]      ▷ concatenate
9:     r_a ← hash(v)                 ▷ hash function
10:    i ← mod(r_a, S)               ▷ convert to index
11:    f_i ← 1                       ▷ write 1 at index
12: Return: binary vector f

Algorithm 2 Neural graph fingerprints
1: Input: molecule, radius R, hidden weights H_1^1 ... H_R^5, output weights W_1 ... W_R
2: Initialize: fingerprint vector f ← 0_S
3: for each atom a in molecule
4:   r_a ← g(a)                      ▷ lookup atom features
5: for L = 1 to R                    ▷ for each layer
6:   for each atom a in molecule
7:     r_1 ... r_N = neighbors(a)
8:     v ← r_a + Σ_{i=1}^{N} r_i     ▷ sum
9:     r_a ← σ(v H_L^N)              ▷ smooth function
10:    i ← softmax(r_a W_L)          ▷ sparsify
11:    f ← f + i                     ▷ add to fingerprint
12: Return: real-valued vector f
Figure 2: Pseudocode of circular fingerprints (left) and neural graph fingerprints (right). Differences
are highlighted in blue. Every non-differentiable operation is replaced with a differentiable analog.
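For concreteness, below is a minimal NumPy sketch of the Algorithm 2 forward pass. It makes one simplifying assumption relative to the paper: a single shared hidden weight matrix per layer rather than one per bond degree, and bond features are ignored; all names and toy inputs are hypothetical.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def neural_fingerprint(atom_feats, neighbors, H, W, radius=2):
    """atom_feats: (num_atoms, F) initial atom features.
    neighbors: adjacency list. H: list of (F, F) hidden weight matrices,
    one per layer. W: list of (F, S) output weight matrices, S = length."""
    S = W[0].shape[1]
    f = np.zeros(S)
    r = atom_feats.copy()
    for L in range(radius):
        new_r = np.empty_like(r)
        for a in range(len(neighbors)):
            v = r[a] + sum(r[n] for n in neighbors[a])  # sum over neighborhood
            new_r[a] = np.tanh(v @ H[L])                # smooth "hash"
        r = new_r
        for a in range(len(neighbors)):
            f += softmax(r[a] @ W[L])                   # soft indexing
    return f

# Toy molecule: 3 atoms in a chain, F = 4 features, fingerprint length S = 16.
rng = np.random.default_rng(0)
F_dim, S_len, R = 4, 16, 2
atoms = rng.normal(size=(3, F_dim))
nbrs = [[1], [0, 2], [1]]
H = [rng.normal(scale=0.1, size=(F_dim, F_dim)) for _ in range(R)]
W = [rng.normal(scale=0.1, size=(F_dim, S_len)) for _ in range(R)]
print(neural_fingerprint(atoms, nbrs, H, W, radius=R).round(3))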
Canonicalization Circular fingerprints are identical regardless of the ordering of atoms in each
neighborhood. This invariance is achieved by sorting the neighboring atoms according to their
features and bond features. We experimented with this sorting scheme, and also with applying the
local feature transform on all possible permutations of the local neighborhood. An alternative to
canonicalization is to apply a permutation-invariant function, such as summation. In the interests of
simplicity and scalability, we chose summation.
Circular fingerprints can be interpreted as a special case of neural graph fingerprints having large
random weights. This is because, in the limit of large input weights, tanh nonlinearities approach
step functions, which when concatenated form a simple hash function. Also, in the limit of large
input weights, the softmax operator approaches a one-hot-coded argmax operator, which is analogous to an indexing operation.
Algorithms 1 and 2 summarize these two algorithms and highlight their differences. Given a fingerprint length L, and F features at each layer, the parameters of neural graph fingerprints consist of
a separate output weight matrix of size F ? L for each layer, as well as a set of hidden-to-hidden
weight matrices of size F ? F at each layer, one for each possible number of bonds an atom can
have (up to 5 in organic molecules).
4 Experiments
We ran two experiments to demonstrate that neural fingerprints with large random weights behave
similarly to circular fingerprints. First, we examined whether distances between circular fingerprints
were similar to the corresponding distances between neural fingerprints. Figure 3 (left) shows a scatterplot of pairwise distances between circular vs. neural fingerprints. Fingerprints had length 2048,
and were calculated on pairs of molecules from the solubility dataset [4]. Distance was measured
using a continuous generalization of the Tanimoto (a.k.a. Jaccard) similarity measure, given by
distance(x, y) = 1 − (Σ_i min(x_i, y_i)) / (Σ_i max(x_i, y_i))    (1)
There is a correlation of r = 0.823 between the distances. The line of points on the right of the plot
shows that for some pairs of molecules, binary ECFP fingerprints have exactly zero overlap.
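The continuous Tanimoto distance of Equation (1) is straightforward to compute; a small NumPy sketch with hypothetical toy vectors:

import numpy as np

def tanimoto_distance(x, y):
    """Continuous Tanimoto (Jaccard) distance of Equation (1); x and y are
    non-negative fingerprint vectors of equal length."""
    return 1.0 - np.minimum(x, y).sum() / np.maximum(x, y).sum()

a = np.array([1.0, 0.0, 2.0])
b = np.array([0.5, 0.0, 2.0])
print(tanimoto_distance(a, b))   # 1 - 2.5/3.0 = 0.1666...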
Second, we examined the predictive performance of neural fingerprints with large random weights
vs. that of circular fingerprints. Figure 3 (right) shows average predictive performance on the solubility dataset, using linear regression on top of fingerprints. The performances of both methods
follow similar curves. In contrast, the performance of neural fingerprints with small random weights
follows a different curve, and is substantially better. This suggests that even with random weights,
the relatively smooth activation of neural fingerprints helps generalization performance.
[Figure 3 plots: left, "Neural vs Circular distances, r = 0.823", neural fingerprint distances against circular fingerprint distances; right, RMSE (log Mol/L) against fingerprint radius for circular fingerprints and for random convolutions with large and small parameters.]
Figure 3: Left: Comparison of pairwise distances between molecules, measured using circular fingerprints and neural graph fingerprints with large random weights. Right: Predictive performance
of circular fingerprints (red), neural graph fingerprints with fixed large random weights (green) and
neural graph fingerprints with fixed small random weights (blue). The performance of neural graph
fingerprints with large random weights closely matches the performance of circular fingerprints.
4.1 Examining learned features
To demonstrate that neural graph fingerprints are interpretable, we show substructures which most
activate individual features in a fingerprint vector. Each feature of a circular fingerprint vector can only be activated by a single fragment of a single radius, except for accidental collisions.
In contrast, neural graph fingerprint features can be activated by variations of the same structure,
making them more interpretable, and allowing shorter feature vectors.
Solubility features Figure 4 shows the fragments that maximally activate the most predictive features of a fingerprint. The fingerprint network was trained end-to-end as the input to a linear model predicting
solubility, as measured in [4]. The feature shown in the top row has a positive predictive relationship
with solubility, and is most activated by fragments containing a hydrophilic R-OH group, a standard
indicator of solubility. The feature shown in the bottom row, strongly predictive of insolubility, is
activated by non-polar repeated ring structures.
Figure 4: Examining fingerprints optimized for predicting solubility. Shown here are representative
examples of molecular fragments (highlighted in blue) which most activate different features of the
fingerprint. Top row: The feature most predictive of solubility. Bottom row: The feature most
predictive of insolubility.
Toxicity features We trained the same model architecture to predict toxicity, as measured in two
different datasets in [26]. Figure 5 shows fragments which maximally activate the feature most
predictive of toxicity, in two separate datasets.
[Figure 5: fragments most activated by the toxicity feature on the SR-MMP dataset (top) and on the NR-AHR dataset (bottom).]
Figure 5: Visualizing fingerprints optimized for predicting toxicity. Shown here are representative
samples of molecular fragments (highlighted in red) which most activate the feature most predictive
of toxicity. Top row: the most predictive feature identifies groups containing a sulphur atom attached
to an aromatic ring. Bottom row: the most predictive feature identifies fused aromatic rings, also
known as polycyclic aromatic hydrocarbons, a well-known carcinogen.
[27] constructed similar visualizations, but in a semi-manual way: to determine which toxic fragments activated a given neuron, they searched over a hand-made list of toxic substructures and chose
the one most correlated with a given neuron. In contrast, our visualizations are generated automatically, without the need to restrict the range of possible answers beforehand.
4.2 Predictive Performance
We ran several experiments to compare the predictive performance of neural graph fingerprints to
that of the standard state-of-the-art setup: circular fingerprints fed into a fully-connected neural
network.
Experimental setup Our pipeline takes as input the SMILES [30] string encoding of each
molecule, which is then converted into a graph using RDKit [20]. We also used RDKit to produce
the extended circular fingerprints used in the baseline. Hydrogen atoms were treated implicitly.
In our convolutional networks, the initial atom and bond features were chosen to be similar to those
used by ECFP: initial atom features concatenated a one-hot encoding of the atom's element, its degree, the number of attached hydrogen atoms, the implicit valence, and an aromaticity indicator. The bond features were a concatenation of whether the bond type was single, double, triple,
or aromatic, whether the bond was conjugated, and whether the bond was part of a ring.
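A sketch of this featurization in Python, assuming the RDKit API; the exact one-hot encodings are omitted, and the SMILES string is an arbitrary example:

from rdkit import Chem

mol = Chem.MolFromSmiles('c1ccccc1O')   # phenol
for atom in mol.GetAtoms():
    # Raw values that would feed the one-hot atom encoding described above.
    print(atom.GetSymbol(), atom.GetDegree(), atom.GetTotalNumHs(),
          atom.GetImplicitValence(), atom.GetIsAromatic())
for bond in mol.GetBonds():
    # Raw values behind the bond features: type, conjugation, ring membership.
    print(bond.GetBondType(), bond.GetIsConjugated(), bond.IsInRing())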
Training and Architecture Training used batch normalization [11]. We also experimented with
tanh vs relu activation functions for both the neural fingerprint network layers and the fullyconnected network layers. relu had a slight but consistent performance advantage on the validation set. We also experimented with dropconnect [29], a variant of dropout in which weights are
randomly set to zero instead of hidden units, but found that it led to worse validation error in general. Each experiment was optimized for 10,000 minibatches of size 100 using the Adam algorithm [13],
a variant of RMSprop that includes momentum.
Hyperparameter Optimization To optimize hyperparameters, we used random search. The hyperparameters of all methods were optimized using 50 trials for each cross-validation fold. The
following hyperparameters were optimized: log learning rate, log of the initial weight scale, the log
L2 penalty, fingerprint length, fingerprint depth (up to 6), and the size of the hidden layer in the
fully-connected network. Additionally, the size of the hidden feature vector in the convolutional
neural fingerprint networks was optimized.
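A minimal sketch of such a random-search loop, with hypothetical ranges and a stand-in train_and_score function that would fit a model and return its validation error:

import random

def sample_hypers():
    return {
        'log_learning_rate': random.uniform(-6, -2),
        'log_init_scale':    random.uniform(-6, -2),
        'log_l2_penalty':    random.uniform(-6, -1),
        'fp_length':         random.choice([16, 32, 64, 128]),
        'fp_depth':          random.randint(1, 6),
        'hidden_size':       random.choice([32, 64, 128]),
    }

def random_search(train_and_score, num_trials=50):
    best_score, best_hypers = float('inf'), None
    for _ in range(num_trials):
        hypers = sample_hypers()
        score = train_and_score(hypers)   # e.g. validation RMSE
        if score < best_score:
            best_score, best_hypers = score, hypers
    return best_hypers, best_score

# Toy usage with a dummy objective in place of real model training.
print(random_search(lambda h: abs(h['fp_depth'] - 3), num_trials=10))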
                              Solubility [4]   Drug efficacy [5]   Photovoltaic efficiency [8]
Units                         log Mol/L        EC50 in nM          percent
Predict mean                  4.29 ± 0.40      1.47 ± 0.07         6.40 ± 0.09
Circular FPs + linear layer   1.71 ± 0.13      1.13 ± 0.03         2.63 ± 0.09
Circular FPs + neural net     1.40 ± 0.13      1.36 ± 0.10         2.00 ± 0.09
Neural FPs + linear layer     0.77 ± 0.11      1.15 ± 0.02         2.58 ± 0.18
Neural FPs + neural net       0.52 ± 0.07      1.16 ± 0.03         1.43 ± 0.09

Table 1: Mean predictive accuracy of neural fingerprints compared to standard circular fingerprints.
Datasets We compared the performance of standard circular fingerprints against neural graph fingerprints on a variety of domains:
• Solubility: The aqueous solubility of 1144 molecules as measured by [4].
• Drug efficacy: The half-maximal effective concentration (EC50) in vitro of 10,000 molecules against a sulfide-resistant strain of P. falciparum, the parasite that causes malaria, as measured by [5].
• Organic photovoltaic efficiency: The Harvard Clean Energy Project [8] uses expensive DFT simulations to estimate the photovoltaic efficiency of organic molecules. We used a subset of 20,000 molecules from this dataset.
Predictive accuracy We compared the performance of circular fingerprints and neural graph fingerprints under two conditions: In the first condition, predictions were made by a linear layer using
the fingerprints as input. In the second condition, predictions were made by a one-hidden-layer
neural network using the fingerprints as input. In all settings, all differentiable parameters in the
composed models were optimized simultaneously. Results are summarized in Table 1.
In all experiments, the neural graph fingerprints matched or beat the accuracy of circular fingerprints,
and the methods with a neural network on top of the fingerprints typically outperformed the linear
layers.
Software Automatic differentiation (AD) software packages such as Theano [1] significantly
speed up development time by providing gradients automatically, but can only handle limited control
structures and indexing. Since we required relatively complex control flow and indexing in order
to implement variants of Algorithm 2, we used a more flexible automatic differentiation package
for Python called Autograd (github.com/HIPS/autograd). This package handles standard
Numpy [18] code, and can differentiate code containing while loops, branches, and indexing.
Code for computing neural fingerprints and producing visualizations is available at
github.com/HIPS/neural-fingerprint.
5 Limitations
Computational cost Neural fingerprints have the same asymptotic complexity in the number of
atoms and the depth of the network as circular fingerprints, but have additional terms due to the
matrix multiplies necessary to transform the feature vector at each step. To be precise, computing
the neural fingerprint of depth R, fingerprint length L of a molecule with N atoms using a molecular
convolutional net having F features at each layer costs O(RNFL + RNF^2). In practice, training
neural networks on top of circular fingerprints usually took several minutes, while training both the
fingerprints and the network on top took on the order of an hour on the larger datasets.
Limited computation at each layer How complicated should we make the function that goes
from one layer of the network to the next? In this paper we chose the simplest feasible architecture:
a single layer of a neural network. However, it may be fruitful to apply multiple layers of nonlinearities between each message-passing step (as in [22]), or to make information preservation easier by
adapting the Long Short-Term Memory [10] architecture to pass information upwards.
Limited information propagation across the graph The local message-passing architecture developed in this paper scales well in the size of the graph (due to the low degree of organic molecules),
but its ability to propagate information across the graph is limited by the depth of the network. This
may be appropriate for small graphs such as those representing the small organic molecules used in
this paper. However, in the worst case, it can take a depth-N/2 network to distinguish between graphs
of size N . To avoid this problem, [2] proposed a hierarchical clustering of graph substructures. A
tree-structured network could examine the structure of the entire graph using only log(N ) layers,
but would require learning to parse molecules. Techniques from natural language processing [25]
might be fruitfully adapted to this domain.
Inability to distinguish stereoisomers Special bookkeeping is required to distinguish between
stereoisomers, including enantiomers (mirror images of molecules) and cis/trans isomers (rotation
around double bonds). Most circular fingerprint implementations have the option to make these
distinctions. Neural fingerprints could be extended to be sensitive to stereoisomers, but this remains
a task for future work.
6 Related work
This work is similar in spirit to the neural Turing machine [7], in the sense that we take an existing
discrete computational architecture, and make each part differentiable in order to do gradient-based
optimization.
Neural nets for quantitative structure-activity relationship (QSAR) The modern standard for
predicting properties of novel molecules is to compose circular fingerprints with fully-connected
neural networks or other regression methods. [3] used circular fingerprints as inputs to an ensemble
of neural networks, Gaussian processes, and random forests. [19] used circular fingerprints (of depth
2) as inputs to a multitask neural network, showing that multiple tasks helped performance.
Neural graph fingerprints The most closely related work is [15], who build a neural network
having graph-valued inputs. Their approach is to remove all cycles and build the graph into a tree
structure, choosing one atom to be the root. A recursive neural network [23, 24] is then run from
the leaves to the root to produce a fixed-size representation. Because a graph having N nodes
has N possible roots, all N possible graphs are constructed. The final descriptor is a sum of the
representations computed by all distinct graphs. There are as many distinct graphs as there are
atoms in the network. The computational cost of this method thus grows as O(F 2 N 2 ), where F
is the size of the feature vector and N is the number of atoms, making it less suitable for large
molecules.
Convolutional neural networks Convolutional neural networks have been used to model images,
speech, and time series [14]. However, standard convolutional architectures use a fixed computational graph, making them difficult to apply to objects of varying size or structure, such as molecules.
More recently, [12] and others have developed a convolutional neural network architecture for modeling sentences of varying length.
Neural networks on fixed graphs [2] introduce convolutional networks on graphs in the regime
where the graph structure is fixed, and each training example differs only in having different features
at the vertices of the same graph. In contrast, our networks address the situation where each training
input is a different graph.
Neural networks on input-dependent graphs [22] propose a neural network model for graphs
having an interesting training procedure. The forward pass consists of running a message-passing
scheme to equilibrium, a fact which allows the reverse-mode gradient to be computed without storing
the entire forward computation. They apply their network to predicting mutagenesis of molecular
compounds as well as web page rankings. [16] also propose a neural network model for graphs
with a learning scheme whose inner loop optimizes not the training loss, but rather the correlation
between each newly-proposed vector and the training error residual. They apply their model to a
dataset of boiling points of 150 molecular compounds. Our paper builds on these ideas, with the
following differences: Our method replaces their complex training algorithms with simple gradient-based optimization, generalizes existing circular fingerprint computations, and applies these networks in the context of modern QSAR pipelines which use neural networks on top of the fingerprints
to increase model capacity.
Unrolled inference algorithms [9] and others have noted that iterative inference procedures
sometimes resemble the feedforward computation of a recurrent neural network. One natural extension of these ideas is to parameterize each inference step, and train a neural network to approximately
match the output of exact inference using only a small number of iterations. The neural fingerprint,
when viewed in this light, resembles an unrolled message-passing algorithm on the original graph.
7 Conclusion
We generalized existing hand-crafted molecular features to allow their optimization for diverse tasks.
By making each operation in the feature pipeline differentiable, we can use standard neural-network
training methods to scalably optimize the parameters of these neural molecular fingerprints end-to-end. We demonstrated the interpretability and predictive performance of these new fingerprints.
Data-driven features have already replaced hand-crafted features in speech recognition, machine
vision, and natural-language processing. Carrying out the same task for virtual screening, drug
design, and materials design is a natural next step.
Acknowledgments
We thank Edward Pyzer-Knapp, Jennifer Wei, and Samsung Advanced Institute of Technology for
their support. This work was partially funded by NSF IIS-1421780.
References
[1] Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
[2] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203, 2013.
[3] George E. Dahl, Navdeep Jaitly, and Ruslan Salakhutdinov. Multi-task neural networks for QSAR predictions. arXiv preprint arXiv:1406.1231, 2014.
[4] John S. Delaney. ESOL: Estimating aqueous solubility directly from molecular structure. Journal of Chemical Information and Computer Sciences, 44(3):1000–1005, 2004.
[5] Francisco-Javier Gamo, Laura M. Sanz, Jaume Vidal, Cristina de Cozar, Emilio Alvarez, Jose-Luis Lavandera, Dana E. Vanderwall, Darren V. S. Green, Vinod Kumar, Samiul Hasan, et al. Thousands of chemical starting points for antimalarial lead identification. Nature, 465(7296):305–310, 2010.
[6] Robert C. Glem, Andreas Bender, Catrin H. Arnby, Lars Carlsson, Scott Boyer, and James Smith. Circular fingerprints: flexible molecular descriptors with applications from physical chemistry to ADME. IDrugs: the investigational drugs journal, 9(3):199–204, 2006.
[7] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.
[8] Johannes Hachmann, Roberto Olivares-Amaya, Sule Atahan-Evrenk, Carlos Amador-Bedolla, Roel S. Sánchez-Carrera, Aryeh Gold-Parker, Leslie Vogt, Anna M. Brockway, and Alán Aspuru-Guzik. The Harvard clean energy project: large-scale computational screening and design of organic photovoltaics on the world community grid. The Journal of Physical Chemistry Letters, 2(17):2241–2251, 2011.
[9] John R. Hershey, Jonathan Le Roux, and Felix Weninger. Deep unfolding: Model-based inspiration of novel deep architectures. arXiv preprint arXiv:1409.2574, 2014.
[10] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[11] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[12] Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network for modelling sentences. Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, June 2014.
[13] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[14] Yann LeCun and Yoshua Bengio. Convolutional networks for images, speech, and time series. The Handbook of Brain Theory and Neural Networks, 3361, 1995.
[15] Alessandro Lusci, Gianluca Pollastri, and Pierre Baldi. Deep architectures and deep learning in chemoinformatics: the prediction of aqueous solubility for drug-like molecules. Journal of Chemical Information and Modeling, 53(7):1563–1575, 2013.
[16] Alessio Micheli. Neural network for graphs: A contextual constructive approach. Neural Networks, IEEE Transactions on, 20(3):498–511, 2009.
[17] H. L. Morgan. The generation of a unique machine description for chemical structure. Journal of Chemical Documentation, 5(2):107–113, 1965.
[18] Travis E. Oliphant. Python for scientific computing. Computing in Science & Engineering, 9(3):10–20, 2007.
[19] Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, and Vijay Pande. Massively multitask networks for drug discovery. arXiv preprint arXiv:1502.02072, 2015.
[20] RDKit: Open-source cheminformatics. www.rdkit.org. [Accessed 11-April-2013].
[21] David Rogers and Mathew Hahn. Extended-connectivity fingerprints. Journal of Chemical Information and Modeling, 50(5):742–754, 2010.
[22] F. Scarselli, M. Gori, Ah Chung Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural network model. Neural Networks, IEEE Transactions on, 20(1):61–80, January 2009.
[23] Richard Socher, Eric H. Huang, Jeffrey Pennin, Christopher D. Manning, and Andrew Y. Ng. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems, pages 801–809, 2011.
[24] Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 151–161. Association for Computational Linguistics, 2011.
[25] Kai Sheng Tai, Richard Socher, and Christopher D. Manning. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075, 2015.
[26] Tox21 Challenge. National Center for Advancing Translational Sciences. http://tripod.nih.gov/tox21/challenge, 2014. [Online; accessed 2-June-2015].
[27] Thomas Unterthiner, Andreas Mayr, Günter Klambauer, and Sepp Hochreiter. Toxicity prediction using deep learning. arXiv preprint arXiv:1503.01445, 2015.
[28] Thomas Unterthiner, Andreas Mayr, Günter Klambauer, Marvin Steijaert, Jörg Wenger, Hugo Ceulemans, and Sepp Hochreiter. Deep learning as an opportunity in virtual screening. In Advances in Neural Information Processing Systems, 2014.
[29] Li Wan, Matthew Zeiler, Sixin Zhang, Yann L. Cun, and Rob Fergus. Regularization of neural networks using dropconnect. In International Conference on Machine Learning, 2013.
[30] David Weininger. SMILES, a chemical language and information system. Journal of Chemical Information and Computer Sciences, 28(1):31–36, 1988.
Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting
Xingjian Shi Zhourong Chen Hao Wang Dit-Yan Yeung
Department of Computer Science and Engineering
Hong Kong University of Science and Technology
{xshiab,zchenbb,hwangaz,dyyeung}@cse.ust.hk
Wai-kin Wong Wang-chun Woo
Hong Kong Observatory
Hong Kong, China
{wkwong,wcwoo}@hko.gov.hk
Abstract
The goal of precipitation nowcasting is to predict the future rainfall intensity in a
local region over a relatively short period of time. Very few previous studies have
examined this crucial and challenging weather forecasting problem from the machine learning perspective. In this paper, we formulate precipitation nowcasting
as a spatiotemporal sequence forecasting problem in which both the input and the
prediction target are spatiotemporal sequences. By extending the fully connected
LSTM (FC-LSTM) to have convolutional structures in both the input-to-state and
state-to-state transitions, we propose the convolutional LSTM (ConvLSTM) and
use it to build an end-to-end trainable model for the precipitation nowcasting problem. Experiments show that our ConvLSTM network captures spatiotemporal
correlations better and consistently outperforms FC-LSTM and the state-of-the-art operational ROVER algorithm for precipitation nowcasting.
1 Introduction
Nowcasting convective precipitation has long been an important problem in the field of weather
forecasting. The goal of this task is to give precise and timely prediction of rainfall intensity in a
local region over a relatively short period of time (e.g., 0-6 hours). It is essential for taking such
timely actions as generating society-level emergency rainfall alerts, producing weather guidance for
airports, and seamless integration with a longer-term numerical weather prediction (NWP) model.
Since the forecasting resolution and time accuracy required are much higher than other traditional
forecasting tasks like weekly average temperature prediction, the precipitation nowcasting problem
is quite challenging and has emerged as a hot research topic in the meteorology community [22].
Existing methods for precipitation nowcasting can roughly be categorized into two classes [22],
namely, NWP based methods and radar echo^1 extrapolation based methods. For the NWP approach,
making predictions at the nowcasting timescale requires a complex and meticulous simulation of
the physical equations in the atmosphere model. Thus the current state-of-the-art operational precipitation nowcasting systems [19, 6] often adopt the faster and more accurate extrapolation based
methods. Specifically, some computer vision techniques, especially optical flow based methods,
have proven useful for making accurate extrapolation of radar maps [10, 6, 20]. One recent progress
along this path is the Real-time Optical flow by Variational methods for Echoes of Radar (ROVER)
^1 In real-life systems, radar echo maps are often constant altitude plan position indicator (CAPPI) images [9].
algorithm [25] proposed by the Hong Kong Observatory (HKO) for its Short-range Warning of
Intense Rainstorms in Localized System (SWIRLS) [15]. ROVER calculates the optical flow of
consecutive radar maps using the algorithm in [5] and performs semi-Lagrangian advection [4] on
the flow field, which is assumed to be still, to accomplish the prediction. However, the success of
these optical flow based methods is limited because the flow estimation step and the radar echo extrapolation step are separated and it is challenging to determine the model parameters to give good
prediction performance.
These technical issues may be addressed by viewing the problem from the machine learning perspective. In essence, precipitation nowcasting is a spatiotemporal sequence forecasting problem
with the sequence of past radar maps as input and the sequence of a fixed number (usually larger
than 1) of future radar maps as output.^2 However, such learning problems, regardless of their exact
applications, are nontrivial in the first place due to the high dimensionality of the spatiotemporal
sequences especially when multi-step predictions have to be made, unless the spatiotemporal structure of the data is captured well by the prediction model. Moreover, building an effective prediction
model for the radar echo data is even more challenging due to the chaotic nature of the atmosphere.
Recent advances in deep learning, especially recurrent neural network (RNN) and long short-term
memory (LSTM) models [12, 11, 7, 8, 23, 13, 18, 21, 26], provide some useful insights on how
to tackle this problem. According to the philosophy underlying the deep learning approach, if we
have a reasonable end-to-end model and sufficient data for training it, we are close to solving the
problem. The precipitation nowcasting problem satisfies the data requirement because it is easy
to collect a huge amount of radar echo data continuously. What is needed is a suitable model for
end-to-end learning. The pioneering LSTM encoder-decoder framework proposed in [23] provides a
general framework for sequence-to-sequence learning problems by training temporally concatenated
LSTMs, one for the input sequence and another for the output sequence. In [18], it is shown that
prediction of the next video frame and interpolation of intermediate frames can be done by building
an RNN based language model on the visual words obtained by quantizing the image patches. They
propose a recurrent convolutional neural network to model the spatial relationships but the model
only predicts one frame ahead and the size of the convolutional kernel used for state-to-state transition is restricted to 1. Their work is followed up later in [21] which points out the importance
of multi-step prediction in learning useful representations. They build an LSTM encoder-decoder-predictor model which reconstructs the input sequence and predicts the future sequence simultaneously. Although their method can also be used to solve our spatiotemporal sequence forecasting
problem, the fully connected LSTM (FC-LSTM) layer adopted by their model does not take spatial
correlation into consideration.
In this paper, we propose a novel convolutional LSTM (ConvLSTM) network for precipitation nowcasting. We formulate precipitation nowcasting as a spatiotemporal sequence forecasting problem
that can be solved under the general sequence-to-sequence learning framework proposed in [23]. In
order to model well the spatiotemporal relationships, we extend the idea of FC-LSTM to ConvLSTM
which has convolutional structures in both the input-to-state and state-to-state transitions. By stacking multiple ConvLSTM layers and forming an encoding-forecasting structure, we can build an
end-to-end trainable model for precipitation nowcasting. For evaluation, we have created a new
real-life radar echo dataset which can facilitate further research especially on devising machine
learning algorithms for the problem. When evaluated on a synthetic Moving-MNIST dataset [21]
and the radar echo dataset, our ConvLSTM model consistently outperforms both the FC-LSTM and
the state-of-the-art operational ROVER algorithm.
2 Preliminaries
2.1 Formulation of Precipitation Nowcasting Problem
The goal of precipitation nowcasting is to use the previously observed radar echo sequence to forecast a fixed length of the future radar maps in a local region (e.g., Hong Kong, New York, or Tokyo).
In real applications, the radar maps are usually taken from the weather radar every 6-10 minutes and
nowcasting is done for the following 1-6 hours, i.e., to predict the 6-60 frames ahead. From the machine learning perspective, this problem can be regarded as a spatiotemporal sequence forecasting problem.
^2 It is worth noting that our precipitation nowcasting problem is different from the one studied in [14], which aims at predicting only the central region of just the next frame.
Suppose we observe a dynamical system over a spatial region represented by an M × N grid which consists of M rows and N columns. Inside each cell in the grid, there are P measurements which vary over time. Thus, the observation at any time can be represented by a tensor X ∈ R^{P×M×N}, where R denotes the domain of the observed features. If we record the observations periodically, we will get a sequence of tensors X̂_1, X̂_2, ..., X̂_t. The spatiotemporal sequence forecasting problem is to predict the most likely length-K sequence in the future given the previous J observations which include the current one:

X̂_{t+1}, ..., X̂_{t+K} = arg max_{X_{t+1},...,X_{t+K}} p(X_{t+1}, ..., X_{t+K} | X̂_{t−J+1}, X̂_{t−J+2}, ..., X̂_t)    (1)
For precipitation nowcasting, the observation at every timestamp is a 2D radar echo map. If we
divide the map into tiled non-overlapping patches and view the pixels inside a patch as its measurements (see Fig. 1), the nowcasting problem naturally becomes a spatiotemporal sequence forecasting
problem.
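A minimal NumPy sketch of this patch transform, tiling an image of shape (Mp, Np) into p × p patches to obtain a P × M × N tensor with P = p²; the toy image is hypothetical:

import numpy as np

def image_to_patch_tensor(img, p):
    """Tile an (M*p, N*p) radar map into non-overlapping p x p patches,
    giving a P x M x N tensor with P = p*p, as in Figure 1."""
    Mp, Np = img.shape
    M, N = Mp // p, Np // p
    t = img.reshape(M, p, N, p)          # split both axes into (block, within)
    return t.transpose(1, 3, 0, 2).reshape(p * p, M, N)

img = np.arange(16).reshape(4, 4)
print(image_to_patch_tensor(img, 2).shape)   # (4, 2, 2)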
We note that our spatiotemporal sequence forecasting problem is different from the one-step time
series forecasting problem because the prediction target of our problem is a sequence which contains
both spatial and temporal structures. Although the number of free variables in a length-K sequence
can be up to O(M^K N^K P^K), in practice we may exploit the structure of the space of possible
predictions to reduce the dimensionality and hence make the problem tractable.
2.2 Long Short-Term Memory for Sequence Modeling
For general-purpose sequence modeling, LSTM as a special RNN structure has proven stable and
powerful for modeling long-range dependencies in various previous studies [12, 11, 17, 23]. The
major innovation of LSTM is its memory cell c_t which essentially acts as an accumulator of the state information. The cell is accessed, written and cleared by several self-parameterized controlling gates. Every time a new input comes, its information will be accumulated to the cell if the input gate i_t is activated. Also, the past cell status c_{t−1} could be "forgotten" in this process if the forget gate f_t is on. Whether the latest cell output c_t will be propagated to the final state h_t is further controlled by the output gate o_t. One advantage of using the memory cell and gates to control information flow is that the gradient will be trapped in the cell (also known as constant error carousels [12]) and be prevented from vanishing too quickly, which is a critical problem for the vanilla RNN model [12, 17, 2]. FC-LSTM may be seen as a multivariate version of LSTM where the input, cell output and states are all 1D vectors. In this paper, we follow the formulation of FC-LSTM as in [11]. The key equations are shown in (2) below, where '∘' denotes the Hadamard product:
i_t = σ(W_xi x_t + W_hi h_{t−1} + W_ci ∘ c_{t−1} + b_i)
f_t = σ(W_xf x_t + W_hf h_{t−1} + W_cf ∘ c_{t−1} + b_f)
c_t = f_t ∘ c_{t−1} + i_t ∘ tanh(W_xc x_t + W_hc h_{t−1} + b_c)
o_t = σ(W_xo x_t + W_ho h_{t−1} + W_co ∘ c_t + b_o)
h_t = o_t ∘ tanh(c_t)    (2)
Multiple LSTMs can be stacked and temporally concatenated to form more complex structures.
Such models have been applied to solve many real-life sequence modeling problems [23, 26].
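For concreteness, a minimal NumPy sketch of one FC-LSTM step implementing Equation (2), with randomly initialized toy weights; the diagonal peephole terms W_c∗ are represented as vectors applied with the Hadamard product:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fc_lstm_step(x, h_prev, c_prev, W, b):
    """One FC-LSTM step following Equation (2), peepholes included."""
    i = sigmoid(x @ W['xi'] + h_prev @ W['hi'] + W['ci'] * c_prev + b['i'])
    f = sigmoid(x @ W['xf'] + h_prev @ W['hf'] + W['cf'] * c_prev + b['f'])
    c = f * c_prev + i * np.tanh(x @ W['xc'] + h_prev @ W['hc'] + b['c'])
    o = sigmoid(x @ W['xo'] + h_prev @ W['ho'] + W['co'] * c + b['o'])
    h = o * np.tanh(c)
    return h, c

d_in, d_h = 3, 4
rng = np.random.default_rng(0)
W = {k: rng.normal(scale=0.1, size=(d_in if k[0] == 'x' else d_h, d_h))
     for k in ['xi', 'hi', 'xf', 'hf', 'xc', 'hc', 'xo', 'ho']}
W.update({k: rng.normal(scale=0.1, size=d_h) for k in ['ci', 'cf', 'co']})
b = {k: np.zeros(d_h) for k in 'ifco'}
h, c = fc_lstm_step(rng.normal(size=d_in), np.zeros(d_h), np.zeros(d_h), W, b)
print(h.round(3), c.round(3))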
3 The Model
We now present our ConvLSTM network. Although the FC-LSTM layer has proven powerful for
handling temporal correlation, it contains too much redundancy for spatial data. To address this
problem, we propose an extension of FC-LSTM which has convolutional structures in both the
input-to-state and state-to-state transitions. By stacking multiple ConvLSTM layers and forming an
encoding-forecasting structure, we are able to build a network model not only for the precipitation
nowcasting problem but also for more general spatiotemporal sequence forecasting problems.
Figure 1: Transforming 2D image into 3D tensor
Figure 2: Inner structure of ConvLSTM
3.1 Convolutional LSTM
The major drawback of FC-LSTM in handling spatiotemporal data is its usage of full connections in
input-to-state and state-to-state transitions in which no spatial information is encoded. To overcome
this problem, a distinguishing feature of our design is that all the inputs X_1, ..., X_t, cell outputs C_1, ..., C_t, hidden states H_1, ..., H_t, and gates i_t, f_t, o_t of the ConvLSTM are 3D tensors whose
last two dimensions are spatial dimensions (rows and columns). To get a better picture of the inputs
and states, we may imagine them as vectors standing on a spatial grid. The ConvLSTM determines
the future state of a certain cell in the grid by the inputs and past states of its local neighbors.
This can easily be achieved by using a convolution operator in the state-to-state and input-to-state
transitions (see Fig. 2). The key equations of ConvLSTM are shown in (3) below, where '∗' denotes the convolution operator and '∘', as before, denotes the Hadamard product:
i_t = σ(W_xi ∗ X_t + W_hi ∗ H_{t−1} + W_ci ∘ C_{t−1} + b_i)
f_t = σ(W_xf ∗ X_t + W_hf ∗ H_{t−1} + W_cf ∘ C_{t−1} + b_f)
C_t = f_t ∘ C_{t−1} + i_t ∘ tanh(W_xc ∗ X_t + W_hc ∗ H_{t−1} + b_c)
o_t = σ(W_xo ∗ X_t + W_ho ∗ H_{t−1} + W_co ∘ C_t + b_o)
H_t = o_t ∘ tanh(C_t)    (3)
If we view the states as the hidden representations of moving objects, a ConvLSTM with a larger
transitional kernel should be able to capture faster motions while one with a smaller kernel can
capture slower motions. Also, if we adopt a similar view as [16], the inputs, cell outputs and hidden
states of the traditional FC-LSTM represented by (2) may also be seen as 3D tensors with the last
two dimensions being 1. In this sense, FC-LSTM is actually a special case of ConvLSTM with all
features standing on a single cell.
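A minimal NumPy/SciPy sketch of one ConvLSTM step implementing Equation (3); the zero-padded 'same' convolution matches the padding choice discussed below, and all toy shapes and weights are hypothetical:

import numpy as np
from scipy.signal import convolve2d

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv(x, k):
    """Multi-channel 'same' convolution: x is (C_in, H, W), k is
    (C_out, C_in, kh, kw); zero boundary padding."""
    return np.stack([
        sum(convolve2d(x[ci], k[co, ci], mode='same') for ci in range(x.shape[0]))
        for co in range(k.shape[0])])

def convlstm_step(X, H_prev, C_prev, W, b):
    """One ConvLSTM step following Equation (3); the peephole terms W_c∗ are
    full tensors applied with the Hadamard product."""
    i = sigmoid(conv(X, W['xi']) + conv(H_prev, W['hi']) + W['ci'] * C_prev + b['i'])
    f = sigmoid(conv(X, W['xf']) + conv(H_prev, W['hf']) + W['cf'] * C_prev + b['f'])
    C = f * C_prev + i * np.tanh(conv(X, W['xc']) + conv(H_prev, W['hc']) + b['c'])
    o = sigmoid(conv(X, W['xo']) + conv(H_prev, W['ho']) + W['co'] * C + b['o'])
    return o * np.tanh(C), C

C_in, C_h, HH, WW, K = 1, 2, 8, 8, 5
rng = np.random.default_rng(0)
W = {k: rng.normal(scale=0.1, size=(C_h, C_in if k[0] == 'x' else C_h, K, K))
     for k in ['xi', 'hi', 'xf', 'hf', 'xc', 'hc', 'xo', 'ho']}
W.update({k: rng.normal(scale=0.1, size=(C_h, HH, WW)) for k in ['ci', 'cf', 'co']})
b = {k: np.zeros((C_h, 1, 1)) for k in 'ifco'}
H, C = convlstm_step(rng.normal(size=(C_in, HH, WW)),
                     np.zeros((C_h, HH, WW)), np.zeros((C_h, HH, WW)), W, b)
print(H.shape, C.shape)   # (2, 8, 8) (2, 8, 8)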
To ensure that the states have the same number of rows and same number of columns as the inputs,
padding is needed before applying the convolution operation. Here, padding of the hidden states on
the boundary points can be viewed as using the state of the outside world for calculation. Usually,
before the first input comes, we initialize all the states of the LSTM to zero which corresponds to
"total ignorance" of the future. Similarly, if we perform zero-padding (which is used in this paper)
on the hidden states, we are actually setting the state of the outside world to zero and assume no prior
knowledge about the outside. By padding on the states, we can treat the boundary points differently,
which is helpful in many cases. For example, imagine that the system we are observing is a moving
ball surrounded by walls. Although we cannot see these walls, we can infer their existence by finding
the ball bouncing over them again and again, which can hardly be done if the boundary points have
the same state transition dynamics as the inner points.
3.2 Encoding-Forecasting Structure
Like FC-LSTM, ConvLSTM can also be adopted as a building block for more complex structures.
For our spatiotemporal sequence forecasting problem, we use the structure shown in Fig. 3 which
consists of two networks, an encoding network and a forecasting network. Like in [21], the initial
states and cell outputs of the forecasting network are copied from the last state of the encoding
network. Both networks are formed by stacking several ConvLSTM layers. As our prediction target
has the same dimensionality as the input, we concatenate all the states in the forecasting network
and feed them into a 1 ? 1 convolutional layer to generate the final prediction.
We can interpret this structure using a similar viewpoint as [23]. The encoding LSTM compresses
the whole input sequence into a hidden state tensor and the forecasting LSTM unfolds this hidden
state to give the final prediction:
X̂_{t+1}, ..., X̂_{t+K} = arg max_{X_{t+1},...,X_{t+K}} p(X_{t+1}, ..., X_{t+K} | X̂_{t−J+1}, X̂_{t−J+2}, ..., X̂_t)
                      ≈ arg max_{X_{t+1},...,X_{t+K}} p(X_{t+1}, ..., X_{t+K} | f_encoding(X̂_{t−J+1}, X̂_{t−J+2}, ..., X̂_t))    (4)
                      ≈ g_forecasting(f_encoding(X̂_{t−J+1}, X̂_{t−J+2}, ..., X̂_t))

Figure 3: Encoding-forecasting ConvLSTM network for precipitation nowcasting
This structure is also similar to the LSTM future predictor model in [21] except that our input and
output elements are all 3D tensors which preserve all the spatial information. Since the network has
multiple stacked ConvLSTM layers, it has strong representational power which makes it suitable
for giving predictions in complex dynamical systems like the precipitation nowcasting problem we
study here.
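A structural sketch of this encoder-forecaster loop in Python, with stub cells standing in for the ConvLSTM update and a stand-in predict_frame for the 1 × 1 convolution; this illustrates only the state-copying control flow, not the trained architecture:

import numpy as np

def encode_forecast(inputs, K, layers, init_states, zero_input, predict_frame):
    """inputs: J observed tensors; K: frames to predict.
    layers: list of per-layer step functions (X, H, C) -> (H, C).
    The forecaster reuses the encoder's final states (the 'Copy' in Fig. 3)."""
    states = list(init_states)
    for X in inputs:                           # encoding network
        inp = X
        for l, step in enumerate(layers):
            states[l] = step(inp, *states[l])
            inp = states[l][0]
    outputs = []
    for _ in range(K):                         # forecasting network
        inp = zero_input
        for l, step in enumerate(layers):
            states[l] = step(inp, *states[l])
            inp = states[l][0]
        outputs.append(predict_frame([H for H, _ in states]))
    return outputs

# Toy run with stub cells standing in for ConvLSTM updates.
cell = lambda x, h, c: (np.tanh(x + h + c), 0.9 * c + 0.1 * x)
z = np.zeros((4, 4))
frames = [np.random.rand(4, 4) for _ in range(5)]
preds = encode_forecast(frames, K=3, layers=[cell, cell],
                        init_states=[(z, z), (z, z)], zero_input=z,
                        predict_frame=lambda hs: sum(hs))
print(len(preds), preds[0].shape)   # 3 (4, 4)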
4 Experiments
We first compare our ConvLSTM network with the FC-LSTM network on a synthetic Moving-MNIST dataset to gain some basic understanding of the behavior of our model. We run our model
with different numbers of layers and kernel sizes and also study some "out-of-domain" cases as
in [21]. To verify the effectiveness of our model on the more challenging precipitation nowcasting
problem, we build a new radar echo dataset and compare our model with the state-of-the-art ROVER
algorithm based on several commonly used precipitation nowcasting metrics. The results of the
experiments conducted on these two datasets lead to the following findings:
• ConvLSTM is better than FC-LSTM in handling spatiotemporal correlations.
• Making the size of the state-to-state convolutional kernel bigger than 1 is essential for capturing the spatiotemporal motion patterns.
• Deeper models can produce better results with fewer parameters.
• ConvLSTM performs better than ROVER for precipitation nowcasting.
Our implementations of the models are in Python with the help of Theano [3, 1]. We run all the
experiments on a computer with a single NVIDIA K20 GPU. Also, more illustrative "gif" examples
are included in the appendix.
4.1 Moving-MNIST Dataset
For this synthetic dataset, we use a generation process similar to that described in [21]. All data
instances in the dataset are 20 frames long (10 frames for the input and 10 frames for the prediction) and contain two handwritten digits bouncing inside a 64 ? 64 patch. The moving digits are
chosen randomly from a subset of 500 digits in the MNIST dataset.^3 The starting position and velocity direction are chosen uniformly at random and the velocity amplitude is chosen randomly in [3, 5). This generation process is repeated 15000 times, resulting in a dataset with 10000 training sequences, 2000 validation sequences, and 3000 testing sequences. We train all the LSTM models by minimizing the cross-entropy loss^4 using back-propagation through time (BPTT) [2] and
^3 MNIST dataset: http://yann.lecun.com/exdb/mnist/
^4 The cross-entropy loss of the predicted frame P and the ground-truth frame T is defined as −Σ_{i,j,k} [T_{i,j,k} log P_{i,j,k} + (1 − T_{i,j,k}) log(1 − P_{i,j,k})].
Table 1: Comparison of ConvLSTM networks with FC-LSTM network on the Moving-MNIST dataset. "-5x5" and "-1x1" represent the corresponding state-to-state kernel size, which is either 5×5 or 1×1. "256", "128", and "64" refer to the number of hidden states in the ConvLSTM layers. "(5x5)" and "(9x9)" represent the input-to-state kernel size.

Model                                  Number of parameters   Cross entropy
FC-LSTM-2048-2048                      142,667,776            4832.49
ConvLSTM(5x5)-5x5-256                  13,524,496             3887.94
ConvLSTM(5x5)-5x5-128-5x5-128          10,042,896             3733.56
ConvLSTM(5x5)-5x5-128-5x5-64-5x5-64    7,585,296              3670.85
ConvLSTM(9x9)-1x1-128-1x1-128          11,550,224             4782.84
ConvLSTM(9x9)-1x1-128-1x1-64-1x1-64    8,830,480              4231.50
Figure 4: An example showing an "out-of-domain" run. From left to right: input frames; ground
truth; prediction by the 3-layer network.
RMSProp [24] with a learning rate of 10^{-3} and a decay rate of 0.9. Also, we perform early-stopping
on the validation set.
Despite the simple generation process, there exist strong nonlinearities in the resulting dataset because the moving digits can exhibit complicated appearance and will occlude and bounce during
their movement. It is hard for a model to give accurate predictions on the test set without learning
the inner dynamics of the system.
For the FC-LSTM network, we use the same structure as the unconditional future predictor model
in [21] with two 2048-node LSTM layers. For our ConvLSTM network, we set the patch size to
4 × 4 so that each 64 × 64 frame is represented by a 16 × 16 × 16 tensor. We test three variants of
our model with different number of layers. The 1-layer network contains one ConvLSTM layer with
256 hidden states, the 2-layer network has two ConvLSTM layers with 128 hidden states each, and
the 3-layer network has 128, 64, and 64 hidden states respectively in the three ConvLSTM layers.
All the input-to-state and state-to-state kernels are of size 5 × 5. Our experiments show that the
ConvLSTM networks perform consistently better than the FC-LSTM network. Also, deeper models
can give better results although the improvement is not so significant between the 2-layer and 3-layer
networks. Moreover, we also try other network configurations with the state-to-state and input-to-state kernels of the 2-layer and 3-layer networks changed to 1 × 1 and 9 × 9, respectively. Although
the number of parameters of the new 2-layer network is close to the original one, the result becomes
much worse because it is hard to capture the spatiotemporal motion patterns with only a 1 × 1 state-to-state transition. Meanwhile, the new 3-layer network performs better than the new 2-layer network
since the higher layer can see a wider scope of the input. Nevertheless, its performance is inferior
to networks with larger state-to-state kernel size. This provides evidence that larger state-to-state
kernels are more suitable for capturing spatiotemporal correlations. In fact, for a 1 × 1 kernel, the
receptive field of the states will not grow as time advances. But for larger kernels, later states have
larger receptive fields and are related to a wider range of the input. The average cross-entropy loss
(cross-entropy loss per sequence) of each algorithm on the test set is shown in Table 1. We need
to point out that our experiment setting is different from [21] where an infinite number of training
data is assumed to be available. The current offline setting is chosen in order to understand how
different models perform in occasions where not so much data is available. Comparison of the
3-layer ConvLSTM and FC-LSTM in the online setting is included in the appendix.
Next, we test our model on some "out-of-domain" inputs. We generate another 3000 sequences of three moving digits, with the digits drawn randomly from a different subset of 500 MNIST digits that does not overlap with the training set. Since the model has never seen any system with three digits, such an "out-of-domain" run is a good test of the generalization ability of the model [21]. The average cross-entropy error of the 3-layer model on this dataset is 6379.42. By observing some of the prediction results, we find that the model can separate the overlapping digits successfully and predict the overall motion although the predicted digits are quite blurred. One "out-of-domain" prediction example is shown in Fig. 4.
4.2 Radar Echo Dataset
The radar echo dataset used in this paper is a subset of the three-year weather radar intensities
collected in Hong Kong from 2011 to 2013. Since not every day is rainy and our nowcasting target
is precipitation, we select the top 97 rainy days to form our dataset. For preprocessing, we first transform the intensity values Z to gray-level pixels P by setting P = (Z − min{Z}) / (max{Z} − min{Z}) and crop the radar maps in the central 330 × 330 region. After that, we apply the disk filter (see footnote 5) with radius 10 and resize the radar maps to 100 × 100. To reduce the noise caused by measuring instruments, we
further remove the pixel values of some noisy regions which are determined by applying K-means
clustering to the monthly pixel average. The weather radar data is recorded every 6 minutes, so there
are 240 frames per day. To get disjoint subsets for training, testing and validation, we partition each
daily sequence into 40 non-overlapping frame blocks and randomly assign 4 blocks for training, 1
block for testing and 1 block for validation. The data instances are sliced from these blocks using
a 20-frame-wide sliding window. Thus our radar echo dataset contains 8148 training sequences,
2037 testing sequences and 2037 validation sequences and all the sequences are 20 frames long (5
for the input and 15 for the prediction). Although the training and testing instances sliced from the
same day may have some dependencies, this splitting strategy is still reasonable because in real-life nowcasting, we do have access to all previous data, including data from the same day, which allows us to apply online fine-tuning of the model. Such data splitting may be viewed as an approximation of the real-life 'fine-tuning-enabled' setting for this application.
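The preprocessing pipeline above can be summarized in a short sketch. This is our own minimal reconstruction, assuming per-dataset min/max normalization and a stride-1 sliding window; the function names are hypothetical:

```python
import numpy as np

def to_gray(Z):
    """Map radar intensities Z to gray-level pixels via
    P = (Z - min Z) / (max Z - min Z)."""
    return (Z - Z.min()) / (Z.max() - Z.min())

def slide(frames, width=20):
    """Slice a (T, H, W) frame sequence into overlapping width-frame
    instances: the first 5 frames are inputs, the remaining 15 targets."""
    return [frames[i:i + width] for i in range(len(frames) - width + 1)]

frames = to_gray(np.random.rand(240, 100, 100))  # one day: 240 radar maps
instances = slide(frames)
inputs, targets = instances[0][:5], instances[0][5:]
```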
We set the patch size to 2 and train a 2-layer ConvLSTM network with each layer containing 64 hidden states and 3 × 3 kernels. For the ROVER algorithm, we tune the parameters of the optical flow estimator (see footnote 6) on the validation set and use the best parameters (shown in the appendix) to report the test results. Also, we try three different initialization schemes for ROVER: ROVER1 computes the
test results. Also, we try three different initialization schemes for ROVER: ROVER1 computes the
optical flow of the last two observed frames and performs semi-Lagrangian advection afterwards;
ROVER2 initializes the velocity by the mean of the last two flow fields; and ROVER3 gives the
initialization by a weighted average (with weights 0.7, 0.2 and 0.1) of the last three flow fields. In
addition, we train an FC-LSTM network with two 2000-node LSTM layers. Both the ConvLSTM
network and the FC-LSTM network optimize the cross-entropy error of 15 predictions.
We evaluate these methods using several commonly used precipitation nowcasting metrics, namely,
rainfall mean squared error (Rainfall-MSE), critical success index (CSI), false alarm rate (FAR),
probability of detection (POD), and correlation. The Rainfall-MSE metric is defined as the average
squared error between the predicted rainfall and the ground truth. Since our predictions are done at
the pixel level, we project them back to radar echo intensities and calculate the rainfall at every cell of
the grid using the Z-R relationship [15]: Z = 10 log a + 10b log R, where Z is the radar echo intensity in dB, R is the rainfall rate in mm/h, and a, b are two constants with a = 118.239, b = 1.5241.
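Since the predictions are made at the pixel level, projecting them back to rainfall requires inverting this Z-R relationship. The inversion below is our own algebra on the stated formula (assuming base-10 logarithms), not code from the paper:

```python
import numpy as np

A, B = 118.239, 1.5241  # constants a, b from the Z-R relationship above

def rainfall_rate(Z_dB):
    """Invert Z = 10*log10(a) + 10*b*log10(R) to recover the rainfall
    rate R (mm/h) from the radar echo intensity Z (dB)."""
    return 10.0 ** ((Z_dB - 10.0 * np.log10(A)) / (10.0 * B))

print(rainfall_rate(40.0))  # a 40 dB echo maps to roughly 18 mm/h
```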
The CSI, FAR and POD are skill scores similar to precision and recall commonly used by machine
learning researchers. We convert the prediction and ground truth to a 0/1 matrix using a threshold
of 0.5mm/h rainfall rate (indicating raining or not) and calculate the hits (prediction = 1, truth = 1),
misses (prediction = 0, truth = 1) and false alarms (prediction = 1, truth = 0). The three skill scores
are defined as CSI = hits / (hits + misses + falsealarms), FAR = falsealarms / (hits + falsealarms), and POD = hits / (hits + misses). The correlation of a predicted frame P and a ground-truth frame T is defined as

$$\frac{\sum_{i,j} P_{i,j} T_{i,j}}{\sqrt{\left(\sum_{i,j} P_{i,j}^2\right)\left(\sum_{i,j} T_{i,j}^2\right)} + \varepsilon}$$

where $\varepsilon = 10^{-9}$.
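For reference, the skill scores and correlation above are straightforward to compute. The sketch below is our own, assumes the 0.5 mm/h threshold stated earlier, and does not guard against empty-denominator edge cases:

```python
import numpy as np

def skill_scores(pred, truth, threshold=0.5):
    """Binarize rainfall rates at `threshold` mm/h and compute
    CSI, FAR and POD from hits, misses and false alarms."""
    p, t = pred >= threshold, truth >= threshold
    hits = np.sum(p & t)
    misses = np.sum(~p & t)
    false_alarms = np.sum(p & ~t)
    csi = hits / (hits + misses + false_alarms)
    far = false_alarms / (hits + false_alarms)
    pod = hits / (hits + misses)
    return csi, far, pod

def correlation(P, T, eps=1e-9):
    """Correlation between a predicted frame P and a ground-truth frame T."""
    return (P * T).sum() / (np.sqrt((P ** 2).sum() * (T ** 2).sum()) + eps)
```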
5. The disk filter is applied using the MATLAB function fspecial('disk', 10).
6. We use an open-source project to calculate the optical flow: http://sourceforge.net/projects/varflow/
Table 2: Comparison of the average scores of different models over 15 prediction steps.

Model | Rainfall-MSE | CSI | FAR | POD | Correlation
ConvLSTM(3x3)-3x3-64-3x3-64 | 1.420 | 0.577 | 0.195 | 0.660 | 0.908
ROVER1 | 1.712 | 0.516 | 0.308 | 0.636 | 0.843
ROVER2 | 1.684 | 0.522 | 0.301 | 0.642 | 0.850
ROVER3 | 1.685 | 0.522 | 0.301 | 0.642 | 0.849
FC-LSTM-2000-2000 | 1.865 | 0.286 | 0.335 | 0.351 | 0.774
Figure 5: Comparison of different models based on four precipitation nowcasting metrics over time. [Plots omitted: correlation, CSI, FAR, and POD versus prediction time step for ConvLSTM, ROVER1, ROVER2, ROVER3, and FC-LSTM.]
Figure 6: Two prediction examples for the precipitation nowcasting problem. All the predictions and ground truths are sampled with an interval of 3. From top to bottom: input frames; ground truth frames; prediction by the ConvLSTM network; prediction by ROVER2.
All results are shown in Table 2 and Fig. 5. The performance of the FC-LSTM network is not as good for this task, which is mainly caused by the strong spatial correlation in the radar maps, i.e., the motion of clouds is highly consistent within a local region. The fully-connected structure has too many redundant connections, which makes it very unlikely that the optimization will capture these local consistencies. It can also be seen that ConvLSTM outperforms the optical-flow-based
ROVER algorithm, which is mainly due to two reasons. First, ConvLSTM is able to handle the
boundary conditions well. In real-life nowcasting, there are many cases when a sudden agglomeration of clouds appears at the boundary, which indicates that some clouds are coming from the
outside. If the ConvLSTM network has seen similar patterns during training, it can discover this
type of sudden changes in the encoding network and give reasonable predictions in the forecasting
network. This, however, can hardly be achieved by optical flow and semi-Lagrangian advection
based methods. Another reason is that, ConvLSTM is trained end-to-end for this task and some
complex spatiotemporal patterns in the dataset can be learned by the nonlinear and convolutional
structure of the network. For the optical flow based approach, it is hard to find a reasonable way to
update the future flow fields and train everything end-to-end. Some prediction results of ROVER2
and ConvLSTM are shown in Fig. 6. We can find that ConvLSTM can predict the future rainfall
contour more accurately, especially at the boundary. Although ROVER2 can give sharper predictions than ConvLSTM, it triggers more false alarms and is less precise than ConvLSTM in general. Also, the blurring effect of ConvLSTM may be caused by the inherent uncertainty of the task, i.e., it is almost impossible to give sharp and accurate predictions of whole radar maps at longer prediction horizons; blurring the predictions alleviates the error caused by this type of uncertainty.
5 Conclusion and Future Work
In this paper, we have successfully applied machine learning, and deep learning in particular, to the challenging precipitation nowcasting problem, which so far has not benefited from sophisticated machine learning techniques. We formulate precipitation nowcasting as a spatiotemporal sequence forecasting problem and propose a new extension of LSTM called ConvLSTM to tackle the
problem. The ConvLSTM layer not only preserves the advantages of FC-LSTM but is also suitable
for spatiotemporal data due to its inherent convolutional structure. By incorporating ConvLSTM
into the encoding-forecasting structure, we build an end-to-end trainable model for precipitation
nowcasting. For future work, we will investigate how to apply ConvLSTM to video-based action
recognition. One idea is to add ConvLSTM on top of the spatial feature maps generated by a convolutional neural network and use the hidden states of ConvLSTM for the final classification.
References
[1] F. Bastien, P. Lamblin, R. Pascanu, J. Bergstra, I. Goodfellow, A. Bergeron, N. Bouchard, D. WardeFarley, and Y. Bengio. Theano: New features and speed improvements. Deep Learning and Unsupervised
Feature Learning NIPS 2012 Workshop, 2012.
[2] Y. Bengio, I. Goodfellow, and A. Courville. Deep Learning. Book in preparation for MIT Press, 2015.
[3] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley,
and Y. Bengio. Theano: a CPU and GPU math expression compiler. In Scipy, volume 4, page 3. Austin,
TX, 2010.
[4] R. Bridson. Fluid Simulation for Computer Graphics. Ak Peters Series. Taylor & Francis, 2008.
[5] T. Brox, A. Bruhn, N. Papenberg, and J. Weickert. High accuracy optical flow estimation based on a theory for warping. In ECCV, pages 25–36, 2004.
[6] P. Cheung and H.Y. Yeung. Application of optical-flow technique to significant convection nowcast for terminal areas in Hong Kong. In the 3rd WMO International Symposium on Nowcasting and Very Short-Range Forecasting (WSN12), pages 6–10, 2012.
[7] K. Cho, B. van Merrienboer, C. Gulcehre, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, pages 1724–1734, 2014.
[8] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In CVPR, 2015.
[9] R. H. Douglas. The stormy weather group (Canada). In Radar in Meteorology, pages 61–68, 1990.
[10] U. Germann and I. Zawadzki. Scale-dependence of the predictability of precipitation from continental radar images. Part I: Description of the methodology. Monthly Weather Review, 130(12):2859–2873, 2002.
[11] A. Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[12] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
1997.
[13] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In
CVPR, 2015.
[14] B. Klein, L. Wolf, and Y. Afek. A dynamic convolutional layer for short range weather prediction. In
CVPR, 2015.
[15] P.W. Li, W.K. Wong, K.Y. Chan, and E. S.T. Lai. SWIRLS-An Evolving Nowcasting System. Hong Kong
Special Administrative Region Government, 2000.
[16] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR,
2015.
[17] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In ICML, pages 1310–1318, 2013.
[18] M. Ranzato, A. Szlam, J. Bruna, M. Mathieu, R. Collobert, and S. Chopra. Video (language) modeling: a baseline for generative models of natural videos. arXiv preprint arXiv:1412.6604, 2014.
[19] M. Reyniers. Quantitative Precipitation Forecasts Based on Radar Observations: Principles, Algorithms and Operational Systems. Institut Royal Météorologique de Belgique, 2008.
[20] H. Sakaino. Spatio-temporal image pattern prediction method based on a physical model with time-varying optical flow. IEEE Transactions on Geoscience and Remote Sensing, 51(5-2):3023–3036, 2013.
[21] N. Srivastava, E. Mansimov, and R. Salakhutdinov. Unsupervised learning of video representations using LSTMs. In ICML, 2015.
[22] J. Sun, M. Xue, J. W. Wilson, I. Zawadzki, S. P. Ballard, J. Onvlee-Hooimeyer, P. Joe, D. M. Barker, P. W. Li, B. Golding, M. Xu, and J. Pinto. Use of NWP for nowcasting convective precipitation: Recent progress and challenges. Bulletin of the American Meteorological Society, 95(3):409–426, 2014.
[23] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In NIPS, pages 3104–3112, 2014.
[24] T. Tieleman and G. Hinton. Lecture 6.5 - RMSProp: Divide the gradient by a running average of its recent
magnitude. Coursera Course: Neural Networks for Machine Learning, 4, 2012.
[25] W.C. Woo and W.K. Wong. Application of optical flow techniques to rainfall nowcasting. In the 27th
Conference on Severe Local Storms, 2014.
[26] K. Xu, J. Ba, R. Kiros, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell:
Neural image caption generation with visual attention. In ICML, 2015.
5,475 | 5,956 | Scheduled Sampling for Sequence Prediction with
Recurrent Neural Networks
Samy Bengio, Oriol Vinyals, Navdeep Jaitly, Noam Shazeer
Google Research
Mountain View, CA, USA
{bengio,vinyals,ndjaitly,noam}@google.com
Abstract
Recurrent Neural Networks can be trained to produce sequences of tokens given
some input, as exemplified by recent results in machine translation and image
captioning. The current approach to training them consists of maximizing the
likelihood of each token in the sequence given the current (recurrent) state and the
previous token. At inference, the unknown previous token is then replaced by a
token generated by the model itself. This discrepancy between training and inference can yield errors that can accumulate quickly along the generated sequence.
We propose a curriculum learning strategy to gently change the training process
from a fully guided scheme using the true previous token, towards a less guided
scheme which mostly uses the generated token instead. Experiments on several sequence prediction tasks show that this approach yields significant improvements.
Moreover, it was used successfully in our winning entry to the MSCOCO image
captioning challenge, 2015.
1 Introduction
Recurrent neural networks can be used to process sequences, either as input, output or both. While
they are known to be hard to train when there are long-term dependencies in the data [1], some
versions like the Long Short-Term Memory (LSTM) [2] are better suited for this. In fact, they have
recently shown impressive performance in several sequence prediction problems including machine
translation [3], contextual parsing [4], image captioning [5] and even video description [6].
In this paper, we consider the set of problems that attempt to generate a sequence of tokens of
variable size, such as the problem of machine translation, where the goal is to translate a given
sentence from a source language to a target language. We also consider problems in which the input
is not necessarily a sequence, like the image captioning problem, where the goal is to generate a
textual description of a given image.
In both cases, recurrent neural networks (or their variants like LSTMs) are generally trained to
maximize the likelihood of generating the target sequence of tokens given the input. In practice, this
is done by maximizing the likelihood of each target token given the current state of the model (which
summarizes the input and the past output tokens) and the previous target token, which helps the
model learn a kind of language model over target tokens. However, during inference, true previous
target tokens are unavailable, and are thus replaced by tokens generated by the model itself, yielding
a discrepancy between how the model is used at training and inference. This discrepancy can be
mitigated by the use of a beam search heuristic maintaining several generated target sequences, but
for continuous state space models like recurrent neural networks, there is no dynamic programming
approach, so the effective number of sequences considered remains small, even with beam search.
1
The main problem is that mistakes made early in the sequence generation process are fed as input
to the model and can be quickly amplified because the model might be in a part of the state space it
has never seen at training time.
Here, we propose a curriculum learning approach [7] to gently bridge the gap between training and
inference for sequence prediction tasks using recurrent neural networks. We propose to change the
training process in order to gradually force the model to deal with its own mistakes, as it would
have to during inference. Doing so, the model explores more during training and is thus more robust
to correct its own mistakes at inference as it has learned to do so during training. We will show
experimentally that this approach yields better performance on several sequence prediction tasks.
The paper is organized as follows: in Section 2, we present our proposed approach to better train
sequence prediction tasks with recurrent neural networks; this is followed by Section 3 which draws
links to some related approaches. We then present some experimental results in Section 4 and
conclude in Section 5.
2 Proposed Approach
We are considering supervised tasks where the training set is given in terms of N input/output pairs $\{X^i, Y^i\}_{i=1}^{N}$, where $X^i$ is the input and can be either static (like an image) or dynamic (like a sequence), while the target output $Y^i$ is a sequence $y_1^i, y_2^i, \ldots, y_{T_i}^i$ of a variable number of tokens that belong to a fixed known dictionary.
2.1 Model
Given a single input/output pair (X, Y ), the log probability P (Y |X) can be computed as:
$$\log P(Y \mid X) = \log P(y_1^T \mid X) = \sum_{t=1}^{T} \log P(y_t \mid y_1^{t-1}, X) \qquad (1)$$

where Y is a sequence of length T represented by tokens $y_1, y_2, \ldots, y_T$. The latter term in the above equation is estimated by a recurrent neural network with parameters $\theta$ by introducing a state vector $h_t$ that is a function of the previous state $h_{t-1}$ and the previous output token $y_{t-1}$, i.e.

$$\log P(y_t \mid y_1^{t-1}, X; \theta) = \log P(y_t \mid h_t; \theta) \qquad (2)$$

where $h_t$ is computed by a recurrent neural network as follows:

$$h_t = \begin{cases} f(X; \theta) & \text{if } t = 1, \\ f(h_{t-1}, y_{t-1}; \theta) & \text{otherwise.} \end{cases} \qquad (3)$$
$P(y_t \mid h_t; \theta)$ is often implemented as a linear projection (see footnote 1) of the state vector $h_t$ into a vector of scores, one for each token of the output dictionary, followed by a softmax transformation to ensure the scores are properly normalized (positive and summing to 1). $f(h, y)$ is usually a non-linear function that combines the previous state and the previous output in order to produce the current state.
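To make the recurrence concrete, here is a minimal NumPy sketch of one decoding step. It uses a plain tanh RNN as a stand-in for the LSTM actually used in the paper, and all weight shapes and names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
H, V = 512, 8857                             # state size and vocabulary size (illustrative)
W_hh = rng.normal(scale=0.01, size=(H, H))   # state-to-state weights
E = rng.normal(scale=0.01, size=(V, H))      # token embedding table
W_out = rng.normal(scale=0.01, size=(V, H))  # projection of h_t to vocabulary scores

def step(h_prev, y_prev):
    """One recurrence step h_t = f(h_{t-1}, y_{t-1}) followed by the
    softmax output distribution P(y_t | h_t)."""
    h = np.tanh(W_hh @ h_prev + E[y_prev])   # plain tanh RNN as an LSTM stand-in
    scores = W_out @ h
    exp = np.exp(scores - scores.max())      # numerically stable softmax
    return h, exp / exp.sum()

h, probs = step(np.zeros(H), 0)              # probs sums to 1 over the V tokens
```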
This means that the model focuses on learning to output the next token given the current state
of the model AND the previous token. Thus, the model represents the probability distribution of
sequences in the most general form - unlike Conditional Random Fields [8] and other models that assume independence between outputs at different time steps, given latent variable states.
The capacity of the model is only limited by the representational capacity of the recurrent and
feedforward layers. LSTMs, with their ability to learn long range structure are especially well suited
to this task and make it possible to learn rich distributions over sequences.
In order to learn variable length sequences, a special token, <EOS>, that signifies the end of a
sequence is added to the dictionary and the model. During training, <EOS> is concatenated to the
end of each sequence. During inference, the model generates tokens until it generates <EOS>.
1. Although one could also use a multi-layered non-linear projection.
2.2 Training
Training recurrent neural networks to solve such tasks is usually accomplished by using mini-batch stochastic gradient descent to look for a set of parameters $\theta^\star$ that maximizes the log likelihood of producing the correct target sequence $Y^i$ given the input data $X^i$ for all training pairs $(X^i, Y^i)$:

$$\theta^\star = \arg\max_{\theta} \sum_{(X^i, Y^i)} \log P(Y^i \mid X^i; \theta). \qquad (4)$$

2.3 Inference
During inference the model can generate the full sequence $y_1^T$ given X by generating one token at a time, and advancing time by one step. When an <EOS> token is generated, it signifies the end of the sequence. For this process, at time t, the model needs as input the output token $y_{t-1}$ from the last time step in order to produce $y_t$. Since we do not have access to the true previous token, we can instead either select the most likely one given our model, or sample according to it.
Searching for the sequence Y with the highest probability given X is too expensive because of the combinatorial growth in the number of sequences. Instead we use a beam search procedure to generate the k 'best' sequences. We do this by maintaining a heap of the m best candidate sequences. At each time step new candidates are generated by extending each candidate by one token and adding them to the heap. At the end of the step, the heap is re-pruned to only keep m candidates. The beam search is truncated when no new sequences are added, and the k best sequences are returned.
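A minimal sketch of this heap-based beam search follows. The interface is an assumption: `step_fn(h, y)` is taken to return the next state and a vector of log-probabilities, and every vocabulary token is expanded (in practice one would restrict to the top few):

```python
import heapq

def beam_search(step_fn, h0, start, eos, width=10, max_len=30):
    """Maintain a beam of the `width` best partial sequences, extend each
    by one token per step, and re-prune; finished sequences (ending in
    <EOS>) are set aside and the best ones returned."""
    beam = [(0.0, [start], h0)]  # (cumulative log-prob, tokens, state)
    done = []
    for _ in range(max_len):
        candidates = []
        for logp, seq, h in beam:
            h_new, log_probs = step_fn(h, seq[-1])
            for tok, lp in enumerate(log_probs):
                candidates.append((logp + lp, seq + [tok], h_new))
        beam = heapq.nlargest(width, candidates, key=lambda c: c[0])
        done += [c for c in beam if c[1][-1] == eos]
        beam = [c for c in beam if c[1][-1] != eos]
        if not beam:
            break
    return heapq.nlargest(width, done or beam, key=lambda c: c[0])
```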
While beam search is often used for discrete state based models like Hidden Markov Models where
dynamic programming can be used, it is harder to use efficiently for continuous state based models
like recurrent neural networks, since there is no way to factor the followed state paths in a continuous
space, and hence the actual number of candidates that can be kept during beam search decoding is
very small.
In all these cases, if a wrong decision is taken at time t − 1, the model can be in a part of the state space that is very different from those visited under the training distribution, and for which it doesn't know what to do. Worse, this can easily lead to cumulative bad decisions - a classic problem in sequential Gibbs-sampling-type approaches, where future samples can have no influence on the past.
2.4 Bridging the Gap with Scheduled Sampling
The main difference between training and inference for sequence prediction tasks when predicting token $y_t$ is whether we use the true previous token $y_{t-1}$ or an estimate $\hat{y}_{t-1}$ coming from the model itself.

We propose here a sampling mechanism that will randomly decide, during training, whether we use $y_{t-1}$ or $\hat{y}_{t-1}$. Assuming we use a mini-batch based stochastic gradient descent approach, for every token to predict $y_t \in Y$ of the i-th mini-batch of the training algorithm, we propose to flip a coin and use the true previous token with probability $\epsilon_i$, or an estimate coming from the model itself with probability $(1 - \epsilon_i)$ (see footnote 2). The estimate of the model can be obtained by sampling a token according to the probability distribution modeled by $P(y_{t-1} \mid h_{t-1})$, or can be taken as $\arg\max_s P(y_{t-1} = s \mid h_{t-1})$. This process is illustrated in Figure 1.
When $\epsilon_i = 1$, the model is trained exactly as before, while when $\epsilon_i = 0$ the model is trained in the same setting as inference. We propose here a curriculum learning strategy to go from one to the other: intuitively, at the beginning of training, sampling from the model would yield a random token since the model is not well trained, which could lead to very slow convergence, so selecting the true previous token more often should help; on the other hand, at the end of training, $\epsilon_i$ should favor sampling from the model more often, as this corresponds to the true inference situation, and one expects the model to already be good enough to handle it and sample reasonable tokens.
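The per-token coin flip can be written in a few lines. This sketch is ours and plugs into a step function like the one in Section 2.1; it covers both variants mentioned above (sampling from $P(y_{t-1} \mid h_{t-1})$ or taking its argmax):

```python
import numpy as np

rng = np.random.default_rng(0)

def choose_prev_token(y_true_prev, probs_prev, eps_i, sample=True):
    """Flip a coin for this position: with probability eps_i feed the true
    previous token; otherwise feed the model's own estimate, drawn from
    P(y_{t-1} | h_{t-1}) (sample=True) or taken as its argmax."""
    if rng.random() < eps_i:
        return y_true_prev  # training-as-usual branch (teacher forcing)
    if sample:
        return int(rng.choice(len(probs_prev), p=probs_prev))
    return int(np.argmax(probs_prev))
```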
2. Note that in the experiments, we flipped the coin for every token. We also tried to flip the coin once per sequence, but the results were much worse, most probably because consecutive errors are amplified during the first rounds of training.
Figure 1: Illustration of the Scheduled Sampling approach, where one flips a coin at every time step to decide to use the true previous token or one sampled from the model itself. [Diagram omitted.]
Figure 2: Examples of decay schedules. [Plot omitted: exponential decay, inverse sigmoid decay, and linear decay curves of $\epsilon_i$.]
We thus propose to use a schedule to decrease $\epsilon_i$ as a function of i itself, in a manner similar to how the learning rate is decreased in most modern stochastic gradient descent approaches. Examples of such schedules can be seen in Figure 2:
• Linear decay: $\epsilon_i = \max(\epsilon, k - c\,i)$ where $0 \le \epsilon < 1$ is the minimum amount of truth to be given to the model and k and c provide the offset and slope of the decay, which depend on the expected speed of convergence.
• Exponential decay: $\epsilon_i = k^i$ where $k < 1$ is a constant that depends on the expected speed of convergence.
• Inverse sigmoid decay: $\epsilon_i = k/(k + \exp(i/k))$ where $k \ge 1$ depends on the expected speed of convergence.
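The three schedules translate directly into code; the constants below are illustrative placeholders, since the paper leaves k, c and $\epsilon$ to be tuned to the expected speed of convergence:

```python
import numpy as np

def linear_decay(i, k=1.0, c=1e-3, eps_min=0.0):
    """eps_i = max(eps_min, k - c*i); offset k and slope c are illustrative."""
    return max(eps_min, k - c * i)

def exponential_decay(i, k=0.999):
    """eps_i = k**i with k < 1."""
    return k ** i

def inverse_sigmoid_decay(i, k=100.0):
    """eps_i = k / (k + exp(i / k)) with k >= 1."""
    return k / (k + np.exp(i / k))
```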
We call our approach Scheduled Sampling. Note that when we sample the previous token $\hat{y}_{t-1}$ from the model itself while training, we could back-propagate the gradient of the losses at times t to T through that decision. This was not done in the experiments described in this paper and is left for future work.
3 Related Work
The discrepancy between the training and inference distributions has already been noticed in the
literature, in particular for control and reinforcement learning tasks.
SEARN [9] was proposed to tackle problems where supervised training examples might be different
from actual test examples when each example is made of a sequence of decisions, like acting in a
complex environment where a few mistakes of the model early in the sequential decision process
might compound and yield a very poor global performance. Their proposed approach involves a
meta-algorithm where at each meta-iteration one trains a new model according to the current policy
(essentially the expected decisions for each situation), applies it on a test set and modifies the next
iteration policy in order to account for the previous decisions and errors. The new policy is thus a
combination of the previous one and the actual behavior of the model.
In comparison to SEARN and related ideas [10, 11], our proposed approach is completely online: a single model is trained and the policy slowly evolves during training, instead of a batch approach, which makes it much faster to train (see footnote 3). Furthermore, SEARN has been proposed in the context of
reinforcement learning, while we consider the supervised learning setting trained using stochastic
gradient descent on the overall objective.
Other approaches have considered the problem from a ranking perspective, in particular for parsing
tasks [12] where the target output is a tree. In this case, the authors proposed to use a beam search
both during training and inference, so that both phases are aligned. The training beam is used to find
3. In fact, in the experiments we report in this paper, our proposed approach was not meaningfully slower (nor faster) to train than the baseline.
the best current estimate of the model, which is compared to the guided solution (the truth) using a
ranking loss. Unfortunately, this is not feasible when using a model like a recurrent neural network
(which is now the state-of-the-art technique in many sequential tasks), as the state sequence cannot
be factored easily (because it is a multi-dimensional continuous state) and thus beam search is hard
to use efficiently at training time (as well as inference time, in fact).
Finally, [13] proposed an online algorithm for parsing problems that adapts the targets through the
use of a dynamic oracle that takes into account the decisions of the model. The trained model
is a perceptron and is thus not state-based like a recurrent neural network, and the probability of
choosing the truth is fixed during training.
4 Experiments
We describe in this section experiments on three different tasks, in order to show that scheduled
sampling can be helpful in different settings. We report results on image captioning, constituency
parsing and speech recognition.
4.1 Image Captioning
Image captioning has attracted a lot of attention in the past year. The task can be formulated as a
mapping of an image onto a sequence of words describing its content in some natural language, and
most proposed approaches employ some form of recurrent network structure with simple decoding
schemes [5, 6, 14, 15, 16]. A notable exception is the system proposed in [17], which does not
directly optimize the log likelihood of the caption given the image, and instead proposes a pipelined
approach.
Since an image can have many valid captions, the evaluation of this task is still an open problem. Some attempts have been made to design metrics that positively correlate with human evaluation [18], and a common set of tools have been published by the MSCOCO team [19].
We used the MSCOCO dataset from [19] to train our model. We trained on 75k images and report
results on a separate development set of 5k additional images. Each image in the corpus has 5 different captions, so the training procedure picks one at random, creates a mini-batch of examples,
and optimizes the objective function defined in (4). The image is preprocessed by a pretrained convolutional neural network (without the last classification layer) similar to the one described in [20],
and the resulting image embedding is treated as if it was the first word from which the model starts
generating language. The recurrent neural network generating words is an LSTM with one layer
of 512 hidden units, and the input words are represented by embedding vectors of size 512. The number of words in the dictionary is 8857. We used an inverse sigmoid decay schedule for $\epsilon_i$ for the scheduled sampling approach.
Table 1 shows the results on various metrics on the development set. Each of these metrics is
a variant of estimating the overlap between the obtained sequence of words and the target one.
Since there were 5 target captions per image, the best result is always chosen. To the best of our
knowledge, the baseline results are consistent (slightly better) with the current state-of-the-art on
that task. While dropout helped in terms of log likelihood (as expected but not shown), it had a
negative impact on the real metrics. On the other hand, scheduled sampling successfully trained a
model more resilient to failures due to training and inference mismatch, which likely yielded higher
quality captions according to all the metrics. Ensembling models also yielded better performance,
both for the baseline and the schedule sampling approach. It is also interesting to note that a model
trained while always sampling from itself (hence in a regime similar to inference), dubbed Always
Sampling in the table, yielded very poor performance, as expected because the model has a hard
time learning the task in that case. We also trained a model with scheduled sampling, but instead
of sampling from the model, we sampled from a uniform distribution, in order to verify that it was
important to build on the current model and that the performance boost was not just a simple form
of regularization. We called this Uniform Scheduled Sampling and the results are better than the
baseline, but not as good as our proposed approach. We also experimented with flipping the coin
once per sequence instead of once per token, but the results were as poor as the Always Sampling
approach.
Table 1: Various metrics (the higher the better) on the MSCOCO development set for the image captioning task.

Approach vs Metric | BLEU-4 | METEOR | CIDER
Baseline | 28.8 | 24.2 | 89.5
Baseline with Dropout | 28.1 | 23.9 | 87.0
Always Sampling | 11.2 | 15.7 | 49.7
Scheduled Sampling | 30.6 | 24.3 | 92.1
Uniform Scheduled Sampling | 29.2 | 24.2 | 90.9
Baseline ensemble of 10 | 30.7 | 25.1 | 95.7
Scheduled Sampling ensemble of 5 | 32.3 | 25.4 | 98.7
It's worth noting that we used our scheduled sampling approach to participate in the 2015 MSCOCO image captioning challenge [21] and ranked first in the final leaderboard.
4.2 Constituency Parsing
Another less obvious connection with the any-to-sequence paradigm is constituency parsing. Recent work [4] has proposed an interpretation of a parse tree as a sequence of linear 'operations' that build
up the tree. This linearization procedure allowed them to train a model that can map a sentence onto
its parse tree without any modification to the any-to-sequence formulation.
The trained model has one layer of 512 LSTM cells and words are represented by embedding vectors
of size 512. We used an attention mechanism similar to the one described in [22] which helps,
when considering the next output token to produce yt , to focus on part of the input sequence only
by applying a softmax over the LSTM state vectors corresponding to the input sequence. The input
word dictionary contained around 90k words, while the target dictionary contained 128 symbols used to describe the tree. We used an inverse sigmoid decay schedule for $\epsilon_i$ in the scheduled sampling approach.
Parsing is quite different from image captioning as the function that one has to learn is almost
deterministic. In contrast to an image having a large number of valid captions, most sentences have
a unique parse tree (although some very difficult cases exist). Thus, the model operates almost
deterministically, which can be seen by observing that the train and test perplexities are extremely
low compared to image captioning (1.1 vs. 7).
This different operating regime makes for an interesting comparison, as one would not expect the
baseline algorithm to make many mistakes. However, and as can be seen in Table 2, scheduled
sampling has a positive effect which is additive to dropout. In this table we report the F1 score on the
WSJ 22 development set [23]. We should also emphasize that there are only 40k training instances,
so overfitting contributes largely to the performance of our system. Whether the effect of sampling
during training helps with regard to overfitting or the training/inference mismatch is unclear, but the
result is positive and additive with dropout. Once again, a model trained by always sampling from itself instead of using the ground-truth previous token as input yielded very bad results, in fact so bad that the resulting trees were often not valid trees (hence the '-' in the corresponding F1 metric).
Table 2: F1 score (the higher the better) on the validation set of the parsing task.

Approach | F1
Baseline LSTM | 86.54
Baseline LSTM with Dropout | 87.0
Always Sampling | -
Scheduled Sampling | 88.08
Scheduled Sampling with Dropout | 88.68
4.3 Speech Recognition
For the speech recognition experiments, we used a slightly different setting from the rest of the
paper. Each training example is an input/output pair (X, Y), where X is a sequence of T input vectors $x_1, x_2, \ldots, x_T$ and Y is a sequence of T tokens $y_1, y_2, \ldots, y_T$, so each $y_t$ is aligned with the corresponding $x_t$. Here, $x_t$ are the acoustic features represented by log Mel filter bank spectra at frame t, and $y_t$ is the corresponding target. The targets used were HMM-state labels generated from a GMM-HMM recipe, using the Kaldi toolkit [24], but could very well have been phoneme labels.
This setting is different from the other experiments in that the model we used is the following:
$$\log P(Y \mid X; \theta) = \log P(y_1^T \mid x_1^T; \theta) = \sum_{t=1}^{T} \log P(y_t \mid y_1^{t-1}, x_1^t; \theta) = \sum_{t=1}^{T} \log P(y_t \mid h_t; \theta) \qquad (5)$$

where $h_t$ is computed by a recurrent neural network as follows:

$$h_t = \begin{cases} f(o_h, S, x_1; \theta) & \text{if } t = 1, \\ f(h_{t-1}, y_{t-1}, x_t; \theta) & \text{otherwise.} \end{cases} \qquad (6)$$

where $o_h$ is a vector of 0's with the same dimensionality as $h_t$, and S is an extra token added to the dictionary to represent the start of each sequence.
We generated data for these experiments using the TIMIT corpus (see footnote 4) and the Kaldi toolkit as described in [25]. Standard configurations were used for the experiments: 40-dimensional log Mel filter banks and their first- and second-order temporal derivatives were used as inputs for each frame. 180-dimensional targets were generated for each time frame using forced alignment to transcripts
using a trained GMM-HMM system. The training, validation and test sets have 3696, 400 and 192
sequences respectively, and their average length was 304 frames. The validation set was used to
choose the best epoch in training, and the model parameters from that epoch were used to evaluate
the test set.
The trained models had two layers of 250 LSTM cells and a softmax layer, for each of five configurations: a baseline configuration where the ground truth was always fed to the model, a configuration (Always Sampling) where the model was only fed its own predictions from the last time step, and three scheduled sampling configurations (Scheduled Sampling 1-3), where $\epsilon_i$ was ramped linearly from a maximum value to a minimum value over ten epochs and then kept constant at the final value. For each configuration, we trained 3 models and report average performance over them.
Training of each model was done over frame targets from the GMM. The baseline configurations
typically reached the best validation accuracy after approximately 14 epochs whereas the sampling
models reached the best accuracy after approximately 9 epochs, after which the validation accuracy
decreased. This is probably because the way we trained our models is not exact - it does not account
for the gradient of the sampling probabilities from which we sampled our targets. Future effort at
tackling this problem may further improve results.
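As a sketch, the ramp used for these configurations can be written as follows; whether $\epsilon$ is updated per epoch or per minibatch is our assumption here:

```python
def ramped_epsilon(epoch, eps_start, eps_end, ramp_epochs=10):
    """Ramp eps linearly from eps_start down to eps_end over `ramp_epochs`
    epochs, then hold it constant at eps_end."""
    frac = min(epoch / ramp_epochs, 1.0)
    return eps_start + frac * (eps_end - eps_start)
```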
Testing was done by finding the best sequence from beam search decoding (using a beam size of
10 beams) and computing the error rate over the sequences. We also report the next step error rate
(where the model was fed in the ground truth to predict the class of the next frame) for each of the
models on the validation set to summarize the performance of the models on the training objective.
Table 3 shows a summary of the results.
It can be seen that the baseline performs better next step prediction than the models that sample the
tokens for input. This is to be expected, since the former has access to the groundtruth. However, it
can be seen that the models that were trained with sampling perform better than the baseline during
decoding. It can also be seen that for this problem, the ?Always Sampling? model performs quite
4. https://catalog.ldc.upenn.edu/LDC93S1.
well. We hypothesize that this has to do with the nature of the dataset. The HMM-aligned states
have a lot of correlation - the same state appears as the target for several frames, and most of the
states are constrained only to go to a subset of other states. Next-step prediction with ground-truth labels on this task ends up paying disproportionate attention to the structure of the labels ($y_1^{t-1}$) and not enough to the acoustic input ($x_1^t$). Thus it achieves very good next-step prediction error when the ground-truth sequence is fed in with the acoustic information, but is not able to exploit the acoustic information sufficiently when the ground-truth sequence is not fed in. For this model
the testing conditions are too far from the training condition for it to make good predictions. The
model that is only fed its own prediction (Always Sampling) ends up exploiting all the information
it can find in the acoustic signal, and effectively ignores its own predictions to influence the next
step prediction. Thus at test time, it performs just as well as it does during training. A model such as
the attention model of [26] which predicts phone sequences directly, instead of the highly redundant
HMM state sequences, would not suffer from this problem because it would need to exploit both the
acoustic signal and the language model sufficiently to make predictions. Nevertheless, even in this
setting, adding scheduled sampling still helped to improve the decoding frame error rate.
Note that typically speech recognition experiments use HMMs to decode predictions from neural
networks in a hybrid model. Here we avoid using an HMM altogether and hence we do not have the
advantage of the smoothing that results from the HMM architecture and the language models. Thus
the results are not directly comparable to the typical hybrid model results.
Table 3: Frame Error Rate (FER) on the speech recognition experiments. In next-step prediction (reported on the validation set) the ground truth is fed in to predict the next target, as is done during training. In decoding experiments (reported on the test set), beam search is performed to find the best sequence. We report results on four different linear schedules of sampling, where $\epsilon_i$ was ramped down linearly from $\epsilon_s$ to $\epsilon_e$. For the baseline, the model was only fed the ground truth. See Section 4.3 for an analysis of the results.

Approach | $\epsilon_s$ | $\epsilon_e$ | Next Step FER | Decoding FER
Always Sampling | 0 | 0 | 34.6 | 35.8
Scheduled Sampling 1 | 0.25 | 0 | 34.3 | 34.5
Scheduled Sampling 2 | 0.5 | 0 | 34.1 | 35.0
Scheduled Sampling 3 | 0.9 | 0.5 | 19.8 | 42.0
Baseline LSTM | 1 | 1 | 15.0 | 46.0
5 Conclusion
Using recurrent neural networks to predict sequences of tokens has many useful applications like
machine translation and image description. However, the current approach to training them, predicting one token at a time, conditioned on the state and the previous correct token, is different from
how we actually use them and thus is prone to the accumulation of errors along the decision paths.
In this paper, we proposed a curriculum learning approach to slowly change the training objective
from an easy task, where the previous token is known, to a realistic one, where it is provided by the
model itself. Experiments on several sequence prediction tasks yield performance improvements,
while not incurring longer training times. Future work includes back-propagating the errors through
the sampling decisions, as well as exploring better sampling strategies including conditioning on
some confidence measure from the model itself.
References
[1] Y. Bengio, P. Simard, and P. Frasconi. Learning long term dependencies is hard. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.
[2] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8), 1997.
[3] I. Sutskever, O. Vinyals, and Q. Le. Sequence to sequence learning with neural networks. In Advances in
Neural Information Processing Systems, NIPS, 2014.
[4] O. Vinyals, L. Kaiser, T. Koo, S. Petrov, I. Sutskever, and G. Hinton. Grammar as a foreign language. In
arXiv:1412.7449, 2014.
[5] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In
IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2015.
[6] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell.
Long-term recurrent convolutional networks for visual recognition and description. In IEEE Conference
on Computer Vision and Pattern Recognition, CVPR, 2015.
[7] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In Proceedings of the International Conference on Machine Learning, ICML, 2009.
[8] J. D. Lafferty, A. McCallum, and F. C. N. Pereira. Conditional random fields: Probabilistic models for
segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on
Machine Learning, ICML, pages 282–289, San Francisco, CA, USA, 2001. Morgan Kaufmann Publishers
Inc.
[9] H. Daumé III, J. Langford, and D. Marcu. Search-based structured prediction as classification. Machine
Learning Journal, 2009.
[10] S. Ross, G. J. Gordon, and J. A. Bagnell. A reduction of imitation learning and structured prediction
to no-regret online learning. In Proceedings of the Workshop on Artificial Intelligence and Statistics,
AISTATS, 2011.
[11] A. Venkatraman, M. Herbert, and J. A. Bagnell. Improving multi-step prediction of learned time series
models. In Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI, 2015.
[12] M. Collins and B. Roark. Incremental parsing with the perceptron algorithm. In Proceedings of the
Association for Computational Linguistics, ACL, 2004.
[13] Y. Goldberg and J. Nivre. A dynamic oracle for arc-eager dependency parsing. In Proceedings of COLING, 2012.
[14] J. Mao, W. Xu, Y. Yang, J. Wang, Z. H. Huang, and A. Yuille. Deep captioning with multimodal recurrent
neural networks (m-rnn). In International Conference on Learning Representations, ICLR, 2015.
[15] R. Kiros, R. Salakhutdinov, and R. Zemel. Unifying visual-semantic embeddings with multimodal neural
language models. In TACL, 2015.
[16] A. Karpathy and F.-F. Li. Deep visual-semantic alignments for generating image descriptions. In IEEE
Conference on Computer Vision and Pattern Recognition, CVPR, 2015.
[17] H. Fang, S. Gupta, F. Iandola, R. K. Srivastava, L. Deng, P. Dollar, J. Gao, X. He, M. Mitchell, J. C.
Platt, C. L. Zitnick, and G. Zweig. From captions to visual concepts and back. In IEEE Conference on
Computer Vision and Pattern Recognition, CVPR, 2015.
[18] R. Vedantam, C. L. Zitnick, and D. Parikh. CIDEr: Consensus-based image description evaluation. In
IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2015.
[19] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft
coco: Common objects in context. arXiv:1405.0312, 2014.
[20] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal
covariate shift. In Proceedings of the International Conference on Machine Learning, ICML, 2015.
[21] Y. Cui, M. R. Ronchi, T.-Y. Lin, P. Dollár, and L. Zitnick. Microsoft COCO captioning challenge. http://mscoco.org/dataset/#captions-challenge2015, 2015.
[22] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate.
In International Conference on Learning Representations, ICLR, 2015.
[23] E. Hovy, M. Marcus, M. Palmer, L. Ramshaw, and R. Weischedel. Ontonotes: The 90% solution. In
Proceedings of the Human Language Technology Conference of the NAACL, Short Papers, pages 57–60,
New York City, USA, June 2006. Association for Computational Linguistics.
[24] D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek,
Y. Qian, P. Schwarz, J. Silovsky, G. Stemmer, and K. Vesely. The kaldi speech recognition toolkit. In
IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Processing
Society, December 2011. IEEE Catalog No.: CFP11SRW-USB.
[25] N. Jaitly. Exploring Deep Learning Methods for discovering features in speech signals. PhD thesis,
University of Toronto, 2014.
[26] Jan Chorowski, Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. End-to-end continuous speech
recognition using attention-based recurrent nn: First results. arXiv preprint arXiv:1412.1602, 2014.
Mind the Gap: A Generative Approach to
Interpretable Feature Selection and Extraction
Been Kim
Julie Shah
Massachusetts Institute of Technology
Cambridge, MA 02139
{beenkim, julie a shah}@csail.mit.edu
Finale Doshi-Velez
Harvard University
Cambridge, MA 02138
finale@seas.harvard.edu
Abstract
We present the Mind the Gap Model (MGM), an approach for interpretable feature extraction and selection. By placing interpretability criteria directly into the
model, we allow for the model to both optimize parameters related to interpretability and to directly report a global set of distinguishable dimensions to assist with
further data exploration and hypothesis generation. MGM extracts distinguishing
features on real-world datasets of animal features, recipes ingredients, and disease co-occurrence. It also maintains or improves performance when compared
to related approaches. We perform a user study with domain experts to show the
MGM's ability to help with dataset exploration.
1 Introduction
Not only are our data growing in volume and dimensionality, but the understanding that we wish to
gain from them is increasingly sophisticated. For example, an educator might wish to know what
features characterize different clusters of assignments to provide in-class feedback tailored to each
student's needs. A clinical researcher might apply a clustering algorithm to his patient cohort, and
then wish to understand what sets of symptoms distinguish clusters to assist in performing a differential diagnosis. More broadly, researchers often perform clustering as a tool for data exploration and
hypothesis generation. In these situations, the domain expert's goal is to understand what features
characterize a cluster, and what features distinguish between clusters.
Objectives such as data exploration present unique challenges and opportunities for problems in
unsupervised learning. While in more typical scenarios, the discovered latent structures are simply
required for some downstream task, such as features for a supervised prediction problem, in data
exploration, the model must provide information to a domain expert in a form that they can readily
interpret. It is not sufficient to simply list what observations are part of which cluster; one must also
be able to explain why the data partition in that particular way. These explanations must necessarily
be succinct, as people are limited in the number of cognitive entities that they can process at one
time [1].
The de-facto standard for summarizing clusters (and other latent factor representations) is to list the
most probable features of each factor. For example, top-N word lists are the de-facto standard for
presenting topics from topic models [2]; principle component vectors in PCA are usually described
by a list of dimensions with the largest magnitude values for the components with the largest magnitude eigenvalues. Sparsity-inducing versions of these models [3, 4, 5, 6] make this goal more
explicit by trying to limit the number of non-zero values in each factor. Other works make these
descriptions more intuitive by deriving disjunctive normal form (DNF) expressions for each cluster [7] or learning a set of important features and examples that characterizes each cluster [8]. While
these approaches might effectively characterize each cluster, they do not provide information about
what distinguishes clusters from each other. Understanding these differences is important in many
situations, such as when performing a differential diagnosis and computing relative risks [9, 10].
Techniques that combine variable selection and clustering assist in finding dimensions that
distinguish, rather than simply characterize, the clusters [11, 12]. Variable extraction methods,
such as PCA, project the data into a smaller number of dimensions and perform clustering there. In
contrast, variable selection methods choose a small number of dimensions to retain. Within variable selection approaches, filter methods (e.g. [13, 14, 15]) first select important dimensions and
then cluster based on those. Wrapper methods (e.g. [16]) iterate between selecting dimensions and
clustering to maximize a clustering objective. Embedded methods (e.g. [17, 18, 19]) combine variable selection and clustering into one objective. All of these approaches identify a small subset of
dimensions that can be used to form a clustering that is as good as (or better than) using all the
dimensions. A primary motivation for identifying this small subset is that one can then accurately
cluster future data with many fewer measurements per observation. However, identifying a minimal
set of distinguishing dimensions is the opposite of what is required in data exploration and hypothesis generation tasks. Here, the researcher desires a comprehensive set of distinguishing dimensions
to better understand the important patterns in the data.
In this work, we present a generative approach for discovering a global set of distinguishable dimensions when clustering high-dimensional data. Our goal is to find a comprehensive set of distinguishing dimensions to assist with further data exploration and hypothesis generation, rather than a few
dimensions that will distinguish the clusters. We use an embedded approach that incorporates interpretability criteria directly into the model. First, we use a logic-based feature extraction technique to
consolidate dimensions into easily-interpreted groups. Second, we define important groups as ones
having multi-modal parameter values, that is, groups that have a gap in their parameter values across
clusters. By building these human-oriented interpretability criteria directly into the model, we can
easily report back what an extracted set of features means (by its logical formula) and what sets of
features distinguish one cluster from another without any ad-hoc post-hoc analysis.
2
Model
We consider a data-set {wnd} with N observations and D binary dimensions. Our goal is to decompose these N observations into K clusters while simultaneously returning a comprehensive list of
what sets of dimensions d are important for distinguishing between the clusters.
MGM has two core elements which perform interpretable feature extraction and selection. At the
feature extraction stage, features are grouped together by logical formulas, which are easily interpreted by people [20, 21], allowing some dimensionality reduction while maintaining interpretability. Next, we select features for which there is a large separation, or a gap, in parameter values. From personal communication with domain experts across several domains, we observed that
separation, rather than simply variation, is often an aspect of interest, as it provides an unambiguous way to discriminate between clusters.
We focus on binary-valued data. Our feature extraction step involves consolidating dimensions into
groups. We posit that there are an infinite number of groups g, and a multinomial latent variable ld
that indicates the group to which dimension d belongs. Each group g is characterized by a latent
variable fg which contains the formula associated with the group g. In this work, we only consider
the formulas fg = or and fg = and, and constrain each dimension to belong to only one group. Simple
Boolean operations like or and and are easy to interpret by people. Requiring each dimension to be
part of only one group avoid having to solve a (possibly NP-complete) satisfiability problem as part
of the generative procedure.
Feature selection is performed through a binary latent variable yg which indicates whether each
group g is important for distinguishing clusters. If a group is important (yg = 1), then the probability
θgk that group g is present in an observation from cluster k is drawn from a bi-modal distribution
(modeled as a mixture of Beta distributions). If the group is unimportant (yg = 0), the probability
θgk is drawn from a uni-modal distribution. While a uni-modal distribution with high variance can
also produce both low and high values for the probability ?gk , it will also produce intermediate
values. However, draws from the bi-modal distribution will have a clear gap between low and
high values. This definition of important distributions is distinct from the criterion in [17], where
parameters for important distributions were selected from a uni-modal distribution and parameters
for unimportant dimensions were shared across all clusters. Figure 1b illustrates this difference.
[Figure 1: (a) The Mind the Gap graphical model. (b) A cartoon describing emissions from important dimensions: in our case, we define importance by separability, i.e., a gap, rather than simply variance. Thus, we distinguish panel (1) from panels (2) and (3), while [17] distinguishes between (2) and (3).]
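To make this criterion concrete, the following sketch scores a group by the largest gap between its adjacent sorted per-cluster presence probabilities; the helper name and example values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def gap_score(theta_g):
    """Largest gap between adjacent sorted per-cluster probabilities theta_gk.

    A bimodal 'important' pattern yields one large gap; a unimodal spread
    yields only small gaps, even when its overall variance is high.
    """
    s = np.sort(np.asarray(theta_g, dtype=float))
    return float(np.max(np.diff(s))) if s.size > 1 else 0.0

print(gap_score([0.05, 0.10, 0.90, 0.95]))  # 0.80: separated, like panel (1)
print(gap_score([0.10, 0.35, 0.60, 0.85]))  # 0.25: similar variance, no gap
```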
Generative Model The graphical model for MGM is shown in Figure 1. We assume that there are an
infinite number of possible groups g, each with an associated formula fg . Each dimension d belongs
to a group g, as indicated by ld . We also posit that there are a set of latent clusters k, each with
emission characteristics described below. The latent variable θgk corresponds to the probability that
group g is present in the data, and is drawn from a uni-modal or bi-modal distribution governed by
the parameters {γg, yg, tgk}. Each observation n belongs to exactly one latent cluster k, indicated
by zn . The binary variable ing indicates whether group g is present in observation n. Finally, the
probability of some observation wnd = 1 depends on whether its associated group g (indicated by
ld ) is present in the data (indicated by ing ) and the associated formula fg .
The complete generative process first involves assigning dimensions d to groups, choosing the formula fg associated with each group, and deciding whether each group g is important:
πl ∼ DP(αl)      yg ∼ Bernoulli(γg)      πf ∼ Dirichlet(αf)
γg ∼ Beta(β1, β2)      ld ∼ Multinomial(πl)      fg ∼ Multinomial(πf)
where DP is the Dirichlet process. Thus, there are an infinite number of potential groups; however,
given a finite number of dimensions, only a finite number of groups can be present in the data. Next,
emission parameters are selected for each cluster k:
If yg = 0:   θgk ∼ Beta(au, bu)
Else:        tgk ∼ Bernoulli(ρg)
             If tgk = 0:  θgk ∼ Beta(ab, bb)
             Else:        θgk ∼ Beta(at, bt)
Finally, observations wnd are generated:
πz ∼ Dirichlet(αz)      zn ∼ Multinomial(πz)      ing ∼ Bernoulli(θg,zn)
If ing = 0:  {wnd | ld = g} = 0      Else:  {wnd | ld = g} ∼ Formula_fg
The above equations indicate that if ing = 0, that is, group g is not present in observation n, then
all wnd such that ld = g are also absent (i.e. wnd = 0). If the group g is present
(ing = 1) and the group formula fg = and, then all the dimensions associated with that group
are present (i.e. wnd = 1). Finally, if the group g is present (ing = 1) and the group formula
fg = or, then we sample the associated wnd from all possible configurations of wnd such that at
least one wnd = 1.
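As a concreteness check, a minimal ancestral sampler for a finite truncation of this process might look as follows; the hyperparameter values, the uniform stand-in for the DP over groups, and the rejection loop for the or emission are our assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
G, K, D, N = 4, 3, 12, 100           # assumed sizes: groups, clusters, dims, obs

l = rng.integers(0, G, size=D)       # group assignment l_d (truncated DP stand-in)
f = rng.choice(["or", "and"], size=G)        # formula f_g
y = rng.binomial(1, 0.5, size=G)             # importance indicator y_g
theta = np.where(                            # theta_gk: bimodal if important
    y[:, None] == 1,
    np.where(rng.binomial(1, 0.5, (G, K)) == 1,
             rng.beta(1, 9, (G, K)),         # low mode
             rng.beta(9, 1, (G, K))),        # high mode
    rng.beta(5, 5, (G, K)))                  # unimodal if unimportant

pi_z = rng.dirichlet(np.ones(K))
W = np.zeros((N, D), dtype=int)
for n in range(N):
    z = rng.choice(K, p=pi_z)                # cluster z_n
    for g in range(G):
        dims = np.where(l == g)[0]
        if dims.size == 0 or rng.random() > theta[g, z]:
            continue                         # i_ng = 0: group's dims stay 0
        if f[g] == "and":
            W[n, dims] = 1                   # every dimension in the group fires
        else:                                # "or": any config with >= 1 one
            w = rng.binomial(1, 0.5, dims.size)
            while w.sum() == 0:
                w = rng.binomial(1, 0.5, dims.size)
            W[n, dims] = w
```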
Figure 2: Motivating examples with cartoons from three clusters (vacation, student, winter) and the
distinguishable dimensions discovered by the MGM.
Let Θ = {yg, γg, tgk, θgk, ld, fg, zn, ing} be the set of variables in the MGM. Given a set of
observations {wnd}, the posterior over Θ factors as

Pr({yg, γg, tgk, θgk, ld, fg, zn, ing} | {wnd}) ∝ ∏_g [ p(yg|·) p(γg|·) p(fg|·) ∏_k p(tgk|γg) p(θgk|tgk, yg) ] · p(πl|·) ∏_d p(ld|πl) · p(πz|·) ∏_n p(zn|πz) · ∏_n ∏_g p(ing|θ, zn) · ∏_n ∏_d p(wnd|ing, f, ld).    (1)
Most of these terms are straightforward to compute given the generative model. The likelihood
term p(wnd|ing, f, ld) can be expanded as

p(wn· | ing, f, ld) = ∏_{d,g} 0^{1(ing=1)(1−SAT(g; wn·, fg, ld))} · 1^{1(ing=1) SAT(g; wn·, fg, ld)} · 0^{1(ing=0) 1(ld=g) 1(wnd=1)} · 1^{1(ing=0) 1(ld=g) 1(wnd=0)}    (2)

where we use wn· to indicate the vector of measurements associated with observation n. The function SAT(g; wn·, fg, ld) indicates whether the associated formula fg is satisfied, where fg involves
the dimensions d of wn· that belong to group g.
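A direct transcription of SAT into code is short; this sketch assumes the argument layout described above and is not the authors' implementation:

```python
import numpy as np

def sat(g, w_n, f, l):
    """SAT(g; w_n, f_g, l_d): is formula f[g] satisfied by observation w_n
    restricted to the dimensions d with l[d] == g?"""
    vals = w_n[np.asarray(l) == g]
    if vals.size == 0:
        return True                   # vacuously satisfied for empty groups
    return bool(vals.all()) if f[g] == "and" else bool(vals.any())

w_n = np.array([1, 0, 1, 1])
l = np.array([0, 0, 1, 1])            # dims 0,1 in group 0; dims 2,3 in group 1
print(sat(0, w_n, ["or", "and"], l))  # True: at least one of dims 0,1 is on
print(sat(1, w_n, ["or", "and"], l))  # True: both dims 2,3 are on
```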
Motivating Example Here we provide an example to illustrate the properties of MGM on a synthetic
data-set of 400 cartoon faces. Each cartoon face can be described by eight features: earmuffs, scarf,
hat, sunglasses, pencil, silly glasses, face color, mouth shape (see Figure 2). The cartoon faces
belong to three clusters. Winter faces tend to have earmuffs and scarves. Student faces tend to
have silly glasses and pencils. Vacation faces tend to have hats and sunglasses. Face color does not
distinguish between the different clusters.
The MGM discovers four distinguishing sets of features: the vacation cluster has hat or sunglasses,
the winter cluster has earmuffs or scarfs or smile, and the student cluster has silly glasses as well as
pencils. Face color does not appear because it does not distinguish between the groups. However,
we do identify both hats and sunglasses as important, even though only one of those two features
is important for distinguishing the vacation cluster from the other clusters: our model aims to find
a comprehensive list the distinguishing features for a human expert to later review for interesting
patterns, not a minimal subset for classification. By consolidating features?such as (sunglasses or
hat)?we still provide a compact summary of the ways in which the clusters can be distinguished.
3 Inference
Solving Equation 1 is computationally intractable. We use a variational approach to approximate the
true posterior distribution p(yg, γg, tgk, θgk, ld, fg, zn, ing | {wnd}) with a factored distribution:

q(yg) ∼ Bernoulli(ηg)      q(tgk) ∼ Bernoulli(τgk)      q(γg) ∼ Beta(ℓg1, ℓg2)
q(θgk) ∼ Beta(ϑgk1, ϑgk2)      q(πz) ∼ Dirichlet(ν)      q(ld) ∼ Multinomial(cd)
q(zn) ∼ Multinomial(φn)      q(ing) ∼ Bernoulli(ong)      q(fg) ∼ Bernoulli(eg)
where in addition we use a weak-limit approximation to the Dirichlet process to approximate the distribution over group assignments ld. Minimizing the Kullback-Leibler divergence between the true
posterior p(Θ|{wnd}) and the variational distribution q(Θ) corresponds to maximizing the evidence
lower bound (the ELBO) E_q[log p(Θ, {wnd})] + H(q), where H(q) is the entropy.
Because of the conjugate exponential family terms, most of the expressions in the ELBO are straightforward to compute. The most challenging part is determining how to optimize the variational
terms q(ld), q(ing), and q(fg) that are involved in the likelihood in Equation 2. Here, we first relax
our generative process for or so that it corresponds to independently sampling each wnd with some
probability s. Thus, Equation 2 becomes
p(wn· | ing, fg, ld) = ∏_{d,g} 0^{1(fg=and) 1(ld=g) 1(ing=1) 1(wnd=0)} · 1^{1(fg=and) 1(ld=g) 1(ing=1) 1(wnd=1)} · (1−s)^{1(fg=or) 1(ld=g) 1(ing=1) 1(wnd=0)} · s^{1(fg=or) 1(ld=g) 1(ing=1) 1(wnd=1)} · 0^{1(ing=0) 1(ld=g) 1(wnd=1)} · 1^{1(ing=0) 1(ld=g) 1(wnd=0)}    (3)
With this relaxation, the expression for the entire evidence lower bound is straight-forward to compute. (The full derivations are given in the supplementary materials.)
However, the logical formulas in Equation 3 still impose hard, combinatorial constraints on settings
of the variables {ing , fg , ld } that are associated with the logical formulas. Specifically, if the values
for the formula choice {fg } and group assignments {ld } are fixed, then the value of ing is also fixed
because the feature extraction step is deterministic. Once ing is fixed, however, the relationships
between all the other variables are conjugate in the exponential family. Therefore, we alternate our
inference between the extraction-related variables {ing , fg , ld } and the selection-related variables
{yg, γg, tgk, θgk, zn}.
Feature Extraction We consider only degenerate distributions q(ing ), q(fg ), q(ld ) that put mass on
only one setting of the variables. Note that this is still a valid setting for the variational inference
as fixing values for ing , fg , and ld , which corresponds to a degenerate Beta or Dirichlet prior, only
means that we are further limiting our set of variational distributions. Not fully optimizing a lower
bound due to this constraint can only lower the lower bound.
We perform an agglomerative procedure for assigning dimensions to groups. We begin our search
with each dimension d assigned to its own formula ld = d, fd = or. Merges of groups are explored
using a combination of data-driven and random proposals, in which we also explore changing the
formula assignment of the group. For the data-driven proposals, we use an initial run of a vanilla
k-means clustering algorithm to test whether combining two or more groups results in an extracted
feature that has high variance. At each iteration, we compute the ELBO for non-overlapping subsets
of these proposals and choose the agglomeration with the highest ELBO.
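The search loop can be summarized as follows. This is a sketch in which gap_objective is a stand-in scorer (the summed across-cluster gap of each extracted feature) rather than the paper's actual ELBO, and the data-driven and random proposal heuristics are replaced by exhaustive pairwise proposals:

```python
import numpy as np
from itertools import combinations

def extract(W, groups, formulas):
    # i_ng for each group: does observation n satisfy the group's formula?
    feats = []
    for dims, f in zip(groups, formulas):
        cols = W[:, dims]
        feats.append(cols.all(axis=1) if f == "and" else cols.any(axis=1))
    return np.stack(feats, axis=1).astype(float)

def gap_objective(W, groups, formulas, z, K):
    # Stand-in scorer (NOT the paper's ELBO): summed across-cluster gap of
    # each extracted feature's mean; assumes K >= 2 and non-empty clusters.
    F = extract(W, groups, formulas)
    means = np.stack([F[z == k].mean(axis=0) for k in range(K)])   # (K, G)
    return float(np.diff(np.sort(means, axis=0), axis=0).max(axis=0).sum())

def agglomerate(W, z, K, n_iters=10):
    groups = [[d] for d in range(W.shape[1])]       # start: one group per dim
    formulas = ["or"] * len(groups)
    for _ in range(n_iters):
        cur, best = gap_objective(W, groups, formulas, z, K), None
        for i, j in combinations(range(len(groups)), 2):
            for f in ("or", "and"):                 # also explore the formula
                g2 = [g for t, g in enumerate(groups) if t not in (i, j)]
                f2 = [x for t, x in enumerate(formulas) if t not in (i, j)]
                g2.append(groups[i] + groups[j]); f2.append(f)
                s = gap_objective(W, g2, f2, z, K)
                if s > cur and (best is None or s > best[0]):
                    best = (s, g2, f2)
        if best is None:
            break                                    # no merge helps; stop
        _, groups, formulas = best
    return groups, formulas
```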
Feature Selection Given a particular setting of the extraction variables {ing, fg, ld}, the remaining variables {yg, γg, tgk, θgk, zn} are all in the exponential family. The corresponding posterior
distributions q(yg), q(γg), q(tgk), q(θgk), and q(zn) can be optimized via coordinate ascent [22].
4 Results
We applied our MGM to both standard benchmark and more interesting data sets. In all cases, we
ran 5 restarts of the MGM. Inference was run for 40 iterations or until the ELBO improved by less
than 0.1 relative to the previous iteration. Twenty possible merges were explored in each iteration;
        MGM         Kmeans     HFS(G)      Law        DPM         HFS(L)      CC
Faces   0.59 (13)   0.46 (4)   0.627 (16)  0.454 (4)  0.481 (12)  0.569 (12)  0.547 (4)
Digits  0.53 (13)   0.45 (13)  0.258 (13)  0.254 (6)  0.176 (5)   0.354 (11)  0.364 (10)
Table 1: Mutual information and number of clusters (in parentheses) for UCI benchmarks. The
mutual information is with respect to the true class labels (higher is better). Performance values for
HFS(G), Law, DPM, HFS(L), and CC are taken from [17].
Figure 3: Results on real-world datasets: animal dataset (left), recipe dataset (middle) and disease
dataset (right). Each row represents an important feature. Lighter boxes indicate that the feature is
likely to be present in the cluster, while darker boxes are unlikely to be present.
each merge exploration involved combining two existing groups into a new group. If we failed
to accept our data-driven candidate merge proposals more than three times within an iteration, we
switched to random proposals for the remaining proposals. We swept over the number of clusters
from K=4 to K=16 and reported the results with the highest ELBO.
4.1 Benchmark Problems: MGM discriminates classes
We compared the classification performance of our clustering algorithms on several UCI benchmark
problems [23]. The digits data set consists of 11000 16×16 grayscale images, 1100 for each digit.
The faces data set consists of 640 32×30 images of 20 people, with 32 images of each person from
various angles. In both cases, we binarized the images, setting the value to 0 if the value was less
than 128, 1 if the value was greater than 128. These two data-sets are chosen as they are discrete
and we have the same versions for comparison to results cited in [17].
The mutual information between our discovered clusters and the true classes in the data sets is
shown in Table 1. A higher mutual information between our clustering and known labels is one
way to objectively show that our clusters correspond to groups that humans find interesting (i.e. the
human-provided classification labels). MGM is second only to HFS(G) on the Faces dataset and is
the highest-scoring model on the Digits dataset. It always outperforms k-means.
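The evaluation itself is standard; a sketch of the mutual-information computation with scikit-learn, on illustrative labels rather than the paper's data:

```python
from sklearn.metrics import mutual_info_score

true_labels = [0, 0, 1, 1, 2, 2, 2]        # ground-truth classes
cluster_ids = [1, 1, 0, 0, 2, 2, 0]        # cluster assignments from a model

# Mutual information (in nats) between the clustering and the class labels;
# higher means the clusters align better with the human-provided labels.
print(mutual_info_score(true_labels, cluster_ids))
```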
4.2 Demonstrating Interpretability: Real-world Applications
Our quantitative results on the benchmark datasets show that the structure recovered by our approach
is consistent with classes defined by human labelers better than or at the level of other clustering approaches. However, the dimensions in the image benchmarks do not have much associated meaning,
and our approach was designed for clustering, not classification. Here, we demonstrate the qualitative advantages of our approach on three more interesting datasets.
Animals The animals data set [24] consists of 21 biological and ecological properties of 101 animals
(such as "has wings" or "has teeth"). We are also provided class labels such as insects, mammals,
and birds. The result of our MGM is shown in Figure 3. Each row is a distinguishable feature; each
column is a cluster. Lighter color boxes in Figure 3 indicate that the feature is likely to be present in
the cluster, while darker color boxes indicate that the feature is unlikely to be present in the cluster.
Below each cluster, a few animals that belong to that cluster are listed.
We first note that, as desired, our model selects features that have large variation in their probabilities
across the clusters (rows in Figure 3). Thus, it is straight-forward to read what makes each column
different from the others: the mammals in the third column do not lay eggs; the insects in the fifth
column are toothless and invertebrates (and therefore have no tails). They are also rarely predators.
Unlike the land animals, many of the water animals in columns one and two do not breathe.
Recipes The recipes data set consists of ingredients from recipes taken from the computer cooking
contest.¹ There are 56 recipes, with 147 total ingredients. The recipes fall into four categories: pasta,
chili, brownies or punch. We seek to find ingredients and groups of ingredients that can distinguish
different types of recipes. Note: The names for each cluster have been filled in after the analysis,
based on the class label of the majority of the observations that were grouped into that cluster.
The MGM distills the 147 ingredients into only 3 important features. The first extracted feature
contains several spices, which are present in pasta, brownies, and chili but not in punch. Punch
is also distinguished from the other clusters by its lack of basic spices such as salt and pepper
(the second extracted feature). The third extracted feature contains a number of savory cooking
ingredients such as oil, garlic, and shallots. These are common in the pasta and chili clusters but
uncommon in the punch and brownie clusters.
Diseases Finally, we consider a data set of patients with autism spectrum disorder (ASD) accumulated over the first 15 years of life [25]. ASD is a complex disease that is often associated with
co-occurring conditions such as seizures and developmental delays. As most patients have very
few diagnoses, we limited our analysis to the 184 patients with at least 200 diagnoses and the 58
diagnoses that occurred in at least 5% of the patients. We binarized the count data to 0-1 values.
Our model reduces these 58 dimensions to 9 important sets of features. The extracted features had
many more dimensions than in the examples, so we only list two features from each group and
provide the total number in parentheses. Several of the groups of the extracted variables, which did
not use any auxiliary information, are similar to those from [25]. In particular, [25] reports clusters
of patients with epilepsy and cerebral palsy, patients with psychiatric disorders, and patients with
gastrointestinal disorders. Using our representation, we can easily see that there appears to be one
group of sick patients (cluster 1) for whom all features are likely. We can also see what features
distinguish clusters 0, 2, and 3 from each other by which ones are unlikely to be present.
4.3 Verifying interpretability: Human subject experiment
We conducted a pilot study to gather more qualitative evaluation of the MGM. We first divided the
ASD data into three datasets with random disjoint subsets of approximately 20 dimensions each. For
each of these subsets, we prepared the data in three formats: raw patient data (a list of symptoms),
clustered results (centroids) from K-means, and clustered results with the MGM with distinguishable
sets of features. Both sets of clustered results were presented as figures such as Figure 3, and the raw
data were presented in a spreadsheet. Three domain experts were then tasked to explore the different
data subsets in each format (so each participant saw all formats and all data subsets) and produce a
2-3 sentence executive summary of each. The different conditions serve as reference points for the
subjects to give more qualitative feedback about the MGM.
All subjects reported that the raw data, even with a "small" number of 20 dimensions, was impossible to summarize in a 5-minute period. Subjects also reported that the aggregation of states in
the MGM helped them summarize the data faster rather than having to aggregate manually. While
none of them explicitly indicated they noticed that all the rows of the MGM were relevant, they did
report that it was easier to find the differences. One strictly preferred the MGM over the other options,
while another found the MGM easier for making up a narrative but was overall satisfied with both
the MGM and the K-means clustering. One subject appreciated the succinctness of the MGM but
was concerned that "it may lose some information". This final comment motivates future work on
structured priors on what logical formulas should be allowed or likely; future user studies should
study the effects of the feature extraction and selection separately. Finally, a qualitative review of
the summaries produced found similar but slightly more compact organization of notes in the MGM
compared to the K-means model.
¹ Computer Cooking Contest: http://liris.cnrs.fr/ccc/ccc2014/doku.php
5 Discussion and Related Work
MGM combines extractive and selective approaches for finding a small set of distinguishable dimensions when performing unsupervised learning on high-dimensional data sets. Rather than relying
on criteria that use statistical measures of variation and then performing additional post-processing
to interpret the results, we build interpretable criteria directly into the model. Our logic-based feature extraction step allows us to find natural groupings of dimensions such as (backbone or tail
or toothless) in the animal data and (salt or pepper or cream) in the recipe data. Defining an interesting dimension as one whose parameters are drawn from a multi-modal distribution helps us
recover groups like pasta and punch. Providing such comprehensive lists of distinguishing dimensions assists in the data exploration and hypothesis generation process. Similarly, providing lists of
dimensions that have been consolidated in one extraction aids the human discovery process of why
those dimensions might be a meaningful group.
Closest to our work are feature selection approaches such as [17, 18, 19], which also use a mixture
of beta-distributions to identify feature types. In particular, [17] uses a similar hierarchy of Beta
and Bernoulli priors to identify important dimensions. They carefully choose the priors so that
some dimensions can be globally important, while other dimensions can be locally important. The
parameters for important dimensions are chosen IID from a Gaussian distribution, while values for
all unimportant dimensions come from the same background distribution.
Our approach draws parameters for important dimensions from distributions with multiple modes,
while unimportant dimensions are drawn from a uni-modal distribution. Thus, our model is more
expressive than approaches in which all unimportant dimension values are drawn from the same
distribution. It captures the idea that not all variation is important; clusters can vary in their emission
parameters for a particular dimension and that variation still might not be interesting. Specifically,
an important dimension is one where there is a gap between parameter values. Our logic-based
feature extraction step collapses the dimensionality further while retaining interpretability.
More broadly, there are many other lines of work that focus on creating latent variable models
based on diversity or differences. Methods for inducing diversity, such as determinantal point processes [26], have been used to find diverse solutions on applications ranging from detecting objects
in videos [27], topic modeling [28], and variable selection [29]. In these cases, the goal is to avoid
finding multiple very similar optima; while the generated solutions are different, the model itself
does not provide descriptions of what distinguishes one solution from the rest. Moreover, there may
be situations in which forcing solutions to be very different might not make sense: for example,
when clustering recipes, it may be very sensible for the ingredient ?salt? to be a common feature of
all clusters; likewise when clustering patients from an autism cohort, one would expect all patients
to have some kind of developmental disorder.
Finally, other approaches focus on building models in which factors describe what distinguishes
them from some baseline. For example, [30] builds a topic model in which each topic is described
by the difference from some baseline distribution. Contrastive learning [31] focuses on finding the
directions that are most distinguish background data from foreground data. Max-margin approaches
to topic models [32] try to find topics that can best assist in distinguishing between classes, but are
not necessarily readily interpretable themselves.
6 Conclusions and Future Work
We presented MGM, an approach for interpretable feature extraction and selection. By incorporating interpretability-based criteria directly into the model design, we found key dimensions that
distinguished clusters of animals, recipes, and patients. While this work focused on the clustering of
binary data, these ideas could also be applied to mixed and multiple membership models. Similarly,
notions of interestingness based on a gap could be applied to categorical and continuous data. It also
would be interesting to consider more expressive extracted features, such as more complex logical
formulas. Finally, while we learned feature extractions in a completely unsupervised fashion, our
generative approach also allows one to flexibly incorporate domain knowledge about possible group
memberships into the priors.
References
[1] G. A. Miller, "The magical number seven, plus or minus two: Some limits on our capacity for processing information," The Psychological Review, pp. 81-97, March 1956.
[2] D. M. Blei, A. Y. Ng, and M. I. Jordan, "Latent Dirichlet allocation," JMLR, pp. 3:993-1022, 2003.
[3] H. Zou, T. Hastie, and R. Tibshirani, "Sparse principal component analysis," Journal of Computational and Graphical Statistics, vol. 15, p. 2006, 2004.
[4] K. Than and T. B. Ho, "Fully sparse topic models," in ECML-PKDD, pp. 490-505, 2012.
[5] S. Williamson, C. Wang, K. Heller, and D. Blei, "The IBP compound Dirichlet process and its application to focused topic modeling," ICML, 2010.
[6] E. Elhamifar and R. Vidal, "Sparse subspace clustering: Algorithm, theory, and applications," IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 11, pp. 2765-2781, 2013.
[7] R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan, "Automatic subspace clustering of high dimensional data for data mining applications," SIGMOD Rec., vol. 27, pp. 94-105, June 1998.
[8] B. Kim, C. Rudin, and J. A. Shah, "The Bayesian Case Model: A generative approach for case-based reasoning and prototype classification," in NIPS, 2014.
[9] R. K, "Differential diagnosis in primary care," JAMA, vol. 307, no. 14, pp. 1533-1534, 2012.
[10] Z. J and Y. KF, "What's the relative risk?: A method of correcting the odds ratio in cohort studies of common outcomes," JAMA, vol. 280, no. 19, pp. 1690-1691, 1998.
[11] S. Alelyani, J. Tang, and H. Liu, "Feature selection for clustering: A review," Data Clustering: Algorithms and Applications, vol. 29, 2013.
[12] S. Guérif, "Unsupervised variable selection: when random rankings sound as irrelevancy," in FSDM, 2008.
[13] P. Mitra, C. Murthy, and S. K. Pal, "Unsupervised feature selection using feature similarity," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 3, pp. 301-312, 2002.
[14] M. Dash and H. Liu, "Feature selection for clustering," in KDD: Current Issues and New Applications, pp. 110-121, 2000.
[15] K. Tsuda, M. Kawanabe, and K.-R. Müller, "Clustering with the Fisher score," in NIPS, 2003.
[16] J. G. Dy and C. E. Brodley, "Feature selection for unsupervised learning," JMLR, pp. 5:845-889, 2004.
[17] Y. Guan, J. G. Dy, and M. I. Jordan, "A unified probabilistic model for global and local unsupervised feature selection," in ICML, pp. 1073-1080, 2011.
[18] W. Fan and N. Bouguila, "Online learning of a Dirichlet process mixture of generalized Dirichlet distributions for simultaneous clustering and localized feature selection," in ACML, pp. 113-128, 2012.
[19] G. Yu, R. Huang, and Z. Wang, "Document clustering via Dirichlet process mixture model with feature selection," in KDD, pp. 763-772, ACM, 2010.
[20] A. A. Freitas, "Comprehensible classification models: a position paper," ACM SIGKDD Explorations Newsletter, 2014.
[21] G. De'ath and K. E. Fabricius, "Classification and regression trees: a powerful yet simple technique for ecological data analysis," Ecology, vol. 81, no. 11, pp. 3178-3192, 2000.
[22] M. J. Wainwright and M. I. Jordan, "Graphical models, exponential families, and variational inference," Foundations and Trends in Machine Learning, vol. 1, no. 1-2, pp. 1-305, 2008.
[23] M. Lichman, "UCI machine learning repository," 2013.
[24] C. Kemp and J. B. Tenenbaum, "The discovery of structural form," PNAS, 2008.
[25] F. Doshi-Velez, Y. Ge, and I. Kohane, "Comorbidity clusters in autism spectrum disorders: an electronic health record time-series analysis," Pediatrics, vol. 133, no. 1, pp. e54-e63, 2014.
[26] A. Kulesza, Learning with Determinantal Point Processes. PhD thesis, University of Pennsylvania, 2012.
[27] A. Kulesza and B. Taskar, "Structured determinantal point processes," in NIPS, 2010.
[28] J. Y. Zou and R. P. Adams, "Priors for diversity in generative latent variable models," in NIPS, 2012.
[29] N. K. Batmanghelich, G. Quon, A. Kulesza, M. Kellis, P. Golland, and L. Bornn, "Diversifying sparsity using variational determinantal point processes," CoRR, 2014.
[30] J. Eisenstein, A. Ahmed, and E. P. Xing, "Sparse additive generative models of text," ICML, 2011.
[31] J. Y. Zou, D. J. Hsu, D. C. Parkes, and R. P. Adams, "Contrastive learning using spectral methods," in NIPS, 2013.
[32] J. Zhu, A. Ahmed, and E. P. Xing, "MedLDA: maximum margin supervised topic models for regression and classification," in ICML, pp. 1257-1264, ACM, 2009.
Max-Margin Deep Generative Models
Chongxuan Li†, Jun Zhu†, Tianlin Shi‡, Bo Zhang†
† Dept. of Comp. Sci. & Tech., State Key Lab of Intell. Tech. & Sys., TNList Lab,
Center for Bio-Inspired Computing Research, Tsinghua University, Beijing, 100084, China
‡ Dept. of Comp. Sci., Stanford University, Stanford, CA 94305, USA
{licx14@mails., dcszj@, dcszb@}tsinghua.edu.cn; stl501@gmail.com
Abstract
Deep generative models (DGMs) are effective on learning multilayered representations of complex data and performing inference of input data by exploring the
generative ability. However, little work has been done on examining or empowering the discriminative ability of DGMs on making accurate predictions. This paper presents max-margin deep generative models (mmDGMs), which explore the
strongly discriminative principle of max-margin learning to improve the discriminative power of DGMs, while retaining the generative capability. We develop an
efficient doubly stochastic subgradient algorithm for the piecewise linear objective. Empirical results on MNIST and SVHN datasets demonstrate that (1) max-margin learning can significantly improve the prediction performance of DGMs
and meanwhile retain the generative ability; and (2) mmDGMs are competitive to
the state-of-the-art fully discriminative networks by employing deep convolutional
neural networks (CNNs) as both recognition and generative models.
1 Introduction
Max-margin learning has been effective on learning discriminative models, with many examples
such as univariate-output support vector machines (SVMs) [5] and multivariate-output max-margin
Markov networks (or structured SVMs) [30, 1, 31]. However, the ever-increasing size of complex
data makes it hard to construct such a fully discriminative model, which has only single layer of
adjustable weights, due to the facts that: (1) the manually constructed features may not well capture
the underlying high-order statistics; and (2) a fully discriminative approach cannot reconstruct the
input data when noise or missing values are present.
To address the first challenge, previous work has considered incorporating latent variables into
a max-margin model, including partially observed maximum entropy discrimination Markov networks [37], structured latent SVMs [32] and max-margin min-entropy models [20]. All this work
has primarily focused on a shallow structure of latent variables. To improve the flexibility, learning SVMs with a deep latent structure has been presented in [29]. However, these methods do not
address the second challenge, which requires a generative model to describe the inputs. The recent work on learning max-margin generative models includes max-margin Harmoniums [4], max-margin topic models [34, 35], and nonparametric Bayesian latent SVMs [36], which can infer the
dimension of latent features from data. However, these methods only consider the shallow structure
of latent variables, which may not be flexible enough to describe complex data.
Much work has been done on learning generative models with a deep structure of nonlinear hidden
variables, including deep belief networks [25, 16, 23], autoregressive models [13, 9], and stochastic
variations of neural networks [3]. For such models, inference is a challenging problem, but fortunately there exists much recent progress on stochastic variational inference algorithms [12, 24].
However, the primary focus of deep generative models (DGMs) has been on unsupervised learning,
with the goals of learning latent representations and generating input samples. Though the latent
representations can be used with a downstream classifier to make predictions, it is often beneficial
to learn a joint model that considers both input and response variables. One recent attempt is the
conditional generative model [11], which treats labels as conditions of a DGM to describe input
data. This conditional DGM is learned in a semi-supervised setting, which is not exclusive to ours.
In this paper, we revisit the max-margin principle and present a max-margin deep generative model
(mmDGM), which learns multi-layer representations that are good for both classification and input inference. Our mmDGM conjoins the flexibility of DGMs on describing input data and the
strong discriminative ability of max-margin learning on making accurate predictions. We formulate
mmDGM as solving a variational inference problem of a DGM regularized by a set of max-margin
posterior constraints, which bias the model to learn representations that are good for prediction. We
define the max-margin posterior constraints as a linear functional of the target variational distribution of the latent presentations. Then, we develop a doubly stochastic subgradient descent algorithm,
which generalizes the Pegasos algorithm [28] to consider nontrivial latent variables. For the variational distribution, we build a recognition model to capture the nonlinearity, similar to [12, 24].
We consider two types of networks used as our recognition and generative models: multiple layer
perceptrons (MLPs) as in [12, 24] and convolutional neural networks (CNNs) [14]. Though CNNs
have shown promising results in various domains, especially for image classification, little work has
been done to take advantage of CNN to generate images. The recent work [6] presents a type of
CNN to map manual features including class labels to RBG chair images by applying unpooling,
convolution and rectification sequentially; but it is a deterministic mapping and there is no random
generation. Generative Adversarial Nets [7] employs a single such layer together with MLPs in a
minimax two-player game framework with primary goal of generating images. We propose to stack
this structure to form a highly non-trivial deep generative network to generate images from latent
variables learned automatically by a recognition model using standard CNN. We present the detailed
network structures in experiments part. Empirical results on MNIST [14] and SVHN [22] datasets
demonstrate that mmDGM can significantly improve the prediction performance, which is competitive to the state-of-the-art methods [33, 17, 8, 15], while retaining the capability of generating input
samples and completing their missing values.
2 Basics of Deep Generative Models
We start from a general setting, where we have N i.i.d. data X = {xn}_{n=1}^N. A deep generative
model (DGM) assumes that each xn ∈ R^D is generated from a vector of latent variables zn ∈ R^K,
which itself follows some distribution. The joint probability of a DGM is as follows:

p(X, Z | α, β) = ∏_{n=1}^N p(zn | α) p(xn | zn, β),    (1)

where p(zn|α) is the prior of the latent variables and p(xn|zn, β) is the likelihood model for generating observations. For notation simplicity, we define θ = (α, β). Depending on the structure
of z, various DGMs have been developed, such as the deep belief networks [25, 16], deep sigmoid
networks [21], deep latent Gaussian models [24], and deep autoregressive models [9]. In this paper,
we focus on the directed DGMs, which can be easily sampled from via an ancestral sampler.
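For instance, a directed DGM with a standard normal prior and a factorized Bernoulli likelihood parameterized by a one-hidden-layer network can be sampled ancestrally; the architecture and random stand-in weights below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
K, H, D = 8, 32, 64                     # assumed latent, hidden, data sizes
W1, b1 = rng.normal(0, 0.1, (H, K)), np.zeros(H)   # stand-in generator weights
W2, b2 = rng.normal(0, 0.1, (D, H)), np.zeros(D)

def ancestral_sample():
    z = rng.standard_normal(K)          # step 1: z ~ p(z) = N(0, I)
    h = np.tanh(W1 @ z + b1)            # deterministic nonlinearity
    p = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))
    return rng.binomial(1, p)           # step 2: x ~ p(x | z), Bernoulli pixels

x = ancestral_sample()
```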
However, in most cases learning DGMs is challenging due to the intractability of posterior inference.
The state-of-the-art methods resort to stochastic variational methods under the maximum likelihood
estimation (MLE) framework, θ̂ = argmax_θ log p(X|θ). Specifically, let q(Z) be the variational
distribution that approximates the true posterior p(Z|X, θ). A variational upper bound of the per-sample negative log-likelihood (NLL) −log p(xn|α, β) is:

L(θ, q(zn); xn) ≜ KL(q(zn)||p(zn|α)) − E_{q(zn)}[log p(xn|zn, β)],    (2)

where KL(q||p) is the Kullback-Leibler (KL) divergence between distributions q and p. Then,
L(θ, q(Z); X) ≜ Σ_n L(θ, q(zn); xn) upper bounds the full negative log-likelihood −log p(X|θ).
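Concretely, with a diagonal Gaussian q(zn) and the Bernoulli likelihood above, a single-sample Monte Carlo estimate of the bound in Eq. (2) uses the closed-form Gaussian KL plus a reparameterized likelihood sample; the toy model below is a stand-in, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(2)
K, D = 8, 64
W, b = rng.normal(0, 0.1, (D, K)), np.zeros(D)     # stand-in likelihood weights

def bound_estimate(x, mu, log_sigma):
    """One-sample estimate of L = KL(q||p) - E_q[log p(x|z)] for a toy DGM."""
    # KL(N(mu, sigma^2) || N(0, I)) in closed form
    kl = 0.5 * np.sum(np.exp(2 * log_sigma) + mu**2 - 1.0 - 2 * log_sigma)
    z = mu + np.exp(log_sigma) * rng.standard_normal(K)   # reparameterization
    p = 1.0 / (1.0 + np.exp(-(W @ z + b)))                # Bernoulli means
    log_lik = np.sum(x * np.log(p + 1e-9) + (1 - x) * np.log(1 - p + 1e-9))
    return kl - log_lik

x = rng.binomial(1, 0.5, D)
print(bound_estimate(x, np.zeros(K), np.zeros(K)))
```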
It is important to notice that if we do not make any restricting assumption on the variational distribution
q, the bound is tight by simply setting q(Z) = p(Z|X, θ). That is, the MLE is equivalent to
solving the variational problem: min_{θ, q(Z)} L(θ, q(Z); X). However, since the true posterior is intractable except in a handful of special cases, we must resort to approximation methods. One common
assumption is that the variational distribution is of some parametric form, q_φ(Z), and then we optimize the variational bound w.r.t. the variational parameters φ. For DGMs, another challenge arises:
the variational bound is often intractable to compute analytically. To address this challenge, the
early work further bounds the intractable parts with tractable ones by introducing more variational
parameters [26]. However, this technique increases the gap between the bound being optimized and
the log-likelihood, potentially resulting in poorer estimates. Much recent progress [12, 24, 21] has
been made on hybrid Monte Carlo and variational methods, which approximate the intractable expectations and their gradients over the parameters (θ, φ) via some unbiased Monte Carlo estimates.
Furthermore, to handle large-scale datasets, stochastic optimization of the variational objective can
be used with a suitable learning rate annealing scheme. It is important to notice that variance reduction is a key part of these methods in order to have fast and stable convergence.
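As a concrete illustration of such a Monte Carlo estimate, the following is a minimal numpy sketch of the per-sample bound in Eq. (2) for a diagonal-Gaussian q; the function name and the log_joint callback (which evaluates log p(x_n, z | θ) for whatever model is chosen) are illustrative assumptions of ours, not notation from the paper.

import numpy as np

def mc_variational_bound(x, mu_q, log_sigma_q, log_joint, L=10, rng=None):
    # Estimates L(theta, q; x_n) = E_q[log q(z) - log p(x_n, z | theta)]
    # by averaging over L samples z ~ q(z) = N(mu_q, diag(sigma_q^2)).
    rng = rng or np.random.default_rng(0)
    sigma_q = np.exp(log_sigma_q)
    vals = []
    for _ in range(L):
        z = mu_q + sigma_q * rng.standard_normal(mu_q.shape)
        # log-density of the diagonal Gaussian q at the sampled z
        log_q = -0.5 * np.sum(((z - mu_q) / sigma_q) ** 2
                              + 2 * log_sigma_q + np.log(2 * np.pi))
        vals.append(log_q - log_joint(x, z))
    return np.mean(vals)  # unbiased estimate of the per-sample bound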
Most work on directed DGMs has focused on the generative capability of inferring the observations, such as filling in missing values [12, 24, 21], while little work has investigated their predictive power, except for the semi-supervised DGM [11], which builds a DGM conditioned on the class labels and learns the parameters via MLE. Below, we present max-margin deep generative models, which exploit the discriminative max-margin principle to improve the predictive ability of the latent representations while retaining the generative capability.
3 Max-margin Deep Generative Models
We consider supervised learning, where the training data is a pair (x, y) with input features x ∈ R^D and ground-truth label y. Without loss of generality, we consider multi-class classification, where y ∈ C = {1, . . . , M}. A max-margin deep generative model (mmDGM) consists of two components: (1) a deep generative model that describes the input features; and (2) a max-margin classifier that accounts for the supervision. For the generative model, we can in theory adopt any DGM that defines a joint distribution over (X, Z) as in Eq. (1). For the max-margin classifier, instead of fitting the input features into a conventional SVM, we define the linear classifier on the latent representations, whose learning will be regularized by the supervision signal, as we shall see. Specifically, if the latent representation z is given, we define the latent discriminant function F(y, z, η; x) = η^⊤ f(y, z), where f(y, z) is an MK-dimensional vector that concatenates M subvectors, with the y-th being z and all others being zero, and η is the corresponding weight vector.
We consider the case where η is a random vector following some prior distribution p_0(η). Our goal is then to infer the posterior distribution p(η, Z | X, Y), which is typically approximated by a variational distribution q(η, Z) for computational tractability. Notice that this posterior is different from the one in the vanilla DGM. We expect that the supervision information will bias the learned representations to be more powerful for predicting labels at test time. To account for the uncertainty of (η, Z), we take the expectation and define the discriminant function F(y; x) = E_q[η^⊤ f(y, z)], and the final prediction rule that maps inputs to outputs is:

    ŷ = argmax_{y∈C} F(y; x).    (3)

Note that, unlike the conditional DGM [11], which puts the class labels upstream, the above classifier is a downstream model, in the sense that the supervision signal is determined by conditioning on the latent representations.
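Under the mean-field factorization introduced in Sec. 3.2, the expectation in rule (3) factorizes, and the rule reduces to dot products between blocks of E[η] and E[z]. A minimal sketch under that assumption (all variable names are ours):

import numpy as np

def predict(eta_mean, z_mean, num_classes):
    # f(y, z) places z in the y-th of M subvectors and zeros elsewhere, so
    # F(y; x) = E[eta]^T E[f(y, z)] is the dot product of block y with E[z].
    K = z_mean.shape[0]
    scores = [eta_mean[y * K:(y + 1) * K] @ z_mean for y in range(num_classes)]
    return int(np.argmax(scores))  # prediction rule (3)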
3.1 The Learning Problem
We want to jointly learn the parameters θ and infer the posterior distribution q(η, Z). Based on the equivalent variational formulation of MLE, we define the joint learning problem as solving:

    min_{θ, q(η,Z), ξ}  L(θ, q(η, Z); X) + C ∑_{n=1}^N ξ_n    (4)
    s.t.:  E_q[η^⊤ Δf_n(y)] ≥ Δℓ_n(y) − ξ_n,  ξ_n ≥ 0,  ∀n, ∀y ∈ C,

where Δf_n(y) = f(y_n, z_n) − f(y, z_n) is the difference of the feature vectors; Δℓ_n(y) is the loss function that measures the cost of predicting y when the true label is y_n; and C is a nonnegative regularization parameter balancing the two components. In the objective, the variational bound is defined as L(θ, q(η, Z); X) = KL(q(η, Z) || p_0(η, Z | α)) − E_q[log p(X | Z, β)], and the margin constraints come from the classifier (3). If we ignore the constraints (e.g., by setting C to 0), the solution of q(η, Z) is exactly the Bayesian posterior, and the problem is equivalent to performing MLE for θ.
By absorbing the slack variables, we can rewrite the problem in an unconstrained form:

    min_{θ, q(η,Z)}  L(θ, q(η, Z); X) + C R(q(η, Z); X),    (5)

where the hinge loss is R(q(η, Z); X) = ∑_{n=1}^N max_{y∈C} (Δℓ_n(y) − E_q[η^⊤ Δf_n(y)]). Due to the convexity of the max function, it is easy to verify that the hinge loss is an upper bound on the training error of classifier (3), that is, R(q(η, Z); X) ≥ ∑_n Δℓ_n(ŷ_n). Furthermore, the hinge loss is a convex functional over the variational distribution because of the linearity of the expectation operator. These properties make the hinge loss a good surrogate to optimize over. Previous work has explored this idea to learn discriminative topic models [34], but with a restriction to a shallow structure of hidden variables. Our work presents a significant extension to learning deep generative models, which poses new challenges for learning and inference.
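For intuition, here is a hedged numpy sketch of the per-sample hinge term inside R, using the 0/1 loss for Δℓ_n(y) and the block structure of f(y, z); it presumes the mean-field expectations E[η] and E[z] discussed in the next subsection, and all names are illustrative.

import numpy as np

def hinge_loss(eta_mean, Ez, y_true, num_classes):
    # max_y ( Dl_n(y) - mu^T Df_n(y) ), with Df_n(y) = f(y_true, z) - f(y, z).
    K = Ez.shape[0]
    score = lambda y: eta_mean[y * K:(y + 1) * K] @ Ez
    margins = [(0.0 if y == y_true else 1.0) - (score(y_true) - score(y))
               for y in range(num_classes)]
    return max(margins)  # upper-bounds the 0/1 training error of rule (3)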
3.2 The Doubly Stochastic Subgradient Algorithm
The variational formulation of problem (5) naturally suggests that we can develop a variational algorithm to address the intractability of the true posterior. We now present a new algorithm to solve problem (5). Our method is a doubly stochastic generalization of the Pegasos (i.e., Primal Estimated sub-GrAdient SOlver for SVM) algorithm [28] for classic SVMs with fully observed input features, with the new extension of dealing with a highly nontrivial structure of latent variables.

First, we make the structured mean-field (SMF) assumption that q(η, Z) = q(η) q_φ(Z). Under this assumption, the discriminant function factorizes as E_q[η^⊤ Δf_n(y)] = E_{q(η)}[η]^⊤ E_{q_φ(z_n)}[Δf_n(y)]. Moreover, we can solve for the optimal q(η) in analytical form. In fact, by the calculus of variations, we can show that, given the other parts, the solution is

    q(η) ∝ p_0(η) exp( η^⊤ ∑_{n,y} ω_n^y E_{q_φ}[Δf_n(y)] ),

where ω are the Lagrange multipliers (see [34] for details). If the prior is normal, p_0(η) = N(0, σ²I), we have the normal posterior q(η) = N(μ, σ²I), where μ = σ² ∑_{n,y} ω_n^y E_{q_φ}[Δf_n(y)]. Therefore, even though we did not assume a parametric form for q(η), the above result shows that the optimal posterior distribution of η is Gaussian. Since we only use the expectation in the optimization problem and in prediction, we can directly solve for the mean parameter μ instead of q(η). Further, in this case we can verify that KL(q(η) || p_0(η)) = ||μ||² / (2σ²), and the equivalent objective function in terms of μ can be written as:

    min_{θ, φ, μ}  L(θ, φ; X) + ||μ||² / (2σ²) + C R(μ, φ; X),    (6)

where R(μ, φ; X) = ∑_{n=1}^N ℓ(μ, φ; x_n) is the total hinge loss, and the per-sample hinge loss is ℓ(μ, φ; x_n) = max_{y∈C} ( Δℓ_n(y) − μ^⊤ E_{q_φ}[Δf_n(y)] ). Below, we present a doubly stochastic subgradient descent algorithm to solve this problem.
The first stochasticity arises from a stochastic estimate of the objective via random mini-batches. Specifically, batch learning needs to scan the full dataset to compute subgradients, which is often too expensive for large-scale datasets. One effective technique is stochastic subgradient descent [28], where at each iteration we randomly draw a mini-batch of the training data and then perform the variational updates over the small mini-batch. Formally, given a mini-batch of size m, we get an unbiased estimate of the objective:

    L̃_m := (N/m) ∑_{n=1}^m L(θ, φ; x_n) + ||μ||² / (2σ²) + (NC/m) ∑_{n=1}^m ℓ(μ, φ; x_n).
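In code, the unbiased mini-batch estimate is simply the batch objective with the per-sample sums rescaled by N/m; a sketch, with bound_fn and hinge_fn standing in for the per-sample terms above (both callbacks are our own placeholders):

import numpy as np

def minibatch_objective(batch, N, C, sigma2, mu, bound_fn, hinge_fn):
    # The expectation of this quantity over random mini-batches equals the
    # full objective (6); mu is a numpy vector (the classifier mean).
    m = len(batch)
    bound = sum(bound_fn(x) for x in batch)
    hinge = sum(hinge_fn(x) for x in batch)
    return (N / m) * bound + float(mu @ mu) / (2 * sigma2) + (N * C / m) * hinge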
The second stochasticity arises from a stochastic estimate of the per-sample variational bound and its subgradient, whose intractability calls for another Monte Carlo estimator. Formally, let z_n^l ∼ q_φ(z | x_n, y_n) be a set of samples from the variational distribution, where we explicitly note the conditioning. Then, estimates of the per-sample variational bound and the per-sample hinge loss are

    L̃(θ, φ; x_n) = (1/L) ∑_l ( log p(x_n, z_n^l | θ) − log q_φ(z_n^l) );
    ℓ̃(μ, φ; x_n) = max_y ( Δℓ_n(y) − (1/L) ∑_l μ^⊤ Δf_n(y, z_n^l) ),

where Δf_n(y, z_n^l) = f(y_n, z_n^l) − f(y, z_n^l). Note that L̃ is an unbiased estimate of L, while ℓ̃ is a biased estimate of ℓ. Nevertheless, we can still show that ℓ̃ is an upper-bound estimate of ℓ in expectation. Furthermore, this biasedness does not affect our estimate of the gradient. In fact, by using the equality ∇_φ q_φ(z) = q_φ(z) ∇_φ log q_φ(z), we can construct an unbiased Monte Carlo estimate of ∇_φ (L(θ, φ; x_n) + ℓ(μ, φ; x_n)) as:

    g_φ = (1/L) ∑_{l=1}^L ( log p(z_n^l, x_n) − log q_φ(z_n^l) + C μ^⊤ Δf_n(ỹ_n, z_n^l) ) ∇_φ log q_φ(z_n^l),    (7)

where the last term stems from the hinge loss, with the loss-augmented prediction ỹ_n = argmax_y ( Δℓ_n(y) + (1/L) ∑_l μ^⊤ f(y, z_n^l) ). For θ and μ, the estimates of the gradient ∇_θ L(θ, φ; x_n) and of the subgradient ∇_μ ℓ(μ, φ; x_n) are easier:

    g_θ = (1/L) ∑_l ∇_θ log p(x_n, z_n^l | θ),    g_μ = (1/L) ∑_l ( f(ỹ_n, z_n^l) − f(y_n, z_n^l) ).

Notice that the sampling and the gradient ∇_φ log q_φ(z_n^l) depend only on the variational distribution, not on the underlying model.
The above estimates consider the general case where the variational bound is intractable. In some cases, we can compute the KL-divergence term analytically, e.g., when the prior and the variational distribution are both Gaussian. In such cases, we only need to estimate the remaining intractable part by sampling, which often reduces the variance [12]. Similarly, we could use the expectation of the features directly in the computation of the subgradients (e.g., g_μ and g_φ) instead of sampling, if it can be computed analytically, which again can lead to variance reduction.

Algorithm 1: Doubly Stochastic Subgradient Algorithm
    Initialize θ, μ, and φ
    repeat
        draw a random mini-batch of m data points
        draw random samples ε from the noise distribution p(ε)
        compute the subgradient g = ∇_{θ,μ,φ} L̃(θ, μ, φ; X_m, ε)
        update the parameters (θ, μ, φ) using the subgradient g
    until convergence
    return θ, μ, and φ
With the above estimates of subgradients, we can use stochastic optimization methods such as
SGD [28] and AdaM [10] to update the parameters, as outlined in Alg. 1. Overall, our algorithm is
a doubly stochastic generalization of Pegasos to deal with the highly nontrivial latent variables.
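The overall loop can therefore be sketched as follows; params is a dict of numpy arrays standing in for (θ, μ, φ), and grad_fn is assumed to return the doubly stochastic subgradients of Eq. (7) and its companions. This is a skeleton under those assumptions, not the authors' Theano implementation.

import numpy as np

def train(data, params, grad_fn, z_dim, epochs=100, m=128, lr=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        idx = rng.permutation(len(data))
        for s in range(0, len(data), m):
            batch = [data[i] for i in idx[s:s + m]]
            eps = rng.standard_normal((len(batch), z_dim))  # noise eps ~ p(eps)
            grads = grad_fn(batch, params, eps)   # subgradient of the mini-batch objective
            for name in params:                   # plain SGD step; the paper uses AdaM [10]
                params[name] -= lr * grads[name]
    return params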
Now, the remaining question is how to define an appropriate variational distribution q_φ(z) so as to obtain a robust estimate of the subgradients as well as of the objective. Two types of methods have been developed for unsupervised DGMs, namely variance reduction [21] and auto-encoding variational Bayes (AVB) [12]. Though both methods can be used for our models, we focus on the AVB approach. For continuous variables Z, under certain mild conditions we can reparameterize the variational distribution q_φ(z) using some simple random variables ε. Specifically, we can draw samples ε from some simple distribution p(ε) and apply the transformation z = g_φ(ε, x, y) to get a sample from the distribution q(z | x, y). We refer the readers to [12] for more details. In our experiments, we consider the special Gaussian case, where we assume that the variational distribution is a multivariate Gaussian with a diagonal covariance matrix:

    q_φ(z | x, y) = N(μ(x, y; φ), σ²(x, y; φ)),    (8)

whose mean and variance are functions of the input data. This defines our recognition model. The reparameterization trick is then as follows: we first draw standard normal variables ε^l ∼ N(0, I) and then apply the transformation z_n^l = μ(x_n, y_n; φ) + σ(x_n, y_n; φ) ⊙ ε^l to get a sample. For simplicity, we assume that both the mean and the variance are functions of x only. However, it is worth emphasizing that although the recognition model is unsupervised, the parameters φ are learned in a supervised manner, because the subgradient (7) depends on the hinge loss. Further details of the experimental settings are presented in Sec. 4.1.
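A minimal sketch of this reparameterized sampling; the recognition_net callback returning (μ, log σ) is our own stand-in for the recognition network.

import numpy as np

def sample_z(x, recognition_net, L=1, rng=None):
    # z = mu(x) + sigma(x) * eps with eps ~ N(0, I), so gradients with respect
    # to the recognition parameters flow through mu and sigma (Eq. 8).
    rng = rng or np.random.default_rng(0)
    mu, log_sigma = recognition_net(x)
    eps = rng.standard_normal((L,) + mu.shape)
    return mu + np.exp(log_sigma) * eps  # shape (L, K): L samples of z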
4 Experiments
We now present experimental results on the widely adopted MNIST [14] and SVHN [22] datasets.
Though mmDGMs are applicable to any DGMs that define a joint distribution of X and Z, we
concentrate on the Variational Auto-encoder (VA) [12], which is unsupervised. We denote our mmDGM with VA by MMVA. In our experiments, we consider two types of recognition models: multilayer perceptrons (MLPs) and convolutional neural networks (CNNs). We implement all experiments based on Theano [2]; the source code is available at https://github.com/zhenxuan00/mmdgm.
4.1 Architectures and Settings
In the MLP case, we follow the settings in [11] to compare both the generative and discriminative capacity of VA and MMVA. In the CNN case, we use standard convolutional nets [14] with convolution and max-pooling operations as the recognition model, to obtain more competitive classification results. For the generative model, we use unconvnets [6] with a structure symmetric to the recognition model, so as to reconstruct the input images approximately. More specifically, the top-down generative model has the same structure as the bottom-up recognition model, but replaces max-pooling with the unpooling operation [6] and applies unpooling, convolution and rectification in order. The total number of parameters in the convolutional network is comparable with previous work [8, 17, 15]. For simplicity, we do not include mlpconv layers [17, 15] or contrast-normalization layers in our recognition model, but they are not exclusive to our model. We give the details of the network architectures in Appendix A.

In both settings, the mean and variance of the latent z are transformed from the last layer of the recognition model through a linear operation. It should be noted that we could use not only the expectation of z but also the activation of any layer in the recognition model as features. The only theoretical difference is where we add the hinge-loss regularization to the gradient and backpropagate it to previous layers. In all of the experiments, the mean of z has the same nonlinearity but typically a much lower dimension than the activation of the last layer in the recognition model, and hence often leads to worse performance. In the MLP case, we concatenate the activations of 2 layers as the features used in the supervised tasks. In the CNN case, we use the activations of the last layer as the features. We use AdaM [10] to optimize the parameters in all of the models. Although it is an adaptive gradient-based optimization method, we decay the global learning rate by a factor of three periodically after a sufficient number of epochs to ensure stable convergence.
We denote our mmDGM with MLPs by MMVA. To perform classification using VA, we first learn
the feature representations by VA, and then build a linear SVM classifier on these features using the
Pegasos stochastic subgradient algorithm [28]. This baseline will be denoted by VA+Pegasos. The
corresponding models with CNNs are denoted by CMMVA and CVA+Pegasos respectively.
4.2 Results on the MNIST dataset
We present both the prediction performance and the sample-generation results of MMVA and VA+Pegasos with both kinds of recognition models on the MNIST [14] dataset, which consists of images of 10 different classes (digits 0 to 9) of size 28×28, with 50,000 training samples, 10,000 validation samples and 10,000 test samples.
4.2.1 Predictive Performance

In the MLP case, we only use the 50,000 training data, and the parameters for classification are optimized according to the validation set. We choose C = 15 for MMVA and initialize it with an unsupervised pre-training procedure for classification. The first three rows in Table 1 compare VA+Pegasos, VA+Class-conditionVA and MMVA, where VA+Class-conditionVA refers to the best fully supervised model in [11]. Our model outperforms the baseline significantly. We further use the t-SNE algorithm [19] to embed the features learned by VA and MMVA in a 2D plane, which again demonstrates the stronger discriminative ability of MMVA (see Appendix B for details).

Table 1: Error rates (%) on MNIST dataset.

    Model                      Error Rate
    VA+Pegasos                 1.04
    VA+Class-conditionVA       0.96
    MMVA                       0.90
    CVA+Pegasos                1.35
    CMMVA                      0.45
    Stochastic Pooling [33]    0.47
    Network in Network [17]    0.47
    Maxout Network [8]         0.45
    DSN [15]                   0.39
In the CNN case, we use the 60,000 training data. Table 2 shows the effect of C on the classification error rate and the variational lower bound. Typically, as C gets larger, CMMVA learns more discriminative features, at the cost of a worse estimate of the data likelihood. However, if C is too small, the supervision is not enough to produce predictive features. Nevertheless, C = 10^3 is quite a good trade-off between classification performance and generative performance, and this is the default setting of CMMVA on MNIST throughout this paper. In this setting, the classification performance of our CMMVA model is comparable to the recent state-of-the-art fully discriminative networks (without data augmentation), shown in the last four rows of Table 1.

Figure 1: (a-b): randomly generated images by VA and MMVA, 3000 epochs; (c-d): randomly generated images by CVA and CMMVA, 600 epochs.
4.2.2 Generative Performance

We further investigate the generative capability of MMVA by generating samples. Fig. 1 illustrates images randomly sampled from the VA and MMVA models, where we output the expectation of the gray value at each pixel to get a smooth visualization. We do not pre-train our model in any setting when generating data, to show that MMVA (CMMVA) retains the generative capability of DGMs.

Table 2: Effects of C on MNIST dataset with a CNN recognition model.

    C       Error Rate (%)    Lower Bound
    0       1.35              -93.17
    1       1.86              -95.86
    10      0.88              -95.90
    10^2    0.54              -96.35
    10^3    0.45              -99.62
    10^4    0.43              -112.12
4.3 Results on the SVHN (Street View House Numbers) dataset

SVHN [22] is a large dataset consisting of color images of size 32×32. The task is to recognize the center digits in natural scene images, which is significantly harder than the classification of hand-written digits. We follow the work of [27, 8] to split the dataset into 598,388 training, 6,000 validation and 26,032 test images, and we preprocess the data by Local Contrast Normalization (LCN). We only consider the CNN recognition model here. The network structure is similar to that for MNIST. We set C = 10^4 for our CMMVA model on SVHN by default.
Table 3 shows the predictive performance. In this more challenging problem, we observe a larger improvement by CMMVA compared to CVA+Pegasos, suggesting that DGMs benefit greatly from max-margin learning on image classification. We also compare CMMVA with state-of-the-art results. To the best of our knowledge, there are no competitive generative models for classifying digits on the SVHN dataset with full labels.

Table 3: Error rates (%) on SVHN dataset.

    Model                      Error Rate
    CVA+Pegasos                25.3
    CMMVA                      3.09
    CNN [27]                   4.9
    Stochastic Pooling [33]    2.80
    Maxout Network [8]         2.47
    Network in Network [17]    2.35
    DSN [15]                   1.92
We further compare the generative capability of CMMVA and CVA to examine the benefits of jointly training DGMs and max-margin classifiers. Though CVA gives a tighter lower bound on the data likelihood and reconstructs data more elaborately, it fails to learn the pattern of digits in this complex scenario and cannot generate meaningful images. Visualizations of random samples from CVA and CMMVA are shown in Fig. 2. In this scenario, the hinge-loss regularization on the recognition model is useful for generating the main objects to be classified in the images.
4.4 Missing Data Imputation and Classification
Finally, we test all models on the task of missing data imputation. For MNIST, we consider two types
of missing values [18]: (1) Rand-Drop: each pixel is missing randomly with a pre-fixed probability;
and (2) Rect: a rectangle located at the center of the image is missing. Given the perturbed images,
we uniformly initialize the missing values between 0 and 1, and then iteratively do the following
steps: (1) using the recognition model to sample the hidden variables; (2) predicting the missing
values to generate images; and (3) using the refined images as the input of the next round. For
SVHN, we do the same procedure as for MNIST but initialize the missing values with Gaussian random variables, as the input distribution changes; a sketch of this iterative procedure follows below. Visualization results on MNIST and SVHN are presented in Appendix C and Appendix D, respectively.

Figure 2: (a): training data after LCN preprocessing; (b): random samples from CVA; (c-d): random samples from CMMVA when C = 10^3 and C = 10^4, respectively.
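A hedged sketch of the three-step imputation loop described above, with encode/decode standing in for the recognition and generative networks (both are assumed callbacks, not the paper's API):

import numpy as np

def impute(x_corrupt, mask, encode, decode, iters=100, rng=None):
    # mask is True where pixels are observed; missing entries are initialized
    # at random, then refined by alternating (1) sampling z, (2) reconstructing
    # x, and (3) overwriting only the missing positions.
    rng = rng or np.random.default_rng(0)
    x = np.where(mask, x_corrupt, rng.uniform(size=x_corrupt.shape))
    for _ in range(iters):
        z = encode(x)                         # sample hidden variables from q(z|x)
        x_hat = decode(z)                     # predicted pixel expectations
        x = np.where(mask, x_corrupt, x_hat)  # keep observed pixels fixed
    return x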
Intuitively, generative models with CNNs could be more powerful at learning patterns and high-level structures, while generative models with MLPs lean more toward reconstructing the pixels in detail. This conforms to the MSE results shown in Table 4: CVA and CMMVA outperform VA and MMVA with a missing rectangle, while VA and MMVA outperform CVA and CMMVA with random missing values. Compared with the baseline, mmDGMs also make more accurate completions when large patches are missing. All of the models infer missing values for 100 iterations.

Table 4: MSE on MNIST data with missing values in the testing procedure.

    Noise Type         VA      MMVA    CVA     CMMVA
    Rand-Drop (0.2)    0.0109  0.0110  0.0111  0.0147
    Rand-Drop (0.4)    0.0127  0.0127  0.0127  0.0161
    Rand-Drop (0.6)    0.0168  0.0165  0.0175  0.0203
    Rand-Drop (0.8)    0.0379  0.0358  0.0453  0.0449
    Rect (6×6)         0.0637  0.0645  0.0585  0.0597
    Rect (8×8)         0.0850  0.0841  0.0754  0.0724
    Rect (10×10)       0.1100  0.1079  0.0978  0.0884
    Rect (12×12)       0.1450  0.1342  0.1299  0.1090

We also compare the classification performance of CVA, CNN and CMMVA with Rect missing values in the testing procedure in Appendix E. CMMVA outperforms both CVA and CNN.
Overall, mmDGMs have a comparable capability of inferring missing values, and they prefer to learn high-level patterns rather than local details.
5 Conclusions
We propose max-margin deep generative models (mmDGMs), which conjoin the predictive power of the max-margin principle with the generative ability of deep generative models. We develop a doubly stochastic subgradient algorithm to learn all parameters jointly, and we consider two types of recognition models, with MLPs and CNNs respectively. In both cases, we present extensive results demonstrating that mmDGMs can significantly improve the prediction performance of deep generative models, while retaining their strong generative ability to generate input samples and complete missing values. In fact, by employing CNNs in both the recognition and generative models, we achieve low error rates on the MNIST and SVHN datasets, competitive with state-of-the-art fully discriminative networks.
Acknowledgments
The work was supported by the National Basic Research Program (973 Program) of China (Nos.
2013CB329403, 2012CB316301), National NSF of China (Nos. 61322308, 61332007), Tsinghua TNList Lab
Big Data Initiative, and Tsinghua Initiative Scientific Research Program (Nos. 20121088071, 20141080934).
References
[1] Y. Altun, I. Tsochantaridis, and T. Hofmann. Hidden Markov support vector machines. In ICML, 2003.
[2] F. Bastien, P. Lamblin, R. Pascanu, J. Bergstra, I. Goodfellow, A. Bergeron, N. Bouchard, D. Warde-Farley, and Y. Bengio. Theano: new features and speed improvements. In Deep Learning and Unsupervised Feature Learning NIPS Workshop, 2012.
[3] Y. Bengio, E. Laufer, G. Alain, and J. Yosinski. Deep generative stochastic networks trainable by backprop. In ICML, 2014.
[4] N. Chen, J. Zhu, F. Sun, and E. P. Xing. Large-margin predictive latent subspace learning for multi-view data analysis. IEEE Trans. on PAMI, 34(12):2365-2378, 2012.
[5] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273-297, 1995.
[6] A. Dosovitskiy, J. T. Springenberg, and T. Brox. Learning to generate chairs with convolutional neural networks. arXiv:1411.5928, 2014.
[7] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
[8] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. C. Courville, and Y. Bengio. Maxout networks. In ICML, 2013.
[9] K. Gregor, I. Danihelka, A. Mnih, C. Blundell, and D. Wierstra. Deep autoregressive networks. In ICML, 2014.
[10] D. P. Kingma and J. L. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[11] D. P. Kingma, D. J. Rezende, S. Mohamed, and M. Welling. Semi-supervised learning with deep generative models. In NIPS, 2014.
[12] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014.
[13] H. Larochelle and I. Murray. The neural autoregressive distribution estimator. In AISTATS, 2011.
[14] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, 1998.
[15] C. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. In AISTATS, 2015.
[16] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML, 2009.
[17] M. Lin, Q. Chen, and S. Yan. Network in network. In ICLR, 2014.
[18] R. J. Little and D. B. Rubin. Statistical Analysis with Missing Data. Wiley, 1987.
[19] L. van der Maaten and G. Hinton. Visualizing data using t-SNE. JMLR, 9:2579-2605, 2008.
[20] K. Miller, M. P. Kumar, B. Packer, D. Goodman, and D. Koller. Max-margin min-entropy models. In AISTATS, 2012.
[21] A. Mnih and K. Gregor. Neural variational inference and learning in belief networks. In ICML, 2014.
[22] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
[23] M. Ranzato, J. Susskind, V. Mnih, and G. E. Hinton. On deep generative models with applications to recognition. In CVPR, 2011.
[24] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.
[25] R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. In AISTATS, 2009.
[26] L. Saul, T. Jaakkola, and M. Jordan. Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research, 4:61-76, 1996.
[27] P. Sermanet, S. Chintala, and Y. LeCun. Convolutional neural networks applied to house numbers digit classification. In ICPR, 2012.
[28] S. Shalev-Shwartz, Y. Singer, N. Srebro, and A. Cotter. Pegasos: Primal estimated sub-gradient solver for SVM. Mathematical Programming, Series B, 2011.
[29] Y. Tang. Deep learning using linear support vector machines. In Challenges in Representation Learning Workshop, ICML, 2013.
[30] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2003.
[31] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output spaces. In ICML, 2004.
[32] C. J. Yu and T. Joachims. Learning structural SVMs with latent variables. In ICML, 2009.
[33] M. D. Zeiler and R. Fergus. Stochastic pooling for regularization of deep convolutional neural networks. In ICLR, 2013.
[34] J. Zhu, A. Ahmed, and E. P. Xing. MedLDA: Maximum margin supervised topic models. JMLR, 13:2237-2278, 2012.
[35] J. Zhu, N. Chen, H. Perkins, and B. Zhang. Gibbs max-margin topic models with data augmentation. JMLR, 15:1073-1110, 2014.
[36] J. Zhu, N. Chen, and E. P. Xing. Bayesian inference with posterior regularization and applications to infinite latent SVMs. JMLR, 15:1799-1847, 2014.
[37] J. Zhu, E. P. Xing, and B. Zhang. Partially observed maximum entropy discrimination Markov networks. In NIPS, 2008.
Cross-Domain Matching for Bag-of-Words Data
via Kernel Embeddings of Latent Distributions
Yuya Yoshikawa*
Nara Institute of Science and Technology
Nara, 630-0192, Japan
yoshikawa.yuya.yl9@is.naist.jp
Tomoharu Iwata
NTT Communication Science Laboratories
Kyoto, 619-0237, Japan
iwata.tomoharu@lab.ntt.co.jp
Hiroshi Sawada
NTT Service Evolution Laboratories
Kanagawa, 239-0847, Japan
sawada.hiroshi@lab.ntt.co.jp
Takeshi Yamada
NTT Communication Science Laboratories
Kyoto, 619-0237, Japan
yamada.tak@lab.ntt.co.jp
Abstract
We propose a kernel-based method for finding matching between instances across
different domains, such as multilingual documents and images with annotations.
Each instance is assumed to be represented as a multiset of features, e.g., a bag-ofwords representation for documents. The major difficulty in finding cross-domain
relationships is that the similarity between instances in different domains cannot
be directly measured. To overcome this difficulty, the proposed method embeds
all the features of different domains in a shared latent space, and regards each
instance as a distribution of its own features in the shared latent space. To represent the distributions efficiently and nonparametrically, we employ the framework
of the kernel embeddings of distributions. The embedding is estimated so as to
minimize the difference between distributions of paired instances while keeping
unpaired instances apart. In our experiments, we show that the proposed method
can achieve high performance on finding correspondence between multi-lingual
Wikipedia articles, between documents and tags, and between images and tags.
1 Introduction
The discovery of matched instances in different domains is an important task, which appears in natural language processing, information retrieval and data mining tasks such as finding the alignment
of cross-lingual sentences [1], attaching tags to images [2] or text documents [3], and matching user
identifications in different databases [4].
When given an instance in a source domain, our goal is to find the instance in a target domain that
is the most closely related to the given instance. In this paper, we focus on a supervised setting,
where correspondence information between some instances in different domains is given. To find
matching in a single domain, e.g., find documents relevant to an input document, a similarity (or
distance) measure between instances can be used. On the other hand, when trying to find matching
between instances in different domains, we cannot directly measure the distances since they consist of different types of features. For example, when matching documents in different languages,
since the documents have different vocabularies we cannot directly measure the similarities between
documents across different languages without dictionaries.
* The author moved to the Software Technology and Artificial Intelligence Research Laboratory (STAIR Lab) at Chiba Institute of Technology, Japan.
Figure 1: An example of the proposed method
used on a multilingual document matching
task. Correspondences between instances in
source (English) and target (Japanese) domains are observed. The proposed method assumes that each feature (vocabulary term) has
a latent vector in a shared latent space, and
each instance is represented as a distribution
of the latent vectors of the features associated
with the instance. Then, the distribution is
mapped as an element in a reproducing kernel
Hilbert space (RKHS) based on the kernel embeddings of distributions. The latent vectors
are estimated so that the paired instances are
embedded closer together in the RKHS.
One solution is to map instances in both the source and target domains into a shared latent space.
One such method is canonical correlation analysis (CCA) [5], which maps instances into a latent space by linear projection so as to maximize the correlation between paired instances in the latent space. However, in practice, CCA cannot solve non-linear relationship problems due to its linearity.
To find non-linear correspondence, kernel CCA [6] can be used. It has been reported that kernel
CCA performs well as regards document/sentence alignment between different languages [7, 8],
when searching for images from text queries [9] and when matching 2D-3D face images [10]. Note
that the performance of kernel CCA depends on how appropriately we define the kernel function
for measuring the similarity between instances within a domain. Many kernels, such as linear, polynomial and Gaussian kernels, cannot consider the occurrence of different but semantically similar words in two instances, because these kernels use the inner product between the feature vectors representing the instances. For example, the words 'PC' and 'Computer' are different but indicate the same meaning. Nevertheless, the kernel value between an instance consisting only of 'PC' and one consisting only of 'Computer' is equal to zero with linear and polynomial kernels. Even if a Gaussian kernel is used, the kernel value is determined only by the vector lengths of the instances.
In this paper, we propose a kernel-based cross-domain matching method that can overcome the
problem of kernel CCA. Figure 1 shows an example of the proposed method. The proposed method
assumes that each feature in source and target domains is associated with a latent vector in a shared
latent space. Since all the features are mapped into the latent space, the proposed method can measure the similarity between features in different domains. Then, each instance is represented as a
distribution of the latent vectors of features that are contained in the instance. To represent the distributions efficiently and nonparametrically, we employ the framework of the kernel embeddings of
distributions, which measures the difference between distributions in a reproducing kernel Hilbert
space (RKHS) without the need to define parametric distributions. The latent vectors are estimated
by minimizing the differences between the distributions of paired instances while keeping unpaired
instances apart. The proposed method can discover unseen matching in test data by using the distributions of the estimated latent vectors. We will explain matching between two domains below,
however, the proposed method can be straightforwardly extended to matching between three and
more domains by regarding one of the domains as a pivot domain.
In our experiments, we demonstrate the effectiveness of our proposed method in tasks that involve
finding the correspondence between multi-lingual Wikipedia articles, between documents and tags,
and between images and tags, by comparison with existing linear and non-linear matching methods.
2 Related Work
As described above, canonical correlation analysis (CCA) and kernel CCA have been successfully
used for finding various types of cross-domain matching. When we want to match cross-domain
instances represented by bag-of-words such as documents, bilingual topic models [1, 11] can also
be used. The difference between the proposed method and these methods is that since the proposed
method represents each instance as a set of latent vectors of its own features, the proposed method
can learn a more complex representation of the instance than these existing methods that represent
2
each instance as a single latent vector. Another difference is that the proposed method employs a
discriminative approach, while kernel CCA and bilingual topic models employ generative ones.
To model cross-domain data, deep learning and neural network approaches have been recently proposed [12, 13]. Unlike such approaches, the proposed method performs non-linear matching without
deciding the number of layers of the networks, which largely affects their performances.
A key technique of the proposed method is the kernel embeddings of distributions [14], which can
represent a distribution as an element in an RKHS, while preserving the moment information of
the distribution such as the mean, covariance and higher-order moments without density estimation. The kernel embeddings of distributions have been successfully used for a statistical test of the
independence of two sample sets [15], discriminative learning on distribution data [16], anomaly
detection for group data [17], density estimation [18] and a three variable interaction test [19]. Most
previous studies about the kernel embeddings of distributions consider cases where the distributions
are unobserved but the samples generated from the distributions are observed. Additionally, each
of the samples is represented as a dense vector. With the proposed method, the kernel embedding
technique cannot be used to represent the observed multisets of features such as bag-of-words for
documents, since each of the features is represented as a one-hot vector whose dimensions are zero
except for the dimension indicating that the feature has one. In this study, we benefit from the kernel
embeddings of distributions by representing each feature as a dense vector in a shared latent space.
The proposed method is inspired by the use of the kernel embeddings of distributions in bag-ofwords data classification [20] and regression [21]. Their methods can be applied to single domain
data, and the latent vectors of features are used to measure the similarity between the features in a
domain. Unlike these methods, the proposed method is used for the cross-domain matching of two
different types of domain data, and the latent vectors are used for measuring the similarity between
the features in different domains.
3 Kernel Embeddings of Distributions
In this section, we introduce the framework of the kernel embeddings of distributions. The kernel embeddings of distributions are used to embed any probability distribution P on a space X into a reproducing kernel Hilbert space (RKHS) H_k specified by a kernel k, and the distribution is represented as an element m(P) of the RKHS. More precisely, given a distribution P, the kernel embedding of the distribution, m(P), is defined as follows:

    m(P) := E_{x∼P}[k(·, x)] = ∫_X k(·, x) dP ∈ H_k,    (1)

where the kernel k is referred to as the embedding kernel. It is known that the kernel embedding m(P) preserves the properties of the probability distribution P, such as its mean, covariance and higher-order moments, when a characteristic kernel (e.g., the Gaussian RBF kernel) is used [22].
When a set of samples X = {x_l}_{l=1}^n is drawn from the distribution, interpreting the sample set X as the empirical distribution P̂ = (1/n) ∑_{l=1}^n δ_{x_l}(·), where δ_x(·) is the Dirac delta function at point x ∈ X, the empirical kernel embedding m(X) is given by

    m(X) = (1/n) ∑_{l=1}^n k(·, x_l),    (2)

which approximates m(P) with an error rate of ||m(X) − m(P)||_{H_k} = O_p(n^{−1/2}) [14]. Unlike kernel density estimation, the error rate of the kernel embeddings is independent of the dimensionality of the given distribution.
3.1 Measuring Difference between Distributions
By using the kernel embedding representation in Eq. (2), we can measure the difference between two distributions. Given two sets of samples X = {x_l}_{l=1}^n and Y = {y_{l'}}_{l'=1}^{n'}, where x_l and y_{l'} belong to the same space, we can obtain their kernel embedding representations m(X) and m(Y). Then, the difference between m(X) and m(Y) is given by

    D(X, Y) = ||m(X) − m(Y)||^2_{H_k}.    (3)

Intuitively, it reflects the difference in the moment information of the two distributions. The difference is equivalent to the square of the maximum mean discrepancy (MMD), which is used for a statistical test of the independence of two distributions [15]. The difference can be calculated by expanding Eq. (3) as follows:

    ||m(X) − m(Y)||^2_{H_k} = ⟨m(X), m(X)⟩_{H_k} + ⟨m(Y), m(Y)⟩_{H_k} − 2 ⟨m(X), m(Y)⟩_{H_k},    (4)

where ⟨·, ·⟩_{H_k} is the inner product in the RKHS. In particular, ⟨m(X), m(Y)⟩_{H_k} is given by

    ⟨m(X), m(Y)⟩_{H_k} = ⟨ (1/n) ∑_{l=1}^n k(·, x_l), (1/n') ∑_{l'=1}^{n'} k(·, y_{l'}) ⟩_{H_k} = (1/(n n')) ∑_{l=1}^n ∑_{l'=1}^{n'} k(x_l, y_{l'}),    (5)

and ⟨m(X), m(X)⟩_{H_k} and ⟨m(Y), m(Y)⟩_{H_k} can be calculated in the same way.
4 Proposed Method
Suppose that we are given a training set consisting of N instance pairs O = {(d_i^s, d_i^t)}_{i=1}^N, where d_i^s is the i-th instance in a source domain and d_i^t is the i-th instance in a target domain. These instances d_i^s and d_i^t are represented as multisets of features included in a source feature set F^s and a target feature set F^t, respectively; that is, each instance is represented as a bag-of-words (BoW). The goal of our task is to determine the unseen relationships between instances across the source and target domains in test data. The number of instances in the source domain may differ from that in the target domain.
4.1 Kernel Embeddings of Distributions in a Shared Latent Space

As described in Section 1, the difficulty in finding cross-domain instance matching is that the similarity between instances across the source and target domains cannot be measured directly. We have also noted that, although we can find a latent space in which similarity can be measured by using kernel CCA, standard kernel functions, e.g., a Gaussian kernel, cannot reflect the co-occurrence of different but related features in a kernel calculation between instances. To overcome these problems, we propose a new data representation for cross-domain instance matching. The proposed method assumes that each feature in the source feature set, f ∈ F^s, has a q-dimensional latent vector x_f ∈ R^q in a shared space. Likewise, each feature in the target feature set, g ∈ F^t, has a q-dimensional latent vector y_g ∈ R^q in the shared space. Since all the features in the source and target domains are mapped into a common shared space, the proposed method can capture the relationships between features both within each domain and across different domains. We define the sets of latent vectors in the source and target domains as X = {x_f}_{f∈F^s} and Y = {y_g}_{g∈F^t}, respectively.

The proposed method assumes that each instance is represented by a distribution (or multiset) of the latent vectors of the features contained in the instance. The i-th instance in the source domain, d_i^s, is represented by the set of latent vectors X_i = {x_f}_{f∈d_i^s}, and the j-th instance in the target domain, d_j^t, is represented by the set of latent vectors Y_j = {y_g}_{g∈d_j^t}. Note that X_i and Y_j lie in the same latent space.
In Section 3, we introduced the kernel embedding representation of a distribution and described how to measure the difference between two distributions when samples generated from the distributions are observed. In the proposed method, we employ the kernel embeddings of distributions to represent the distributions of the latent vectors for the instances. The kernel embedding representations for the i-th source and the j-th target domain instances are given by

    m(X_i) = (1/|d_i^s|) ∑_{f∈d_i^s} k(·, x_f),    m(Y_j) = (1/|d_j^t|) ∑_{g∈d_j^t} k(·, y_g).    (6)

Then, the difference between the distributions of the latent vectors is measured using Eq. (3); that is, the difference between the i-th source and the j-th target domain instances is given by

    D(X_i, Y_j) = ||m(X_i) − m(Y_j)||^2_{H_k}.    (7)
4.2 Model

The proposed method assumes that paired instances have similar distributions of latent vectors and that unpaired instances have different distributions. In accordance with this assumption, we define the likelihood of the relationship between the i-th source domain instance and the j-th target domain instance as follows:

    p(d_j^t | d_i^s, X, Y, γ) = exp(−D(X_i, Y_j)) / ∑_{j'=1}^N exp(−D(X_i, Y_{j'})),    (8)

where γ is the set of hyper-parameters of the embedding kernel used in Eq. (6). Eq. (8) is in fact the conditional probability with which the j-th target domain instance is chosen given the i-th source domain instance; this formulation is more efficient than considering a bidirectional matching. Intuitively, when distribution X_i is more similar to Y_j than to the other distributions {Y_{j'} | j' ≠ j}_{j'=1}^N, the probability takes a higher value.
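Numerically, Eq. (8) is a softmax over negative embedding differences; a sketch reusing the embedding_difference() helper from above, with the usual max-subtraction for stability:

import numpy as np

def match_probabilities(Xi, Ys, gamma):
    # Xi: latent vectors of one source instance; Ys: list of latent-vector
    # sets, one per candidate target instance. Returns p(d_j^t | d_i^s, ...).
    neg_d = -np.array([embedding_difference(Xi, Yj, gamma) for Yj in Ys])
    p = np.exp(neg_d - neg_d.max())
    return p / p.sum()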
We define the posterior distribution of the latent vectors X and Y. Placing Gaussian priors with precision parameter ρ > 0 on X and Y, that is, p(X | ρ) ∝ ∏_{x∈X} exp(−(ρ/2) ||x||_2^2) and p(Y | ρ) ∝ ∏_{y∈Y} exp(−(ρ/2) ||y||_2^2), the posterior distribution is given by

    p(X, Y | O, Ψ) = (1/Z) p(X | ρ) p(Y | ρ) ∏_{i=1}^N p(d_i^t | d_i^s, X, Y, γ),    (9)

where O = {(d_i^s, d_i^t)}_{i=1}^N is the training set of N instance pairs, Ψ = {γ, ρ} is the set of hyper-parameters, and Z = ∫∫ p(X, Y, O, Ψ) dX dY is the marginal probability, which is constant with respect to X and Y.
4.3 Learning

We estimate the latent vectors X and Y by maximizing the posterior probability of the latent vectors given by Eq. (9). Instead of Eq. (9), we consider the following negative logarithm of the posterior probability,

    L(X, Y) = ∑_{i=1}^N [ D(X_i, Y_i) + log ∑_{j=1}^N exp(−D(X_i, Y_j)) ] + (ρ/2) [ ∑_{x∈X} ||x||_2^2 + ∑_{y∈Y} ||y||_2^2 ],    (10)

and minimize it with respect to the latent vectors. Here, maximizing Eq. (9) is equivalent to minimizing Eq. (10). To minimize Eq. (10) with respect to X and Y, we perform gradient-based optimization. The gradient of Eq. (10) with respect to each x_f ∈ X is given by

    ∂L(X, Y)/∂x_f = ∑_{i: f∈d_i^s} [ ∂D(X_i, Y_i)/∂x_f − (1/c_i) ∑_{j=1}^N e_{ij} ∂D(X_i, Y_j)/∂x_f ] + ρ x_f,    (11)

where

    e_{ij} = exp(−D(X_i, Y_j)),    c_i = ∑_{j=1}^N exp(−D(X_i, Y_j)),    (12)
and the gradient of the difference between distributions X_i and Y_j with respect to x_f is given by

    ∂D(X_i, Y_j)/∂x_f = (1/|d_i^s|^2) ∑_{l∈d_i^s} ∑_{l'∈d_i^s} ∂k(x_l, x_{l'})/∂x_f − (2/(|d_i^s| |d_j^t|)) ∑_{l∈d_i^s} ∑_{g∈d_j^t} ∂k(x_l, y_g)/∂x_f.    (13)

When the distribution X_i does not include the latent vector x_f, the gradient is the zero vector. ∂k(x_l, x_{l'})/∂x_f is the gradient of the embedding kernel, which depends on the choice of kernel; when the embedding kernel is a Gaussian kernel, the gradient is calculated as in Eq. (15) of [21].
Similarly, the gradient of Eq. (10) with respect to each y_g ∈ Y is given by

    ∂L(X, Y)/∂y_g = ∑_{i=1}^N [ ∂D(X_i, Y_i)/∂y_g − (1/c_i) ∑_{j: g∈d_j^t} e_{ij} ∂D(X_i, Y_j)/∂y_g ] + ρ y_g,    (14)

where the gradient of the difference between distributions X_i and Y_j with respect to y_g can be calculated as in Eq. (13). Learning is performed by alternately updating X using Eq. (11) and updating Y using Eq. (14) until the improvement in the negative log-likelihood (10) converges.
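For reference, the objective (10) can be written compactly as follows; X_feat and Y_feat hold one latent vector per feature, and src_docs/tgt_docs give each instance's feature indices (all names are our own). The paper pairs this objective with the analytic gradients (11)-(14) inside L-BFGS; a numerical-gradient or autodiff version of this function is an alternative. The helper is the embedding_difference() sketched after Eq. (5).

import numpy as np

def negative_log_posterior(X_feat, Y_feat, src_docs, tgt_docs, gamma, rho):
    N = len(src_docs)
    loss = 0.0
    for i in range(N):
        Xi = X_feat[src_docs[i]]
        d = np.array([embedding_difference(Xi, Y_feat[tgt_docs[j]], gamma)
                      for j in range(N)])
        loss += d[i] + np.log(np.exp(-d).sum())  # data term of Eq. (10)
    reg = 0.5 * rho * ((X_feat ** 2).sum() + (Y_feat ** 2).sum())
    return loss + reg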
4.4 Matching
between test instances. The matching is found by first measuring the difference between a given
source domain instance and target domain instances using Eq. (7), and then searching for the instance
pair with the smallest difference.
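A sketch of this test-time matching step, again reusing the embedding_difference() helper:

import numpy as np

def match(Xi, candidate_Ys, gamma):
    # Return the index of the target instance whose latent distribution is
    # closest to the source instance under the difference of Eq. (7).
    diffs = [embedding_difference(Xi, Yj, gamma) for Yj in candidate_Ys]
    return int(np.argmin(diffs))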
5 Experiments
In this section, we report our experimental results on three different types of cross-domain datasets: multi-lingual Wikipedia, document-tag and image-tag datasets.

Setup of proposed method. Throughout these experiments, we used a Gaussian kernel with parameter γ ≥ 0, k(x_f, y_g) = exp(−(γ/2) ||x_f − y_g||_2^2), as the embedding kernel. The hyper-parameters of the proposed method are the dimensionality of the shared latent space q, the regularizer parameter for the latent vectors ρ, and the Gaussian embedding kernel parameter γ. After training the proposed method with various hyper-parameters q ∈ {8, 10, 12}, ρ ∈ {0, 10^-2, 10^-1} and γ ∈ {10^-1, 10^0, . . . , 10^3}, we chose the optimal hyper-parameters using validation data. When training the proposed method, we initialized the latent vectors X and Y by applying principal component analysis (PCA) to a matrix concatenating the two feature-frequency matrices of the source and target domains. Then, we employed the L-BFGS method [23] with the gradients given by Eqs. (11) and (14) to learn the latent vectors.
Comparison methods. We compared the proposed method with the k-nearest neighbor method (KNN), canonical correlation analysis (CCA), kernel CCA (KCCA), bilingual latent Dirichlet allocation (BLDA), and kernel CCA with the kernel embeddings of distributions (KED-KCCA). For a test instance in the source domain, our KNN baseline searches for the nearest-neighbor source instances in the training data and outputs the target instance in the test data that is located closest to the target instances paired with the retrieved source instances. CCA and KCCA first learn the
projection of instances into a shared latent space using training data, and then they find matching
between instances by projecting the test instances into the shared latent space. KCCA used a Gaussian kernel for measuring the similarity between instances and chose the optimal Gaussian kernel
parameter and regularizer parameter by using validation data. With BLDA, we first learned the same
model as [1, 11] and found matching between instances in the test data by obtaining the topic distributions of these instances from the learned model. KED-KCCA uses the kernel embeddings of
distributions described in Section 3 for obtaining the kernel values between the instances. The vector representations of features were obtained by applying singular value decomposition (SVD) for
instance-feature frequency matrices. Here, we set the dimensionality of the vector representations to
100. Then, KED-KCCA learns kernel CCA with the kernel values as with the above KCCA. With
CCA, KCCA, BLDA and KED-KCCA, we chose the optimal latent dimensionality (or number of
topics) within {10, 20, . . . , 100} by using validation data.
Evaluation method. Throughout the experiments, we quantitatively evaluated the matching performance by using the precision with which the true target instance is included in a set of R candidate
instances, S(R), found by each method. More formally, the precision is given by
Precision@R = (1/N_te) Σ_{i=1}^{N_te} I( t_i ∈ S_i(R) ) ,   (15)
where N_te is the number of test instances in the target domain, t_i is the ith true target instance,
S_i(R) is the set of R candidate instances for the ith source instance, and I(·) is the binary function that returns
1 if the argument is true, and 0 otherwise.
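A direct implementation of Eq. (15) is straightforward; the sketch below assumes a precomputed matrix of source-target differences.

```python
import numpy as np

def precision_at_R(distances, true_idx, R):
    """Eq. (15): fraction of test instances whose true target appears
    among the R candidates with the smallest differences. `distances`
    is an (N_te, N_te) matrix of source-target differences and
    `true_idx[i]` is the index of the true target for source i."""
    hits = 0
    for i, row in enumerate(distances):
        candidates = np.argsort(row)[:R]       # R smallest differences
        hits += int(true_idx[i] in candidates)
    return hits / len(distances)
```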
5.1 Matching between Bilingual Documents
With a multi-lingual Wikipedia document dataset, we examine whether the proposed method can
find the correct matching between documents written in different languages. The dataset includes
34,024 Wikipedia documents for each of six languages: German (de), English (en), Finnish (fi),
French (fr), Italian (it) and Japanese (ja), and documents with the same content are aligned across
the languages. From the dataset, we create 6C2 = 15 bilingual document pairs. We regard the
first component of the pair as a source domain and the other as a target domain. For each of the
bilingual document pairs, we randomly create 10 evaluation sets that consist of 1,000 document
pairs as training data, 100 document pairs as validation data and 100 document pairs as test data.
Here, each document is represented as a bag-of-words without stopwords and low frequency words.
Figure 2 shows the matching precision for each of the bilingual pairs of the Wikipedia dataset.
With all the bilingual pairs, the proposed method achieves significantly higher precision than the
other methods with a wide range of R. Table 1 shows examples of predicted matching with the
Japanese-English Wikipedia dataset. Compared with KCCA, which is the second best method, the
Figure 2: Precision of matching prediction and its standard deviation on multi-lingual Wikipedia
datasets.
Table 1: Top five English documents matched by the proposed method and KCCA given five
Japanese documents in the Wikipedia dataset. Titles in bold typeface indicate correct matching.
(a) Japanese input title (English gloss): SD card
Proposed: Intel, SD card, Libavcodec, MPlayer, Freeware
KCCA: BBC World News, SD card, Morocco, Phoenix, 24 Hours of Le Mans
(b) Japanese input title (English gloss): Anthrax
Proposed: Psittacosis, Anthrax, Dehydration, Isopoda, Cataract
KCCA: Dehydration, Psittacosis, Cataract, Hypergeometric distribution, Long Island Iced Tea
(c) Japanese input title (English gloss): Doppler effect
Proposed: LU decomposition, Redshift, Doppler effect, Phenylalanine, Dehydration
KCCA: Long Island Iced Tea, Opportunity cost, Cataract, Hypergeometric distribution, Intel
(d) Japanese input title (English gloss): Mexican cuisine
Proposed: Mexican cuisine, Long Island Iced Tea, Phoenix, Baldr, China Radio International
KCCA: Taoism, Chariot, Anthrax, Digital Millennium Copyright Act, Alexis de Tocqueville
(e) Japanese input title (English gloss): Freeware
Proposed: BBC World News, Opportunity cost, Freeware, NFS, Intel
KCCA: Digital Millennium Copyright Act, China Radio International, Hypergeometric distribution, Taoism, Chariot
proposed method can find both the correct document and many related documents. For example,
in Table 1(a), the correct document title is "SD card". The proposed method outputs the SD card's
document and documents related to computer technology such as "Intel" and "MPlayer". This is
because the proposed method can capture the relationship between words and reflect the difference
between documents across different domains by learning the latent vectors of the words.
5.2 Matching between Documents and Tags, and between Images and Tags
We performed experiments matching documents and tag lists, and matching images and tag lists
with the datasets used in [3]. When matching documents and tag lists, we use datasets obtained
from two social bookmarking sites, delicious¹ and hatena², and a patent dataset. The
delicious and the hatena datasets include pairs consisting of a web page and a tag list labeled by users, and the patent dataset includes pairs consisting of a patent description and a tag list
representing the category of the patent. Each web page and each patent description are represented
¹ https://delicious.com/
² http://b.hatena.ne.jp/
Figure 3: Precision of matching prediction and its standard deviation on delicious, hatena,
patent and flickr datasets.
Figure 4: Two examples of input tag lists and the top five images matched by the proposed method
on the flickr dataset.
as a bag-of-words as with the experiments using the Wikipedia dataset, and the tag list is represented
as a set of tags. With the matching of images and tag lists, we use the flickr dataset, which consists of pairs of images and tag lists. Each image is represented as a bag-of-visual-words, which
is obtained by first extracting features using SIFT, and then applying K-means clustering with 200
components to the SIFT features. For all the datasets, the numbers of training, test and validation
pairs are 1,000, 100 and 100, respectively.
Figure 3 shows the precision of the matching prediction of the proposed and comparison methods
for the delicious, hatena, patent and flickr datasets. The precision of the comparison
methods with these datasets was much the same as the precision of random prediction. Nevertheless,
the proposed method achieved very high precision particularly for the delicious, hatena and
patent datasets. Figure 4 shows examples of input tag lists and the top five images matched by
the proposed method with the flickr dataset. In the examples, the proposed method found the
correct images and similar related images from given tag lists.
6 Conclusion
We have proposed a novel kernel-based method for addressing cross-domain instance matching tasks
with bag-of-words data. The proposed method represents each feature in all the domains as a latent
vector in a shared latent space to capture the relationship between features. Each instance is represented by a distribution of the latent vectors of features associated with the instance, which can
be regarded as samples from the unknown distribution corresponding to the instance. To calculate the
difference between the distributions efficiently and nonparametrically, we employ the framework of
kernel embeddings of distributions, and we learn the latent vectors so as to minimize the difference
between the distributions of paired instances in a reproducing kernel Hilbert space. Experiments
on various types of cross-domain datasets confirmed that the proposed method significantly outperforms the existing methods for cross-domain matching.
Acknowledgments. This work was supported by JSPS Grant-in-Aid for JSPS Fellows (259867).
References
[1] T Zhang, K Liu, and J Zhao. Cross Lingual Entity Linking with Bilingual Topic Model. In Proceedings
of the Twenty-Third International Joint Conference on Artificial Intelligence, 2013.
[2] Yunchao Gong, Qifa Ke, Michael Isard, and Svetlana Lazebnik. A Multi-View Embedding Space
for Modeling Internet Images, Tags, and Their Semantics. International Journal of Computer Vision,
106(2):210-233, Oct 2013.
[3] Tomoharu Iwata, T. Yamada, and N. Ueda. Modeling Social Annotation Data with Content Relevance
using a Topic Model. In Advances in Neural Information Processing Systems. Citeseer, 2009.
[4] Bin Li, Qiang Yang, and Xiangyang Xue. Transfer Learning for Collaborative Filtering via a RatingMatrix Generative Model. In Proceedings of the 26th Annual International Conference on Machine
Learning, 2009.
[5] H. Hotelling. Relations Between Two Sets of Variates. Biometrika, 28:321-377, 1936.
[6] S Akaho. A Kernel Method for Canonical Correlation Analysis. In Proceedings of International Meeting
on Psychometric Society, number 4, 2001.
[7] Alexei Vinokourov, John Shawe-Taylor, and Nello Cristianini. Inferring a Semantic Representation of
Text via Cross-Language Correlation Analysis. In Advances in Neural Information Processing Systems,
2003.
[8] Yaoyong Li and John Shawe-Taylor. Using KCCA for Japanese-English Cross-Language Information
Retrieval and Document Classification. Journal of Intelligent Information Systems, 27(2):117-133, Sep
2006.
[9] Nikhil Rasiwasia, Jose Costa Pereira, Emanuele Coviello, Gabriel Doyle, Gert R.G. Lanckriet, Roger
Levy, and Nuno Vasconcelos. A New Approach to Cross-Modal Multimedia Retrieval. In Proceedings of
the International Conference on Multimedia, 2010.
[10] Patrik Kamencay, Robert Hudec, Miroslav Benco, and Martina Zachariasov. 2D-3D Face Recognition
Method Based on a Modified CCA-PCA Algorithm. International Journal of Advanced Robotic Systems,
2014.
[11] Tomoharu Iwata, Shinji Watanabe, and Hiroshi Sawada. Fashion Coordinates Recommender System
Using Photographs from Fashion Magazines. In Proceedings of the Twenty-Second International Joint
Conference on Artificial Intelligence. AAAI Press, jul 2011.
[12] Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y Ng. Multimodal
Deep Learning. In Proceedings of The 28th International Conference on Machine Learning, pages 689?
696, 2011.
[13] Galen Andrew, Raman Arora, Jeff Bilmes, and Karen Livescu. Deep Canonical Correlation Analysis. In
Proceedings of The 30th International Conference on Machine Learning, pages 1247?1255, 2013.
[14] Alex Smola, Arthur Gretton, Le Song, and Bernhard Schölkopf. A Hilbert Space Embedding for Distributions. In Algorithmic Learning Theory, 2007.
[15] A. Gretton, K. Fukumizu, C.H. Teo, L. Song, B. Schölkopf, and A.J. Smola. A Kernel Statistical Test of
Independence. In Advances in Neural Information Processing Systems, 2008.
[16] Krikamol Muandet, Kenji Fukumizu, Francesco Dinuzzo, and Bernhard Schölkopf. Learning from Distributions via Support Measure Machines. In Advances in Neural Information Processing Systems, 2012.
[17] Krikamol Muandet and Bernhard Schölkopf. One-Class Support Measure Machines for Group Anomaly
Detection. In Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence, 2013.
[18] M. Dudik, S. J. Phillips, and R. E. Schapire. Maximum Entropy Density Estimation with Generalized Regularization and an Application to Species Distribution Modeling. Journal of Machine Learning Research,
8:1217-1260, 2007.
[19] Dino Sejdinovic, Arthur Gretton, and Wicher Bergsma. A Kernel Test for Three-Variable Interactions. In
Advances in Neural Information Processing Systems, 2013.
[20] Yuya Yoshikawa, Tomoharu Iwata, and Hiroshi Sawada. Latent Support Measure Machines for Bag-of-Words Data Classification. In Advances in Neural Information Processing Systems, 2014.
[21] Yuya Yoshikawa, Tomoharu Iwata, and Hiroshi Sawada. Non-linear Regression for Bag-of-Words Data
via Gaussian Process Latent Variable Set Model. In Proceedings of the 29th AAAI Conference on Artificial
Intelligence, 2015.
[22] Bharath K. Sriperumbudur, Arthur Gretton, Kenji Fukumizu, Bernhard Schölkopf, and Gert R. G. Lanckriet. Hilbert Space Embeddings and Metrics on Probability Measures. The Journal of Machine Learning
Research, 11:1517-1561, 2010.
[23] Dong C. Liu and Jorge Nocedal. On the Limited Memory BFGS Method for Large Scale Optimization.
Mathematical Programming, 45(1-3):503-528, Aug 1989.
Learning to categorize objects using
temporal coherence
Suzanna Becker?
The Rotman Research Institute
Baycrest Center
3560 Bathurst St.
Toronto, Ontario, M6A 2E1
Abstract
The invariance of an object's identity as it transforms over time
provides a powerful cue for perceptual learning. We present an unsupervised learning procedure which maximizes the mutual information between the representations adopted by a feed-forward network at consecutive time steps. We demonstrate that the network
can learn, entirely unsupervised, to classify an ensemble of several
patterns by observing pattern trajectories, even though there are
abrupt transitions from one object to another between trajectories. The same learning procedure should be widely applicable to
a variety of perceptual learning tasks.
1 INTRODUCTION
A promising approach to understanding human perception is to try to model its
developmental stages. There is ample evidence that much of perception is learned.
Even some very low level perceptual abilities such as stereopsis (Held, Birch and
Gwiazda, 1980; Birch, Gwiazda and Held, 1982) are not present at birth, and appear
to be learned. Once rudimentary feature detection abilities have been established,
the infant can learn to segment the sensory input, and eventually classify it into
familiar patterns. These earliest stages of learning seem to be inherently unsupervised (or "self-supervised").
(* Address as of July 1993: Department of Psychology, McMaster University, 1280 Main Street West, Hamilton, Ontario, Canada, L8S 4K1)
Gradually, the infant learns to detect regularities in
the world. One kind of structure that is ubiquitous in sensory information is spatiotemporal coherence. For example, in speech signals, speaker characteristics such as
the fundamental frequency are relatively constant over time. At shorter time scales,
individual words are typically composed of long intervals having relatively constant
spectral characteristics, corresponding to vowels, with short intervening bursts and
rapid transitions corresponding to consonants. The consonants also change across
time in very regular ways. This temporal coherence at various scales makes speech
predictable, to a certain degree. As one moves about in the world, the visual field
flows by in characteri3tic patterns of expansion, dilation and translation. Since
most objects in the visual world move slowly, if at all, the visual scene changes
slowly over time, exhibiting the same temporal coherence as other sensory sources.
Independently moving rigid objects are invariant with respect to shape, texture
and many other features, up to very high level properties such as the object's identity. Even under nonlinear shape distortions, images like clouds drifting across the
sky are perceived to have coherent features, in spite of undergoing highly non-rigid
transformations. Thus, temporal coherence of the sensory input may provide important cues for segmenting signals in space and time, and for object localization
and identification.
2 PREVIOUS WORK
A common approach to training neural networks to perform transformationinvariant object recognition is to build in hard constraints which enforce invariance
with respect to the transformations of interest. For example, equality constraints
among feature-detecting kernels have been used to enforce translation-invariance
(Fukushima, 1988; Le Cun et al., 1990). Various other higher-order constraints
have been used to enforce viewpoint-invariance (Hinton and Lang, 1985; Zemel,
Hinton and Mozer, 1990) and invariance with respect to arbitrary group transformations (Giles and Maxwell, 1987). While in the case of translation-invariance
it is straightforward to hard-wire the appropriate constraints, more general linear
transformation-in variance requires rather cumbersome machinery, and for arbitrary
non-linear transformations the approach is difficult if not impossible.
In contrast to the above approaches, Foldiak's model of complex cell development
results in translation-invariant orientation detectors without the imposition of any
hard constraints (Foldiak, 1991). Further, his method is unsupervised. He proposed
a modified Hebbian learning rule, in which each weight change depends on the unit's
output history:
Δw_ij(t) = α ȳ_i(t) (x_j(t) − w_ij(t)) ,
where x_j(t) is the activity of the jth presynaptic unit at the tth time step, and ȳ_i(t)
is a temporally low-pass filtered trace of the postsynaptic activity of the ith unit.
Whereas a standard Hebb-rule encourages a unit to detect correlations between its
inputs, this rule encourages a unit to produce outputs which are correlated over
time. A single unit can therefore learn to group patterns which have zero overlap.
Foldiak demonstrated this by presenting trajectories of moving lines, with line orientation held constant within each trajectory, to a network whose input features
were local orientation detectors. Units became tuned to particular orientations,
independent of location.
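A minimal sketch of this trace rule is given below; variable names and the trace decay constant are illustrative rather than taken from Foldiak (1991).

```python
import numpy as np

def foldiak_update(w, x, y_trace, y_now, alpha=0.02, decay=0.2):
    """One step of the modified Hebbian trace rule: the postsynaptic
    activity is low-pass filtered over time, so a unit is rewarded for
    responding consistently across a trajectory of inputs.
    w: (n_units, n_inputs) weights, x: (n_inputs,) current pattern,
    y_trace/y_now: (n_units,) filtered and instantaneous outputs."""
    y_trace = (1.0 - decay) * y_trace + decay * y_now   # temporal trace
    w += alpha * y_trace[:, None] * (x[None, :] - w)    # Hebbian step toward x
    return w, y_trace
```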
While Foldiak's work is of interest as a model of cell development in early visual
cortex, there are several reasons why it cannot be applied directly to the more
general problem of transformation-invariant object recognition. One reason that
Foldiak's learning rule worked well on the line trajectory problem is that the input
representation (oriented line features) made the problem linearly separable: there
was no overlap between input features present in successive trajectories, hence it
was easy to categorize lines of the same orientation. Generally, in more difficult
pattern classification problems (such as digit or speech recognition) the optimal
input features cannot be preselected but must be learned, and there is considerable
overlap between the component features of different pattern classes. Hence, a multilayer network is required, and it must be able to optimally select features so as to
improve its classification performance. The question of interest here is whether it
is possible to train such a network entirely unsupervised? As mentioned above, the
temporal coherence of the sensory input may be an important cue for solving this
problem in biological systems.
3 TEMPORAL-COHERENCE BASED LEARNING
One way to capture the constraint of temporal coherence in a learning procedure
is to build it into the objective function. For example, we could try to build representations that are relatively predictable, at least over short time scales. We also
need a constraint which captures the notion of high information content; for example, we could require that the network be unpredictable over long time scales. A
measure which satisfies both criteria is the mutual information between the classifications produced by the network at successive time steps. If the network produces
classification C(t) at time t and classification C(t+ 1) at time t+ 1, the mutual information between the two successive classifications, averaged over the entire sequence
of patterns, is given by
H(Ct) + H(Ct+t) - H(Ct, Ct+t)
(p/)t log (p/)t (p/+!)t log (p/+1)t
-L
L
;
+ L {p/p/+1)t log {p/p/+l)t
i;
where the angle brackets denote time-averaged quantities.
A set of n output units can be forced to represent a probability distribution over
n classes, C ∈ {c_1, . . . , c_n}, by adopting states whose probabilities sum to one. This
can be done, for example, by using the "soft max" activation function suggested by
Bridle (1990):
p_i^t = e^{x_i(t)} / Σ_{j=1}^{n} e^{x_j(t)} = P(C(t) = c_i) ,
where x_i is the total weighted summed input to the ith unit, and p_i^t, the output of
the ith unit, stands for the probability of the ith class, P(C(t) = c_i).
Once we know the probability of the network assigning each pattern to each class,
we can compute the mutual information between the classifications produced by
the network at neighboring time steps, C(t) and C(t + 1). This requires sampling,
over the entire training set, the average probability of each class, as well as the joint
probabilities of each possible pair of classifications being produced at successive time
steps. The learning involves adjusting the weights in the network so as to maximize
the mutual information between the representations produced by the network at
adjacent time steps. In the experiments reported here, a gradient ascent procedure
was used with the method of conjugate gradients.
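The objective can be estimated as in the following sketch, where P holds the softmax outputs of the network over a training sequence; this is a sketch of the statistic being ascended, not of the full conjugate gradient procedure.

```python
import numpy as np

def softmax(x):
    """Row-wise softmax; P = softmax(net_outputs) gives the class
    probabilities p_i^t described above."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def temporal_mutual_information(P, eps=1e-12):
    """Estimate I(C_t; C_{t+1}) from a (T, n) array of output
    probabilities over a pattern sequence, using time averages for the
    marginal and joint class probabilities."""
    A, B = P[:-1], P[1:]                                  # steps t and t+1
    p_t, p_t1 = A.mean(axis=0), B.mean(axis=0)            # <p_i^t>, <p_j^{t+1}>
    joint = (A[:, :, None] * B[:, None, :]).mean(axis=0)  # <p_i^t p_j^{t+1}>
    H_t = -np.sum(p_t * np.log(p_t + eps))
    H_t1 = -np.sum(p_t1 * np.log(p_t1 + eps))
    H_joint = -np.sum(joint * np.log(joint + eps))
    return H_t + H_t1 - H_joint
```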
One problem with maximizing the information measure described above is that for
a fixed amount of entropy in the classifications, H(C_t), the network can always
improve the mutual information by decreasing the joint entropy, H(C_t, C_{t+1}). In
order to achieve low joint entropy, the network must try to assign class probabilities
with high certainty, i.e., produce output values near zero or one. Thus the network
can always improve its current solution by simply making the weights very large.
Unfortunately, this often occurs during learning. To discourage the network from
getting stuck in such locally optimal (but very poor) solutions, we introduce a
constant λ to weight the importance of the joint entropy term in the objective
function, so as to maximize the following:
I_λ = H(C_t) + H(C_{t+1}) − λ H(C_t, C_{t+1}) .
In the simulations reported here, we used a value of 0.5 for λ. This effectively
prevents the network from concentrating all its effort on reducing the joint entropy,
and forces it to learn more gradually, resulting in more globally optimal solutions.
We have tested this learning procedure on a simple signal classification problem.
The pattern set consisted of trajectories of random intensity patterns, drawn from
six classes, shown in figure 1. Members of the same class consisted of translated versions of the same pattern, shifted one to five pixels with wrap-around. A trajectory
consisted of a block of ten randomly selected patterns from the same class. Between
trajectories, the pattern class changed randomly. The network had six input units,
twenty hidden units, and six output units. The hidden units used the logistic nonlinearity, 1/(1 + e^{−x}), and the output units used the softmax activation function. The
hidden units had biases but the outputs did not.¹ After training the network on
1200 patterns (20 trajectories of 10 examples of each of the six patterns) for 300
conjugate gradient iterations, the output units always became reasonably specific
to particular pattern classes, as shown for a typical run in Figure 2a). The general
pattern is that each output unit responds maximally to one or two pattern classes,
although some of the units have mixed responses.
This classification problem is extremely difficult for an unsupervised learning procedure, as there is considerable overlap between patterns in different classes, and
essentially no overlap between patterns in the same class. It is therefore easy to see
why a single unit might end up capturing a few patterns from one class and a few
from another. We can create an easier subproblem by only training the network on
half the patterns in each class. In this case, the network always learns to separate
the six pattern classes either perfectly, or nearly so, as shown in figure 2b).
1 Removing biases from the outputs helps prevent the network from getting trapped in
local maxima during learning.
Figure 1: The set of 6 random patterns used to create pattern trajectories. Each pattern was created by randomly setting the intensities of the 6 pixels, and normalizing
the intensity profile to have zero mean.
4 DISCUSSION
Becker and Hinton (1992) showed that a network could learn to extract a continuous
parameter of visual scenes which is coherent across space, by maximizing the mutual
information between the outputs of two network modules that receive input from
spatially adjacent parts of the input. Here, we have shown how the same idea
can be applied to the temporal domain, to perform a discrete classification of the
input assuming temporal coherence. We could also apply the same algorithm to the
problem of unsupervised multi-sensory integration, by forming classifications which
are coherent across different sensory modalities, as well as across time.
One advantage of the approach presented here over unsupervised learning procedures such as competitive learning is that units must co-operate to try to find
a globally optimal solution. There is therefore incentive for each unit to try to
improve the temporal predictability of all of the output units' classifications over
time, including its own; this discourages anyone unit from trying to model all of
Figure 2: The probability of each output unit responding for each of the six classes
of patterns, averaged over 1200 cases. In a) the pattern trajectories contained six
shifted examples of each class, while in b) there were three examples of each class.
the patterns. Additionally, because we have a well-defined objective function for
the learning, the procedure can be applied to multi-layer networks which discover
features specifically tuned to the classification problem.
However, there are a few drawbacks to using this learning procedure. One is that
if any lower-order temporally coherent structure exists, the network will invariably
discover it. So, for example, if the pattern classes differ in their average intensity, the
network can easily learn to separate them simply by detecting the average intensity
of the inputs and ignoring all other information. Similarly, if the spatial location of
pattern features varies slowly and predictably over time, the network tends to learn
a spatial map rather than solving the higher-order problem of pattern classification.
On the other hand, this suggests that a sequential approach to modelling temporally
coherent structure may be possible: an initial processing stage could try to model
low-order temporal structure such as local spatial correlations, a second processing
stage could model the remaining structure in the output of the first over a larger
spatio-temporal extent, and so on.
A second drawback is the space complexity of the algorithm: for a network with
n output units, each must store n² joint probability statistics and n individual
probabilities.² The storage complexity can be reduced from n² + n to just two
statistics per output unit by optimizing a more constrained objective function in
which each output unit assumes a maximum entropy distribution for the other n - 1
units. It then need only consider the average probability of its own output, and
the joint probability of its output at successive time steps. In this case, the mutual
information can be approximated by a sum of n terms:
L
H(Ci,t)
+ H(Ci,t+d -
H(Ci,t, Ci,t+d
i
where H(C_{i,t}) = −⟨p_i^t⟩_t log ⟨p_i^t⟩_t − ⟨1 − p_i^t⟩_t log ( ⟨1 − p_i^t⟩_t / (n − 1) ) is the entropy
of the ith output unit under the maximum entropy assumption for the other n − 1
output units, and the other constrained entropies are computed similarly.
A final drawback of the learning procedure presented here, as discussed earlier, is
its tendency to become trapped in local optima with very large weights. We dealt
with this by introducing a constant parameter, A, to dampen the importance of the
joint entropy term. A more principled way to deal with the problem of local optima
is to use stochastic rather than deterministic output units, resulting in a stochastic
gradient descent learning procedure (although this would increase the simulation
time considerably). Another way of obtaining more globally optimal solutions might
be to consider the predictability of classifications over longer time scales rather than
just at pairwise time steps, as was done in Foldicik's model (1991). The network
could thus maximize the mutual information between its current response and a
weighted average of its responses over the last few time steps.
² Note, however, that the complexity (both in time and space) of the computation of
these statistics is negligible relative to that of the gradient calculations, assuming there
are many more weights than the squared number of output units in the network.
5 CONCLUSIONS
The invariance of an object's identity over time, with respect to transformations it
may undergo as it and/or the observer move, provides a powerful cue for perceptual
learning. We have demonstrated that a network can learn, entirely unsupervised,
to build translation-invariant object detectors based on the assumption of temporal
coherence about the input. This procedure should be widely applicable to a variety
of perceptual learning tasks, such as identifying phonemes in speech, segmenting
objects in images of trajectories, and classifying textures in tactile input.
Acknowledgments
I thank Geoff Hinton for many fruitful discussions that led to the ideas presented
in this paper.
References
Becker, S. and Hinton, G. E. (1992). A self-organizing neural network that discovers
surfaces in random-dot stereograms. Nature, 355:161-163.
Birch, E. E., Gwiazda, J., and Held, R. (1982). Stereoacuity development for crossed
and uncrossed disparities in human infants. Vision Research, 22:507-513.
Bridle, J. S. (1990). Training stochastic model recognition algorithms as networks
can lead to maximum mutual information estimation of parameters. In Touretzky, D. S., editor, Neural Information Processing Systems, Vol. 2, pages 211-217, San Mateo, CA. Morgan Kaufmann.
Foldiak, P. (1991). Learning invariance from transformation sequences. Neural
Computation, 3(2):194-200.
Fukushima, K. (1988). Neocognitron: A hierarchical neural network capable of
visual pattern recognition. Neural Networks, 1:119-130.
Giles, C. L. and Maxwell, T. (1987). Learning, invariance, and generalization in
high-order neural networks. Applied Optics, 26(23):4972-4978.
Held, R., Birch, E. E., and Gwiazda, J. (1980). Stereoacuity of human infants.
Proceedings of the National Academy of Sciences USA, 77(9):5572-5574.
Hinton, G. E. and Lang, K. (1985). Shape recognition and illusory conjunctions.
In IJCAI 9, Los Angeles.
Le Cun, Y., Boser, B., Denker, J., Henderson, D., Howard, R., Hubbard, W.,
and Jackel, L. (1990). Handwritten digit recognition with a back-propagation
network. In Touretzky, D., editor, Advances in Neural Information Processing
Systems, pages 396-404, Denver 1989. Morgan Kaufmann, San Mateo.
Zemel, R. S., Hinton, G. E., and Mozer, M. C. (1990). TRAFFIC: object recognition using hierarchical reference frame transformations. In Advances in Neural
Information Processing Systems 2, pages 266-273. Morgan Kaufmann Publishers.
A Gaussian Process Model of Quasar
Spectral Energy Distributions
Andrew Miller*, Albert Wu
School of Engineering and Applied Sciences
Harvard University
acm@seas.harvard.edu, awu@college.harvard.edu
Jeffrey Regier, Jon McAuliffe
Department of Statistics
University of California, Berkeley
{jeff, jon}@stat.berkeley.edu
Dustin Lang
McWilliams Center for Cosmology
Carnegie Mellon University
dstn@cmu.edu
Ryan Adams†
School of Engineering and Applied Sciences
Harvard University
rpa@seas.harvard.edu
Prabhat, David Schlegel
Lawrence Berkeley National Laboratory
{prabhat, djschlegel}@lbl.gov
Abstract
We propose a method for combining two sources of astronomical data, spectroscopy and photometry, that carry information about sources of light (e.g., stars,
galaxies, and quasars) at extremely different spectral resolutions. Our model treats
the spectral energy distribution (SED) of the radiation from a source as a latent
variable that jointly explains both photometric and spectroscopic observations.
We place a flexible, nonparametric prior over the SED of a light source that admits a physically interpretable decomposition, and allows us to tractably perform
inference. We use our model to predict the distribution of the redshift of a quasar
from five-band (low spectral resolution) photometric data, the so called ?photoz? problem. Our method shows that tools from machine learning and Bayesian
statistics allow us to leverage multiple resolutions of information to make accurate predictions with well-characterized uncertainties.
1 Introduction
Enormous amounts of astronomical data are collected by a range of instruments at multiple spectral
resolutions, providing information about billions of sources of light in the observable universe [1,
10]. Among these data are measurements of the spectral energy distributions (SEDs) of sources of
light (e.g. stars, galaxies, and quasars). The SED describes the distribution of energy radiated by a
source over the spectrum of wavelengths or photon energy levels. SEDs are of interest because
they convey information about a source's physical properties, including type, chemical composition,
and redshift, which will be an estimand of interest in this work.
The SED can be thought of as a latent function of which we can only obtain noisy measurements.
Measurements of SEDs, however, are produced by instruments at widely varying spectral resolutions: some instruments measure many wavelengths simultaneously (spectroscopy), while others
* http://people.seas.harvard.edu/~acm/
† http://people.seas.harvard.edu/~rpa/
[Figure 1, right panel: PSFFLUX bar chart; y-axis flux (nanomaggies), x-axis band ∈ {u, g, r, i, z}.]
Figure 1: Left: example of a BOSS-measured quasar SED with SDSS band filters, S_b(λ), b ∈
{u, g, r, i, z}, overlaid. Right: the same quasar's photometrically measured band fluxes. Spectroscopic measurements include noisy samples at thousands of wavelengths, whereas SDSS photometric fluxes reflect the (weighted) response over a large range of wavelengths.
average over large swaths of the energy spectrum and report a low dimensional summary (photometry). Spectroscopic data describe a source's SED in finer detail than broadband photometric
data. For example, the Baryonic Oscillation Spectroscopic Survey [5] measures SED samples at
over four thousand wavelengths between 3,500 and 10,500 Å. In contrast, the Sloan Digital Sky
Survey (SDSS) [1] collects spectral information in only 5 broad spectral bins by using broadband
filters (called u, g, r, i, and z), but at a much higher spatial resolution. Photometric preprocessing
models can then aggregate pixel information into five band-specific fluxes and their uncertainties
[17], reflecting the weighted average response over a large range of the wavelength spectrum. The
two methods of spectral information collection are graphically compared in Figure 1.
Despite carrying less spectral information, broadband photometry is more widely available and exists for a larger number of sources than spectroscopic measurements. This work develops a method
for inferring physical properties of sources by jointly modeling spectroscopic and photometric data.
One use of our model is to measure the redshift of quasars for which we only have photometric observations. Redshift is a phenomenon in which the observed SED of a source of light is stretched toward longer (redder) wavelengths. This effect is due to a combination of radial velocity with respect
to the observer and the expansion of the universe (termed cosmological redshift) [8, 7]. Quasars, or
quasi-stellar radio sources, are extremely distant and energetic sources of electromagnetic radiation
that can exhibit high redshift [16]. Accurate estimates and uncertainties of redshift measurements
from photometry have the potential to guide the use of higher spectral resolution instruments to study
sources of interest. Furthermore, accurate photometric models can aid the automation of identifying
source types and estimating physical characteristics of faintly observed sources in large photometric
surveys [14].
To jointly describe both resolutions of data, we directly model a quasar's latent SED and the process
by which it generates spectroscopic and photometric observations. Representing a quasar's SED as
a latent random measure, we describe a Bayesian inference procedure to compute the marginal probability distribution of a quasar's redshift given observed photometric fluxes and their uncertainties.
The following section provides relevant application and statistical background. Section 3 describes
our probabilistic model of SEDs and broadband photometric measurements. Section 4 outlines
our MCMC-based inference method for efficiently computing statistics of the posterior distribution. Section 5 presents redshift and SED predictions from photometric measurements, among other
model summaries, and a quantitative comparison between our method and two existing "photo-z" approaches.
We conclude with a discussion of directions for future work.
2 Background
The SEDs of most stars are roughly approximated by Planck's law for black body radiators and
stellar atmosphere models [6]. Quasars, on the other hand, have complicated SEDs characterized by
some salient features, such as the Lyman-α forest, which is the absorption of light at many wavelengths from neutral hydrogen gas between the earth and the quasar [19]. One of the most interesting
properties of quasars (and galaxies) conveyed by the SED is redshift, which gives us insight into an
object's distance and age. Redshift affects our observation of SEDs by "stretching" the wavelengths,
λ ∈ Λ, of the quasar's rest frame SED, skewing toward longer (redder) wavelengths. Denoting the
rest frame SED of a quasar n as a function, f_n^{(rest)} : Λ → R₊, the effect of redshift with value z_n
Figure 2: Spectroscopic measurements of multiple quasars at different redshifts, z. The upper graph
depicts the sample spectrograph in the observation frame, intuitively thought of as "stretched" by a
factor (1 + z). The lower figure depicts the "de-redshifted" (rest frame) version of the same quasar
spectra, The two lines show the corresponding locations of the characteristic peak in each reference
frame. Note that the x-axis has been changed to ease the visualization - the transformation is much
more dramatic. The appearance of translation is due to missing data; we don?t observe SED samples
outside the range 3,500-10,500 ?.
(typically between 0 and 7) on the observation-frame SED is described by the relationship
f_n^{(obs)}(λ) = f_n^{(rest)}( λ / (1 + z_n) ) .   (1)
Some observed quasar spectra and their "de-redshifted" rest frame spectra are depicted in Figure 2.
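In code, the relationship in Eq. (1) amounts to a rescaling of the wavelength axis, as in the following sketch.

```python
import numpy as np

def to_rest_frame(obs_wavelengths, z):
    """Map observation-frame wavelengths back to the rest frame by
    dividing by (1 + z) ("de-redshifting")."""
    return np.asarray(obs_wavelengths) / (1.0 + z)

def observed_sed(f_rest, wavelengths, z):
    """Evaluate the observation-frame SED at `wavelengths` given a rest
    frame SED callable `f_rest`, per f_obs(lam) = f_rest(lam / (1 + z))."""
    return f_rest(np.asarray(wavelengths) / (1.0 + z))
```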
3 Model
This section describes our probabilistic model of spectroscopic and photometric observations.
Spectroscopic flux model The SED of a quasar is a non-negative function f : Λ → R₊, where Λ
denotes the range of wavelengths and R₊ are non-negative real numbers representing flux density.
Our model specifies a quasar's rest frame SED as a latent random function. Quasar SEDs are highly
structured, and we model this structure by imposing the assumption that each SED is a convex
mixture of K latent, positive basis functions. The model assumes there are a small number (K) of
latent features or characteristics and that each quasar can be described by a short vector of mixing
weights over these features.
We place a normalized log-Gaussian process prior on each of these basis functions (described in
supplementary material). The generative procedure for quasar spectra begins with a shared basis
φ_k(λ) ∼ GP(0, K_θ)  i.i.d. for k = 1, . . . , K,
B_k(λ) = exp(φ_k(λ)) / ∫_Λ exp(φ_k(λ)) dλ ,   (2)
where K_θ is the kernel and B_k is the exponentiated and normalized version of φ_k.
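On a discrete wavelength grid, a draw from this normalized log-Gaussian process prior can be sketched as follows; `kernel` is a hypothetical covariance function standing in for K_θ.

```python
import numpy as np

def sample_basis(wavelength_grid, K, kernel, jitter=1e-8):
    """Draw phi_k ~ GP(0, K_theta) on a grid, exponentiate for
    positivity, and normalize each curve to integrate to one.
    Example kernel: kernel = lambda a, b: np.exp(-(a - b)**2 / (2 * ell**2))."""
    lam = np.asarray(wavelength_grid)
    cov = kernel(lam[:, None], lam[None, :]) + jitter * np.eye(len(lam))
    L = np.linalg.cholesky(cov)
    B = []
    for _ in range(K):
        phi = L @ np.random.randn(len(lam))        # GP draw
        unnorm = np.exp(phi)                       # positivity
        B.append(unnorm / np.trapz(unnorm, lam))   # unit integral
    return np.stack(B)                             # (K, P) basis
```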
For each quasar n,
w_n ∼ p(w)  s.t. Σ_k w_{n,k} = 1,    m_n ∼ p(m)  s.t. m_n > 0,    z_n ∼ p(z),   (3)
where w_n mixes over the latent types, m_n is the apparent brightness, z_n is the quasar's redshift,
and distributions p(w), p(m), and p(z) are priors to be specified later. As each positive SED basis
function, B_k, is normalized to integrate to one, and each quasar's weight vector w_n also sums to
one, the latent normalized SED is then constructed as
f_n^{(rest)}(λ) = Σ_k w_{n,k} B_k(λ)   (4)
and we define the unnormalized SED f̃_n^{(rest)}(λ) ≡ m_n · f_n^{(rest)}(λ). This parameterization admits the
interpretation of f_n^{(rest)}(λ) as a probability density scaled by m_n. This interpretation allows us to
Figure 3: Graphical model representation of the joint photometry and spectroscopy model. The left shaded variables represent spectroscopically measured samples and their variances. The right shaded variables represent photometrically measured fluxes and their variances. The upper box represents the latent basis, with GP prior parameters ℓ and θ. Note that N_spec + N_photo replicates of w_n, m_n and z_n are instantiated. [Diagram nodes: B_k (plate over K); x_{n,λ} with σ²_{n,λ} (plate over N_spec); y_{n,b} with σ²_{n,b} (plates over b ∈ {u, g, r, i, z} and N_photo); w_n, m_n, z_n shared.]
separate out the apparent brightness, which is a function of distance and overall luminosity, from the
SED itself, which carries information pertinent to the estimand of interest, redshift.
For each quasar with spectroscopic data, we observe noisy samples of the redshifted and scaled spectral energy distribution at a grid of P wavelengths λ ∈ {λ_1, . . . , λ_P}. For quasar n, our observation
frame samples are conditionally distributed as
x_{n,λ} | z_n, w_n, {B_k} ∼ N( f̃_n^{(rest)}( λ / (1 + z_n) ), σ²_{n,λ} ),  independently for each λ,   (5)
where σ²_{n,λ} is the known measurement variance from the instruments used to make the observations.
The BOSS spectra (and our rest frame basis) are stored in units of 10⁻¹⁷ · erg · cm⁻² · s⁻¹ · Å⁻¹.
Photometric flux model Photometric data summarize the amount of energy observed over a
large swath of the wavelength spectrum. Roughly, a photometric flux measures (proportionally) the
number of photons recorded by the instrument over the duration of an exposure, filtered by a band-specific sensitivity curve. We express flux in nanomaggies [15]. Photometric fluxes and measurement error derived from broadband imagery have been computed directly from pixels [17]. For each
quasar n, SDSS photometric data are measured in five bands, b ∈ {u, g, r, i, z}, yielding a vector of
five flux values and their variances, y_n and σ²_{n,b}. Each band, b, measures photon observations at each
wavelength in proportion to a known filter sensitivity, S_b(λ). The filter sensitivities for the SDSS
ugriz bands are depicted in Figure 1, with an example observation frame quasar SED overlaid. The
actual measured fluxes can be computed by integrating the full object's spectrum, m_n · f_n^{(obs)}(λ),
against the filters. For a band b ∈ {u, g, r, i, z},
μ_b(f_n^{(rest)}, z_n) = ∫ f_n^{(obs)}(λ) S_b(λ) C(λ) dλ ,   (6)
where C(λ) is a conversion factor to go from the units of f_n(λ) to nanomaggies (details of this
conversion are available in the supplementary material). The function μ_b takes in a rest frame SED
and a redshift (z) and maps them to the observed b-band specific flux. The results of this projection onto
SDSS bands are modeled as independent Gaussian random variables with known variance
y_{n,b} | f_n^{(rest)}, z_n ∼ N( μ_b(f_n^{(rest)}, z_n), σ²_{n,b} ),  independently for each b.   (7)
Conditioned on the basis, B = {B_k}, we can represent f_n^{(rest)} with a low-dimensional vector. Note
that f_n^{(rest)} is a function of w_n, z_n, m_n, and B (see Equation 4), so we can think of μ_b as a function
of w_n, z_n, m_n, and B. We overload notation, and re-write the conditional likelihood of photometric
observations as
y_{n,b} | w_n, z_n, m_n, B ∼ N( μ_b(w_n, z_n, m_n, B), σ²_{n,b} ).   (8)
Intuitively, what gives us statistical traction in inferring the posterior distribution over zn is the structure learned in the latent basis, B, and weights w, i.e., the features that correspond to distinguishing
bumps and dips in the SED.
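Putting Eqs. (1), (4) and (6) together, the band flux μ_b can be computed numerically as in the following sketch, where B holds the basis evaluated on a wavelength grid `lam`, and `S_b` and `C` are the filter sensitivity and conversion curves on the same grid.

```python
import numpy as np

def band_flux(w_n, m_n, z_n, B, lam, S_b, C):
    """Numerical sketch of mu_b: build the rest frame SED as a mixture
    of basis rows B (shape (K, P)), scale by brightness m_n, redshift
    it via Eq. (1), and integrate against the filter curve (Eq. 6)."""
    f_rest = w_n @ B                                   # Eq. (4), length P
    # evaluate the observation-frame SED on `lam`: f_obs(l) = m * f_rest(l / (1+z))
    f_obs = m_n * np.interp(lam / (1.0 + z_n), lam, f_rest,
                            left=0.0, right=0.0)
    return np.trapz(f_obs * S_b * C, lam)              # Eq. (6)
```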
Note on priors For photometric weight and redshift inference, we use a flat prior on z_n ∈ [0, 8],
and empirically derived priors for mn and wn , from the sample of spectroscopically measured
sources. Choice of priors is described in the supplementary material.
4 Inference
Basis estimation For computational tractability, we first compute a maximum a posteriori (MAP)
estimate of the basis, B_map, to condition on. Using the spectroscopic data, {x_{n,λ}, σ²_{n,λ}, z_n}, we compute a discretized MAP estimate of {B_k} by directly optimizing the unnormalized (log) posterior
implied by the likelihood in Equation 5, the GP prior over B, and diffuse priors over w_n and m_n,
p({w_n, m_n}, {B_k} | {x_{n,λ}, σ²_{n,λ}, z_n}) ∝ ∏_{n=1}^{N} p(x_{n,λ} | z_n, w_n, m_n, {B_k}) p({B_k}) p(w_n) p(m_n) .   (9)
We use gradient descent with momentum and L-BFGS [12] directly on the parameters φ_k, the
(unconstrained) mixing weight parameters, and log(m_n) for the N_spec spectroscopically measured quasars. Gradients were automatically computed
using autograd [9]. Following [18], we first resample the observed spectra into a common rest
frame grid, λ_0 = (λ_{0,1}, . . . , λ_{0,V}), easing computation of the likelihood. We note that although our
model places a full distribution over B_k, efficiently integrating out those parameters is left for future
work.
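A minimal sketch of this MAP objective and its autograd gradient is given below; it assumes a uniform rest frame grid and replaces the GP prior term with a simple quadratic penalty for brevity.

```python
import autograd.numpy as np
from autograd import grad

def neg_log_posterior(params, X, sigma2, dlam):
    """Unnormalized MAP objective in the spirit of Eq. (9), with spectra
    X (N, V) resampled to a uniform rest frame grid with spacing dlam.
    `params` packs the log-basis phi (K, V), unconstrained weight
    logits (N, K), and log brightnesses (N,)."""
    phi, logits, log_m = params
    B = np.exp(phi) / (np.exp(phi).sum(axis=1, keepdims=True) * dlam)  # Eq. (2)
    w = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)     # simplex
    f = np.exp(log_m)[:, None] * np.dot(w, B)                          # Eq. (4)
    log_lik = -0.5 * np.sum((X - f) ** 2 / sigma2)                     # Eq. (5)
    log_prior = -0.5 * np.sum(phi ** 2)   # quadratic stand-in for the GP prior
    return -(log_lik + log_prior)

objective_grad = grad(neg_log_posterior)  # feed to momentum / L-BFGS steps
```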
Sampling w_n, m_n, and z_n The Bayesian "photo-z" task requires that we compute posterior
marginal distributions of z, integrating out w and m. To compute these distributions, we construct a Markov chain over the state space including z, w, and m that leaves the target posterior
distribution invariant. We treat the inference problem for each photometrically measured quasar,
y_n, independently. Conditioned on a basis B_k, k = 1, . . . , K, our goal is to draw posterior samples
of w_n, m_n and z_n for each n. The unnormalized posterior can be expressed
p(wn , mn , zn |yn , B) ? p(yn |wn , mn , zn , B)p(wn , mn , zn )
(10)
where the left likelihood term is defined in Equation 8. Note that due to analytic intractability, we
R (obs)
numerically integrate expressions involving ? fn (?)d? and Sb (?). Because the observation yn
can often be well explained by various redshifts and weight settings, the resulting marginal posterior, p(zn |X, yn , B), is often multi-modal, with regions of near zero probability between modes.
Intuitively, this is due to the information loss in the SED-to-photometric flux integration step.
This multi-modal property is problematic for many standard MCMC techniques. Single chain MCMC methods have to jump between modes or travel through a region of near-zero probability, resulting in slow mixing. To combat this effect, we use parallel tempering [4], a method that is well-suited to constructing Markov chains on multi-modal distributions. Parallel tempering instantiates C independent chains, each sampling from the target distribution raised to an inverse temperature. Given a target distribution, π(x), the constructed chains sample π_c(x) ∝ π(x)^(1/T_c), where T_c controls how "hot" (i.e., how close to uniform) each chain is. At each iteration, swaps between chains are proposed and accepted with a standard Metropolis-Hastings acceptance probability

    Pr(accept swap c, c′) = [π_c(x_{c′}) π_{c′}(x_c)] / [π_c(x_c) π_{c′}(x_{c′})] .        (11)

Within each chain, we use component-wise slice sampling [11] to generate samples that leave each chain's distribution invariant. Slice sampling is a (relatively) tuning-free MCMC method, a convenient property when sampling from thousands of independent posteriors. We found parallel tempering to be essential for convincing posterior simulations. MCMC diagnostics and comparisons to single-chain samplers are available in the supplemental material.
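The swap rule in Equation 11 amounts to only a few lines of code. Below is a minimal, self-contained sketch of parallel tempering on a toy bimodal target; a random-walk Metropolis update stands in for the paper's component-wise slice sampler, and all names are ours.

import numpy as np

rng = np.random.RandomState(1)
log_pi = lambda x: np.logaddexp(-0.5 * (x - 3) ** 2, -0.5 * (x + 3) ** 2)  # bimodal target

temps = np.array([1.0, 2.0, 4.0, 8.0])     # temperatures T_c for C = 4 chains
x = rng.randn(len(temps))                  # current state of each chain
samples = []
for it in range(10000):
    # Within-chain move; chain c targets pi_c(x), proportional to pi(x)^(1/T_c).
    for c, T in enumerate(temps):
        prop = x[c] + rng.randn()
        if np.log(rng.rand()) < (log_pi(prop) - log_pi(x[c])) / T:
            x[c] = prop
    # Propose swapping a random adjacent pair of chains (Eq. 11).
    c = rng.randint(len(temps) - 1)
    log_accept = (log_pi(x[c + 1]) - log_pi(x[c])) / temps[c] \
               + (log_pi(x[c]) - log_pi(x[c + 1])) / temps[c + 1]
    if np.log(rng.rand()) < log_accept:
        x[c], x[c + 1] = x[c + 1], x[c]
    samples.append(x[0])                   # the T = 1 chain targets pi itself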
5
Experiments and Results
We conduct three experiments to test our model, where each experiment measures redshift predictive accuracy for a different train/test split of spectroscopically measured quasars from the DR10QSO dataset [13] with confirmed redshifts in the range z ∈ (0.01, 5.85). Our experiments split train/test in the following ways: (i) randomly, (ii) by r-band fluxes, (iii) by redshift values. In split (ii), we train on the brightest 90% of quasars, and test on a subset of the remaining. Split (iii) takes the lowest 85% of quasars as training data, and a subset of the brightest 15% as test cases. Splits (ii)
Figure 4: Top: MAP estimate of the latent bases B = {B_k}_{k=1}^K. Note the different ranges of the x-axis (wavelength). Each basis function distributes its mass across different regions of the spectrum to explain different salient features of quasar spectra in the rest frame. Bottom: model reconstruction of a training-sample SED.
and (iii) are intended to test the method's robustness to different training and testing distributions, mimicking the discovery of fainter and farther sources. For each split, we find a MAP estimate of the basis, B_1, . . . , B_K, and weights, w_n, to use as a prior for photometric inference. For computational purposes, we limit our training sample to a random subsample of 2,000 quasars. The following sections outline the resulting model fit and inferred SEDs and redshifts.
Basis validation. We examined multiple choices of K using out-of-sample likelihood on a validation set. In the following experiments we set K = 4, which balances generalizability and computational tradeoffs. Discussion of this validation is provided in the supplementary material.
SED Basis. We depict a MAP estimate of B_1, . . . , B_K in Figure 4. Our basis decomposition enjoys the benefit of physical interpretability due to our density-estimate formulation of the problem. Basis B_4 places mass on the Lyman-α peak around 1,216 Å, allowing the model to capture the co-occurrence of more peaked SEDs with a bump around 1,550 Å. Basis B_1 captures the Hα emission line at around 6,500 Å. Because of the flexible nonparametric priors on B_k, our model is able to automatically learn these features from data. The positivity of the basis and weights distinguishes our model from PCA-based methods, which sacrifice physical interpretability.
Photometric measurements. For each test quasar, we construct an 8-chain parallel tempering sampler, run it for 8,000 iterations, and discard the first 4,000 samples as burn-in. Given posterior samples of z_n, we take the posterior mean as a point estimate. Figure 5 compares the posterior mean to spectroscopic measurements (for three different data-split experiments), where the gray lines denote posterior sample quantiles. In general there is a strong correspondence between spectroscopically measured redshift and our posterior estimate. In cases where the posterior mean is off, our distribution often covers the spectroscopically confirmed value with probability mass. This is clear upon inspection of posterior marginal distributions that exhibit extreme multi-modal behavior. To combat this multi-modality, it is necessary to inject the model with more information to eliminate plausible hypotheses; this information could come from another measurement (e.g., a new photometric band), or from structured prior knowledge of the relationship between z_n, w_n, and m_n. Our method simply fits a mixture of Gaussians to the spectroscopically measured (w_n, m_n) sample to formulate a prior distribution. However, incorporating dependencies between z_n, w_n and m_n, similar to the XDQSOz technique, will be incorporated in future work.
5.1
Comparisons
We compare the performance of our redshift estimator with two recent photometric redshift estimators, XDQSOz [2] and a neural network [3]. The method in [2] is a conditional density estimator that discretizes the range of one flux band (the i-band) and fits a mixture of Gaussians to the joint distribution over the remaining fluxes and redshifts.
Figure 5: Comparison of spectroscopically (x-axis) and photometrically (y-axis) measured redshifts from the SED model for three different data splits. The left reflects a random selection of 4,000 quasars from the DR10QSO dataset. The right graph reflects a selection of 4,000 test quasars from the upper 15% (z_cutoff ≈ 2.7), where all training was done on lower redshifts. The red estimates are posterior means.
Figure 6: Left: inferred SEDs from photometric data. The black line is a smoothed approximation to the "true" SED using information from the full spectral data. The red line is a sample from the posterior, f_n^(obs)(λ) | X, y_n, B, which imputes the entire SED from only five flux measurements. Note that the bottom sample is from the left mode, which under-predicts redshift. Right: corresponding posterior predictive distributions, p(z_n | X, y_n, B). The black line marks the spectroscopically confirmed redshift; the red line marks the posterior mean. Note the difference in scale of the x-axis.
One disadvantage to this approach is that there is no physical significance to the mixture of Gaussians, and no model of the latent SED. Furthermore, the original method trains and tests the model on a pre-specified range of i-magnitudes, which is problematic when predicting redshifts on much brighter or dimmer stars. The regression approach from [3] employs a neural network with two hidden layers, with the SDSS fluxes as inputs. More features (e.g., more photometric bands) can be incorporated into all models, but we limit our experiments to the five SDSS bands for the sake of comparison. Further detail on these two methods and a broader review of "photo-z" approaches are available in the supplementary material.
Average error and test distribution. We compute mean absolute error (MAE), mean absolute percentage error (MAPE), and root mean square error (RMSE) to measure predictive performance. Table 1 compares prediction errors for the three different approaches (XD, NN, Spec). Our experiments show that accurate redshift measurements are attainable even when the distribution of the training set differs from that of the test set, by directly modeling the SED itself. Our method dramatically outperforms [2] and [3] in split (iii), particularly for very high redshift fluxes. We also note that our training set is derived from only 2,000 examples, whereas the training sets for XDQSOz and the neural network were ≈ 80,000 and ≈ 50,000 quasars, respectively. This shortcoming can be overcome with more sophisticated inference techniques for the non-negative basis.
                         MAE                    MAPE                   RMSE
split                 XD     NN     Spec     XD     NN     Spec     XD     NN     Spec
random (all)          0.359  0.773  0.485    0.293  0.533  0.430    0.519  0.974  0.808
flux (all)            0.308  0.483  0.497    0.188  0.283  0.339    0.461  0.660  0.886
redshift (all)        0.841  0.736  0.619    0.237  0.214  0.183    1.189  0.923  0.831
random (z > 2.35)     0.247  0.530  0.255    0.091  0.183  0.092    0.347  0.673  0.421
flux (z > 2.33)       0.292  0.399  0.326    0.108  0.143  0.124    0.421  0.550  0.531
redshift (z > 3.20)   1.327  1.149  0.806    0.357  0.317  0.226    1.623  1.306  0.997
random (z > 3.11)     0.171  0.418  0.289    0.050  0.117  0.082    0.278  0.540  0.529
flux (z > 2.86)       0.373  0.493  0.334    0.112  0.144  0.103    0.606  0.693  0.643
redshift (z > 3.80)   2.389  2.348  0.829    0.582  0.569  0.198    2.504  2.405  1.108
Table 1: Prediction error for three train-test splits, (i) random, (ii) flux-based, (iii) redshift-based,
corresponding to XDQSOz [2] (XD), the neural network approach [3] (NN), our SED-based model
(Spec). The middle and lowest sections correspond to test redshifts in the upper 50% and 10%,
respectively. The XDQSOz and NN models were trained on (roughly) 80,000 and 50,000 example
quasars, respectively, while the Spec models were trained on 2,000.
Despite this, the SED-based predictions are comparable. Additionally, because we are directly modeling the latent SED, our method admits a posterior estimate of the entire SED. Figure 6 displays posterior SED samples and their corresponding redshift marginals for test-set quasars inferred from only SDSS photometric measurements.
6
Discussion
We have presented a generative model of two sources of information at very different spectral resolutions to form an estimate of the latent spectral energy distribution of quasars. We also described
an efficient MCMC-based inference algorithm for computing posterior statistics given photometric
observations. Our model accurately predicts and characterizes uncertainty about redshifts from only
photometric observations and a small number of separate spectroscopic examples. Moreover, we
showed that we can make reasonable estimates of the unobserved SED itself, from which we can
make inferences about other physical properties informed by the full SED.
We see multiple avenues of future work. Firstly, we can extend the model of SEDs to incorporate
more expert knowledge. One such augmentation would include a fixed collection of features, curated by an expert, corresponding to physical properties already known about a class of sources.
Furthermore, we can also extend our model to directly incorporate photometric pixel observations, as opposed to preprocessed flux measurements. Secondly, we note that our method is more computationally burdensome than XDQSOz and the neural network approach. Another avenue of future work is to find accurate approximations of these posterior distributions that are cheaper to compute. Lastly, we can extend our methodology to galaxies, whose SEDs can be quite complicated: galaxy observations have spatial extent, which complicates their SEDs. The combination of SED
and spatial appearance modeling and computationally efficient inference procedures is a promising
route toward the automatic characterization of millions of sources from the enormous amounts of
data available in massive photometric surveys.
Acknowledgments
The authors would like to thank Matthew Hoffman and members of the HIPS lab for helpful discussions. This work is supported by the Applied Mathematics Program within the Office of Science
Advanced Scientific Computing Research of the U.S. Department of Energy under contract No.
DE-AC02-05CH11231. This work used resources of the National Energy Research Scientific Computing Center (NERSC). We would like to thank Tina Butler, Tina Declerck and Yushu Yao for their
assistance.
References
[1] Shadab Alam, Franco D Albareti, Carlos Allende Prieto, F Anders, Scott F Anderson, Brett H
Andrews, Eric Armengaud, Éric Aubourg, Stephen Bailey, Julian E Bautista, et al. The
eleventh and twelfth data releases of the Sloan digital sky survey: Final data from SDSS-III.
arXiv preprint arXiv:1501.00963, 2015.
[2] Jo Bovy, Adam D Myers, Joseph F Hennawi, David W Hogg, Richard G McMahon, David
Schiminovich, Erin S Sheldon, Jon Brinkmann, Donald P Schneider, and Benjamin A Weaver.
Photometric redshifts and quasar probabilities from a single, data-driven generative model. The
Astrophysical Journal, 749(1):41, 2012.
[3] M Brescia, S Cavuoti, R D'Abrusco, G Longo, and A Mercurio. Photometric redshifts for
quasars in multi-band surveys. The Astrophysical Journal, 772(2):140, 2013.
[4] Steve Brooks, Andrew Gelman, Galin Jones, and Xiao-Li Meng. Handbook of Markov Chain
Monte Carlo. CRC press, 2011.
[5] Kyle S Dawson, David J Schlegel, Christopher P Ahn, Scott F Anderson, Éric Aubourg,
Stephen Bailey, Robert H Barkhouser, Julian E Bautista, Alessandra Beifiori, Andreas A
Berlind, et al. The baryon oscillation spectroscopic survey of SDSS-III. The Astronomical
Journal, 145(1):10, 2013.
[6] RO Gray, PW Graham, and SR Hoyt. The physical basis of luminosity classification in the late A-, F-, and early G-type stars. II. Basic parameters of program stars and the role of microturbulence. The Astronomical Journal, 121(4):2159, 2001.
[7] Edward Harrison. The redshift-distance and velocity-distance laws. The Astrophysical Journal,
403:28–31, 1993.
[8] David W Hogg. Distance measures in cosmology. arXiv preprint astro-ph/9905116, 1999.
[9] Dougal Maclaurin, David Duvenaud, and Ryan P. Adams. Autograd: Reverse-mode differentiation of native python. ICML workshop on Automatic Machine Learning, 2015.
[10] D Christopher Martin, James Fanson, David Schiminovich, Patrick Morrissey, Peter G Friedman, Tom A Barlow, Tim Conrow, Robert Grange, Patrick N Jelinksy, Bruno Millard, et al.
The galaxy evolution explorer: A space ultraviolet survey mission. The Astrophysical Journal
Letters, 619(1), 2005.
[11] Radford M Neal. Slice sampling. Annals of Statistics, pages 705–741, 2003.
[12] Jorge Nocedal. Updating quasi-Newton matrices with limited storage. Mathematics of Computation, 35(151):773–782, 1980.
[13] Isabelle Pâris, Patrick Petitjean, Éric Aubourg, Nicholas P Ross, Adam D Myers, Alina
Streblyanska, Stephen Bailey, Patrick B Hall, Michael A Strauss, Scott F Anderson, et al.
The Sloan digital sky survey quasar catalog: tenth data release. Astronomy & Astrophysics,
563:A54, 2014.
[14] Jeffrey Regier, Andrew Miller, Jon McAuliffe, Ryan Adams, Matt Hoffman, Dustin Lang,
David Schlegel, and Prabhat. Celeste: Variational inference for a generative model of astronomical images. In Proceedings of The 32nd International Conference on Machine Learning,
2015.
[15] SDSSIII. Measures of flux and magnitude. 2013. https://www.sdss3.org/dr8/algorithms/magnitudes.php.
[16] Joseph Silk and Martin J Rees. Quasars and galaxy formation. Astronomy and Astrophysics,
1998.
[17] Chris Stoughton, Robert H Lupton, Mariangela Bernardi, Michael R Blanton, Scott Burles,
Francisco J Castander, AJ Connolly, Daniel J Eisenstein, Joshua A Frieman, GS Hennessy,
et al. Sloan digital sky survey: early data release. The Astronomical Journal, 123(1):485,
2002.
[18] Jakob Walcher, Brent Groves, Tamás Budavári, and Daniel Dale. Fitting the integrated spectral energy distributions of galaxies. Astrophysics and Space Science, 331(1):1–51, 2011.
[19] David H Weinberg, Romeel Davé, Neal Katz, and Juna A Kollmeier. The Lyman-alpha forest as a cosmological tool. Proceedings of the 13th Annual Astrophysics Conference in Maryland,
666, 2003.
5,481 | 5,961 | Neural Adaptive Sequential Monte Carlo
Shixiang Gu†‡, Zoubin Ghahramani†, Richard E. Turner†
† University of Cambridge, Department of Engineering, Cambridge, UK
‡ MPI for Intelligent Systems, Tübingen, Germany
sg717@cam.ac.uk, zoubin@eng.cam.ac.uk, ret26@cam.ac.uk
Abstract
Sequential Monte Carlo (SMC), or particle filtering, is a popular class of methods for sampling from an intractable target distribution using a sequence of simpler intermediate distributions. Like other importance sampling-based methods,
performance is critically dependent on the proposal distribution: a bad proposal
can lead to arbitrarily inaccurate estimates of the target distribution. This paper
presents a new method for automatically adapting the proposal using an approximation of the Kullback-Leibler divergence between the true posterior and the
proposal distribution. The method is very flexible, applicable to any parameterized proposal distribution and it supports online and batch variants. We use the
new framework to adapt powerful proposal distributions with rich parameterizations based upon neural networks leading to Neural Adaptive Sequential Monte
Carlo (NASMC). Experiments indicate that NASMC significantly improves inference in a non-linear state space model outperforming adaptive proposal methods
including the Extended Kalman and Unscented Particle Filters. Experiments also
indicate that improved inference translates into improved parameter learning when
NASMC is used as a subroutine of Particle Marginal Metropolis Hastings. Finally
we show that NASMC is able to train a latent variable recurrent neural network
(LV-RNN) achieving results that compete with the state-of-the-art for polyphonic music modelling. NASMC can be seen as bridging the gap between adaptive
SMC methods and the recent work in scalable, black-box variational inference.
1
Introduction
Sequential Monte Carlo (SMC) is a class of algorithms that draw samples from a target distribution
of interest by sampling from a series of simpler intermediate distributions. More specifically, the sequence constructs a proposal for importance sampling (IS) [1, 2]. SMC is particularly well-suited for
performing inference in non-linear dynamical models with hidden variables, since filtering naturally
decomposes into a sequence, and in many such cases it is the state-of-the-art inference method [2, 3].
Generally speaking, inference methods can be used as modules in parameter learning systems. SMC
has been used in such a way for both approximate maximum-likelihood parameter learning [4] and
in Bayesian approaches such as the recently developed Particle MCMC methods [3].
Critically, in common with any importance sampling method, the performance of SMC is strongly
dependent on the choice of the proposal distribution. If the proposal is not well-matched to the target distribution, then the method can produce samples that have low effective sample size and this
leads to Monte Carlo estimates that have pathologically high variance [1]. The SMC community
has developed approaches to mitigate these limitations such as resampling to improve particle diversity when the effective sample size is low [1] and applying MCMC transition kernels to improve
particle diversity [5, 2, 3]. A complementary line of research leverages distributional approximate
inference methods, such as the extended Kalman Filter and Unscented Kalman Filter, to construct
better proposals, leading to the Extended Kalman Particle Filter (EKPF) and Unscented Particle Filter (UPF) [5]. In general, however, the construction of good proposal distributions is still an open
question that severely limits the applicability of SMC methods.
This paper proposes a new gradient-based black-box adaptive SMC method that automatically tunes
flexible proposal distributions. The quality of a proposal distribution can be assessed using the (intractable) Kullback-Leibler (KL) divergence between the target distribution and the parametrized
proposal distribution. We approximate the derivatives of this objective using samples derived from
SMC. The framework is very general and tractably handles complex parametric proposal distributions. For example, here we use neural networks to carry out the parameterization, thereby leveraging
the large literature and efficient computational tools developed by this community. We demonstrate
that the method can efficiently learn good proposal distributions that significantly outperform existing adaptive proposal methods including the EKPF and UPF on standard benchmark models used
in the particle filter community. We show that improved performance of the SMC algorithm translates into improved mixing of the Particle Marginal Metropolis-Hastings (PMMH) sampler [3]. Finally, we
show that the method allows higher-dimensional and more complicated models to be accurately handled using SMC, such as those parametrized using neural networks (NN), that are challenging for traditional particle filtering methods.
The focus of this work is on improving SMC, but many of the ideas are inspired by the burgeoning
literature on approximate inference for unsupervised neural network models. These connections are
explored in section 6.
2
Sequential Monte Carlo
We begin by briefly reviewing two fundamental SMC algorithms, sequential importance sampling
(SIS) and sequential importance resampling (SIR). Consider a probabilistic model comprising (possibly multi-dimensional) hidden and observed states z_{1:T} and x_{1:T} respectively, whose joint distribution factorizes as p(z_{1:T}, x_{1:T}) = p(z_1) p(x_1 | z_1) ∏_{t=2}^{T} p(z_t | z_{1:t-1}) p(x_t | z_{1:t}, x_{1:t-1}). This
general form subsumes common state-space models, such as Hidden Markov Models (HMMs), as
well as non-Markovian models for the hidden state, such as Gaussian processes.
The goal of the sequential importance sampler is to approximate the posterior distribution over the hidden state sequence, p(z_{1:T} | x_{1:T}) ≈ ∑_{n=1}^{N} w̃_t^(n) δ(z_{1:T} - z_{1:T}^(n)), through a weighted set of N sampled trajectories drawn from a simpler proposal distribution {z_{1:T}^(n)}_{n=1:N} ∼ q(z_{1:T} | x_{1:T}). Any form of proposal distribution can be used in principle, but a particularly convenient one takes the same factorisation as the true posterior, q(z_{1:T} | x_{1:T}) = q(z_1 | x_1) ∏_{t=2}^{T} q(z_t | z_{1:t-1}, x_{1:t}), with filtering dependence on x. A short derivation (see supplementary material) then shows that the
normalized importance weights are defined by a recursion:

    w(z_{1:T}^(n)) = p(z_{1:T}^(n), x_{1:T}) / q(z_{1:T}^(n) | x_{1:T})
                   = w(z_{1:T-1}^(n)) · [ p(z_T^(n) | z_{1:T-1}^(n)) p(x_T | z_{1:T}^(n), x_{1:T-1}) / q(z_T^(n) | z_{1:T-1}^(n), x_{1:T}) ] ,

    w̃(z_{1:T}^(n)) = w(z_{1:T}^(n)) / ∑_n w(z_{1:T}^(n)) .
SIS is elegant as the samples and weights can be computed in sequential fashion using a single forward pass. However, a naïve implementation suffers from a severe pathology: the distribution of importance weights often becomes highly skewed as t increases, with many samples attaining very low weight. To alleviate the problem, the Sequential Importance Resampling (SIR) algorithm [1] adds an additional step that resamples z_t^(n) at time t from a multinomial distribution given by w̃(z_{1:t}^(n)) and gives the new particles equal weight (more advanced implementations resample only when the effective sample size falls below a threshold [2]). This replaces degenerate particles that have low weight with samples that have more substantial importance weights without violating the validity of
the method. SIR requires knowledge of the full trajectory of previous samples at each stage to draw
the samples and compute the importance weights. For this reason, when carrying out resampling,
each new particle needs to update its ancestry information. Letting a_{τ,t}^(n) represent the ancestral index of particle n at time t for state z_τ, where 1 ≤ τ ≤ t, and collecting these into the set A_t^(n) = {a_{1,t}^(n), ..., a_{t,t}^(n)}, where a_{τ-1,t}^(n) = a_{τ-1,τ-1}^(a_{τ,t}^(n)), the resampled trajectory can be denoted z_{1:t}^(A_t^(n)) = {z_{1:t-1}^(A_{t-1}^(n)), z_t^(n)}, where z_{1:t}^(A_t^(i)) = {z_1^(a_{1,t}^(i)), ..., z_t^(a_{t,t}^(i))}. Finally, to lighten notation, we use the shorthand
w_t^(n) = w(z_{1:t}^(n)) for the weights. Note that, when employing resampling, these do not depend on the previous weights w_{t-1}^(n) since resampling has given the previous particles uniform weight. The
implementation of SMC is given by Algorithm 1 in the supplementary material.
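For readers who prefer code, the following is a minimal, self-contained SIR routine in NumPy for a generic model supplied as callables. It resamples at every step, as in the basic algorithm above, and also accumulates the log-marginal-likelihood estimate used later in the paper; the function and argument names are ours.

import numpy as np

def sir_filter(T, init_sample, prop_sample, log_w_incr, N=100, seed=0):
    """Sequential importance resampling with a generic proposal.

    init_sample : N -> initial particles z_1 drawn from q(z_1 | x_1)
    prop_sample : (z_hist, t) -> particles z_t ~ q(z_t | z_{1:t-1}, x_{1:t})
    log_w_incr  : (z_hist, z_t, t) -> log incremental weight
                  log p(z_t | z_{1:t-1}) + log p(x_t | z_{1:t}, x_{1:t-1})
                  - log q(z_t | z_{1:t-1}, x_{1:t})
    """
    rng = np.random.RandomState(seed)
    z = np.empty((T, N))
    z[0] = init_sample(N)
    log_ml = 0.0
    for t in range(T):
        if t > 0:
            z[t] = prop_sample(z[:t], t)
        logw = log_w_incr(z[:t], z[t], t)
        log_ml += np.logaddexp.reduce(logw) - np.log(N)   # LML increment
        w = np.exp(logw - logw.max()); w /= w.sum()
        idx = rng.choice(N, size=N, p=w)                  # multinomial resampling
        z[:t + 1] = z[:t + 1][:, idx]                     # carry ancestry forward
    return z, log_ml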
2.1
The Critical Role of Proposal Distributions in Sequential Monte Carlo
The choice of the proposal distribution in SMC is critical. Even when employing the resampling
step, a poor proposal distribution will produce trajectories that, when traced backwards, quickly
collapse onto a single ancestor. Clearly this represents a poor approximation to the true posterior
p(z1:T |x1:T ). These effects can be mitigated by increasing the number of particles and/or applying
more complex additional MCMC moves [5, 2], but these strategies increase the computational cost.
The conclusion is that the proposal should be chosen with care. The optimal choice for an unconstrained proposal that has access to all of the observed data at all times is the intractable posterior
distribution q_φ(z_{1:T} | x_{1:T}) = p_θ(z_{1:T} | x_{1:T}). Given the restrictions imposed by the factorization, this becomes q(z_t | z_{1:t-1}, x_{1:t}) = p(z_t | z_{1:t-1}, x_{1:t}), which is still typically intractable. The bootstrap filter instead uses the prior q(z_t | z_{1:t-1}, x_{1:t}) = p(z_t | z_{1:t-1}, x_{1:t-1}), which is often tractable, but fails to incorporate information from the current observation x_t. A halfway house employs distributional approximate inference techniques to approximate p(z_t | z_{1:t-1}, x_{1:t}). Examples include the EKPF and UPF [5]. However, these methods suffer from three main problems. First,
the extended and unscented Kalman Filter from which these methods are derived are known to be
inaccurate and poorly behaved for many problems outside of the SMC setting [6]. Second, these
approximations must be applied on a sample by sample basis, leading to significant additional computational overhead. Third, neither approximation is tuned using an SMC-relevant criterion. In the
next section we introduce a new method for adapting the proposal that addresses these limitations.
3
Adapting Proposals by Descending the Inclusive KL Divergence
In this work the quality of the proposal distribution will be optimized using the
inclusive KL-divergence between the true posterior distribution and the proposal,
KL[p_θ(z_{1:T} | x_{1:T}) || q_φ(z_{1:T} | x_{1:T})]. (Parameters are made explicit since we will shortly be interested in both adapting the proposal φ and learning the model θ.) This objective is chosen for
four main reasons. First, this is a direct measure of the quality of the proposal, unlike those typically
used such as effective sample size. Second, if the true posterior lies in the class of distributions
attainable by the proposal family then the objective has a global optimum at this point. Third, if
the true posterior does not lie within this class, then this KL divergence tends to find proposal
distributions that have higher entropy than the original which is advantageous for importance
sampling (the exclusive KL is unsuitable for this reason [7]). Fourth, the derivative of the objective
can be approximated efficiently using a sample based approximation that will now be described.
The gradient of the negative KL divergence with respect to the parameters of the proposal distribution takes a simple form,

    -(∂/∂φ) KL[p_θ(z_{1:T} | x_{1:T}) || q_φ(z_{1:T} | x_{1:T})] = ∫ p_θ(z_{1:T} | x_{1:T}) (∂/∂φ) log q_φ(z_{1:T} | x_{1:T}) dz_{1:T} .
The expectation over the posterior can be approximated using samples from SMC. One option would be to use the weighted sample trajectories at the final time-step of SMC, but, although asymptotically unbiased, such an estimator would have high variance due to the collapse of the trajectories. An alternative that reduces variance at the cost of introducing some bias uses the intermediate ancestral trees, i.e. a filtering approximation (see the supplementary material for details),
    -(∂/∂φ) KL[p_θ(z_{1:T} | x_{1:T}) || q_φ(z_{1:T} | x_{1:T})] ≈ ∑_t ∑_n w̃_t^(n) (∂/∂φ) log q_φ(z_t^(n) | x_{1:t}, z_{1:t-1}^(A_{t-1}^(n))) .        (1)
The simplicity of the proposed approach brings with it several advantages and opportunities.
Online and batch variants. Since the derivatives distribute over time, it is trivial to apply this
update in an online way e.g. updating the proposal distribution every time-step. Alternatively, when
learning parameters in a batch setting, it might be more appropriate to update the proposal parameters after making a full forward pass of SMC. Conveniently, when performing approximate
maximum-likelihood learning, the gradient update for the model parameters θ can be efficiently approximated using the same sample particles from SMC (see supplementary material and Algorithm 1). A similar derivation for maximum likelihood learning is also discussed in [4].

    (∂/∂θ) log p_θ(x_{1:T}) ≈ ∑_t ∑_n w̃_t^(n) (∂/∂θ) log p_θ(x_t, z_t^(n) | x_{1:t-1}, z_{1:t-1}^(A_{t-1}^(n))) .        (2)
Algorithm 1 Stochastic Gradient Adaptive SMC (batch inference and learning variants)
Require: proposal: q_φ, model: p_θ, observations: X = {x_{1:T_j}}_{j=1:M}, number of particles: N
repeat
    {x_{1:T_j}^(j)}_{j=1:m} ← NextMiniBatch(X)
    {z_{1:t}^(i,j), w̃_t^(i,j)}_{i=1:N, j=1:m, t=1:T_j} ← SMC(φ, θ, N, {x_{1:T_j}^(j)}_{j=1:m})
    Δφ = ∑_j ∑_{t=1}^{T_j} ∑_i w̃_t^(i,j) (∂/∂φ) log q_φ(z_t^(i,j) | x_{1:t}^(j), z_{1:t-1}^(A_{t-1}^(i,j)))
    Δθ = ∑_j ∑_{t=1}^{T_j} ∑_i w̃_t^(i,j) (∂/∂θ) log p_θ(x_t^(j), z_t^(i,j) | x_{1:t-1}^(j), z_{1:t-1}^(A_{t-1}^(i,j)))   (optional)
    φ ← Optimize(φ, Δφ)
    θ ← Optimize(θ, Δθ)   (optional)
until convergence
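To make Algorithm 1 concrete, the snippet below runs NASMC-style proposal updates on a toy linear-Gaussian state-space model, with a Gaussian proposal q_φ(z_t | z_{t-1}, x_t) = N(a·z_{t-1} + b·x_t, s²) whose score is available in closed form. It is our own illustration under stated assumptions: plain stochastic gradient ascent stands in for Optimize, and the model parameters are held fixed.

import numpy as np

rng = np.random.RandomState(0)
T, N = 50, 100
# Toy model: z_t = 0.9 z_{t-1} + v_t (unit variance), x_t = z_t + e_t (variance 0.25).
z_true, x = np.zeros(T), np.zeros(T)
for t in range(T):
    z_true[t] = 0.9 * (z_true[t - 1] if t else 0.0) + rng.randn()
    x[t] = z_true[t] + 0.5 * rng.randn()

phi = np.array([0.0, 0.0, 0.0])  # proposal: mean = a*z_prev + b*x_t, log-std = c

def nasmc_epoch(phi, lr=0.05):
    a, b, c = phi; s = np.exp(c)
    grad, z_prev = np.zeros(3), np.zeros(N)
    for t in range(T):
        mu = a * z_prev + b * x[t]
        z = mu + s * rng.randn(N)                       # sample the proposal
        logw = (-0.5 * (z - 0.9 * z_prev) ** 2          # log p(z_t | z_{t-1})
                - 0.5 * (x[t] - z) ** 2 / 0.25          # log p(x_t | z_t)
                + 0.5 * (z - mu) ** 2 / s ** 2 + c)     # - log q (up to constants)
        w = np.exp(logw - logw.max()); w /= w.sum()
        # Eq. (1): weighted score of the proposal density at time t.
        eps = (z - mu) / s ** 2
        grad += np.array([w @ (eps * z_prev), w @ (eps * x[t]),
                          w @ ((z - mu) ** 2 / s ** 2 - 1.0)])
        z_prev = z[rng.choice(N, size=N, p=w)]          # resample
    return phi + lr * grad / T

for epoch in range(100):
    phi = nasmc_epoch(phi)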
Efficiency of the adaptive proposal. In contrast to the EKPF and UPF, the new method employs an analytic function for propagation and does not require a costly particle-specific distributional approximation as an inner loop. Similarly, although the method bears similarity to the assumed-density filter (ADF) [8], which minimizes a (local) inclusive KL, the new method has the advantage of minimizing a global cost and does not require particle-specific moment matching.
Training complex proposal models. The adaptation method described above can be applied to any
parametric proposal distribution. Special cases have been previously treated by [9]. We propose
a related, but arguably more straightforward and general approach to proposal adaptation. In the
next section, we describe a rich family of proposal distributions, that go beyond previous work,
based upon neural networks. This approach enables adaptive SMC methods to make use of the rich
literature and optimization tools available from supervised learning.
Flexibility of training. One option is to train the proposal distribution using samples from SMC
derived from the observed data. However, this is not the only approach. For example, the proposal
could be trained using data sampled from the generative model instead, which might mitigate overfitting effects for small datasets. Similarly, the trained proposal does not need to be the one used to
generate the samples in the first place. The bootstrap filter or more complex variants can be used.
4
Flexible and Trainable Proposal Distributions Using Neural Networks
The proposed adaption method can be applied to any parametric proposal distribution. Here we
briefly describe how to utilize this flexibility to employ powerful neural network-based parameterizations that have recently shown excellent performance in supervised sequence learning tasks [10, 11].
Generally speaking, applications of these techniques to unsupervised sequence modeling settings is
an active research area that is still in its infancy [12] and this work opens a new avenue in this wider
research effort.
In a nutshell, the goal is to parameterize q_φ(z_t | z_{1:t-1}, x_{1:t}), the proposal's stochastic mapping from all previous hidden states z_{1:t-1} and all observations (up to and including the current observation) x_{1:t} to the current hidden state z_t, in a flexible, computationally efficient and trainable way. Here we use a class of functions called Long Short-Term Memory (LSTM) that define a deterministic mapping from an input sequence to an output sequence using parameter-efficient recurrent dynamics, and alleviate the common vanishing-gradient problem in recurrent neural networks [13, 10, 11]. The distributions q_φ(z_t | h_t) can be a mixture of Gaussians (a mixture density network (MDN) [14]) in which the mixing proportions, means and covariances are parameterised through another neural network (see the supplementary material for details on LSTM, MDN, and neural network architectures).
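As a rough sketch of this parameterization, the snippet below maps an LSTM hidden state h_t to mixture-of-Gaussians proposal parameters through a single linear layer and samples z_t from the resulting MDN. The layer shapes and names are our own assumptions, not the paper's exact architecture.

import numpy as np

def mdn_head(h_t, W, b, K, d):
    """Map an LSTM hidden state h_t to MDN proposal parameters.

    W, b parameterize one linear layer; K mixture components over a
    d-dimensional latent z_t. The output packs logits, means, log-stds.
    """
    out = W @ h_t + b                                    # shape: K + 2*K*d
    logits = out[:K]
    pi = np.exp(logits - logits.max()); pi /= pi.sum()   # mixing proportions
    mu = out[K:K + K * d].reshape(K, d)                  # component means
    sigma = np.exp(out[K + K * d:].reshape(K, d))        # component std devs
    return pi, mu, sigma

def sample_proposal(h_t, W, b, K, d, rng):
    pi, mu, sigma = mdn_head(h_t, W, b, K, d)
    k = rng.choice(K, p=pi)                  # pick a mixture component
    return mu[k] + sigma[k] * rng.randn(d)   # z_t ~ q(z_t | h_t)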
5
Experiments
The goal of the experiments is threefold. First, to evaluate the performance of the adaptive method
for inference on standard benchmarks used by the SMC community with known ground truth. Second, to evaluate the performance when SMC is used as an inner loop of a learning algorithm. Again
we use an example with known ground truth. Third, to apply SMC learning to complex models that
would normally be challenging for SMC comparing to the state-of-the-art in approximate inference.
One way of assessing the success of the proposed method would be to evaluate
KL[p(z1:T |x1:T )||q(z1:T |x1:T )]. However, this quantity is hard to accurately compute. Instead
we use a number of other metrics. For the experiments where ground truth states z1:T are known
we can evaluate the root mean square error (RMSE) between the approximate posterior mean of the latent variables (z̄_t) and the true value, RMSE(z_{1:T}, z̄_{1:T}) = ((1/T) ∑_t (z_t - z̄_t)²)^(1/2). More generally, the estimate of the log-marginal likelihood (LML = log p(x_{1:T}) = ∑_t log p(x_t | x_{1:t-1}) ≈ ∑_t log((1/N) ∑_n w_t^(n))) and its variance is also indicative of performance. Finally, we also employ a common metric called the effective sample size (ESS) to measure the effectiveness of our SMC method. The ESS of the particles at time t is given by ESS_t = (∑_n (w̃_t^(n))²)^(-1). If q(z_{1:T} | x_{1:T}) = p(z_{1:T} | x_{1:T}), the expected ESS is maximized and equals the number of particles (equivalently, the normalized importance weights are uniform). Note that ESS alone is not a sufficient metric, since it does not measure the absolute quality of samples, but rather the relative quality.
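Both diagnostics are one-liners given the per-step log-weights; a minimal version with our own helper names:

import numpy as np

def ess(logw):
    """Effective sample size from unnormalized log-weights at one time step."""
    w = np.exp(logw - logw.max()); w /= w.sum()
    return 1.0 / np.sum(w ** 2)

def lml_increment(logw):
    """Contribution of one step to the log-marginal-likelihood estimate."""
    return np.logaddexp.reduce(logw) - np.log(len(logw))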
5.1
Inference in a Benchmark Nonlinear State-Space Model
In order to evaluate the effectiveness of our adaptive SMC method, we tested our method on a
standard nonlinear state-space model often used to benchmark SMC algorithms [2, 3]. The model is
given by Eq. 3, where θ = (σ_v, σ_w). The posterior distribution p_θ(z_{1:T} | x_{1:T}) is highly multi-modal due to uncertainty about the signs of the latent states.

    p(z_t | z_{t-1}) = N(z_t; f(z_{t-1}, t), σ_v²) ,    p(z_1) = N(z_1; 0, 5) ,    p(x_t | z_t) = N(x_t; g(z_t), σ_w²) ,
    f(z_{t-1}, t) = z_{t-1}/2 + 25 z_{t-1}/(1 + z_{t-1}²) + 8 cos(1.2t) ,    g(z_t) = z_t²/20 .        (3)
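This benchmark is straightforward to reproduce. The sketch below simulates Eq. 3 and runs a bootstrap filter on it (proposing from the transition prior and weighting by the likelihood), which is the baseline the learned proposals are compared against; the variable names are ours.

import numpy as np

rng = np.random.RandomState(0)
T, N = 200, 500
sv, sw = np.sqrt(10.0), 1.0

f = lambda z, t: z / 2 + 25 * z / (1 + z ** 2) + 8 * np.cos(1.2 * t)
g = lambda z: z ** 2 / 20

# Simulate the model (Eq. 3).
z_true, x = np.zeros(T), np.zeros(T)
z_true[0] = np.sqrt(5) * rng.randn()
x[0] = g(z_true[0]) + sw * rng.randn()
for t in range(1, T):
    z_true[t] = f(z_true[t - 1], t) + sv * rng.randn()
    x[t] = g(z_true[t]) + sw * rng.randn()

# Bootstrap filter: propose from the prior, weight by the likelihood.
z = np.sqrt(5) * rng.randn(N)
post_mean = np.zeros(T)
for t in range(T):
    if t > 0:
        z = f(z, t) + sv * rng.randn(N)
    logw = -0.5 * (x[t] - g(z)) ** 2 / sw ** 2
    w = np.exp(logw - logw.max()); w /= w.sum()
    post_mean[t] = w @ z
    z = z[rng.choice(N, size=N, p=w)]    # multinomial resampling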
The experiments investigated how the new proposal adaptation method performed in comparison to
standard methods including the bootstrap filter, EKPF, and UPF. In particular, we were interested
in the following questions: Do rich multi-modal proposals improve inference? For this we compared
a Gaussian proposal with a diagonal Gaussian to a mixture density network with three components (MD-). Does a recurrent parameterization of the proposal help? For this we compared a non-recurrent
neural network with 100 hidden units (-NN-) to a recurrent neural network with 50 LSTM units (RNN-). Can injecting information about the prior dynamics into the proposal improve performance
(similar in spirit to [15] for variational methods)? To assess this, we parameterized proposals for v_t (the process noise) instead of z_t (-f-), and let the proposal have access to the prior dynamics f(z_{t-1}, t). For all experiments, the parameters in the non-linear state-space model were fixed to (σ_v, σ_w) = (√10, 1). Adaptation of the proposal was performed on 1000 samples from the generative process at each iteration. Results are summarized in Fig. 1 and Table 1 (see supplementary material for additional results). Average run times for the algorithms over a sequence of length 1000 were: 0.782s bootstrap, 12.1s EKPF, 41.4s UPF, 1.70s NN-NASMC, and 2.67s RNN-NASMC, where the EKPF and UPF implementations are provided by [5]; these numbers should only be taken as a guide, as the implementations had differing levels of acceleration.
The new adaptive proposal methods significantly outperform the bootstrap, EKPF, and UPF methods in terms of ESS, RMSE, and the variance of the LML estimates. The multi-modal proposal
outperforms a simple Gaussian proposal (compare RNN-MD-f to RNN-f) indicating multi-modal
proposals can improve performance. Moreover, the RNN outperforms the non-recurrent NN (compare RNN to NN). Although the proposal models can effectively learn the transition function, injecting information about the prior dynamics into the proposal does help (compare RNN-f to RNN).
Interestingly, there is no clear cut winner between the EKPF and UPF, although the UPF does return
LML estimates that have lower variance [5]. All methods converged to similar LMLs that were close
to the values computed using large numbers of particles indicating the implementations are correct.
Figure 1: Left: Box plots for LML estimates from iteration 200 to 1000. Right: Average ESS over
the first 1000 iterations.
            ESS (iter)          LML               RMSE
            mean    std      mean    std       mean    std
prior       36.66   0.25     -2957   148       3.266   0.578
EKPF        60.15   0.83     -2829   407       3.578   0.694
UPF         50.58   0.63     -2696    79       2.956   0.629
RNN         69.64   0.60     -2774    34       3.505   0.977
RNN-f       73.88   0.71     -2633    36       2.568   0.430
RNN-MD      69.25   1.04     -2636    40       2.612   0.472
RNN-MD-f    76.71   0.68     -2622    32       2.509   0.409
NN-MD       69.39   1.08     -2634    36       2.731   0.608
Table 1: Left, Middle: Average ESS and log marginal likelihood estimates over the last 400 iterations. Right: The RMSE over 100 new sequences with no further adaptation.
5.2
Inference in the Cart and Pole System
As a second and more physically meaningful system we considered a cart-pole system that consists
of an inverted pendulum that rests on a movable base [16]. The system was driven by a white noise
input. An ODE solver was used to simulate the system from its equations of motion. We considered
the problem of inferring the true position of the cart and orientation of the pendulum (along with
their derivatives and the input noise) from noisy measurements of the location of the tip of the pole.
The results are presented in Fig. 2. The system is significantly more intricate than the model in
Sec. 5.1, and does not directly admit the use of the EKPF or UPF. Our RNN-MD proposal model
successfully learns good proposals without any direct access to the prior dynamics.
Figure 2: Left: Normalized ESS over iterations. Middle, Right: Posterior mean vs. ground-truth for x, the horizontal location of the cart, and Δθ, the change in relative angle of the pole. RNN-MD learns to have higher ESS than the prior and more accurately estimates the latent states.
Figure 3: PMMH samples of σ_w values for N = {100, 10} particles. For small numbers of particles (right), PMMH is very slow to burn in and mix when proposing from the prior distribution, due to the large variance in the marginal likelihood estimates it returns.
5.3
Bayesian learning in a Nonlinear SSM
SMC is often employed as an inner loop of a more complex algorithm. One prominent example
is Particle Markov Chain Monte Carlo [3], a class of methods that sample from the joint posterior
over model parameters ? and latent state trajectories, p(?, z1:T |x1:T ). Here we consider the Particle
Marginal Metropolis-Hasting sampler (PMMH). In this context SMC is used to construct a proposal
distribution for a Metropolis-Hasting (MH) accept/reject step. The proposal is formed by sampling a
proposed set of parameters e.g. by perturbing the current parameters using a Gaussian random walk,
then SMC is used to sample a proposed set of latent state variables, resulting in a joint proposal
q(θ*, z*_{1:T} | θ, z_{1:T}) = q(θ* | θ) p_{θ*}(z*_{1:T} | x_{1:T}). The MH step uses the SMC marginal likelihood estimates to determine acceptance. Full details are given in the supplementary material.
In this experiment, we evaluate our method in a PMMH sampler on the same model from Section 5.1, following [3].² A random walk proposal is used to sample θ = (σ_v, σ_w), q(θ* | θ) = N(θ* | θ, diag([0.15, 0.08])). The prior over θ is set as IG(0.01, 0.01), θ is initialized as (10, 10), and the PMMH is run for 500 iterations.
Two of the adaptive models considered in Section 5.1 are used for comparison (RNN-MD and RNN-MD-f), where "-pre-" models are pre-trained for 500 iterations using samples from the initial θ = (10, 10). The results are shown in Fig. 3 and were typical for a range of parameter settings. Given a sufficient number of particles (N = 100), there is almost no difference between the prior proposal and our method. However, when the number of particles gets smaller (N = 10), NASMC enables significantly faster burn-in to the posterior, particularly on the measurement noise σ_w, and, for similar reasons, NASMC mixes more quickly. The limitation of NASMC-PMMH is that the model needs to continuously adapt as the global parameter is sampled, but note this is still not as costly as adapting on a particle-by-particle basis, as is the case for the EKPF and UPF.
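A minimal PMMH loop built on top of an SMC routine might look as follows. This is a schematic of the standard sampler of [3] rather than the authors' code; smc_log_ml is a hypothetical callable returning the SMC estimate of log p(x | θ), and log_prior encodes the parameter priors.

import numpy as np

def pmmh(x, theta0, smc_log_ml, log_prior, n_iters=500,
         step_var=np.array([0.15, 0.08]), seed=0):
    """Particle Marginal Metropolis-Hastings with a Gaussian random-walk proposal."""
    rng = np.random.RandomState(seed)
    theta = np.asarray(theta0, dtype=float)
    log_ml = smc_log_ml(theta, x)              # SMC estimate of log p(x | theta)
    samples = []
    for it in range(n_iters):
        theta_star = theta + np.sqrt(step_var) * rng.randn(len(theta))
        log_ml_star = smc_log_ml(theta_star, x)
        log_alpha = (log_ml_star + log_prior(theta_star)
                     - log_ml - log_prior(theta))
        if np.log(rng.rand()) < log_alpha:     # MH accept/reject
            theta, log_ml = theta_star, log_ml_star
        samples.append(theta.copy())
    return np.array(samples)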
5.4
Polyphonic Music Generation
Finally, the new method is used to train a latent variable recurrent neural network (LV-RNN) for
modelling four polyphonic music datasets of varying complexity [17]. These datasets are often
used to benchmark RNN models because of their high dimensionality and the complex temporal
dependencies involved at different time scales [17, 18, 19]. Each dataset contains at least 7 hours of
polyphonic music with an average polyphony (number of simultaneous notes) of 3.9 out of 88. LVRNN contains a recurrent neural network with LSTM layers that is driven by i.i.d. stochastic latent
variables (zt ) at each time-point and stochastic outputs (xt ) that are fed back into the dynamics (full
details in the supplementary material). Both the LSTM layers in the generative and proposal models
are set as 1000 units and Adam [20] is used as the optimizer. The bootstrap filter is compared to
the new adaptive method (NASMC). 10 particles are used in the training. The hyperparameters
are tuned using the validation set [17]. A diagonal Gaussian output is used in the proposal model,
with an additional hidden layer of size 200. The log likelihood on the test set, a standard metric
for comparison in generative models [18, 21, 19], is approximated using SMC with 500 particles.
² Only the prior proposal is compared, since Sec. 5.1 shows the advantage of our method over EKPF/UPF.
The results are reported in Table 2.³ The adaptive method significantly outperforms the bootstrap
filter on three of the four datasets. On the piano dataset the bootstrap method performs marginally
better. In general, the NLLs for the new methods are comparable to the state-of-the-art although
detailed comparison is difficult as the methods with stochastic latent states require approximate
marginalization using importance sampling or SMC.
Dataset         LV-RNN    LV-RNN       STORN    FD-RNN   sRNN    RNN-NADE
                (NASMC)   (Bootstrap)  (SGVB)
Piano-midi.de   7.61      7.50         7.13     7.39     7.58    7.03
Nottingham      2.72      3.33         2.85     3.09     3.43    2.31
MuseData        6.89      7.21         6.16     6.75     6.99    5.60
JSBChorales     3.99      4.26         6.91     8.01     8.58    5.19
Table 2: Estimated negative log likelihood on test data. "FD-RNN" and "STORN" are from [19], and "sRNN" and "RNN-NADE" are results from [18].
6
Comparison of Variational Inference to the NASMC approach
There are several similarities between NASMC and variational free-energy methods that employ recognition models. Variational free-energy methods refine an approximation q_φ(z|x) to the posterior distribution p_θ(z|x) by optimising the exclusive (or variational) KL-divergence KL[q_φ(z|x) || p_θ(z|x)]. It is common to approximate this integral using samples from the approximate posterior [21, 22, 23]. This general approach is similar in spirit to the way that the proposal is adapted in NASMC, except that the inclusive KL-divergence KL[p_θ(z|x) || q_φ(z|x)] is employed, and this entails that a sample-based approximation requires simulation from the true posterior. Critically, NASMC uses the approximate posterior as a proposal distribution to construct a more accurate posterior approximation. The SMC algorithm therefore can be seen as correcting for the deficiencies in the proposal approximation. We believe that this can lead to significant advantages over variational free-energy methods, especially in the time-series setting where variational methods are known to have severe biases [24]. Moreover, using the inclusive KL avoids having to compute the entropy of the approximating distribution, which can prove problematic when using complex approximating distributions (e.g. mixtures and heavy-tailed distributions) in the variational framework. There is a close connection between NASMC and the wake-sleep algorithm [25]. The wake-sleep algorithm also employs the inclusive KL divergence to refine a posterior approximation, and recent generalizations have shown how to incorporate this idea into importance sampling [26]. In this context, the NASMC algorithm extends this work to SMC.
7
Conclusion
This paper developed a powerful method for adapting proposal distributions within general SMC
algorithms. The method parameterises a proposal distribution using a recurrent neural network
to model long-range contextual information, allows flexible distributional forms including mixture
density networks, and enables efficient training by stochastic gradient descent. The method was
found to outperform existing adaptive proposal mechanisms including the EKPF and UPF on a standard SMC benchmark, it improves burn in and mixing of the PMMH sampler, and allows effective
training of latent variable recurrent neural networks using SMC. We hope that the connection between SMC and neural network technologies will inspire further research into adaptive SMC methods. In particular, application of the methods developed in this paper to adaptive particle smoothing,
high-dimensional latent models and adaptive PMCMC for probabilistic programming are particular
exciting avenues.
Acknowledgments
SG is generously supported by a Cambridge-Tübingen Fellowship, the ALTA Institute, and Jesus College, Cambridge. RET thanks the EPSRC (grants EP/G050821/1 and EP/L000776/1). We thank the Theano developers for their toolkit, the authors of [5] for releasing the source code, and Roger Frigola, Sumeet Singh, Fredrik Lindsten, and Thomas Schön for helpful suggestions on experiments.
³ Results for RNN-NADE are separately provided for reference, since this is a different model class.
References
[1] N. J. Gordon, D. J. Salmond, and A. F. Smith, "Novel approach to nonlinear/non-Gaussian Bayesian state estimation," in IEE Proceedings F (Radar and Signal Processing), vol. 140, pp. 107–113, IET, 1993.
[2] A. Doucet, N. De Freitas, and N. Gordon, Sequential Monte Carlo Methods in Practice. Springer-Verlag, 2001.
[3] C. Andrieu, A. Doucet, and R. Holenstein, "Particle Markov chain Monte Carlo methods," Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 72, no. 3, pp. 269–342, 2010.
[4] G. Poyiadjis, A. Doucet, and S. S. Singh, "Particle approximations of the score and observed information matrix in state space models with application to parameter estimation," Biometrika, vol. 98, no. 1, pp. 65–80, 2011.
[5] R. Van Der Merwe, A. Doucet, N. De Freitas, and E. Wan, "The unscented particle filter," in Advances in Neural Information Processing Systems, pp. 584–590, 2000.
[6] R. Frigola, Y. Chen, and C. Rasmussen, "Variational Gaussian process state-space models," in Advances in Neural Information Processing Systems, pp. 3680–3688, 2014.
[7] D. J. MacKay, Information Theory, Inference, and Learning Algorithms, vol. 7. Cambridge University Press, 2003.
[8] T. P. Minka, "Expectation propagation for approximate Bayesian inference," in Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pp. 362–369, Morgan Kaufmann Publishers Inc., 2001.
[9] J. Cornebise, Adaptive Sequential Monte Carlo Methods. PhD thesis, University Pierre and Marie Curie, Paris 6, 2009.
[10] A. Graves, Supervised Sequence Labelling with Recurrent Neural Networks, vol. 385. Springer, 2012.
[11] I. Sutskever, O. Vinyals, and Q. V. Le, "Sequence to sequence learning with neural networks," in Advances in Neural Information Processing Systems, pp. 3104–3112, 2014.
[12] A. Graves, "Generating sequences with recurrent neural networks," CoRR, vol. abs/1308.0850, 2013.
[13] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
[14] C. M. Bishop, "Mixture density networks," 1994.
[15] K. Gregor, I. Danihelka, A. Graves, D. J. Rezende, and D. Wierstra, "DRAW: A recurrent neural network for image generation," in Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6–11 July 2015, pp. 1462–1471, 2015.
[16] A. McHutchon, Nonlinear Modelling and Control using Gaussian Processes. PhD thesis, University of Cambridge, UK, Department of Engineering, 2014.
[17] N. Boulanger-Lewandowski, Y. Bengio, and P. Vincent, "Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription," in International Conference on Machine Learning (ICML), 2012.
[18] Y. Bengio, N. Boulanger-Lewandowski, and R. Pascanu, "Advances in optimizing recurrent networks," in Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pp. 8624–8628, IEEE, 2013.
[19] J. Bayer and C. Osendorfer, "Learning stochastic recurrent networks," arXiv preprint arXiv:1411.7610, 2014.
[20] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," The International Conference on Learning Representations (ICLR), 2015.
[21] D. P. Kingma and M. Welling, "Auto-encoding variational Bayes," The International Conference on Learning Representations (ICLR), 2014.
[22] D. J. Rezende, S. Mohamed, and D. Wierstra, "Stochastic backpropagation and approximate inference in deep generative models," International Conference on Machine Learning (ICML), 2014.
[23] A. Mnih and K. Gregor, "Neural variational inference and learning in belief networks," International Conference on Machine Learning (ICML), 2014.
[24] R. E. Turner and M. Sahani, "Two problems with variational expectation maximisation for time-series models," in Bayesian Time Series Models (D. Barber, T. Cemgil, and S. Chiappa, eds.), ch. 5, pp. 109–130, Cambridge University Press, 2011.
[25] G. E. Hinton, P. Dayan, B. J. Frey, and R. M. Neal, "The 'wake-sleep' algorithm for unsupervised neural networks," Science, vol. 268, no. 5214, pp. 1158–1161, 1995.
[26] J. Bornschein and Y. Bengio, "Reweighted wake-sleep," The International Conference on Learning Representations (ICLR), 2015.
5,482 | 5,962 | Convolutional Spike-triggered Covariance Analysis
for Neural Subunit Models
Anqi Wu¹, Il Memming Park², Jonathan W. Pillow¹
¹ Princeton Neuroscience Institute, Princeton University. {anqiw, pillow}@princeton.edu
² Department of Neurobiology and Behavior, Stony Brook University. memming.park@stonybrook.edu
Abstract
Subunit models provide a powerful yet parsimonious description of neural responses to complex stimuli. They are defined by a cascade of two linear-nonlinear
(LN) stages, with the first stage defined by a linear convolution with one or more
filters and common point nonlinearity, and the second by pooling weights and an
output nonlinearity. Recent interest in such models has surged due to their biological plausibility and accuracy for characterizing early sensory responses. However,
fitting poses a difficult computational challenge due to the expense of evaluating
the log-likelihood and the ubiquity of local optima. Here we address this problem
by providing a theoretical connection between spike-triggered covariance analysis and nonlinear subunit models. Specifically, we show that a "convolutional"
decomposition of a spike-triggered average (STA) and covariance (STC) matrix
provides an asymptotically efficient estimator for class of quadratic subunit models. We establish theoretical conditions for identifiability of the subunit and pooling weights, and show that our estimator performs well even in cases of model
mismatch. Finally, we analyze neural data from macaque primary visual cortex
and show that our moment-based estimator outperforms a highly regularized generalized quadratic model (GQM), and achieves nearly the same prediction performance as the full maximum-likelihood estimator, yet at substantially lower cost.
1 Introduction
A central problem in systems neuroscience is to build flexible and accurate models of the sensory
encoding process. Neurons are often characterized as responding to a small number of features in the
high-dimensional space of natural stimuli. This motivates the idea of using dimensionality reduction
methods to identify the features that affect the neural response [1–9]. However, many neurons in
the early visual pathway pool signals from a small population of upstream neurons, each of which
integrates and nonlinearly transforms the light from a small region of visual space. For such neurons,
stimulus selectivity is often not accurately described with a small number of filters [10]. A more
accurate description can be obtained by assuming that such neurons pool inputs from an earlier stage
of shifted, identical nonlinear "subunits" [11–13].
Recent interest in subunit models has surged due to their biological plausibility and accuracy for
characterizing early sensory responses. In the visual system, linear pooling of shifted rectified linear
filters was first proposed to describe sensory processing in the cat retina [14, 15], and more recent
work has proposed similar models for responses in other early sensory areas [16–18]. Moreover,
recent research in machine learning and computer vision has focused on hierarchical stacks of such
subunit models, often referred to as Convolutional Neural Networks (CNNs) [19–21].
The subunit models we consider here describe neural responses in terms of an LN-LN cascade, that
is, a cascade of two linear-nonlinear (LN) processing stages, each of which involves linear projection
and a nonlinear transformation. The first LN stage is convolutional, meaning it is formed from one or
more banks of identical, spatially shifted subunit filters, with outputs transformed by a shared subunit nonlinearity. The second LN stage consists of a set of weights for linearly pooling the nonlinear subunits, an output nonlinearity for mapping the output into the neuron's response range, and finally, a noise source for capturing the stochasticity of neural responses (typically assumed to be Gaussian, Bernoulli or Poisson). Vintch et al. proposed one variant of this type of subunit model, and showed that it could account parsimoniously for the multi-dimensional input-output properties revealed by spike-triggered analysis of V1 responses [12, 13].
[Figure 1: Schematic of the subunit LN-LNP cascade model: the stimulus passes through a first LN stage (subunit filter and shared subunit nonlinearity), then a second LN stage (pooling weights w1–w7 and output nonlinearity), producing a Poisson spiking response. For simplicity, we show only one subunit type.]

However, fitting such models remains a challenging problem. Simple LN models with Gaussian or Poisson noise can be fit very efficiently with spike-triggered-moment based estimators [6–8], but there is no equivalent theory for LN-LN or subunit models. This paper aims to fill that gap. We show that a convolutional decomposition of the spike-triggered average (STA) and covariance (STC) provides an asymptotically efficient estimator for a Poisson subunit model under certain technical conditions: the
stimulus is Gaussian, the subunit nonlinearity is well
described by a second-order polynomial, and the final nonlinearity is exponential. In this case, the
subunit model represents a special case of a canonical Poisson generalized quadratic model (GQM),
which allows us to apply the expected log-likelihood trick [7, 8] to reduce the log-likelihood to a
form involving only the moments of the spike-triggered stimulus distribution. Estimating the subunit
model from these moments, an approach we refer to as convolutional STC, has fixed computational
cost that does not scale with the dataset size after a single pass through the data to compute sufficient
statistics. We also establish theoretical conditions under which the model parameters are identifiable. Finally, we show that convolutional STC is robust to modest degrees of model mismatch, and
is nearly as accurate as the full maximum likelihood estimator when applied to neural data from V1
simple and complex cells.
2 Subunit Model
We begin with a general definition of the Poisson convolutional subunit model (Fig. 1). The model
is specified by:
\begin{align}
\text{subunit outputs:}\quad & s_{mi} = f(k_m \cdot x_i) \tag{1}\\
\text{spike rate:}\quad & \lambda = g\Big(\sum_m \sum_i w_{mi}\, s_{mi}\Big) \tag{2}\\
\text{spike count:}\quad & y \mid \lambda \sim \mathrm{Poiss}(\lambda), \tag{3}
\end{align}
where $k_m$ is the filter for the $m$-th type of subunit, $x_i$ is the vectorized stimulus segment at the $i$-th position of the shifted filter during convolution, and $f$ is the nonlinearity governing subunit outputs. For the second stage, $w_{mi}$ is a linear pooling weight from the $m$-th subunit at position $i$, and $g$ is the neuron's output nonlinearity. The spike count $y$ is conditionally Poisson with rate $\lambda$.
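To make the generative process of Eqs. (1)–(3) concrete, here is a minimal simulation sketch (our illustration, not from the paper), assuming a single subunit type, 1D stimuli, the quadratic $f$ and exponential $g$ introduced below, and illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)

d_k, d_w = 8, 33                # filter length, number of pooling positions
d = d_k + d_w - 1               # stimulus dimension per sample
k = rng.standard_normal(d_k)
k /= np.linalg.norm(k)          # subunit filter (unit norm)
w = 0.05 * np.exp(-0.5 * ((np.arange(d_w) - d_w / 2) / 5.0) ** 2)  # pooling weights
a = -2.0                        # illustrative offset

def f(z):                       # quadratic subunit nonlinearity (cf. Eq. 4)
    return 0.5 * z ** 2 + z

def simulate(X):
    """X: (n, d) stimuli -> Poisson counts y and rates lam (Eqs. 1-3)."""
    # s[:, i] = f(k . x_i): sliding dot products of k with each stimulus
    S = f(np.array([np.correlate(x, k, mode="valid") for x in X]))
    lam = np.exp(S @ w + a)     # g = exp applied to the pooled subunit outputs
    return rng.poisson(lam), lam

X = rng.standard_normal((1000, d))
y, lam = simulate(X)
print("mean rate:", lam.mean(), "mean spike count:", y.mean())
```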
Fitting subunit models with arbitrary g and f poses significant computational challenges. However,
if we set g to exponential and f takes the form of second-order polynomial, the model reduces to
\begin{align}
\lambda &= \exp\Big(\tfrac{1}{2}\sum_{m,i} w_{mi}(k_m \cdot x_i)^2 + \sum_{m,i} w_{mi}(k_m \cdot x_i) + a\Big) \tag{4}\\
&= \exp\big(\tfrac{1}{2}\, x^\top C_{[w,k]}\, x + b_{[w,k]}^\top x + a\big) \tag{5}
\end{align}
where
\begin{equation}
C_{[w,k]} = \sum_m K_m^\top \mathrm{diag}(w_m)\, K_m, \qquad b_{[w,k]} = \sum_m K_m^\top w_m, \tag{6}
\end{equation}
and $K_m$ is a Toeplitz matrix consisting of shifted copies of $k_m$ satisfying $K_m x = [x_1, x_2, x_3, \ldots]^\top k_m$.
In essence, these restrictions on the two nonlinearities reduce the subunit model to a (canonical-form) Poisson generalized quadratic model (GQM) [7, 8, 22], that is, a model in which the Poisson spike rate takes the form of an exponentiated quadratic function of the stimulus. We will pursue the implications of this mapping below. We assume that $k$ is a spatial filter vector without time expansion. If we have a spatio-temporal stimulus-response, $k$ should be a spatio-temporal filter, but the subunit convolution (across filter position $i$) involves only the spatial dimension(s). From Eqs. (4) and (5) it can be seen that the subunit model contains fewer parameters than a full GLM, making it a more parsimonious description for neurons with multi-dimensional stimulus selectivity.
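The quantities in Eq. (6) are easy to assemble explicitly. The sketch below (ours; a single subunit, so the sum over $m$ has one term) builds the Toeplitz matrix $K$ from shifted copies of $k$ and checks that the quadratic form of Eq. (5) matches the subunit sum of Eq. (4):

```python
import numpy as np

def toeplitz_K(k, d_w):
    """Rows are shifted copies of k, so (K x)_i = k . x_i for segment x_i."""
    d_k = len(k)
    K = np.zeros((d_w, d_k + d_w - 1))
    for i in range(d_w):
        K[i, i:i + d_k] = k
    return K

rng = np.random.default_rng(1)
d_k, d_w = 8, 33
k = rng.standard_normal(d_k)
w = rng.random(d_w)

K = toeplitz_K(k, d_w)
C = K.T @ np.diag(w) @ K        # C_[w,k] of Eq. (6)
b = K.T @ w                     # b_[w,k] of Eq. (6)

# sanity check: Eq. (5) equals Eq. (4) for a random stimulus x
x = rng.standard_normal(d_k + d_w - 1)
lhs = 0.5 * x @ C @ x + b @ x
rhs = sum(w[i] * (0.5 * (K[i] @ x) ** 2 + K[i] @ x) for i in range(d_w))
assert np.isclose(lhs, rhs)
```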
3 Estimators for Subunit Model
With the above definitions and formulations, we now present three estimators for the model parameters $\{w, k\}$. To simplify the notation, we omit the subscript in $C_{[w,k]}$ and $b_{[w,k]}$, but their
dependence on the model parameters is assumed throughout.
Maximum Log-Likelihood Estimator
The maximum log-likelihood estimator (MLE) has excellent asymptotic properties, though it comes
with the high computational cost. The log-likelihood function can be written:
X
X
LMLE (?) =
yi log i
(7)
i
i
=
=
i
X
X
>
>
yi ( 21 x>
exp( 12 x>
i Cxi + b xi + a)
i Cxi + b xi + a)
"
#
X
>
>
1 >
Tr[C?] + b ? + ansp
exp( 2 xi Cxi + b xi + a)
(8)
(9)
i
P
P
where ? = i yi xi is the spike-triggered
average (STA) and ? = i yi xi x>
i is the spike-triggered
P
covariance (STC) and nsp = i yi is the total number of spikes. We denote the MLE as ?MLE .
Moment-based Estimator with Expected Log-Likelihood Fitting
If the stimuli are drawn from x ? N (0, ), a zero-mean Gaussian with covariance , then the
expression in square brackets divided by N in (eq. 9) will converge to its expectation, given by
1
?
?
>
E exp( 12 x>
C| 2 exp 12 b> ( 1 C) 1 b + a
(10)
i Cxi + b xi + a) = |I
Substituting this expectation into (9) yields a quantity called expected log-likelihood, with the objective function as,
LELL (?) = Tr[C?] + b> ? + ansp
N |I
C|
1
2
exp
1 >
2b (
1
C)
1
(11)
b+a
where N is the number of time bins. We refer to ?MELE = arg max? LELL (?) as the MELE (maximum expected log-likelihood estimator) [7, 8, 22].
Moment-based Estimator with Least Squares Fitting
Maximizing (11) w.r.t {C, b, a} yields analytical expected maximum likelihood estimates [7]:
Cmele =
1
?
1
, bmele = ?
1
?, amele = log(
nsp
N |
?
1
1 2
| )
1 >
2?
1
?
1
?
(12)
With these analytical estimates, it is straightforward and to optimize w and k by directly minimizing
squared error:
LLS (?) = ||Cmele
K > diag(w)K||22 + ||bmele
K > w||22
(13)
which corresponds to an optimal ?convolutional? decomposition of the moment-based estimates.
This formulation shows that the eigenvectors of Cmele are spanned by shifted copies of k. We
denote this estimate ?LS .
All three estimators, ?MLE , ?MELE and ?LS should provide consistent estimates for the subunit model
parameters due to consistency of ML and MELE estimates. However, the moment-based estimates
3
(MELE and LS) are computationally much simpler, and scale much better to large datasets, due
to the fact that they depend on the data only via the spike-triggered moments. In fact their only
dependence on the dataset size is the cost of computing the STA and STC in one pass through the
data. As for efficiency, ?LS has the drawback of being sensitive to noise in the Cmele estimate, which
has far more free parameters than in the two vectors w and k (for a 1-subunit model). Therefore,
accurate estimation of Cmele should be a precondition for good performance of ?LS , and we expect
?MELE to perform better for small datasets.
4 Identifiability
The equality $C = C_{[w,k]} = K^\top \mathrm{diag}(w)K$ is a core assumption used to bridge the theoretical connection between a subunit model and the spike-triggered moments (STA & STC). In case we care about recovering the underlying biological structure, we may be interested to know when the solution is unique and naively interpretable. Here we address the identifiability of the convolutional decomposition of $C$ for $k$ and $w$ estimation. Specifically, we briefly study the uniqueness of the form $C = K^\top \mathrm{diag}(w)K$ for a single subunit and multiple subunits, respectively. We provide the proof for the single-subunit case in the main text, and the proof for multiple subunits sharing the same pooling weight $w$ in the supplement.
Note that failure of identifiability only indicates that there are possible symmetries in the solution space, so that there are multiple equivalent optima; this is a question of theoretical interest, but it holds no implications for practical performance.
4.1 Identifiability for Single Subunit Model
We will frequently make use of the frequency domain representation. Let $B \in \mathbb{C}^{d\times d}$ denote the discrete Fourier transform (DFT) matrix whose $j$-th column is
\begin{equation}
b_j = \big[1,\; e^{-i\frac{2\pi}{d}(j-1)},\; e^{-i\frac{2\pi}{d}2(j-1)},\; e^{-i\frac{2\pi}{d}3(j-1)},\; \ldots,\; e^{-i\frac{2\pi}{d}(d-1)(j-1)}\big]^\top. \tag{14}
\end{equation}
Let $\widetilde{k} = B_k k$ be the $d$-dimensional vector resulting from a discrete Fourier transform, where $B_k$ is a $d \times d_k$ DFT matrix, and similarly let $\widetilde{w} \in \mathbb{R}^d$ be a Fourier representation of $w$. We assume that $k$ and $w$ have full support in the frequency domain.

Assumption 1. No element in $\widetilde{k}$ or $\widetilde{w}$ is zero.

Theorem. Suppose Assumption 1 holds; then the convolution decomposition $C = K^\top \mathrm{diag}(w)K$ is uniquely identifiable up to shift and scale, where $C \in \mathbb{R}^{d\times d}$ and $d = d_k + d_w - 1$.
Proof. We fix $k$ (and thus $\widetilde{k}$) to be a unit vector to deal with the obvious scale invariance. First note that we can rewrite the convolution operator $K$ using DFT matrices as
\begin{equation}
K = B^H \mathrm{diag}(B_k k)\, B_w \tag{15}
\end{equation}
where $B \in \mathbb{C}^{d\times d}$ is the DFT matrix and $(\cdot)^H$ denotes the conjugate transpose operation. Thus,
\begin{equation}
C = B^H \mathrm{diag}(\widetilde{k})^H\, B_w\, \mathrm{diag}(w)\, B_w^H\, \mathrm{diag}(\widetilde{k})\, B \tag{16}
\end{equation}
Note that $\widetilde{W} := B_w \mathrm{diag}(w) B_w^H$ is a circulant matrix,
\begin{equation}
\widetilde{W} := \mathrm{circulant}(\widetilde{w}) = \begin{pmatrix}
\widetilde{w}_1 & \widetilde{w}_d & \cdots & \widetilde{w}_2\\
\widetilde{w}_2 & \widetilde{w}_1 & \cdots & \widetilde{w}_3\\
\vdots & & \ddots & \vdots\\
\widetilde{w}_d & \widetilde{w}_{d-1} & \cdots & \widetilde{w}_1
\end{pmatrix} \tag{17}
\end{equation}
Hence, we can rewrite (16) in the frequency domain as
\begin{equation}
\widetilde{C} = B C B^H = \mathrm{diag}(\widetilde{k})^H\, \widetilde{W}\, \mathrm{diag}(\widetilde{k}) = \widetilde{W} \circ (\widetilde{k}\widetilde{k}^H)^\top \tag{18}
\end{equation}
where $\circ$ denotes the element-wise (Hadamard) product. Since $B$ is invertible, the uniqueness of the original $C$ decomposition is equivalent to the uniqueness of the $\widetilde{C}$ decomposition. The newly defined decomposition is
\begin{equation}
\widetilde{C} = \widetilde{W} \circ (\widetilde{k}\widetilde{k}^H)^\top. \tag{19}
\end{equation}
Suppose there are two distinct decompositions $\{\widetilde{W}, \widetilde{k}\}$ and $\{\widetilde{V}, \widetilde{g}\}$, where both $\{k, \widetilde{k}\}$ and $\{g, \widetilde{g}\}$ are unit vectors, such that $\widetilde{C} = \widetilde{W} \circ (\widetilde{k}\widetilde{k}^H)^\top = \widetilde{V} \circ (\widetilde{g}\widetilde{g}^H)^\top$. Since both $\widetilde{W}$ and $\widetilde{V}$ have no zero, define the element-wise ratio $R := (\widetilde{W} ./ \widetilde{V})^\top \in \mathbb{R}^{d\times d}$; then we have
\begin{equation}
R \circ (\widetilde{k}\widetilde{k}^H) = \widetilde{g}\widetilde{g}^H \tag{20}
\end{equation}
Note that $\mathrm{rank}(R \circ (\widetilde{k}\widetilde{k}^H)) = \mathrm{rank}(\widetilde{g}\widetilde{g}^H) = 1$.

$R$ is also a circulant matrix, which can be diagonalized by the DFT [23]: $R = B\, \mathrm{diag}(r_1, \ldots, r_d)\, B^H$. We can express $R$ as $R = \sum_{i=1}^d r_i b_i b_i^H$. Using the identity for the Hadamard product that for any vectors $a$ and $b$, $(aa^H) \circ (bb^H) = (a \circ b)(a \circ b)^H$, we get
\begin{equation}
R \circ (\widetilde{k}\widetilde{k}^H) = \sum_{i=1}^d r_i\, (b_i b_i^H) \circ (\widetilde{k}\widetilde{k}^H) = \sum_{i=1}^d r_i\, (b_i \circ \widetilde{k})(b_i \circ \widetilde{k})^H \tag{21}
\end{equation}
By Lemma 1 (in the appendix), $\{b_1 \circ \widetilde{k},\; b_2 \circ \widetilde{k},\; \ldots,\; b_d \circ \widetilde{k}\}$ is a linearly independent set. Therefore, to satisfy the rank constraint $\mathrm{rank}(R \circ (\widetilde{k}\widetilde{k}^H)) = 1$, $r_i$ can be non-zero for at most a single $i$. Without loss of generality, let $r_i \neq 0$ and all other $r$ be zero; then we have
\begin{equation}
r_i\, (b_i b_i^H) \circ (\widetilde{k}\widetilde{k}^H) = \widetilde{g}\widetilde{g}^H \;\Longrightarrow\; r_i\, \mathrm{diag}(b_i)\, \widetilde{k}\, \widetilde{k}^H \mathrm{diag}(b_i)^H = \widetilde{g}\widetilde{g}^H \tag{22}
\end{equation}
Because $b_i$, $\widetilde{k}$ and $\widetilde{g}$ are unit vectors, $r_i = 1$. By recognizing that $\mathrm{diag}(b_i)\widetilde{k}$ is the Fourier transform of $k$ shifted by $i-1$ positions, denoted $k^{i-1}$, we have $k^{i-1}(k^{i-1})^\top = gg^\top$. Therefore, $g = k^{i-1}$. Moreover, from (20) and (22), $(b_i b_i^H) \circ \widetilde{V} = \widetilde{W}$; thus $v^{i-1} = w$, that is, $v$ must also be a shifted version of $w$.

If we restrict $k$ and $g$ to be unit vectors, then any solution $v$ and $g$ satisfies $w = v^{i-1}$ and $g = k^{i-1}$. Therefore, the two decompositions are identical up to scale and shift. $\square$

4.2 Identifiability for Multiple Subunits Model
A multiple-subunit model (with $m > 1$ subunits) is far more complicated to analyze due to a large degree of hidden invariances. In this study, we only provide the analysis under the specific condition that all subunits share a common pooling weight $w$.

Assumption 2. All subunits share a common $w$.

We make a few additional assumptions. We would like to consider a tight parameterization where no combination of subunits can take over another subunit's task.

Assumption 3. $K := [k_1, k_2, k_3, \ldots, k_m]$ spans an $m$-dimensional subspace, where $k_i$ is the subunit filter for the $i$-th subunit and $K \in \mathbb{R}^{d_k \times m}$. In addition, $K$ has orthogonal columns.

We denote $K$ shifted by $p$ positions along each column as $K^p := [k_1^p, k_2^p, k_3^p, \ldots, k_m^p]$. Also note that, trivially, $m \le d_k < d_k + d_w - 1 < d$ since $d_w > 1$.

To allow an arbitrary scale for each unit vector $k_i$, we introduce a coefficient $\alpha_i$ for the $i$-th subunit, thus extending (19) to
\begin{equation}
\widetilde{C} = \sum_{i=1}^m \widetilde{W} \circ (\alpha_i \widetilde{k}_i \widetilde{k}_i^H)^\top = \widetilde{W} \circ \Big(\sum_{i=1}^m \alpha_i \widetilde{k}_i \widetilde{k}_i^H\Big)^{\top} = \widetilde{W} \circ (\widetilde{K} A \widetilde{K}^H)^\top \tag{23}
\end{equation}
where $A \in \mathbb{R}^{m\times m}$ is a diagonal matrix of the $\alpha_i$ and $\widetilde{K} \in \mathbb{R}^{d\times m}$ is the DFT of $K$.

Assumption 4. $\nexists\, \beta \in \mathbb{R}^{m\times m}$ such that $K^i \beta = P K^i$, $\forall i$, where $P \in \mathbb{R}^{d_k \times d_k}$ is the permutation matrix from $K^i$ to $K^j$ obtained by shifting rows, namely $K^j = P K^i$, $\forall i, j$, and $\beta$ is a linear projection coefficient matrix satisfying $K^j = K^i \beta$.

Assumption 5. $A$ has all positive or all negative values on the diagonal.

Given these assumptions, we establish the proposition for the multiple-subunit model.

Proposition. Under Assumptions 1–5, the convolutional decomposition $\widetilde{C} = \widetilde{W} \circ (\widetilde{K} A \widetilde{K}^H)^\top$ is uniquely identifiable up to shift and scale.

The proof of the proposition and illustrations of Assumptions 4–5 are in the supplement.
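The shift/scale equivalence class described by the theorem can be verified numerically. The sketch below (ours) embeds $k$ and $w$ in length-$d$ vectors and builds $C$ from circularly shifted copies of $k$ (which coincides with the Toeplitz construction for these supports); circularly shifting $k$ by $p$ while shifting $w$ by $-p$, or rescaling $(k, w)$ to $(ck, w/c^2)$, leaves $C$ unchanged:

```python
import numpy as np

rng = np.random.default_rng(2)
d_k, d_w = 5, 7
d = d_k + d_w - 1

def build_C(k_pad, w_pad):
    """C = K^T diag(w) K with rows of K the (circular) shifts of k."""
    K = np.array([np.roll(k_pad, i) for i in range(d)])
    return K.T @ np.diag(w_pad) @ K

k = np.zeros(d); k[:d_k] = rng.standard_normal(d_k)
w = np.zeros(d); w[:d_w] = rng.random(d_w)

C = build_C(k, w)
C_shift = build_C(np.roll(k, 3), np.roll(w, -3))   # shift by p = 3
C_scale = build_C(2.0 * k, w / 4.0)                # scale by c = 2

print(np.allclose(C, C_shift), np.allclose(C, C_scale))  # True True
```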
[Figure 2: a) True parameters and the MELE and smoothMELE estimates of the subunit filter k and pooling weights w. b) Speed performance (run time in seconds vs. sample size) for smoothLS, smoothMELE and smoothMLE. The slightly decreasing running time for larger sample sizes results from a more and more fully supported subspace, which makes optimization require fewer iterations. c) Accuracy (MSE) for all combinations of subunit (quadratic, sigmoid) and output (exponential, soft-rectifier) nonlinearities for smoothLS, smoothMELE and smoothMLE. Top left is the subunit model matching the data; the others are model mismatches.]
5 Experiments
5.1 Initialization
All three estimators are non-convex and contain many local optima; thus the choice of model initialization affects the optimization substantially. Similar to [12], which uses "convolutional STC" for initialization, we also use a simple moment-based method with some assumptions. For simplicity, we assume all subunit models share the same $w$ with different scaling factors, as in Eq. (23). Our initializer is generated from a shallow bilinear regression. First, initialize $w$ with a wide Gaussian profile, then estimate $\widetilde{K} A \widetilde{K}^H$ from element-wise division of $C_{\mathrm{mele}}$ by $\widetilde{W}$. Second, use an SVD to decompose $\widetilde{K} A \widetilde{K}^H$ into an orthogonal base set $\widetilde{K}$ and a positive diagonal matrix $A$, where $\widetilde{K}$ and $A$ contain information about the $k_i$'s and $\alpha$'s respectively, hypothesizing that the $k$'s are orthogonal to each other and the $\alpha$'s are all positive (Assumptions 3 and 5). Based on the $k_i$'s and $\alpha_i$'s estimated from the rough Gaussian profile of $w$, we then fix those and re-estimate $w$ with the same element-wise division for $\widetilde{W}$. This bilinear iterative procedure proceeds only a few times in order to avoid overfitting to $C_{\mathrm{mele}}$, which is a coarse estimate of $C$.
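A rough sketch of one pass of this bilinear initialization (ours; names are illustrative). It uses the frequency-domain structure from Section 4, with the convention $\widetilde{C}[a,b] = \widetilde{w}[(a-b) \bmod d]\,\widetilde{k}[a]\,\overline{\widetilde{k}[b]}$ for numpy's unnormalized FFT; the recovered $k$ and $w$ are coarse (up to shift, scale, and phase) and would be refined by a few alternations and the subsequent optimization:

```python
import numpy as np

rng = np.random.default_rng(3)
d_k, d_w, m = 5, 7, 1
d = d_k + d_w - 1

# synthetic "C_mele" from a known single subunit, for the demo ---------------
k_true = np.zeros(d); k_true[:d_k] = rng.standard_normal(d_k)
w_true = np.zeros(d); w_true[:d_w] = rng.random(d_w) + 0.5
K = np.array([np.roll(k_true, i) for i in range(d)])
C_mele = K.T @ np.diag(w_true) @ K

F = np.fft.fft(np.eye(d))                       # DFT matrix
C_f = F @ C_mele @ F.conj().T                   # frequency-domain C

# step 1: wide Gaussian profile for w, then divide out its circulant W~ ------
w0 = np.exp(-0.5 * ((np.arange(d) - d / 2.0) / 3.0) ** 2)
w0_f = np.fft.fft(w0)
idx = (np.arange(d)[:, None] - np.arange(d)[None, :]) % d
M = C_f / w0_f[idx]                             # ~ K~ A K~^H (up to the crude w0)

# step 2: eigendecomposition -> orthogonal base K~ and diagonal A ------------
vals, vecs = np.linalg.eigh((M + M.conj().T) / 2.0)
k_f, A = vecs[:, -m:], vals[-m:]                # top-m eigenpairs

# step 3: fix K~, A and re-estimate w by the same element-wise division ------
W_f = C_f / (k_f @ np.diag(A) @ k_f.conj().T + 1e-12)  # guard against near-zeros
w_est = np.real(np.fft.ifft(W_f[:, 0]))         # first column carries w~
k_est = np.real(np.fft.ifft(k_f[:, 0]))         # recovered up to shift/scale/phase

print(np.corrcoef(w_est, w_true)[0, 1])         # rough agreement with w_true
```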
5.2 Smoothing prior
Neural receptive fields are generally smooth; thus a prior smoothing out high-frequency fluctuations would improve the performance of estimators, unless the data likelihood provides sufficient evidence for jaggedness. We apply automatic smoothness determination (ASD [24]) to both $w$ and $k$, each with an associated balancing hyperparameter $\lambda_w$ and $\lambda_k$. We assume $w \sim \mathcal{N}(0, C_w)$ with
\begin{equation}
C_w = \exp\Big(-\rho_w - \frac{\|\Delta\|^2}{2\delta_w^2}\Big) \tag{24}
\end{equation}
where $\Delta$ is the vector of differences between neighboring locations in $w$, and $\rho_w$ and $\delta_w^2$ are the variance and length-scale parameters of $C_w$ that belong to the hyperparameter set. $k$ also has the same ASD prior, with hyperparameters $\rho_k$ and $\delta_k^2$. For multiple subunits, each $w_i$ and $k_i$ has its own ASD prior.
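A minimal construction of the ASD prior covariance (ours), reading $\|\Delta\|^2$ entrywise as the squared distance between weight locations, which is the standard ASD form:

```python
import numpy as np

def asd_covariance(n, rho, delta2):
    """ASD prior covariance, C[i, j] = exp(-rho - (i - j)^2 / (2 * delta2)),
    for n weights on a 1D lattice (cf. Eq. 24)."""
    idx = np.arange(n)
    D2 = (idx[:, None] - idx[None, :]) ** 2
    return np.exp(-rho - D2 / (2.0 * delta2))

C_w = asd_covariance(33, rho=1.0, delta2=4.0)   # prior for pooling weights w
C_k = asd_covariance(8,  rho=1.0, delta2=2.0)   # prior for the subunit filter k
```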
[Figure 3: Goodness-of-fit (nats/spike) of the various estimators and their running speeds (GQM comparisons omitted from the speed plot). Black curves: regularized GQM with and without the expected log-likelihood trick; blue: smooth LS; green: smooth MELE; red: smooth MLE. All subunit estimators show results for 1 subunit (#1) and 2 subunits (#2). The inset in the performance panel enlarges the view for large goodness-of-fit values. The right panel shows running time vs. training size: MLE-based methods require exponentially increasing running time as the training size grows, while the moment-based ones run in nearly constant time.]
Fig. 2a shows the true $w$ and $k$ and the estimates from MELE and smoothMELE (MELE with the smoothing prior). From now on, we use the smoothing prior by default.
5.3 Simulations
To illustrate the performance of our moment-based estimators, we generated Gaussian stimuli and simulated responses from an LNP neuron with exponentiated-quadratic nonlinearity and a 1-subunit model with an 8-element filter $k$ and 33-element pooling weights $w$. The mean firing rate is 0.91 spk/s. In our estimation, each time-bin stimulus with 40 dimensions is treated as one sample to generate the spike response. Fig. 2b and c show the speed and accuracy performance of the three estimators LS, MELE and MLE (with smoothing prior). LS and MELE are comparable with the baseline MLE in terms of accuracy but are exponentially faster.
Although the LNP model with exponential nonlinearity has been widely adopted in neuroscience for its simplicity, the actual nonlinearity of neural systems is often sub-exponential, such as a soft-rectifier nonlinearity. The exponential is nonetheless favored as a convenient approximation of the soft-rectifier within a small regime around the origin. Also, an LNP neuron generally leans towards a sigmoid subunit nonlinearity rather than a quadratic one; a quadratic can well approximate a sigmoid within the small nonlinear regime before the linear regime of the sigmoid. Therefore, in order to check the generalization performance of LS and MELE on mismatched models, we simulated data from a neuron with sigmoid subunit nonlinearity or soft-rectifier output nonlinearity, as shown in Fig. 2c. The full MLEs formulated with no model mismatch provide a baseline for inspecting the performance of the ELL methods. Despite the model mismatch, our estimators (LS and MELE) are on par with MLE when the subunit nonlinearity is quadratic, but the performance is notably worse for the sigmoid nonlinearity. Even so, in real applications, we will explore fits with different subunit nonlinearities using the full MLE, where the exponential and quadratic assumption is thus primarily useful for a reasonable and extremely fast initializer. Moreover, the running time for the moment-based estimators is always exponentially faster.
5.4 Application to neural data
In order to show the predictive performance more comprehensively on a real neural dataset, we applied the LS, MELE and MLE estimators to data from a population of 57 V1 simple and complex cells (data published in [11]). The stimulus consisted of oriented binary white noise ("flickering bars") aligned with the cell's preferred orientation. The size of the receptive field was chosen to be (# of bars $d$) $\times$ 10 time bins, yielding a $10d$-dimensional stimulus space. The time bin size is 10 ms and the number of bars ($d$) is 16 in our experiment.
We compared the moment-based estimators and MLE with the smoothed low-rank expected GQM and the smoothed low-rank GQM [7, 8]. Models are trained on stimuli with size varying from $6.25 \times 10^3$ to $10^5$ samples and tested on $5 \times 10^4$ samples. Each subunit filter has a length of 5. All hyperparameters are chosen by cross validation. Fig. 3 shows that GQM is weakly better than LS, but its running
time is far longer than that of LS (data not shown).

[Figure 4: Estimating visual receptive fields from a complex cell (544l029.p21). a) $k$ and $w$ obtained by fitting smoothMELE(#2). Subunit #1 is suppressive (negative $w$) and subunit #2 is excitatory (positive $w$); from $w$ we can tell that, for both, the middle subunits contribute more than the ends. b) Qualitative analysis. Each image corresponds to a normalized filter of 24 spatial pixels (horizontal) by 10 time bins (vertical). Top row: STA/STC from the true data; bottom row: simulated responses from the 2-subunit MELE model given the true stimuli, with the same subspace analysis applied.]

Both MELE and MLE (but not LS) outperform GQM and expected GQM with both 1 subunit and 2 subunits. The improvement is greatest with
1 subunit, which results from the average over all simple and complex cells. Generally, the more "complex" the cell is, the higher the probability that multiple subunits fit better. Notably, MELE outperforms the others with the best goodness-of-fit and a flat speed curve. The goodness-of-fit is defined to be the log-likelihood on the test set divided by the spike count.
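For reference, this metric is simply the Poisson test log-likelihood divided by the number of test spikes; a minimal sketch (ours, dropping the $y!$ term that is constant across models):

```python
import numpy as np

def goodness_of_fit(y, rate):
    """Poisson test log-likelihood per spike (nats/spike)."""
    ll = np.sum(y * np.log(rate) - rate)   # up to the log(y!) constant
    return ll / y.sum()
```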
For qualitative analysis, we ran smoothMELE(#2) for a complex cell and learned the optimal subunit filters and pooling weights (Fig. 4a), and then simulated V1 responses by fitting the 2-subunit MELE generative model given the optimal parameters. STA/STC analysis was applied to both the neural data and the simulated V1 response data. The quality of the filters trained on $10^5$ stimuli is qualitatively close to that obtained by STA/STC (Fig. 4b). Subunit models can recover the STA, the first six excitatory STC filters and the last four suppressive ones, but with a considerably more parsimonious parameter space.
6 Conclusion
We proposed an asymptotically efficient estimator for quadratic convolutional subunit models, which
forges an important theoretical link between spike-triggered covariance analysis and nonlinear subunit models. We have shown that the proposed method works well even when the assumptions
about model specification (nonlinearity and input distribution) were violated. Our approach reduces
the difficulty of fitting subunit models because computational cost does not depend on dataset size
(beyond the cost of a single pass through the data to compute the spike-triggered moments). We
also proved conditions for identifiability of the convolutional decomposition, which reveal that in most cases the parameters are indeed identifiable. We applied our estimators to neural data from
macaque primary visual cortex, and showed that they outperform a highly regularized form of the
GQM and achieve similar performance to the subunit model MLE at substantially lower computational cost.
References
[1] R. R. de Ruyter van Steveninck and W. Bialek. Real-time performance of a movement-sensitive neuron in the blowfly visual system: coding and information transmission in short spike sequences. Proc. R. Soc. Lond. B, 234:379–414, 1988.
[2] J. Touryan, B. Lau, and Y. Dan. Isolation of relevant visual features from random stimuli for cortical complex cells. Journal of Neuroscience, 22:10811–10818, 2002.
[3] B. Aguera y Arcas and A. L. Fairhall. What causes a neuron to spike? Neural Computation, 15(8):1789–1807, 2003.
[4] Tatyana Sharpee, Nicole C. Rust, and William Bialek. Analyzing neural responses to natural signals: maximally informative dimensions. Neural Comput, 16(2):223–250, Feb 2004.
[5] O. Schwartz, J. W. Pillow, N. C. Rust, and E. P. Simoncelli. Spike-triggered neural characterization. Journal of Vision, 6(4):484–507, 2006.
[6] J. W. Pillow and E. P. Simoncelli. Dimensionality reduction in neural models: An information-theoretic generalization of spike-triggered average and covariance analysis. Journal of Vision, 6(4):414–428, 2006.
[7] Il Memming Park and Jonathan W. Pillow. Bayesian spike-triggered covariance analysis. In J. Shawe-Taylor, R. S. Zemel, P. Bartlett, F. C. N. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 1692–1700, 2011.
[8] Il M. Park, Evan W. Archer, Nicholas Priebe, and Jonathan W. Pillow. Spectral methods for neural characterization using generalized quadratic models. In Advances in Neural Information Processing Systems 26, pages 2454–2462, 2013.
[9] Ross S. Williamson, Maneesh Sahani, and Jonathan W. Pillow. The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction. PLoS Comput Biol, 11(4):e1004141, 2015.
[10] Kanaka Rajan, Olivier Marre, and Gašper Tkačik. Learning quadratic receptive fields from neural responses to natural stimuli. Neural Computation, 25(7):1661–1692, 2013.
[11] Nicole C. Rust, Odelia Schwartz, J. Anthony Movshon, and Eero P. Simoncelli. Spatiotemporal elements of macaque V1 receptive fields. Neuron, 46(6):945–956, Jun 2005.
[12] B. Vintch, A. Zaharia, J. A. Movshon, and E. P. Simoncelli. Efficient and direct estimation of a neural subunit model for sensory coding. In Adv. Neural Information Processing Systems (NIPS*12), volume 25, Cambridge, MA, 2012. MIT Press.
[13] Brett Vintch, Andrew Zaharia, J. Movshon, and Eero P. Simoncelli. A convolutional subunit model for neuronal responses in macaque V1. J. Neurosci., in press, 2015.
[14] H. B. Barlow and W. R. Levick. The mechanism of directionally selective units in rabbit's retina. The Journal of Physiology, 178(3):477, 1965.
[15] S. Hochstein and R. Shapley. Linear and nonlinear spatial subunits in Y cat retinal ganglion cells. J. Physiol., 262:265–284, 1976.
[16] Jonathan B. Demb, Kareem Zaghloul, Loren Haarsma, and Peter Sterling. Bipolar cells contribute to nonlinear spatial summation in the brisk-transient (Y) ganglion cell in mammalian retina. The Journal of Neuroscience, 21(19):7447–7454, 2001.
[17] Joanna D. Crook, Beth B. Peterson, Orin S. Packer, Farrel R. Robinson, John B. Troy, and Dennis M. Dacey. Y-cell receptive field and collicular projection of parasol ganglion cells in macaque monkey retina. The Journal of Neuroscience, 28(44):11277–11291, 2008.
[18] P. X. Joris, C. E. Schreiner, and A. Rees. Neural processing of amplitude-modulated sounds. Physiological Reviews, 84(2):541–577, 2004.
[19] Kunihiko Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4):193–202, 1980.
[20] T. Serre, L. Wolf, S. Bileschi, M. Riesenhuber, and T. Poggio. Robust object recognition with cortex-like mechanisms. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 29(3):411–426, 2007.
[21] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[22] Alexandro D. Ramirez and Liam Paninski. Fast inference in generalized linear models via expected log-likelihoods. Journal of Computational Neuroscience, pages 1–20, 2013.
[23] Philip J. Davis. Circulant Matrices. American Mathematical Soc., 1979.
[24] M. Sahani and J. Linden. Evidence optimization techniques for estimating stimulus-response functions. NIPS, 15, 2003.
5,483 | 5,963 | Rectified Factor Networks
Djork-Arné Clevert, Andreas Mayr, Thomas Unterthiner and Sepp Hochreiter
Institute of Bioinformatics, Johannes Kepler University, Linz, Austria
{okko,mayr,unterthiner,hochreit}@bioinf.jku.at
Abstract
We propose rectified factor networks (RFNs) to efficiently construct very sparse,
non-linear, high-dimensional representations of the input. RFN models identify
rare and small events in the input, have a low interference between code units,
have a small reconstruction error, and explain the data covariance structure. RFN
learning is a generalized alternating minimization algorithm derived from the posterior regularization method, which enforces non-negative and normalized posterior means. We prove convergence and correctness of the RFN learning algorithm.
On benchmarks, RFNs are compared to other unsupervised methods like autoencoders, RBMs, factor analysis, ICA, and PCA. In contrast to previous sparse
coding methods, RFNs yield sparser codes, capture the data's covariance structure more precisely, and have a significantly smaller reconstruction error. We
test RFNs as a pretraining technique for deep networks on different vision datasets,
where RFNs were superior to RBMs and autoencoders. On gene expression data
from two pharmaceutical drug discovery studies, RFNs detected small and rare
gene modules that revealed highly relevant new biological insights which were so
far missed by other unsupervised methods.
RFN package for GPU/CPU is available at http://www.bioinf.jku.at/software/rfn.
1 Introduction
The success of deep learning is to a large part based on advanced and efficient input representations
[1, 2, 3, 4]. These representations are sparse and hierarchical. Sparse representations of the input
are in general obtained by rectified linear units (ReLU) [5, 6] and dropout [7]. The key advantage of
sparse representations is that dependencies between coding units are easy to model and to interpret.
Most importantly, distinct concepts are much less likely to interfere in sparse representations. Using
sparse representations, similarities of samples often break down to co-occurrences of features in
these samples. In bioinformatics sparse codes excelled in biclustering of gene expression data [8]
and in finding DNA sharing patterns between humans and Neanderthals [9].
Representations learned by ReLUs are not only sparse but also non-negative. Non-negative representations do not code the degree of absence of events or objects in the input. As the vast majority of
events is supposed to be absent, to code for their degree of absence would introduce a high level of
random fluctuations. We also aim for non-linear input representations to stack models for constructing hierarchical representations. Finally, the representations are supposed to have a large number
of coding units to allow coding of rare and small events in the input. Rare events are only observed
in few samples like seldom side effects in drug design, rare genotypes in genetics, or small customer
groups in e-commerce. Small events affect only few input components like pathways with few genes
in biology, few relevant mutations in oncology, or a pattern of few products in e-commerce. In summary, our goal is to construct input representations that (1) are sparse, (2) are non-negative, (3) are
non-linear, (4) use many code units, and (5) model structures in the input data (see next paragraph).
Current unsupervised deep learning approaches like autoencoders or restricted Boltzmann machines
(RBMs) do encode all peculiarities in the data (including noise). Generative models can be design
1
to model specific structures in the data, but their codes cannot be enforced to be sparse and nonnegative. The input representation of a generative model is its posterior?s mean, median, or mode,
which depends on the data. Therefore, sparseness and non-negativity cannot be guaranteed independent of the data. For example, generative models with rectified priors, like rectified factor analysis,
have zero posterior probability for negative values, therefore their means are positive and not sparse
[10, 11]. Sparse priors like Laplacian and Jeffrey?s do not guarantee sparse posteriors (see experiments in Tab. 1). To address the data dependence of the code, we employ the posterior regularization
method [12]. This method separates model characteristics from data dependent characteristics that
are enforced by constraints on the model?s posterior.
We aim at representations that are feasible for many code units and massive datasets, therefore
the computational complexity of generating a code is essential in our approach. For non-Gaussian priors, computing the posterior mean of a new input requires either numerically solving an integral or iteratively updating variational parameters [13]. In contrast, for Gaussian priors the posterior mean is the product of the input with a matrix that is independent of the input. Still, the posterior regularization method leads to a quadratic (in the number of coding units) constrained optimization problem in each E-step (see Eq. (3) below). To speed up computation, we do not solve the quadratic problem but perform a gradient step. To allow for stochastic gradients and fast GPU implementations, the M-step is also a gradient step. These E-step and M-step modifications of the
posterior regularization method result in a generalized alternating minimization (GAM) algorithm
[12]. We will show that the GAM algorithm used for RFN learning (i) converges and (ii) is correct.
Correctness means that the RFN codes are non-negative, sparse, have a low reconstruction error, and
explain the covariance structure of the data.
2 Rectified Factor Network
Our goal is to construct representations of the input that (1) are sparse, (2) are non-negative, (3) are
non-linear, (4) use many code units, and (5) model structures in the input. Structures in the input
are identified by a generative model, where the model assumptions determine which input structures
to explain by the model. We want to model the covariance structure of the input; therefore we choose maximum likelihood factor analysis as the model. The constraints on the input representation are enforced by the posterior regularization method [12]. Non-negativity constraints lead to sparse and non-linear codes, while normalization constraints scale the signal part of each hidden (code) unit. Normalization also prevents generative models from explaining away rare and small signals as
noise. Explaining away becomes a serious problem for models with many coding units since their
capacities are not utilized. Normalizing ensures that all hidden units are used but at the cost of coding
also random and spurious signals. Spurious and true signals must be separated in a subsequent step
either by supervised techniques, by evaluating coding units via additional data, or by domain experts.
A generative model with hidden units h and data v is defined by its prior p(h) and its likelihood
p(v | h). The full model distribution p(h, v) = p(v | h) p(h) can be expressed by the model's
posterior p(h | v) and its evidence (marginal likelihood) p(v): p(h, v) = p(h | v) p(v). The
representation of input v is the posterior's mean, median, or mode. The posterior regularization
method introduces a variational distribution Q(h | v) \in \mathcal{Q} from a family \mathcal{Q},
which approximates the posterior p(h | v). We choose \mathcal{Q} to constrain the posterior means to
be non-negative and normalized. The full model distribution p(h, v) contains all model assumptions
and thereby defines which structures of the data are modeled. Q(h | v) contains data-dependent
constraints on the posterior, and therefore on the code.
For data {v} = {v_1, ..., v_n}, the posterior regularization method maximizes the objective F [12]:

F = \frac{1}{n} \sum_{i=1}^{n} \log p(v_i) \;-\; \frac{1}{n} \sum_{i=1}^{n} D_{\mathrm{KL}}\big(Q(h_i \mid v_i) \,\|\, p(h_i \mid v_i)\big)
  = \frac{1}{n} \sum_{i=1}^{n} \int Q(h_i \mid v_i) \log p(v_i \mid h_i) \, dh_i \;-\; \frac{1}{n} \sum_{i=1}^{n} D_{\mathrm{KL}}\big(Q(h_i \mid v_i) \,\|\, p(h_i)\big) ,   (1)
where D_KL is the Kullback-Leibler distance. Maximizing F achieves two goals simultaneously: (1)
extracting desired structures and information from the data as imposed by the generative model and
(2) ensuring desired code properties via Q \in \mathcal{Q}.
The factor analysis model v = W h + \epsilon extracts the covariance structure of the data. The
prior h \sim N(0, I) of the hidden units (factors) h \in R^l and the noise \epsilon \sim N(0, \Psi)
of the visible units (observations) v \in R^m are independent. The model parameters are the weight
(loading) matrix W \in R^{m \times l} and the noise covariance matrix \Psi \in R^{m \times m}. We
assume diagonal \Psi, so that correlations between input components are explained by the hidden
units and not by correlated noise. The factor analysis model is depicted in Fig. 1. Given the
mean-centered data {v} = {v_1, ..., v_n}, the posterior p(h_i \mid v_i) is Gaussian with mean vector
(\mu_p)_i and covariance matrix \Sigma_p:

(\mu_p)_i = \left( I + W^T \Psi^{-1} W \right)^{-1} W^T \Psi^{-1} v_i ,
\Sigma_p = \left( I + W^T \Psi^{-1} W \right)^{-1} .   (2)

Figure 1: Factor analysis model: hidden units (factors) h, visible units v, weight matrix W, noise \epsilon.
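For concreteness, the posterior moments of Eq. (2) can be computed as in the following NumPy sketch
(our own illustration, not released RFN code; it exploits the diagonal structure of \Psi):

import numpy as np

def fa_posterior(V, W, Psi_diag):
    # Posterior moments of factor analysis, Eq. (2).
    # V        : (n, m) mean-centered data, one sample per row
    # W        : (m, l) loading matrix
    # Psi_diag : (m,)   diagonal of the noise covariance Psi
    # Returns mu_p (n, l) posterior means and Sigma_p (l, l) posterior covariance.
    l = W.shape[1]
    WtPinv = W.T / Psi_diag                          # W^T Psi^{-1} for diagonal Psi
    Sigma_p = np.linalg.inv(np.eye(l) + WtPinv @ W)  # (I + W^T Psi^{-1} W)^{-1}
    mu_p = V @ WtPinv.T @ Sigma_p.T                  # row-wise Sigma_p W^T Psi^{-1} v_i
    return mu_p, Sigma_p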
A rectified factor network (RFN) consists of a single or stacked factor analysis model(s) with constraints on the posterior. To incorporate the posterior constraints into the factor analysis model,
we use the posterior regularization method that maximizes the objective F given in Eq. (1) [12].
Like the expectation-maximization (EM) algorithm, the posterior regularization method alternates
between an E-step and an M-step. Minimizing the first D_KL of Eq. (1) with respect to Q leads to a
constrained optimization problem. For Gaussian distributions, the solution with (\mu_p)_i and
\Sigma_p from Eq. (2) is Q(h_i \mid v_i) = N(\mu_i, \Sigma) with \Sigma = \Sigma_p and the quadratic
problem:

\min_{\mu_i} \; \frac{1}{n} \sum_{i=1}^{n} (\mu_i - (\mu_p)_i)^T \, \Sigma_p^{-1} \, (\mu_i - (\mu_p)_i) , \quad \text{s.t.} \;\; \forall i: \mu_i \geq 0 , \;\; \forall j: \frac{1}{n} \sum_{i=1}^{n} \mu_{ij}^2 = 1 ,   (3)
where the inequality "\geq" is meant component-wise. This is a constrained non-convex quadratic
optimization problem in the number of hidden units, which is too complex to be solved in each EM
iteration. Therefore, we perform a step of the gradient projection algorithm [14, 15], which first
performs a gradient step and then projects the result onto the feasible set. We start with a step of
the projected Newton method, then we try the gradient projection algorithm, and thereafter the
scaled gradient projection algorithm with reduced matrix [16] (see also [15]). If these methods fail
to decrease the objective in Eq. (3), we use the generalized reduced method [17], which solves each
equality constraint for one variable and inserts it into the objective while ensuring convex
constraints. Alternatively, we use Rosen's gradient projection method [18] or its improvement [19].
These methods guarantee a decrease of the E-step objective.
Since the projection P of Eq. (6) is very fast, the projected Newton and projected gradient updates
are very fast, too. A projected Newton step requires O(nl) operations (see Eq. (7) and P defined in
Theorem 1), a projected gradient step requires O(min{nlm, nl^2}), and a scaled gradient projection
step requires O(nl^3). The RFN complexity per iteration is O(n(m^2 + l^2)) (see Alg. 1). In
contrast, a quadratic program solver typically requires O(n^4 l^4) steps to find the minimum over
the nl variables (the means of the hidden units for all samples) [20]. We exemplify these values on
our benchmark datasets MNIST (n = 50k, l = 1024, m = 784) and CIFAR (n = 50k, l = 2048, m = 1024).
The speedup of projected Newton or projected gradient over a quadratic solver is O(n^3 l^2) =
O(n^4 l^4) / O(n l^2), which gives speedup ratios of 1.3 × 10^20 for MNIST and 5.2 × 10^20 for
CIFAR. These ratios show that efficient E-step updates are essential for RFN learning. Furthermore,
on our computers, RAM restrictions limited quadratic program solvers to problems with nl ≤ 20k.
Running times of RFNs with the Newton step and with a quadratic program solver are given in
supplementary Section 15.
The M-step decreases the expected reconstruction error

E = - \frac{1}{n} \sum_{i=1}^{n} \int_{R^l} Q(h_i \mid v_i) \, \log p(v_i \mid h_i) \, dh_i   (4)
  = \frac{1}{2} \left[ m \log(2\pi) + \log |\Psi| + \mathrm{Tr}(\Psi^{-1} C) - 2\, \mathrm{Tr}(\Psi^{-1} W U^T) + \mathrm{Tr}(W^T \Psi^{-1} W S) \right]

from Eq. (1) with respect to the model parameters W and \Psi. Definitions of C, U and S are given in
Alg. 1. The M-step performs a gradient step in the Newton direction, since we want to allow
stochastic gradients, fast GPU implementations, and dropout regularization.
Algorithm 1 Rectified Factor Network.
1:  C = (1/n) \sum_{i=1}^n v_i v_i^T
2:  while STOP=false do
3:    --E-step1--
4:    for all 1 ≤ i ≤ n do
5:      (\mu_p)_i = (I + W^T \Psi^{-1} W)^{-1} W^T \Psi^{-1} v_i
6:    end for
7:    \Sigma = \Sigma_p = (I + W^T \Psi^{-1} W)^{-1}
8:    --Constrain Posterior--
9:    (1) projected Newton, (2) projected gradient, (3) scaled gradient projection,
      (4) generalized reduced method, (5) Rosen's gradient projection
10:   --E-step2--
11:   U = (1/n) \sum_{i=1}^n v_i \mu_i^T
12:   S = (1/n) \sum_{i=1}^n \mu_i \mu_i^T + \Sigma
13:   --M-step--
14:   E = C - U W^T - W U^T + W S W^T
15:   W = W + \eta (U S^{-1} - W)
16:   for all 1 ≤ k ≤ m do
17:     \Psi_{kk} = \Psi_{kk} + \eta (E_{kk} - \Psi_{kk})
18:   end for
19:   if stopping criterion is met: STOP=true
20: end while
Complexity: objective F: O(min{nlm, nl^2} + l^3); E-step1: O(min{m^2(m + l), l^2(m + l)} + nlm);
projected Newton: O(nl); projected gradient: O(min{nlm, nl^2}); scaled gradient projection: O(nl^3);
E-step2: O(nl(m + l)); M-step: O(ml(m + l)); overall complexity with projected Newton / gradient for
(l + m) < n: O(n(m^2 + l^2)).
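For concreteness, a minimal dense NumPy rendering of Alg. 1 is sketched below. It is our own
illustration under stated assumptions: random initialization, a fixed learning rate eta, and only
the projected Newton option of line 9, with the rectify-and-rescale projection of Theorem 1 (given
below) inlined and its all-non-positive edge case omitted.

import numpy as np

def train_rfn(V, l, n_iter=100, eta=0.1, seed=0):
    # Minimal RFN training loop following Alg. 1 (projected Newton E-step only).
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = 0.1 * rng.standard_normal((m, l))      # loading matrix
    Psi = np.ones(m)                           # diagonal of the noise covariance Psi
    C = (V.T @ V) / n                          # line 1
    for _ in range(n_iter):
        # E-step1: unconstrained posterior moments, Eq. (2)
        WtPinv = W.T / Psi                     # W^T Psi^{-1} for diagonal Psi
        Sigma = np.linalg.inv(np.eye(l) + WtPinv @ W)
        Mu = V @ WtPinv.T @ Sigma.T            # rows are the (mu_p)_i
        # Constrain posterior: one projected Newton step, i.e. rectify and
        # rescale each column to unit mean square (cf. Theorem 1 below)
        Mu = np.maximum(Mu, 0.0)
        Mu = Mu / (np.sqrt(np.mean(Mu ** 2, axis=0)) + 1e-12)
        # E-step2: sufficient statistics (lines 11-12)
        U = (V.T @ Mu) / n
        S = (Mu.T @ Mu) / n + Sigma
        # M-step: gradient steps in the Newton direction (lines 14-18)
        E = C - U @ W.T - W @ U.T + W @ S @ W.T
        W = W + eta * (U @ np.linalg.inv(S) - W)
        Psi = Psi + eta * (np.diag(E) - Psi)
    return W, Psi, Mu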
The Newton step is derived in the supplementary material, which gives further details. Also in the
E-step, RFN learning performs a gradient step, using projected Newton or gradient projection
methods. These projection methods require the Euclidean projection P of the posterior means
{(\mu_p)_i} onto the non-convex feasible set:

\min_{\mu_i} \; \frac{1}{n} \sum_{i=1}^{n} (\mu_i - (\mu_p)_i)^T (\mu_i - (\mu_p)_i) , \quad \text{s.t.} \;\; \mu_i \geq 0 , \;\; \frac{1}{n} \sum_{i=1}^{n} \mu_{ij}^2 = 1 .   (5)
The following Theorem 1 gives the Euclidean projection P as the solution to Eq. (5).

Theorem 1 (Euclidean Projection). If at least one (\mu_p)_{ij} is positive for 1 \leq i \leq n, then
the solution to optimization problem Eq. (5) is

\tilde{\mu}_{ij} = \begin{cases} 0 & \text{for } (\mu_p)_{ij} \leq 0 \\ (\mu_p)_{ij} & \text{for } (\mu_p)_{ij} > 0 \end{cases} , \qquad \mu_{ij} = [P((\mu_p)_i)]_j = \frac{\tilde{\mu}_{ij}}{\sqrt{\frac{1}{n} \sum_{i=1}^{n} \tilde{\mu}_{ij}^2}} .   (6)

If all (\mu_p)_{ij} are non-positive for 1 \leq i \leq n, then optimization problem Eq. (5) has the
solution \mu_{ij} = \sqrt{n} for i = \arg\max_{\tilde{i}} \{(\mu_p)_{\tilde{i}j}\} and \mu_{ij} = 0
otherwise.

Proof. See supplementary material.
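A direct NumPy transcription of this projection might read as follows (our sketch; the else branch
implements the second clause of the theorem for columns without a positive entry):

import numpy as np

def project(Mu_p, eps=1e-12):
    # Euclidean projection of posterior means onto the feasible set, Eq. (6).
    # Mu_p: (n, l) matrix with the (mu_p)_i as rows. The result is non-negative
    # and every column j satisfies (1/n) sum_i mu_ij^2 = 1.
    n, l = Mu_p.shape
    Mu = np.maximum(Mu_p, 0.0)                   # rectify: clip negatives to zero
    scale = np.sqrt(np.mean(Mu ** 2, axis=0))    # per-column root mean square
    out = np.zeros_like(Mu)
    for j in range(l):
        if scale[j] > eps:                       # at least one positive entry
            out[:, j] = Mu[:, j] / scale[j]
        else:                                    # all entries non-positive
            i_star = np.argmax(Mu_p[:, j])       # least negative entry gets sqrt(n)
            out[i_star, j] = np.sqrt(n)
    return out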
Using the projection P defined in Eq. (6), the E-step updates for the posterior means \mu_i are:

\mu_i^{\mathrm{new}} = P\!\left( \mu_i^{\mathrm{old}} + \lambda \, (d - \mu_i^{\mathrm{old}}) \right) , \qquad d = P\!\left( \mu_i^{\mathrm{old}} + \gamma \, H^{-1} \Sigma_p^{-1} \left( (\mu_p)_i - \mu_i^{\mathrm{old}} \right) \right) ,   (7)

where we set H^{-1} = \Sigma_p for the projected Newton method (thus H^{-1} \Sigma_p^{-1} = I), and
H^{-1} = I for the projected gradient method. For the scaled gradient projection algorithm with
reduced matrix, the \epsilon-active set for i consists of all j with \mu_{ij} \leq \epsilon. The
reduced matrix H is the Hessian \Sigma_p^{-1} with \epsilon-active columns and rows j fixed to unit
vectors e_j. The resulting algorithm is a posterior regularization method with gradient-based E- and
M-steps, leading to a generalized alternating minimization (GAM) algorithm [21]. The RFN learning
algorithm is given in Alg. 1. Dropout regularization can be included before E-step2 by randomly
setting code units \mu_{ij} to zero with a predefined dropout rate (note that the convergence
results then no longer hold).
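Building on the project function from the sketch above, one E-step update of Eq. (7) can be written
roughly as follows (the step sizes lam and gamma are our placeholder values, not tuned settings):

import numpy as np

def e_step_update(Mu, Mu_p, Sigma_p, lam=0.5, gamma=1.0, newton=True):
    # One projected Newton / projected gradient step on Eq. (3), cf. Eq. (7).
    # Mu, Mu_p: (n, l) current means and unconstrained posterior means;
    # Sigma_p: (l, l) posterior covariance.
    if newton:   # H^{-1} = Sigma_p, hence H^{-1} Sigma_p^{-1} = I
        step = Mu_p - Mu
    else:        # H^{-1} = I: direction Sigma_p^{-1}((mu_p)_i - mu_i) per row
        step = (Mu_p - Mu) @ np.linalg.inv(Sigma_p)
    d = project(Mu + gamma * step)
    return project(Mu + lam * (d - Mu))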
3 Convergence and Correctness of RFN Learning

Convergence of RFN Learning. Theorem 2 states that Alg. 1 converges to a maximum of F.

Theorem 2 (RFN Convergence). The rectified factor network (RFN) learning algorithm given in Alg. 1
is a generalized alternating minimization (GAM) algorithm and converges to a solution that maximizes
the objective F.
Proof. We present a sketch of the proof, which is given in detail in the supplement. For
convergence, we show that Alg. 1 is a GAM algorithm, which converges according to Proposition 5 in
[21]. Alg. 1 is guaranteed to decrease the M-step objective, which is convex in W and \Psi^{-1}. The
update with \eta = 1 leads to the minimum of the objective. Convexity of the objective guarantees a
decrease in the M-step for 0 < \eta \leq 1 if not already at a minimum. Alg. 1 is guaranteed to
decrease the E-step objective by using gradient projection methods. All other requirements for GAM
convergence are also fulfilled.

Proposition 5 in [21] is based on Zangwill's generalized convergence theorem, thus updates of the
RFN algorithm are viewed as point-to-set mappings [22]. Therefore, the numerical precision, the
choice of the methods in the E-step, and GPU implementations are covered by the proof.
Correctness of RFN Learning. The goal of the RFN algorithm is to explain the data and its covariance
structure. The expected approximation error E is defined in line 14 of Alg. 1. Theorem 3 states that
the RFN algorithm is correct, that is, it explains the data (low reconstruction error) and captures
the covariance structure as well as possible.

Theorem 3 (RFN Correctness). The fixed point W of Alg. 1 minimizes \mathrm{Tr}(\Psi) given \mu_i and
\Sigma by ridge regression with

\mathrm{Tr}(\Psi) = \frac{1}{n} \sum_{i=1}^{n} \| \epsilon_i \|_2^2 + \left\| W \Sigma^{1/2} \right\|_F^2 ,   (8)

where \epsilon_i = v_i - W \mu_i. The model explains the data covariance matrix by

C \approx \Psi + W S W^T   (9)

up to an error which is quadratic in \Psi for \Psi \ll W W^T. The reconstruction error
\frac{1}{n} \sum_{i=1}^{n} \| \epsilon_i \|_2^2 is quadratic in \Psi for \Psi \ll W W^T.
Proof. The fixed point equation for the W update is \Delta W = U S^{-1} - W = 0 \Rightarrow W =
U S^{-1}. Using the definitions of U and S, we have W = \left( \frac{1}{n} \sum_{i=1}^{n} v_i
\mu_i^T \right) \left( \frac{1}{n} \sum_{i=1}^{n} \mu_i \mu_i^T + \Sigma \right)^{-1}. W is the
ridge regression solution of

\frac{1}{n} \sum_{i=1}^{n} \| v_i - W \mu_i \|_2^2 + \left\| W \Sigma^{1/2} \right\|_F^2 = \mathrm{Tr}\!\left( \frac{1}{n} \sum_{i=1}^{n} \epsilon_i \epsilon_i^T + W \Sigma W^T \right) ,   (10)

where Tr is the trace. After multiplying out all \epsilon_i \epsilon_i^T in \frac{1}{n}
\sum_{i=1}^{n} \epsilon_i \epsilon_i^T, we obtain:

E = \frac{1}{n} \sum_{i=1}^{n} \epsilon_i \epsilon_i^T + W \Sigma W^T .   (11)

For the fixed point of \Psi, the update rule gives \mathrm{diag}(\Psi) = \mathrm{diag}\!\left(
\frac{1}{n} \sum_{i=1}^{n} \epsilon_i \epsilon_i^T + W \Sigma W^T \right). Thus, W minimizes
\mathrm{Tr}(\Psi) given \mu_i and \Sigma.

Multiplying the Woodbury identity for (W W^T + \Psi)^{-1} from left and right by \Psi gives

W \Sigma W^T = \Psi - \Psi \left( W W^T + \Psi \right)^{-1} \Psi .   (12)

Inserting this into the expression for \mathrm{diag}(\Psi) and taking the trace gives

\mathrm{Tr}\!\left( \frac{1}{n} \sum_{i=1}^{n} \epsilon_i \epsilon_i^T \right) = \mathrm{Tr}\!\left( \Psi \left( W W^T + \Psi \right)^{-1} \Psi \right) \leq \mathrm{Tr}\!\left( \left( W W^T + \Psi \right)^{-1} \right) \mathrm{Tr}\!\left( \Psi^2 \right) .   (13)

Therefore, for \Psi \ll W W^T the error is quadratic in \Psi. W U^T = W S W^T = U W^T follows from
the fixed point equation U = W S. Using this and Eq. (12), Eq. (11) becomes

\frac{1}{n} \sum_{i=1}^{n} \epsilon_i \epsilon_i^T - \Psi \left( W W^T + \Psi \right)^{-1} \Psi = C - \Psi - W S W^T .   (14)

Using the trace norm (nuclear norm or Ky-Fan n-norm) on matrices, Eq. (13) states that the left hand
side of Eq. (14) is quadratic in \Psi for \Psi \ll W W^T. The trace norm of a positive semi-definite
matrix is its trace and bounds the Frobenius norm [23]. Thus, for \Psi \ll W W^T, the covariance is
approximated up to a quadratic error in \Psi according to Eq. (9). The diagonal is exactly modeled.
Since the minimization of the expected reconstruction error \mathrm{Tr}(\Psi) is based on \mu_i, the
quality of the reconstruction depends on the correlation between \mu_i and v_i. We ensure maximal
information on v_i in \mu_i by the I-projection (the minimal Kullback-Leibler distance) of the
posterior onto the family of rectified and normalized Gaussian distributions.
4 Experiments

RFNs vs. Other Unsupervised Methods. We assess the performance of rectified factor networks (RFNs)
as unsupervised methods for data representation. We compare (1) RFN: rectified factor networks, (2)
RFNn: RFNs without normalization, (3) DAE: denoising autoencoders with ReLUs, (4) RBM: restricted
Boltzmann machines with Gaussian visible units, (5) FAsp: factor analysis with a Jeffrey's prior
(p(z) \propto 1/z) on the hidden units, which is sparser than a Laplace prior, (6) FAlap: factor
analysis with a Laplace prior on the hidden units, (7) ICA: independent component analysis by
FastICA [24], (8) SFA: sparse factor analysis with a Laplace prior on the parameters, (9) FA:
standard factor analysis, (10) PCA: principal component analysis. The number of components is fixed
to 50, 100 and 150 for each method.

We generated nine different benchmark datasets (D1 to D9), where each dataset consists of 100
instances. Each instance has 100 samples and 100 features, resulting in a 100 × 100 matrix. Into
these matrices, biclusters are implanted [8]. A bicluster is a pattern of particular features which
is found in particular samples, like a pathway activated in some samples. An optimal representation
will only code the biclusters that are present in a sample. The datasets have different noise levels
and different bicluster sizes. Large biclusters have 20-30 samples and 20-30 features, while small
biclusters have 3-8 samples and 3-8 features. The pattern's signal strength in a particular sample
was randomly chosen according to the Gaussian N(1, 1). Finally, zero-mean Gaussian background noise
with standard deviation 1, 5, or 10 was added to each matrix. The datasets are characterized by
Dx=(\sigma, n_1, n_2) with background noise \sigma, number of large biclusters n_1, and number of
small biclusters n_2: D1=(1,10,10), D2=(5,10,10), D3=(10,10,10), D4=(1,15,5), D5=(5,15,5),
D6=(10,15,5), D7=(1,5,15), D8=(5,5,15), D9=(10,5,15).
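Under our reading of this description, one benchmark instance can be generated roughly as follows (a
simplified sketch; the exact generator of [8] may differ in details):

import numpy as np

def make_instance(sigma=1.0, n_large=10, n_small=10, size=100, seed=0):
    # One size x size instance with implanted biclusters and Gaussian noise.
    rng = np.random.default_rng(seed)
    X = rng.normal(0.0, sigma, size=(size, size))            # background noise
    for n_bic, lo, hi in [(n_large, 20, 31), (n_small, 3, 9)]:
        for _ in range(n_bic):
            rows = rng.choice(size, rng.integers(lo, hi), replace=False)
            cols = rng.choice(size, rng.integers(lo, hi), replace=False)
            strength = rng.normal(1.0, 1.0, size=len(rows))  # N(1,1) per sample
            X[np.ix_(rows, cols)] += strength[:, None]       # implant bicluster
    return X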
We evaluated the methods according to (1) the sparseness of the components, (2) the input
reconstruction error from the code, and (3) the covariance reconstruction error for generative
models. For RFNs, sparseness is the percentage of components that are exactly 0, while for the other
methods it is the percentage of components with an absolute value smaller than 0.01. The
reconstruction error is the sum of the squared errors across samples. The covariance reconstruction
error is the Frobenius norm of the difference between the model and data covariance.
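The three criteria can be computed as in the sketch below (our illustration; Code denotes the matrix
of code components per sample, and S is the second-moment matrix from Alg. 1):

import numpy as np

def evaluation_criteria(V, Code, W, Psi_diag, S, thresh=0.01):
    # Sparseness (SP), reconstruction error (ER), covariance error (CO).
    sp = 100.0 * np.mean(np.abs(Code) < thresh)     # % of near-zero components
    er = np.sum((V - Code @ W.T) ** 2)              # squared error across samples
    C_data = (V.T @ V) / V.shape[0]
    C_model = np.diag(Psi_diag) + W @ S @ W.T       # model covariance, Eq. (9)
    co = np.linalg.norm(C_data - C_model, ord='fro')
    return sp, er, co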
See the supplement for more details on the data and for information on hyperparameter selection for
the different methods. Tab. 1 gives averaged results for models with 50 (undercomplete), 100
(complete) and 150 (overcomplete) coding units. Results are the mean of 900 instances, consisting of
100 instances for each of the datasets D1 to D9. In the supplement, we separately tabulate the
results for D1 to D9 and confirm them with different noise levels. FAlap did not yield sparse codes
since the variational parameter did not push the absolute representations below the threshold of
0.01; the variational approximation to the Laplacian is a Gaussian [13].
Table 1: Comparison of RFN with other unsupervised methods, where the upper part contains methods
that yielded sparse codes. Criteria: sparseness of the code (SP), reconstruction error (ER),
difference between data and model covariance (CO). The three panels give the results for models with
50, 100 and 150 coding units. Results are the mean of 900 instances, 100 instances for each dataset
D1 to D9 (maximal value: 999). RFNs had the sparsest code, the lowest reconstruction error, and the
lowest covariance approximation error of all methods that yielded sparse representations (SP>10%).

       | undercomplete, 50 units | complete, 100 units     | overcomplete, 150 units
       | SP    ER      CO        | SP    ER      CO        | SP    ER      CO
RFN    | 75±0  249±3   108±3     | 81±1  68±9    26±6      | 85±1  17±6    7±6
RFNn   | 74±0  295±4   140±4     | 79±0  185±5   59±3      | 80±0  142±4   35±2
DAE    | 66±0  251±3   -         | 69±0  147±2   -         | 71±0  130±2   -
RBM    | 15±1  310±4   -         | 7±1   287±4   -         | 5±0   286±4   -
FAsp   | 40±1  999±63  999±99    | 63±0  999±65  999±99    | 80±0  999±65  999±99
FAlap  | 4±0   239±6   341±19    | 6±0   46±4    985±45    | 4±0   46±4    976±53
ICA    | 2±0   174±2   -         | 3±1   0±0     -         | 3±1   0±0     -
SFA    | 1±0   218±5   94±3      | 1±0   16±1    114±5     | 1±0   16±1    285±7
FA     | 1±0   218±4   90±3      | 1±0   16±1    83±4      | 1±0   16±1    263±6
PCA    | 0±0   174±2   -         | 2±0   0±0     -         | 2±0   0±0     -
Figure 2: Randomly selected filters trained on image datasets using an RFN with 1024 hidden units:
(a) MNIST digits; (b) MNIST digits with random image background; (c) MNIST digits with random noise
background; (d) convex and concave shapes; (e) tall and wide rectangles; (f) rectangular images on
background images; (g) CIFAR-10 images (best viewed in color); (h) NORB images. RFNs learned stroke,
local and global blob detectors. RFNs are robust to background noise (b, c, f).
RFNs had the sparsest code, the lowest reconstruction error, and the lowest covariance approximation
error of all methods yielding sparse representations (SP>10%).
RFN Pretraining for Deep Nets. We assess the performance of rectified factor networks (RFNs) when
used for pretraining deep networks. Stacked RFNs are obtained by first training a single-layer RFN
and then passing the resulting representation on as input for training the next RFN. The deep
network architectures use an RFN-pretrained first layer (RFN-1) or stacks of 3 RFNs, giving a
network with 3 hidden layers. The classification performance of deep networks with RFN-pretrained
layers was compared to (i) support vector machines, (ii) deep networks pretrained by stacking
denoising autoencoders (SDAE), (iii) stacking regular autoencoders (SAE), (iv) restricted Boltzmann
machines (RBM), and (v) stacking restricted Boltzmann machines (DBN).

The benchmark datasets and results are taken from previous publications [25, 26, 27, 28] and
contain: (i) MNIST (original MNIST), (ii) basic (a smaller subset of MNIST for training), (iii)
bg-rand (MNIST with random noise background), (iv) bg-img (MNIST with random image background), (v)
rect (tall or wide rectangles), (vi) rect-img (tall or wide rectangular images with random
background images), (vii) convex (convex or concave shapes), (viii) CIFAR-10 (60k color images in 10
classes), and (ix) NORB (29,160 stereo image pairs of 5 categories). For each dataset, the sizes of
the training, validation and test sets are given in the second column of Tab. 2. As preprocessing we
only performed median centering. Model selection is based on the validation set [26]. The RFN
hyperparameters are (i) the number of units per layer from {1024, 2048, 4096} and (ii) the dropout
rate from {0.0, 0.25, 0.5, 0.75}. The learning rate was fixed to \eta = 0.01 (default value). For
supervised fine-tuning with stochastic gradient descent, we selected the learning rate from {0.1,
0.01, 0.001}, the masking noise from {0.0, 0.25}, and the number of layers from {1, 3}. Fine-tuning
was stopped based on the validation set, see [26]. Fig. 2 shows learned filters. Test error rates
and the 95% confidence intervals (computed according to [26]) for deep network pretraining by RFNs
and other methods are given in Tab. 2.
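In code, this greedy layer-wise stacking is a short loop (a sketch reusing the train_rfn function
from the earlier illustration; the layer sizes are placeholders):

def train_stacked_rfn(V, layer_sizes=(1024, 1024, 1024)):
    # Greedy layer-wise RFN pretraining: each layer's code feeds the next.
    layers, inp = [], V
    for l in layer_sizes:
        W, Psi, Mu = train_rfn(inp, l)    # train one RFN on the current input
        layers.append((W, Psi))
        inp = Mu                          # pass the code on as next layer's input
    return layers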
Table 2: Results of deep networks pretrained by RFNs and other models (taken from [25, 26, 27, 28]).
The test error rate is reported together with the 95% confidence interval; the best performing
method and those whose confidence intervals overlap with it are marked in bold in the original. The
first column gives the dataset, the second the sizes of the training, validation and test sets, and
the number in parentheses after each RFN result gives the number of hidden layers of the selected
deep network. In only one case was RFN pretraining significantly worse than the best method, and
there it was still second best. In six out of the nine experiments RFN pretraining performed best,
and in four cases it was significantly the best.

Dataset  | train-valid-test | SVM        | RBM        | DBN        | SAE        | SDAE       | RFN
MNIST    | 50k-10k-10k      | 1.40±0.23  | 1.21±0.21  | 1.24±0.22  | 1.40±0.23  | 1.28±0.22  | 1.27±0.22 (1)
basic    | 10k-2k-50k       | 3.03±0.15  | 3.94±0.17  | 3.11±0.15  | 3.46±0.16  | 2.84±0.15  | 2.66±0.14 (1)
bg-rand  | 10k-2k-50k       | 14.58±0.31 | 9.80±0.26  | 6.73±0.22  | 11.28±0.28 | 10.30±0.27 | 7.94±0.24 (3)
bg-img   | 10k-2k-50k       | 22.61±0.37 | 16.15±0.32 | 16.31±0.32 | 23.00±0.37 | 16.68±0.33 | 15.66±0.32 (1)
rect     | 1k-0.2k-50k      | 2.15±0.13  | 4.71±0.19  | 2.60±0.14  | 2.41±0.13  | 1.99±0.12  | 0.63±0.06 (1)
rect-img | 10k-2k-50k       | 24.04±0.37 | 23.69±0.37 | 22.50±0.37 | 24.05±0.37 | 21.59±0.36 | 20.77±0.36 (1)
convex   | 10k-2k-50k       | 19.13±0.34 | 19.92±0.35 | 18.63±0.34 | 18.41±0.34 | 19.06±0.34 | 16.41±0.32 (1)
NORB     | 19k-5k-24k       | 11.6±0.40  | 8.31±0.35  | -          | 10.10±0.38 | 9.50±0.37  | 7.00±0.32 (1)
CIFAR    | 40k-10k-10k      | 62.7±0.95  | 40.39±0.96 | 43.38±0.97 | 43.25±0.97 | -          | 41.29±0.95 (1)
Figure 3: Examples of small and rare events identified by RFNs in two drug design studies, which
were missed by previous methods (panels A-E; panels C and D show micronuclei). Panels A and B: the
first row gives the coding unit, while the other rows display expression values of genes for
controls (red), active drugs (green), and inactive drugs (black). Drugs (green) in panel A strongly
downregulate the expression of tubulin genes, which hints at a genotoxic effect via the formation of
micronuclei (C). The micronuclei were confirmed by microscopic analysis (D). Drugs (green) in panel
B show a transcriptional effect on genes with a negative feedback to the MAPK signaling pathway (E)
and are therefore potential cancer drugs.
Best results and those with overlapping confidence intervals are given in bold. RFNs were only once
significantly worse than the best method, and there still the second best. In six out of the nine
experiments RFNs performed best, and in four cases they were significantly the best. Supplementary
Section 14 shows results of RFN pretraining for convolutional networks, where RFN pretraining
decreased the test error rates to 7.63% for CIFAR-10 and to 29.75% for CIFAR-100.
RFNs in Drug Discovery. Using RFNs, we analyzed gene expression datasets of two projects in the lead
optimization phase of a big pharmaceutical company [29]. The first project aimed at finding novel
antipsychotics that target PDE10A. The second project was an oncology study that focused on
compounds inhibiting the FGF receptor. In both projects, the expression data was summarized by FARMS
[30] and standardized. RFNs were trained with 500 hidden units, no masking noise, and a learning
rate of \eta = 0.01. The identified transcriptional modules are shown in Fig. 3. Panels A and B
illustrate that RFNs found rare and small events in the input. In panel A, only a few drugs are
genotoxic (rare event), downregulating the expression of a small number of tubulin genes (small
event). The genotoxic effect stems from the formation of micronuclei (panels C and D), since the
mitotic spindle apparatus is impaired. Also in panel B, RFN identified a rare and small event,
namely a transcriptional module that has a negative feedback to the MAPK signaling pathway. The rare
events are unexpectedly inactive drugs (black dots) which do not inhibit the FGF receptor. Both
findings were not detected by other unsupervised methods, while they were highly relevant and
supported decision-making in both projects [29].
5 Conclusion

We have introduced rectified factor networks (RFNs) for constructing very sparse and non-linear
input representations with many coding units in a generative framework. Like factor analysis, RFN
learning explains the data variance by its model parameters. The RFN learning algorithm is a
posterior regularization method which enforces non-negative and normalized posterior means. We have
shown that RFN learning is a generalized alternating minimization method which can be proved to
converge and to be correct. RFNs had the sparsest code, the lowest reconstruction error, and the
lowest covariance approximation error of all methods that yielded sparse representations (SP>10%).
RFNs have been shown to improve performance when used for pretraining deep networks. In two
pharmaceutical drug discovery studies, RFNs detected small and rare gene modules that had so far
been missed by other unsupervised methods. These gene modules were highly relevant and supported
decision-making in both studies. RFNs are geared towards large datasets, sparse coding, and many
representational units; therefore they have high potential as an unsupervised deep learning
technique.
Acknowledgment. The Tesla K40 used for this research was donated by the NVIDIA Corporation.
References
[1] G. E. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.
[2] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In B. Schölkopf, J. C. Platt, and T. Hoffman, editors, NIPS, pages 153-160. MIT Press, 2007.
[3] J. Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85-117, 2015.
[4] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436-444, 2015.
[5] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, pages 807-814. Omnipress, ISBN 978-1-60558-907-7, 2010.
[6] X. Glorot, A. Bordes, and Y. Bengio. Deep sparse rectifier neural networks. In AISTATS, volume 15, pages 315-323, 2011.
[7] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929-1958, 2014.
[8] S. Hochreiter, U. Bodenhofer, et al. FABIA: factor analysis for bicluster acquisition. Bioinformatics, 26(12):1520-1527, 2010.
[9] S. Hochreiter. HapFABIA: Identification of very short segments of identity by descent characterized by rare variants in large sequencing data. Nucleic Acids Res., 41(22):e202, 2013.
[10] B. J. Frey and G. E. Hinton. Variational learning in nonlinear Gaussian belief networks. Neural Computation, 11(1):193-214, 1999.
[11] M. Harva and A. Kaban. Variational learning for rectified factor analysis. Signal Processing, 87(3):509-527, 2007.
[12] K. Ganchev, J. Graça, J. Gillenwater, and B. Taskar. Posterior regularization for structured latent variable models. Journal of Machine Learning Research, 11:2001-2049, 2010.
[13] J. Palmer, D. Wipf, K. Kreutz-Delgado, and B. Rao. Variational EM algorithms for non-Gaussian latent variable models. In NIPS, volume 18, pages 1059-1066, 2006.
[14] D. P. Bertsekas. On the Goldstein-Levitin-Polyak gradient projection method. IEEE Trans. Automat. Control, 21:174-184, 1976.
[15] C. T. Kelley. Iterative Methods for Optimization. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, 1999.
[16] D. P. Bertsekas. Projected Newton methods for optimization problems with simple constraints. SIAM J. Control Optim., 20:221-246, 1982.
[17] J. Abadie and J. Carpentier. Optimization, chapter Generalization of the Wolfe Reduced Gradient Method to the Case of Nonlinear Constraints. Academic Press, 1969.
[18] J. B. Rosen. The gradient projection method for nonlinear programming. Part II. Nonlinear constraints. Journal of the Society for Industrial and Applied Mathematics, 9(4):514-532, 1961.
[19] E. J. Haug and J. S. Arora. Applied Optimal Design. J. Wiley & Sons, New York, 1979.
[20] A. Ben-Tal and A. Nemirovski. Interior Point Polynomial Time Methods for Linear Programming, Conic Quadratic Programming, and Semidefinite Programming, chapter 6, pages 377-442. Society for Industrial and Applied Mathematics, 2001.
[21] A. Gunawardana and W. Byrne. Convergence theorems for generalized alternating minimization procedures. Journal of Machine Learning Research, 6:2049-2073, 2005.
[22] W. I. Zangwill. Nonlinear Programming: A Unified Approach. Prentice Hall, Englewood Cliffs, N.J., 1969.
[23] N. Srebro. Learning with Matrix Factorizations. PhD thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 2004.
[24] A. Hyvärinen and E. Oja. A fast fixed-point algorithm for independent component analysis. Neural Comput., 9(7):1483-1492, 1999.
[25] Y. LeCun, F.-J. Huang, and L. Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE Press, 2004.
[26] P. Vincent, H. Larochelle, et al. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. JMLR, 11:3371-3408, 2010.
[27] H. Larochelle, D. Erhan, et al. An empirical evaluation of deep architectures on problems with many factors of variation. In ICML, pages 473-480, 2007.
[28] A. Krizhevsky. Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto, 2009.
[29] B. Verbist, G. Klambauer, et al. Using transcriptomics to guide lead optimization in drug discovery projects: Lessons learned from the QSTAR project. Drug Discovery Today, 20(5):505-513, 2015.
[30] S. Hochreiter, D.-A. Clevert, and K. Obermayer. A new summarization method for Affymetrix probe level data. Bioinformatics, 22(8):943-949, 2006.
5,484 | 5,964 | Embed to Control: A Locally Linear Latent
Dynamics Model for Control from Raw Images
Manuel Watter*
Jost Tobias Springenberg*
Joschka Boedecker
University of Freiburg, Germany
{watterm,springj,jboedeck}@cs.uni-freiburg.de
Martin Riedmiller
Google DeepMind
London, UK
riedmiller@google.com
Abstract
We introduce Embed to Control (E2C), a method for model learning and control
of non-linear dynamical systems from raw pixel images. E2C consists of a deep
generative model, belonging to the family of variational autoencoders, that learns
to generate image trajectories from a latent space in which the dynamics is constrained to be locally linear. Our model is derived directly from an optimal control
formulation in latent space, supports long-term prediction of image sequences and
exhibits strong performance on a variety of complex control problems.
1
Introduction
Control of non-linear dynamical systems with continuous state and action spaces is one of the key
problems in robotics and, in a broader context, in reinforcement learning for autonomous agents.
A prominent class of algorithms that aim to solve this problem are model-based locally optimal
(stochastic) control algorithms such as iLQG control [1, 2], which approximate the general nonlinear control problem via local linearization. When combined with receding horizon control [3], and
machine learning methods for learning approximate system models, such algorithms are powerful
tools for solving complicated control problems [3, 4, 5]; however, they either rely on a known system
model or require the design of relatively low-dimensional state representations. For real autonomous
agents to succeed, we ultimately need algorithms that are capable of controlling complex dynamical
systems from raw sensory input (e.g. images) only. In this paper we tackle this difficult problem.
If stochastic optimal control (SOC) methods were applied directly to control from raw image data,
they would face two major obstacles. First, sensory data is usually high-dimensional (i.e. images
with thousands of pixels), rendering a naive SOC solution computationally infeasible. Second,
the image content is typically a highly non-linear function of the system dynamics underlying the
observations; thus model identification and control of this dynamics are non-trivial.
While both problems could, in principle, be addressed by designing more advanced SOC algorithms, we approach the "optimal control from raw images" problem differently: we turn the problem of locally optimal control in high-dimensional non-linear systems into one of identifying a
low-dimensional latent state space, in which locally optimal control can be performed robustly and
easily. To learn such a latent space we propose a new deep generative model belonging to the class
of variational autoencoders [6, 7] that is derived from an iLQG formulation in latent space. The
resulting Embed to Control (E2C) system is a probabilistic generative model that holds a belief over
viable trajectories in sensory space, allows for accurate long-term planning in latent space, and is
trained fully unsupervised. We demonstrate the success of our approach on four challenging tasks
for control from raw images and compare it to a range of methods for unsupervised representation
learning. As an aside, we also validate that deep up-convolutional networks [8, 9] are powerful
generative models for large images.
* Authors contributed equally.
2 The Embed to Control (E2C) model

We briefly review the problem of SOC for dynamical systems, introduce approximate locally optimal
control in latent space, and finish with the derivation of our model.

2.1 Problem Formulation
We consider the control of unknown dynamical systems of the form

s_{t+1} = f(s_t, u_t) + \xi , \quad \xi \sim N(0, \Sigma_\xi) ,   (1)

where t denotes the time step, s_t \in R^{n_s} the system state, u_t \in R^{n_u} the applied
control and \xi the system noise. The function f(s_t, u_t) is an arbitrary, smooth, system dynamics.
We equivalently refer to Equation (1) using the notation P(s_{t+1} | s_t, u_t), which we assume to
be a multivariate normal distribution N(f(s_t, u_t), \Sigma_\xi). We further assume that we are only
given access to visual depictions x_t \in R^{n_x} of state s_t. This restriction requires solving a
joint state identification and control problem. For simplicity we will in the following assume that
x_t is a fully observed depiction of s_t, but relax this assumption later.
Our goal then is to infer a low-dimensional latent state space model in which optimal control can be
performed. That is, we seek to learn a function m, mapping from high-dimensional images x_t to
low-dimensional vectors z_t \in R^{n_z} with n_z \ll n_x, such that the control problem can be
solved using z_t instead of x_t:

z_t = m(x_t) + \omega , \quad \omega \sim N(0, \Sigma_\omega) ,   (2)

where \omega accounts for system noise; or equivalently z_t \sim N(m(x_t), \Sigma_\omega). Assuming
for the moment that such a function can be learned (or approximated), we will first define SOC in a
latent space and introduce our model thereafter.
2.2 Stochastic locally optimal control in latent spaces

Let z_t \in R^{n_z} be the inferred latent state from image x_t of state s_t and f^{lat}(z_t, u_t)
the transition dynamics in latent space, i.e., z_{t+1} = f^{lat}(z_t, u_t). Thus f^{lat} models the
changes that occur in z_t when control u_t is applied to the underlying system, as a latent space
analogue of f(s_t, u_t). Assuming f^{lat} is known, optimal controls for a trajectory of length T in
the dynamical system can be derived by minimizing the function J(z_{1:T}, u_{1:T}), which gives the
expected future costs when following (z_{1:T}, u_{1:T}):

J(z_{1:T}, u_{1:T}) = E_z \left[ c_T(z_T, u_T) + \sum_{t=t_0}^{T-1} c(z_t, u_t) \right] ,   (3)

where c(z_t, u_t) are instantaneous costs, c_T(z_T, u_T) denotes terminal costs, and z_{1:T} = {z_1,
..., z_T} and u_{1:T} = {u_1, ..., u_T} are state and action sequences respectively. If z_t contains
sufficient information about s_t, i.e., s_t can be inferred from z_t alone, and f^{lat} is
differentiable, the cost-minimizing controls can be computed from J(z_{1:T}, u_{1:T}) via SOC
algorithms [10]. These optimal control algorithms approximate the global non-linear dynamics with
locally linear dynamics at each time step t. Locally optimal actions can then be found in closed
form. Formally, given a reference trajectory \bar{z}_{1:T} (the current estimate for the optimal
trajectory) together with corresponding controls \bar{u}_{1:T}, the system is linearized as

z_{t+1} = A(\bar{z}_t) z_t + B(\bar{z}_t) u_t + o(\bar{z}_t) + \omega , \quad \omega \sim N(0, \Sigma_\omega) ,   (4)

where A(\bar{z}_t) = \frac{\partial f^{lat}(\bar{z}_t, \bar{u}_t)}{\partial \bar{z}_t} and
B(\bar{z}_t) = \frac{\partial f^{lat}(\bar{z}_t, \bar{u}_t)}{\partial \bar{u}_t} are local
Jacobians, and o(\bar{z}_t) is an offset. To enable efficient computation of the local controls we
assume the costs to be a quadratic function of the latent representation:

c(z_t, u_t) = (z_t - z_{goal})^T R_z (z_t - z_{goal}) + u_t^T R_u u_t ,   (5)

where R_z \in R^{n_z \times n_z} and R_u \in R^{n_u \times n_u} are cost weighting matrices and
z_{goal} is the inferred representation of the goal state. We also assume c_T(z_T, u_T) =
c(z_T, u_T) throughout this paper. In combination with Equation (4) this gives us a local
linear-quadratic-Gaussian formulation at each time step t which can be solved by SOC algorithms such
as iterative linear-quadratic regulation (iLQR) [11] or approximate inference control (AICO) [12].
The result of this trajectory optimization step is a locally optimal trajectory with corresponding
control sequence (z^*_{1:T}, u^*_{1:T}) = \arg\min_{z_{1:T}, u_{1:T}} J(z_{1:T}, u_{1:T}).
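To make these quantities concrete, the sketch below linearizes a generic latent dynamics function by
finite differences and evaluates the quadratic cost of Eq. (5). It is a generic illustration of the
ingredients an iLQR/AICO planner consumes, not the authors' implementation:

import numpy as np

def linearize(f_lat, z_bar, u_bar, eps=1e-5):
    # Local linearization z_{t+1} ~ A z_t + B u_t + o around (z_bar, u_bar), Eq. (4).
    nz, nu = len(z_bar), len(u_bar)
    f0 = f_lat(z_bar, u_bar)
    A = np.stack([(f_lat(z_bar + eps * e, u_bar) - f0) / eps
                  for e in np.eye(nz)], axis=1)        # Jacobian w.r.t. z
    B = np.stack([(f_lat(z_bar, u_bar + eps * e) - f0) / eps
                  for e in np.eye(nu)], axis=1)        # Jacobian w.r.t. u
    o = f0 - A @ z_bar - B @ u_bar                     # offset term
    return A, B, o

def cost(z, u, z_goal, R_z, R_u):
    # Quadratic latent cost c(z_t, u_t), Eq. (5).
    dz = z - z_goal
    return dz @ R_z @ dz + u @ R_u @ u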
Figure 1: The information flow in the E2C model. From left to right, we encode and decode an image
x_t with the networks h^{enc}_\phi and h^{dec}_\theta, where we use the latent code z_t for the
transition step. The h^{trans}_\psi network computes the local matrices A_t, B_t, o_t with which we
can predict \hat{z}_{t+1} from z_t and u_t. Similarity to the encoding z_{t+1} is enforced by a KL
divergence on their distributions, and reconstruction is again performed by h^{dec}_\theta.
2.3 A locally linear latent state space model for dynamical systems

Starting from the SOC formulation, we now turn to the problem of learning an appropriate
low-dimensional latent representation z_t \sim P(Z_t | m(x_t), \Sigma_\omega) of x_t. The
representation z_t has to fulfill three properties: (i) it must capture sufficient information about
x_t (enough to enable reconstruction); (ii) it must allow for accurate prediction of the next latent
state z_{t+1} and thus, implicitly, of the next observation x_{t+1}; (iii) the prediction f^{lat} of
the next latent state must be locally linearizable for all valid control magnitudes u_t. Given some
representation z_t, properties (ii) and (iii) in particular require us to capture possibly highly
non-linear changes of the latent representation due to transformations of the observed scene induced
by control commands. Crucially, these are particularly hard to model and subsequently linearize. We
circumvent this problem by taking a more direct approach: instead of learning a latent space z and a
transition model f^{lat} which are then linearized and combined with SOC algorithms, we directly
impose the desired transformation properties on the representation z_t during learning. We select
these properties such that prediction in the latent space, as well as locally linear inference of
the next observation according to Equation (4), are easy.

The transformation properties that we desire from a latent representation can be formalized directly
from the iLQG formulation given in Section 2.2. Formally, following Equation (2), let the latent
representation be Gaussian, P(Z|X) = N(m(x_t), \Sigma_\omega). To infer z_t from x_t we first
require a method for sampling latent states. Ideally, we would generate samples directly from the
unknown true posterior P(Z|X), which we, however, have no access to. Following the variational Bayes
approach (see Jordan et al. [13] for an overview) we resort to sampling z_t from an approximate
posterior distribution Q_\phi(Z|X) with parameters \phi.

Inference model for Q_\phi. In our work this is always a diagonal Gaussian distribution Q_\phi(Z|X)
= N(\mu_t, diag(\sigma_t^2)), whose mean \mu_t \in R^{n_z} and covariance \Sigma_t =
diag(\sigma_t^2) \in R^{n_z \times n_z} are computed by an encoding neural network with outputs

\mu_t = W_\mu h^{enc}_\phi(x_t) + b_\mu ,   (6)
\log \sigma_t = W_\sigma h^{enc}_\phi(x_t) + b_\sigma ,   (7)

where h^{enc}_\phi \in R^{n_e} is the activation of the last hidden layer and where \phi is given by
the set of all learnable parameters of the encoding network, including the weight matrices W_\mu,
W_\sigma and biases b_\mu, b_\sigma. Parameterizing the mean and variance of a Gaussian distribution
based on a neural network gives us a natural and very expressive model for our latent space. It
additionally comes with the benefit that we can use the reparameterization trick [6, 7] to
backpropagate gradients of a loss function based on samples through the latent distribution.
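In code, the inference model of Eqs. (6)-(7) combined with the reparameterization trick reads
roughly as below (a NumPy forward-pass sketch; in practice an autodiff framework would be used, and
h_enc stands for the activation of the encoder's last hidden layer):

import numpy as np

def encode_and_sample(h_enc, W_mu, b_mu, W_sigma, b_sigma, rng=None):
    # Sample z ~ Q_phi(Z|X) = N(mu, diag(sigma^2)) via reparameterization.
    if rng is None:
        rng = np.random.default_rng(0)
    mu = W_mu @ h_enc + b_mu                    # Eq. (6)
    log_sigma = W_sigma @ h_enc + b_sigma       # Eq. (7)
    epsilon = rng.standard_normal(mu.shape)     # noise independent of parameters
    z = mu + np.exp(log_sigma) * epsilon        # gradients flow through mu, sigma
    return z, mu, log_sigma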
Generative model for P_\theta. Using the approximate posterior distribution Q_\phi, we generate
observed samples (images) \hat{x}_t and \hat{x}_{t+1} from latent samples z_t and z_{t+1} by
enforcing a locally linear relationship in latent space according to Equation (4), yielding the
following generative model:

z_t \sim Q_\phi(Z \mid X) = N(\mu_t, \Sigma_t) ,
\hat{z}_{t+1} \sim \hat{Q}_\psi(\hat{Z} \mid Z, u) = N(A_t \mu_t + B_t u_t + o_t, C_t) ,   (8)
\hat{x}_t, \hat{x}_{t+1} \sim P_\theta(X \mid Z) = \mathrm{Bernoulli}(p_t) ,

where \hat{Q}_\psi is the next latent state posterior distribution, which exactly follows the linear
form required for stochastic optimal control. With \omega_t \sim N(0, H_t) as an estimate of the
system noise, C_t can be decomposed as C_t = A_t \Sigma_t A_t^T + H_t. Note that while the
transition dynamics in our generative model operates on the inferred latent space, it takes
untransformed controls into account. That is, we aim to learn a latent space such that the
transition dynamics in z linearizes the non-linear observed dynamics in x and is locally linear in
the applied controls u. Reconstruction of an image from z_t is performed by passing the sample
through multiple hidden layers of a decoding neural network which computes the mean p_t of the
generative Bernoulli distribution^1 P_\theta(X|Z) as

p_t = W_p h^{dec}_\theta(z_t) + b_p ,   (9)

where h^{dec}_\theta(z_t) \in R^{n_d} is the response of the last hidden layer in the decoding
network. The set of parameters for the decoding network, including the weight matrix W_p and bias
b_p, then makes up the learned generative parameters \theta.
Transition model for \hat{Q}_\psi. What remains is to specify how the linearization matrices A_t \in
R^{n_z \times n_z}, B_t \in R^{n_z \times n_u} and offset o_t \in R^{n_z} are predicted. Following
the same approach as for the distribution means and covariance matrices, we predict all local
transformation parameters from samples z_t based on the hidden representation h^{trans}_\psi(z_t)
\in R^{n_t} of a third neural network with parameters \psi, to which we refer as the transformation
network. Specifically, we parametrize the transformation matrices and offset as

vec[A_t] = W_A h^{trans}_\psi(z_t) + b_A ,
vec[B_t] = W_B h^{trans}_\psi(z_t) + b_B ,   (10)
o_t = W_o h^{trans}_\psi(z_t) + b_o ,

where vec denotes vectorization and therefore vec[A_t] \in R^{(n_z)^2} and vec[B_t] \in R^{n_z
\cdot n_u}. To circumvent estimating the full matrix A_t of size n_z \times n_z, we can choose it to
be a perturbation of the identity matrix, A_t = (I + v_t r_t^T), which reduces the number of
parameters to be estimated for A_t to 2 n_z.
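A sketch of these outputs, using the rank-one parameterization A_t = I + v_t r_t^T, is given below;
the names of the output-layer weights in params are our assumptions about one possible organization:

import numpy as np

def transition_params(h_trans, params, n_z, n_u):
    # Predict local dynamics A_t, B_t, o_t from the transformation network, Eq. (10).
    v = params['W_v'] @ h_trans + params['b_v']            # (n_z,)
    r = params['W_r'] @ h_trans + params['b_r']            # (n_z,)
    A = np.eye(n_z) + np.outer(v, r)                       # A_t = I + v_t r_t^T
    B = (params['W_B'] @ h_trans + params['b_B']).reshape(n_z, n_u)
    o = params['W_o'] @ h_trans + params['b_o']            # offset o_t
    return A, B, o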
A sketch of the complete architecture is shown in Figure 1. It also visualizes an additional
constraint that is essential for learning a representation for long-term predictions: we require
samples \hat{z}_{t+1} from the state transition distribution \hat{Q}_\psi to be similar to the
encoding of x_{t+1} through Q_\phi. While it might seem that just learning a perfect reconstruction
of x_{t+1} from \hat{z}_{t+1} is enough, we require multi-step predictions for planning in Z, which
must correspond to valid trajectories in the observed space X. Without enforcing similarity between
samples from \hat{Q}_\psi and Q_\phi, following a transition in latent space from z_t with action
u_t may lead to a point \hat{z}_{t+1} from which reconstruction of x_{t+1} is possible, but that is
not a valid encoding (i.e. the model will never encode any image as \hat{z}_{t+1}). Executing
another action in \hat{z}_{t+1} then does not result in a valid latent state (since the transition
model is conditional on samples coming from the inference network) and thus long-term predictions
fail. In a nutshell, such a divergence between encodings and the transition model results in a
generative model that does not accurately model the Markov chain formed by the observations.
2.4 Learning via stochastic gradient variational Bayes

For training the model we use a data set D = {(x_1, u_1, x_2), ..., (x_{T-1}, u_{T-1}, x_T)}
containing observation tuples with corresponding controls obtained from interactions with the
dynamical system. Using this data set, we learn the parameters of the inference, transition and
generative model by minimizing a variational bound on the true data negative log-likelihood
-\log P(x_t, u_t, x_{t+1}) plus an additional constraint on the latent representation. The complete
loss function^2 is given as

L(D) = \sum_{(x_t, u_t, x_{t+1}) \in D} L^{bound}(x_t, u_t, x_{t+1}) + \lambda \, \mathrm{KL}\!\left( \hat{Q}_\psi(\hat{Z} \mid \mu_t, u_t) \,\|\, Q_\phi(Z \mid x_{t+1}) \right) .   (11)

The first part of this loss is the per-example variational bound on the log-likelihood:

L^{bound}(x_t, u_t, x_{t+1}) = E_{z_t \sim Q_\phi,\; \hat{z}_{t+1} \sim \hat{Q}_\psi}\!\left[ -\log P_\theta(x_t \mid z_t) - \log P_\theta(x_{t+1} \mid \hat{z}_{t+1}) \right] + \mathrm{KL}(Q_\phi \| P(Z)) ,   (12)

where Q_\phi, P_\theta and \hat{Q}_\psi are the parametric inference, generative and transition
distributions from Section 2.3 and P(Z_t) is a prior on the approximate posterior Q_\phi, which we
always chose to be an isotropic Gaussian distribution with mean zero and unit variance. The second
KL divergence in Equation (11) is an additional contraction term with weight \lambda that enforces
agreement between the transition and inference models. This term is essential for establishing a
Markov chain in latent space that corresponds to the real system dynamics (see Section 2.3 above for
an in-depth discussion). This KL divergence can also be seen as a prior on the latent transition
model. Note that all KL terms can be computed analytically for our model (see supplementary for
details).

During training we approximate the expectation in L(D) via sampling. Specifically, we take one
sample z_t for each input x_t and transform that sample using Equation (10) to give a valid sample
\hat{z}_{t+1} from \hat{Q}_\psi. We then jointly learn all parameters of our model by minimizing
L(D) using SGD.

^1 A Bernoulli distribution for P_\theta is a common choice when modeling black-and-white images.
^2 Note that this is the loss for the latent state space model and is distinct from the SOC costs.
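As an example of such a closed form, the prior term KL(Q_phi || P(Z)) in Eq. (12) for a diagonal
Gaussian reduces to the familiar expression sketched below (our notation):

import numpy as np

def kl_to_standard_normal(mu, log_sigma):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ), the prior term in Eq. (12).
    sigma2 = np.exp(2.0 * log_sigma)
    return 0.5 * np.sum(sigma2 + mu ** 2 - 1.0 - 2.0 * log_sigma)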
3 Experimental Results

We evaluate our model on four visual tasks: an agent in a plane with obstacles, a visual version of
the classic inverted pendulum swing-up task, balancing a cart-pole system, and control of a
three-link arm with larger images. These are described in detail below.

3.1 Experimental Setup
Model training. We consider two different network types for our model: standard fully connected
neural networks with up to three layers, which work well for moderately sized images, are used for
the planar and swing-up experiments; a deep convolutional network for the encoder in combination
with an up-convolutional network as the decoder, which, in accordance with recent findings from the
literature [8, 9], we found to be an adequate model for larger images. Training was performed using
Adam [14] throughout all experiments. The training data set D for all tasks was generated by
randomly sampling N state observations and actions with corresponding successor states. For the
plane we used N = 3,000 samples, for the inverted pendulum and cart-pole system we used N = 15,000,
and for the arm N = 30,000. A complete list of architecture parameters and hyperparameter choices,
as well as an in-depth explanation of the up-convolutional network, is given in the supplementary
material. We will make our code and a video containing controlled trajectories for all systems
available under http://ml.informatik.uni-freiburg.de/research/e2c .
Model variants. In addition to the Embed to Control (E2C) dynamics model derived above, we also
consider two variants: by removing the latent dynamics network h^{trans}_\psi (i.e. setting its
output to one in Equation (10)) we obtain a variant in which A_t, B_t and o_t are estimated as
globally linear matrices (Global E2C). If we instead replace the transition model with a network
estimating the dynamics as a non-linear function \hat{f}^{lat} and only linearize during planning,
estimating A_t, B_t, o_t as Jacobians of \hat{f}^{lat} as described in Section 2.2, we obtain a
variant with non-linear latent dynamics.
Baseline models. For a thorough comparison, and to exhibit the complicated nature of the tasks, we
also test a set of baseline models on the plane and the inverted pendulum task (using the same
architecture as the E2C model): a standard variational autoencoder (VAE) and a deep autoencoder (AE)
are trained on the autoencoding subtask for visual problems. That is, given a data set D used for
training our model, we remove all actions from the tuples in D and disregard temporal context
between images. After autoencoder training we learn a dynamics model in latent space, approximating
f^{lat} from Section 2.2. We also consider a VAE variant with a slowness term on the latent
representation; a full description of this variant is given in the supplementary material.
Optimal control algorithms. To perform optimal control in the latent space of different models,
we employ two trajectory optimization algorithms: iterative linear quadratic regulation (iLQR) [11]
(for the plane and inverted pendulum) and approximate inference control (AICO) [12] (all other
experiments). For all VAEs both methods operate on the mean of the distributions Q_φ and Q̂_ψ. AICO
additionally makes use of the local Gaussian covariances Σ_t and C_t. Except for the experiments
on the planar system, control was performed in a model predictive control fashion using the receding-horizon
scheme introduced in [3]. To obtain closed-loop control given an image x_t, it is first
passed through the encoder to obtain the latent state z_t. A locally optimal trajectory is subsequently
found by optimizing

(z*_{t:t+T}, u*_{t:t+T}) = arg min_{u_{t:t+T}} J(z_{t:t+T}, u_{t:t+T})

with fixed, small horizon T (with T = 10 unless noted otherwise). Controls u*_t are applied to the
system and a transition to z_{t+1} is observed (by encoding the next image x_{t+1}).
[Figure 2 panels: AE, VAE with slowness, VAE, Non-linear E2C, Global E2C, E2C.]
Figure 2: The true state space of the planar system (left) with examples (obstacles encoded as circles)
and the inferred spaces (right) of different models. The spaces are spanned by generating images for
every valid position of the agent and embedding them with the respective encoders.
Then a new control sequence with horizon T, starting in z_{t+1}, is found using the last estimated
trajectory as a bootstrap. Note that planning is performed entirely in the latent state, without access
to any observations except for the depiction of the current state. To compute the cost function
c(z_t, u_t) required for trajectory optimization in z, we assume knowledge of the observation x_goal
of the goal state s_goal. This observation is then transformed into latent space and costs are computed
according to Equation (5).
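The receding-horizon loop can be summarized by the sketch below; `encoder`, `plan`, and `system` are hypothetical callables standing in for the trained encoder, a trajectory optimizer such as iLQR or AICO, and the environment:

```python
def mpc_loop(x0, encoder, plan, system, steps, T=10):
    # Model predictive control entirely in latent space: encode, plan a
    # short horizon, apply the first control, re-encode, repeat.
    x, warm_start = x0, None
    for _ in range(steps):
        z = encoder(x)                  # image -> latent state z_t
        u_seq = plan(z, T, warm_start)  # optimize u_{t:t+T} in latent space
        x = system(u_seq[0])            # apply u*_t, observe next image
        warm_start = u_seq              # bootstrap the next optimization
    return x
```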
3.2 Control in a planar system
The agent in the planar system can move in a bounded two-dimensional plane by choosing a
continuous offset in the x- and y-directions. The high-dimensional representation of a state is a 40 × 40
black-and-white image. Obstructed by six circular obstacles, the task is to move to the bottom right
of the image, starting from a random x position at the top of the image. The encodings of obstacles
are obtained prior to planning, and an additional quadratic cost term penalizes proximity to them.
A depiction of the observations on which control is performed, together with their corresponding
state values and embeddings into latent space, is shown in Figure 2. The figure also clearly shows
a fundamental advantage the E2C model has over its competitors: while the separately trained
autoencoders make for aesthetically pleasing pictures, the models failed to discover the underlying
structure of the state space, complicating dynamics estimation and largely invalidating costs based
on distances in said space. Including the latent dynamics constraints in these end-to-end models, on
the other hand, yields latent spaces approaching the optimal planar embedding.
We test the long-term accuracy by accumulating latent and real trajectory costs to quantify whether
the imagined trajectory reflects reality. The results for all models, when starting from random positions
at the top and executing 40 pre-computed actions, are summarized in Table 1, using a separate
test set for evaluating reconstructions. While all methods achieve a low reconstruction loss, the
difference in accumulated real costs per trajectory shows the superiority of the E2C model. Using the
globally or locally linear E2C model, trajectories planned in latent space are as good as trajectories
planned on the real state. All models besides E2C fail to give long-term predictions that result in
good performance.
3.3 Learning swing-up for an inverted pendulum
We next turn to the task of controlling the classical inverted pendulum system [15] from images.
We create depictions of the state by rendering a fixed-length line starting from the center of the
image at an angle corresponding to the pendulum position. The goal in this task is to swing up and
balance an underactuated pendulum from a resting position (pendulum hanging down). Exemplary
observations and reconstructions for this system are given in Figure 3(d). In the visual inverted
pendulum task our algorithm faces two additional difficulties: first, the observed space is non-Markov, as
the angular velocity cannot be inferred from a single image; and second, discretization errors due to
rendering pendulum angles as small 48x48 pixel images make exact control difficult. To restore the
Markov property, we stack two images (as input channels), thus observing a one-step history.
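The stacking itself is a one-liner (a sketch; the 48x48 shape follows the rendering described above):

```python
import numpy as np

def stack_history(prev_frame, cur_frame):
    # Two grayscale 48x48 frames become a 2-channel observation, so that
    # angular velocity is recoverable from the one-step history.
    return np.stack([prev_frame, cur_frame], axis=0)  # shape (2, 48, 48)
```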
Figure 3 shows the topology of the latent space for our model, as well as one sample trajectory in
true state and latent space. The fact that the model can learn a meaningful embedding, separating
Table 1: Comparison between different approaches to model learning from raw pixels for the planar
and pendulum system. We compare all models with respect to their prediction quality on a test set
of sampled transitions and with respect to their performance when combined with SOC (trajectory
cost for control from different start states). Note that trajectory costs in latent space are not necessarily
comparable. The "real" trajectory cost was computed on the dynamics of the simulator while
executing planned actions. For the true models for s_t, real trajectory costs were 20.24 ± 4.15 for the
planar system, and 9.8 ± 2.4 for the pendulum. Success was defined as reaching the goal state and
staying ε-close to it for the rest of the trajectory (if non-terminating). All statistics are over 5/30
(plane/pendulum) different starting positions. A * marks separately trained dynamics networks.
                      State Loss        Next State Loss            Trajectory Cost                 Success
Algorithm             log p(x_t|x̂_t)    log p(x_{t+1}|x̂_t, u_t)    Latent           Real           percent

Planar System
AE*                   11.5 ± 97.8       3538.9 ± 1395.2            1325.6 ± 81.2    273.3 ± 16.4   0 %
VAE*                  3.6 ± 18.9        652.1 ± 930.6              43.1 ± 20.8      91.3 ± 16.4    0 %
VAE + slowness*       10.5 ± 22.8       104.3 ± 235.8              47.1 ± 20.5      89.1 ± 16.4    0 %
Non-linear E2C        8.3 ± 5.5         11.3 ± 10.1                19.8 ± 9.8       42.3 ± 16.4    96.6 %
Global E2C            6.9 ± 3.2         9.3 ± 4.6                  12.5 ± 3.9       27.3 ± 9.7     100 %
E2C                   7.7 ± 2.0         9.7 ± 3.2                  10.3 ± 2.8       25.1 ± 5.3     100 %

Inverted Pendulum Swing-Up
AE*                   8.9 ± 100.3       13433.8 ± 6238.8           1285.9 ± 355.8   194.7 ± 44.8   0 %
VAE*                  7.5 ± 47.7        8791.2 ± 17356.9           497.8 ± 129.4    237.2 ± 41.2   0 %
VAE + slowness*       26.5 ± 18.0       779.7 ± 633.3              419.5 ± 85.8     188.2 ± 43.6   0 %
E2C no latent KL      64.4 ± 32.8       87.7 ± 64.2                489.1 ± 87.5     213.2 ± 84.3   0 %
Non-linear E2C        59.6 ± 25.2       72.6 ± 34.5                313.3 ± 65.7     37.4 ± 12.4    63.33 %
Global E2C            115.5 ± 56.9      125.3 ± 62.6               628.1 ± 45.9     125.1 ± 10.7   0 %
E2C                   84.0 ± 50.8       89.3 ± 42.9                275.0 ± 16.6     15.4 ± 3.4     90 %
velocities and positions, from this data is remarkable (no other model recovered this shape). Table 1
again compares the different models quantitatively. While the E2C model is not the best in terms of
reconstruction performance, it is the only model resulting in stable swing-up and balance behavior.
We explain the failure of the other models by the fact that the non-linear latent dynamics model
cannot be guaranteed to be linearizable for all control magnitudes, resulting in undesired behavior
around unstable fixed points of the real system dynamics, and that for this task a globally linear
dynamics model is inadequate.
3.4 Balancing a cart-pole and controlling a simulated robot arm
Finally, we consider control of two more complex dynamical systems from images, using a six-layer
convolutional inference network and a six-layer up-convolutional generative network, resulting in a 12-layer
deep path from input to reconstruction. Specifically, we control a visual version of the classical cart-pole
system [16] from a history of two 80 × 80 pixel images, as well as a three-link planar robot arm
based on a history of two 128 × 128 pixel images. The latent space was set to be 8-dimensional in
both experiments.
[Figure 3 panels (a)-(d): state-space and latent-space plots over angle, angular velocity, and latent dimensions z_0, z_1, z_2, plus image strips x_70, x_100 with reconstructions.]
Figure 3: (a) The true state space of the inverted pendulum task overlaid with a successful trajectory
taken by the E2C agent. (b) The learned latent space. (c) The trajectory from (a) traced out in the
latent space. (d) Images x and reconstructions x̂ showing current positions (right) and history (left).
[Figure 4: numbered cart-pole frames 1-8 and observed vs. predicted arm images.]
Figure 4: Left: Trajectory from the cart-pole domain. Only the first image (green) is "real"; all
other images are "dreamed up" by our model. Notice the discretization artifacts present in the real
image. Right: Exemplary observed (with history image omitted) and predicted images (including
the history image) for a trajectory in the visual robot arm domain, with the goal marked in red.
The real state dimensionality for the cart-pole is four, and it is controlled using one action, while for
the arm the real state can be described in 6 dimensions (joint angles and velocities) and controlled
using a three-dimensional action vector corresponding to motor torques.
As in previous experiments, the E2C model seems to have no problem finding a locally linear
embedding of images into latent space in which control can be performed. Figure 4 depicts exemplary
images, for both problems, from a trajectory executed by our system. The costs for these trajectories
(11.13 for the cart-pole, 85.12 for the arm) are only slightly worse than those of trajectories obtained
by AICO operating on the real system dynamics starting from the same start state (7.28 and 60.74,
respectively). The supplementary material contains additional experiments using these domains.
4 Comparison to recent work
In the context of representation learning for control (see Böhmer et al. [17] for a review), deep
autoencoders (ignoring state transitions) similar to our baseline models have been applied previously,
e.g. by Lange and Riedmiller [18]. A more direct route to control based on image streams is taken
by recent work on (model-free) deep end-to-end Q-learning for Atari games by Mnih et al. [19], as
well as kernel-based [20] and deep policy learning for robot control [21].
Close to our approach is a recent paper by Wahlström et al. [22], where autoencoders are used to
extract a latent representation for control from images, on which a non-linear model of the forward
dynamics is learned. Their model is trained jointly and is thus similar to the non-linear E2C variant
in our comparison. In contrast to our model, their formulation requires PCA pre-processing and ensures
neither that long-term predictions in latent space do not diverge, nor that they are linearizable.
As stated above, our system belongs to the family of VAEs and is generally similar to recent work
such as Kingma and Welling [6], Rezende et al. [7], Gregor et al. [23], and Bayer and Osendorfer [24].
Two additional parallels between our work and recent advances in training deep neural networks
can be observed. First, the idea of enforcing desired transformations in latent space during learning,
such that the data becomes easy to model, has appeared several times already in the literature.
This includes the development of transforming auto-encoders [25] and recent probabilistic models
for images [26, 27]. Second, learning relations between pairs of images, although without control,
has received considerable attention from the community during the last years [28, 29]. In a broader
context our model is related to work on state estimation in Markov decision processes (see Langford
et al. [30] for a discussion) through, e.g., hidden Markov models and Kalman filters [31, 32].
5 Conclusion
We presented Embed to Control (E2C), a system for stochastic optimal control on high-dimensional
image streams. Key to the approach is the extraction of a latent dynamics model which is constrained
to be locally linear in its state transitions. An evaluation on four challenging benchmarks revealed
that E2C can find embeddings on which control can be performed with ease, reaching performance
close to that achievable by optimal control on the real system model.
Acknowledgments
We thank A. Radford, L. Metz, and T. DeWolf for sharing code, as well as A. Dosovitskiy for useful
discussions. This work was partly funded by a DFG grant within the priority program "Autonomous
learning" (SPP1597) and the BrainLinks-BrainTools Cluster of Excellence (grant number EXC
1086). M. Watter is funded through the State Graduate Funding Program of Baden-Württemberg.
References
[1] D. Jacobson and D. Mayne. Differential dynamic programming. American Elsevier, 1970.
[2] E. Todorov and W. Li. A generalized iterative LQG method for locally-optimal feedback control of
constrained nonlinear stochastic systems. In ACC. IEEE, 2005.
[3] Y. Tassa, T. Erez, and W. D. Smart. Receding horizon differential dynamic programming. In Proc. of
NIPS, 2008.
[4] Y. Pan and E. Theodorou. Probabilistic differential dynamic programming. In Proc. of NIPS, 2014.
[5] S. Levine and V. Koltun. Variational policy search via trajectory optimization. In Proc. of NIPS, 2013.
[6] D. P. Kingma and M. Welling. Auto-encoding variational bayes. In Proc. of ICLR, 2014.
[7] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in
deep generative models. In Proc. of ICML, 2014.
[8] M. D. Zeiler, D. Krishnan, G. W. Taylor, and R. Fergus. Deconvolutional networks. In CVPR, 2010.
[9] A. Dosovitskiy, J. T. Springenberg, and T. Brox. Learning to generate chairs with convolutional neural
networks. In Proc. of CVPR, 2015.
[10] R. F. Stengel. Optimal Control and Estimation. Dover Publications, 1994.
[11] W. Li and E. Todorov. Iterative Linear Quadratic Regulator Design for Nonlinear Biological Movement
Systems. In Proc. of ICINCO, 2004.
[12] M. Toussaint. Robot Trajectory Optimization using Approximate Inference. In Proc. of ICML, 2009.
[13] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for
graphical models. In Machine Learning, 1999.
[14] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In Proc. of ICLR, 2015.
[15] H. Wang, K. Tanaka, and M. Griffin. An approach to fuzzy control of nonlinear systems; stability and
design issues. IEEE Trans. on Fuzzy Systems, 4(1), 1996.
[16] R. S. Sutton and A. G. Barto. Introduction to Reinforcement Learning. MIT Press, Cambridge, MA,
USA, 1st edition, 1998. ISBN 0262193981.
[17] W. Böhmer, J. T. Springenberg, J. Boedecker, M. Riedmiller, and K. Obermayer. Autonomous learning
of state representations for control. KI - Künstliche Intelligenz, 2015.
[18] S. Lange and M. Riedmiller. Deep auto-encoder neural networks in reinforcement learning. In Proc. of
IJCNN, 2010.
[19] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller,
A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran,
D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature,
518(7540), 02 2015.
[20] H. van Hoof, J. Peters, and G. Neumann. Learning of non-parametric control policies with high-dimensional state features. In Proc. of AISTATS, 2015.
[21] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. CoRR,
abs/1504.00702, 2015. URL http://arxiv.org/abs/1504.00702.
[22] N. Wahlström, T. B. Schön, and M. P. Deisenroth. From pixels to torques: Policy learning with deep
dynamical models. CoRR, abs/1502.02251, 2015. URL http://arxiv.org/abs/1502.02251.
[23] K. Gregor, I. Danihelka, A. Graves, D. Rezende, and D. Wierstra. DRAW: A recurrent neural network for
image generation. In Proc. of ICML, 2015.
[24] J. Bayer and C. Osendorfer. Learning stochastic recurrent networks. In NIPS 2014 Workshop on Advances
in Variational Inference, 2014.
[25] G. Hinton, A. Krizhevsky, and S. Wang. Transforming auto-encoders. In Proc. of ICANN, 2011.
[26] L. Dinh, D. Krueger, and Y. Bengio. Nice: Non-linear independent components estimation. CoRR,
abs/1410.8516, 2015. URL http://arxiv.org/abs/1410.8516.
[27] T. Cohen and M. Welling. Transformation properties of learned visual representations. In ICLR, 2015.
[28] G. W. Taylor, L. Sigal, D. J. Fleet, and G. E. Hinton. Dynamical binary latent variable models for 3d
human pose tracking. In Proc. of CVPR, 2010.
[29] R. Memisevic. Learning to relate images. IEEE Trans. on PAMI, 35(8):1829?1846, 2013.
[30] J. Langford, R. Salakhutdinov, and T. Zhang. Learning nonlinear dynamic models. In ICML, 2009.
[31] M. West and J. Harrison. Bayesian Forecasting and Dynamic Models (Springer Series in Statistics).
Springer-Verlag, February 1997. ISBN 0387947256.
[32] T. Matsubara, V. Gómez, and H. J. Kappen. Latent Kullback Leibler control for continuous-state systems
using probabilistic graphical models. UAI, 2014.
Bayesian Dark Knowledge
Anoop Korattikara, Vivek Rathod, Kevin Murphy
Google Research
{kbanoop, rathodv, kpmurphy}@google.com
Max Welling
University of Amsterdam
m.welling@uva.nl
Abstract
We consider the problem of Bayesian parameter estimation for deep neural networks, which is important in problem settings where we may have little data, and/or
where we need accurate posterior predictive densities p(y|x, D), e.g., for applications involving bandits or active learning. One simple approach to this is to use
online Monte Carlo methods, such as SGLD (stochastic gradient Langevin dynamics). Unfortunately, such a method needs to store many copies of the parameters
(which wastes memory), and needs to make predictions using many versions of
the model (which wastes time).
We describe a method for "distilling" a Monte Carlo approximation to the posterior predictive density into a more compact form, namely a single deep neural
network. We compare to two very recent approaches to Bayesian neural networks,
namely an approach based on expectation propagation [HLA15] and an approach
based on variational Bayes [BCKW15]. Our method performs better than both of
these, is much simpler to implement, and uses less computation at test time.
1 Introduction
Deep neural networks (DNNs) have recently been achieving state-of-the-art results in many fields.
However, their predictions are often overconfident, which is a problem in applications such as
active learning, reinforcement learning (including bandits), and classifier fusion, which all rely on
good estimates of uncertainty.
A principled way to tackle this problem is to use Bayesian inference. Specifically, we first compute
the posterior distribution over the model parameters, p(θ|D_N) ∝ p(θ) ∏_{i=1}^N p(y_i|x_i, θ), where
D_N = {(x_i, y_i)}_{i=1}^N, x_i ∈ X^D is the i'th input (where D is the number of features), and
y_i ∈ Y is the i'th output. Then we compute the posterior predictive distribution,
p(y|x, D_N) = ∫ p(y|x, θ) p(θ|D_N) dθ, for each test point x.
For reasons of computational speed, it is common to approximate the posterior distribution by a
point estimate such as the MAP estimate, θ̂ = argmax p(θ|D_N). When N is large, we often use
stochastic gradient descent (SGD) to compute θ̂. Finally, we make a plug-in approximation to the
predictive distribution: p(y|x, D_N) ≈ p(y|x, θ̂). Unfortunately, this loses most of the benefits
of the Bayesian approach, since uncertainty in the parameters (which induces uncertainty in the
predictions) is ignored.
Various ways of more accurately approximating p(θ|D_N) (and hence p(y|x, D_N)) have been developed.
Recently, [HLA15] proposed a method called "probabilistic backpropagation" (PBP) based
on an online version of expectation propagation (EP) (i.e., using repeated assumed density filtering
(ADF)), where the posterior is approximated as a product of univariate Gaussians, one per parameter:
p(θ|D_N) ≈ q(θ) ∝ ∏_i N(θ_i | m_i, v_i).
An alternative to EP is variational Bayes (VB), where we optimize a lower bound on the marginal
likelihood. [Gra11] presented a (biased) Monte Carlo estimate of this lower bound and applies
his method, called "variational inference" (VI), to infer the neural network weights. More recently,
[BCKW15] proposed an approach called "Bayes by Backprop" (BBB), which extends the VI method
with an unbiased MC estimate of the lower bound based on the "reparameterization trick" of [KW14,
RMW14]. In both [Gra11] and [BCKW15], the posterior is approximated by a product of univariate
Gaussians.
Although EP and VB scale well with data size (since they use online learning), there are several
problems with these methods: (1) they can give poor approximations when the posterior p(θ|D_N)
does not factorize, or if it has multi-modality or skew; (2) at test time, computing the predictive
density p(y|x, D_N) can be much slower than using the plug-in approximation, because of the need
to integrate out the parameters; (3) they need to use double the memory of a standard plug-in method
(to store the mean and variance of each parameter), which can be problematic in memory-limited
settings such as mobile phones; (4) they can be quite complicated to derive and implement.
A common alternative to EP and VB is to use MCMC methods to approximate p(θ|D_N). Traditional
MCMC methods are batch algorithms that scale poorly with dataset size. However, recently
a method called stochastic gradient Langevin dynamics (SGLD) [WT11] has been devised
that can draw samples approximately from the posterior in an online fashion, just as SGD updates a
point estimate of the parameters online. Furthermore, various extensions of SGLD have been proposed,
including stochastic gradient hybrid Monte Carlo (SGHMC) [CFG14], stochastic gradient
Nosé-Hoover Thermostat (SG-NHT) [DFB+14] (which improves upon SGHMC), stochastic gradient
Fisher scoring (SGFS) [AKW12] (which uses second-order information), stochastic gradient
Riemannian Langevin dynamics [PT13], distributed SGLD [ASW14], etc. However, in this paper,
we will just use "vanilla" SGLD [WT11].¹
All these MCMC methods (whether batch or online) produce a Monte Carlo approximation to the
posterior, q(θ) = (1/S) Σ_{s=1}^S δ(θ - θ^s), where S is the number of samples. Such an
approximation can be more accurate than that produced by EP or VB, and the method is much easier to
implement (for SGLD, you essentially just add Gaussian noise to your SGD updates). However,
at test time, things are S times slower than using a plug-in estimate, since we need to compute
q(y|x) = (1/S) Σ_{s=1}^S p(y|x, θ^s), and the memory requirements are S times bigger, since we need to
store the θ^s. (For our largest experiment, our DNN has 500k parameters, so we can only afford to
store a single sample.)
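For concreteness, the S-fold test-time averaging looks as follows; `predict` is a hypothetical function returning p(y|x, θ) for one posterior sample:

```python
import numpy as np

def posterior_predictive(x, theta_samples, predict):
    # q(y|x) = (1/S) * sum_s p(y|x, theta_s): S forward passes per test
    # point, and all S samples must be kept in memory.
    return np.mean([predict(x, th) for th in theta_samples], axis=0)
```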
In this paper, we propose to train a parametric model S(y|x, w) to approximate the Monte Carlo
posterior predictive distribution q(y|x), in order to gain the benefits of the Bayesian approach while
only using the same run-time cost as the plug-in method. Following [HVD14], we call q(y|x) the
"teacher" and S(y|x, w) the "student". We use SGLD² to estimate q(θ) and hence q(y|x) online;
we simultaneously train the student online to minimize KL(q(y|x) || S(y|x, w)). We give the details
in Section 2.
Similar ideas have been proposed in the past. In particular, [SG05] also trained a parametric student
model to approximate a Monte Carlo teacher. However, they used batch training and they used
mixture models for the student. By contrast, we use online training (and can thus handle larger
datasets), and use deep neural networks for the student.
[HVD14] also trained a student neural network to emulate the predictions of a (larger) teacher network
(a process they call "distillation"), extending earlier work of [BCNM06], which approximated
an ensemble of classifiers by a single one. The key difference from our work is that our teacher
is generated using MCMC, and our goal is not just to improve classification accuracy, but also to
get reliable probabilistic predictions, especially away from the training data. [HVD14] coined the
term "dark knowledge" to represent the information which is "hidden" inside the teacher network,
and which can then be distilled into the student. We therefore call our approach "Bayesian dark
knowledge".
¹ We did some preliminary experiments with SG-NHT for fitting an MLP to MNIST data, but the results
were not much better than SGLD.
² Note that SGLD is an approximate sampling algorithm and introduces a slight bias in the predictions of
the teacher and student network. If required, we can replace SGLD with an exact MCMC method (e.g. HMC)
to get more accurate results at the expense of more training time.
In summary, our contributions are as follows. First, we show how to combine online MCMC methods with model distillation in order to get a simple, scalable approach to Bayesian inference of the
parameters of neural networks (and other kinds of models). Second, we show that our probabilistic
predictions lead to improved log likelihood scores on the test set compared to SGD and the recently
proposed EP and VB approaches.
2 Methods
Our goal is to train a student neural network (SNN) to approximate the Bayesian predictive
distribution of the teacher, which is a Monte Carlo ensemble of teacher neural networks (TNN).
If we denote the predictions of the teacher by p(y|x, D_N) and the parameters of the student network
by w, our objective becomes

L(w|x) = KL(p(y|x, D_N) || S(y|x, w)) = -E_{p(y|x,D_N)}[log S(y|x, w)] + const
       = -∫ [ ∫ p(y|x, θ) p(θ|D_N) dθ ] log S(y|x, w) dy
       = -∫ p(θ|D_N) [ ∫ p(y|x, θ) log S(y|x, w) dy ] dθ
       = -∫ p(θ|D_N) E_{p(y|x,θ)}[log S(y|x, w)] dθ.                          (1)
Unfortunately, computing this integral is not analytically tractable. However, we can approximate
this by Monte Carlo:

L̂(w|x) = -(1/|Θ|) Σ_{θ^s ∈ Θ} E_{p(y|x,θ^s)}[log S(y|x, w)],                  (2)

where Θ is a set of samples from p(θ|D_N).
To make this a function just of w, we need to integrate out x. For this, we need a dataset to train
the student network on, which we will denote by D'. Note that points in this dataset do not need
ground truth labels; instead the labels (which will be probability distributions) will be provided
by the teacher. The choice of student data controls the domain over which the student will make
accurate predictions. For low dimensional problems (such as in Section 3.1), we can uniformly
sample the input domain. For higher dimensional problems, we can sample "near" the training
data, for example by perturbing the inputs slightly. In any case, we will compute a Monte Carlo
approximation to the loss as follows:

L(w) = ∫ p(x) L(w|x) dx ≈ (1/|D'|) Σ_{x' ∈ D'} L̂(w|x')
     ≈ -(1/|Θ|)(1/|D'|) Σ_{θ^s ∈ Θ} Σ_{x' ∈ D'} E_{p(y|x',θ^s)}[log S(y|x', w)].   (3)
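A direct (if memory-hungry) transcription of Eqn. 3 is sketched below; `expected_log_student(x, theta)` is a hypothetical helper returning E_{p(y|x,θ)} log S(y|x, w), which is available in closed form for the students of Sections 2.1 and 2.2 (the student weights w are closed over for brevity):

```python
def distillation_loss(theta_samples, x_batch, expected_log_student):
    # Monte Carlo estimate of Eqn. (3): average over posterior samples and
    # over student inputs, with an overall minus sign.
    total = sum(expected_log_student(x, th)
                for th in theta_samples for x in x_batch)
    return -total / (len(theta_samples) * len(x_batch))
```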
It can take a lot of memory to pre-compute and store the set of parameter samples Θ and the set of
data samples D', so in practice we use the stochastic algorithm shown in Algorithm 1, which uses a
single posterior sample θ^s and a minibatch of x' at each step.
The hyper-parameters λ and γ from Algorithm 1 control the strength of the priors for the teacher
and student networks. We use simple spherical Gaussian priors (equivalent to L2 regularization);
we set the precision (strength) of these Gaussian priors by cross-validation. Typically γ < λ, since
the student gets to "see" more data than the teacher. This is true for two reasons: first, the teacher
is trained to predict a single label per input, whereas the student is trained to predict a distribution,
which contains more information (as argued in [HVD14]); second, the teacher makes multiple passes
over the same training data, whereas the student sees "fresh" randomly generated data D' at each
step.
2.1 Classification
For classification problems, each teacher network θ^s models the observations using a standard
softmax model, p(y = k|x, θ^s). We want to approximate this using a student network which also has a
softmax output, S(y = k|x, w).
Algorithm 1: Distilled SGLD
Input: D_N = {(x_i, y_i)}_{i=1}^N, minibatch size M, number of iterations T, teacher learning schedule η_t, student learning schedule ρ_t, teacher prior λ, student prior γ
for t = 1 : T do
    // Train teacher (SGLD step)
    Sample minibatch indices S ⊂ [1, N] of size M
    Sample z_t ~ N(0, η_t I)
    Update θ_{t+1} := θ_t + (η_t / 2) (∇_θ log p(θ|λ) + (N/M) Σ_{i∈S} ∇_θ log p(y_i|x_i, θ)) + z_t
    // Train student (SGD step)
    Sample D' of size M from student data generator
    w_{t+1} := w_t - ρ_t ((1/M) Σ_{x'∈D'} ∇_w L̂(w, θ_{t+1}|x') + γ w_t)
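A numpy sketch of Algorithm 1 (not the implementation used for the experiments); `grads` is a hypothetical container of gradient functions for the teacher's log-prior and log-likelihood and for the student's distillation loss:

```python
import numpy as np

rng = np.random.default_rng(0)

def distilled_sgld(theta, w, X, Y, gen_student_data, grads,
                   T, M, eta, rho, lam, gamma):
    N = X.shape[0]
    for t in range(T):
        # --- teacher: one SGLD step ---
        idx = rng.choice(N, size=M, replace=False)
        noise = rng.normal(0.0, np.sqrt(eta[t]), size=theta.shape)
        theta = theta + 0.5 * eta[t] * (
            grads.log_prior(theta, lam)                      # grad log p(theta|lam)
            + (N / M) * grads.log_lik(theta, X[idx], Y[idx]) # scaled minibatch grad
        ) + noise
        # --- student: one SGD step on freshly generated inputs ---
        x_student = gen_student_data(M)
        w = w - rho[t] * (grads.student(w, theta, x_student) / M
                          + gamma * w)
    return theta, w
```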
Hence, from Eqn. 2, our loss function estimate is the standard cross-entropy loss:

L̂(w|θ^s, x) = -Σ_{k=1}^K p(y = k|x, θ^s) log S(y = k|x, w).                  (4)
The student network outputs α_k(x, w) = log S(y = k|x, w). To estimate the gradient w.r.t. w, we
just have to compute the gradients w.r.t. α and back-propagate through the network. These gradients
are given by ∂L̂(w, θ^s|x)/∂α_k(x, w) = -p(y = k|x, θ^s).
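For a single input this loss and gradient take two lines (a sketch; a deep learning framework would backpropagate `grad_alpha` through the student network):

```python
import numpy as np

def distill_loss_and_grad(alpha, teacher_p):
    # alpha[k] = log S(y=k|x, w); teacher_p[k] = p(y=k|x, theta_s).
    loss = -np.dot(teacher_p, alpha)  # Eqn. (4)
    grad_alpha = -teacher_p           # dL/d alpha_k = -p(y=k|x, theta_s)
    return loss, grad_alpha
```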
2.2 Regression
In regression, the observations are modeled as p(y_i|x_i, θ) = N(y_i | f(x_i|θ), λ_n^{-1}), where f(x|θ) is
the prediction of the TNN and λ_n is the noise precision. We want to approximate the predictive
distribution as p(y|x, D_N) ≈ S(y|x, w) = N(y | μ(x, w), e^{α(x,w)}). We will train a student network
to output the parameters of the approximating distribution, μ(x, w) and α(x, w); note that this is
twice the number of outputs of the teacher network, since we want to capture the (data dependent)
variance.³ We use e^{α(x,w)} instead of directly predicting the variance σ²(x|w) to avoid dealing with
positivity constraints during training.
To train the SNN, we will minimize the objective defined in Eqn. 2:

L̂(w|θ^s, x) = -E_{p(y|x,θ^s)}[log N(y | μ(x, w), e^{α(x,w)})]
            = (1/2) E_{p(y|x,θ^s)}[α(x, w) + e^{-α(x,w)} (y - μ(x, w))²] + const
            = (1/2) [α(x, w) + e^{-α(x,w)} ((f(x|θ^s) - μ(x, w))² + 1/λ_n)] + const.
Now, to estimate ∇_w L̂(w, θ^s|x), we just have to compute ∂L̂/∂μ(x, w) and ∂L̂/∂α(x, w), and
back-propagate through the network. These gradients are:

∂L̂(w, θ^s|x)/∂μ(x, w) = e^{-α(x,w)} (μ(x, w) - f(x|θ^s)),                        (5)
∂L̂(w, θ^s|x)/∂α(x, w) = (1/2) [1 - e^{-α(x,w)} ((f(x|θ^s) - μ(x, w))² + 1/λ_n)].  (6)
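Both gradients are cheap to evaluate, as the following sketch shows; `mu` and `alpha` are the student outputs for one input, `f_s` is the teacher prediction f(x|θ^s), and `lam_n` is the noise precision:

```python
import numpy as np

def regression_grads(mu, alpha, f_s, lam_n):
    e = np.exp(-alpha)
    d_mu = e * (mu - f_s)                                        # Eqn. (5)
    d_alpha = 0.5 * (1.0 - e * ((f_s - mu) ** 2 + 1.0 / lam_n))  # Eqn. (6)
    return d_mu, d_alpha
```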
3 Experimental results
In this section, we compare SGLD and distilled SGLD with other approximate inference methods,
including the plug-in approximation using SGD, the PBP approach of [HLA15], the BBB approach of
³ This is not necessary in the classification case, since the softmax distribution already captures uncertainty.
Dataset          N     D    Y           PBP   BBB   HMC
ToyClass         20    2    {0, 1}      N     N     Y
MNIST            60k   784  {0,...,9}   N     Y     N
ToyReg           10    1    R           Y     N     Y
Boston Housing   506   13   R           Y     N     N

Table 1: Summary of our experimental configurations.
[Figure 1 panels (a)-(f).]
Figure 1: Posterior predictive density for various methods on the toy 2d dataset. (a) SGD (plug-in)
using the 2-10-2 network. (b) HMC using 20k samples. (c) SGLD using 1k samples. (d-f) Distilled
SGLD using a student network with the following architectures: 2-10-2, 2-100-2 and 2-10-10-2.
[BCKW15], and Hamiltonian Monte Carlo (HMC) [Nea11], which is considered the "gold standard"
for MCMC for neural nets. We implemented SGD and SGLD using the Torch library (torch.ch).
For HMC, we used Stan (mc-stan.org). We perform this comparison for various classification
and regression problems, as summarized in Table 1.⁴
3.1 Toy 2d classification problem
We start with a toy 2d binary classification problem, in order to visually illustrate the performance
of different methods. We generate a synthetic dataset in 2 dimensions with 2 classes, 10 points per
class. We then fit a multi layer perceptron (MLP) with one hidden layer of 10 ReLU units and 2
softmax outputs (denoted 2-10-2) using SGD. The resulting predictions are shown in Figure 1(a).
We see the expected sigmoidal probability ramp orthogonal to the linear decision boundary. Unfortunately,
this method predicts a label of 0 or 1 with very high confidence, even for points that are far
from the training data (e.g., in the top left and bottom right corners).
In Figure 1(b), we show the result of HMC using 20k samples. This is the "true" posterior predictive
density which we wish to approximate. In Figure 1(c), we show the result of SGLD using about 1000
samples. Specifically, we generate 100k samples, discard the first 2k for burn-in, and then keep every
100th sample. We see that this is a good approximation to the HMC distribution.
In Figures 1(d-f), we show the results of approximating the SGLD Monte Carlo predictive distribution
with a single student MLP of various sizes. To train this student network, we sampled points at
random from the domain of the input, [-10, 10] × [-10, 10]; this encourages the student to predict
accurately at all locations, including those far from the training data. In (d), the student has the same
⁴ Ideally, we would apply all methods to all datasets, to enable a proper comparison. Unfortunately, this was
not possible, for various reasons. First, the open source code for the EP approach only supports regression, so
we could not evaluate this on classification problems. Second, we were not able to run the BBB code, so we just
quote performance numbers from their paper [BCKW15]. Third, HMC is too slow to run on large problems, so
we just applied it to the small "toy" problems. Nevertheless, our experiments show that our methods compare
favorably to these other methods.
Model                 Num. params.   KL
SGD                   40             0.246
SGLD                  40k            0.007
Distilled 2-10-2      40             0.031
Distilled 2-100-2     400            0.014
Distilled 2-10-10-2   140            0.009

Table 2: KL divergence on the 2d classification dataset.
SGD [BCKW15]   Dropout   BBB    SGD (our impl.)   SGLD             Dist. SGLD
1.83           1.51      1.82   1.536 ± 0.0120    1.271 ± 0.0126   1.307 ± 0.0169

Table 3: Test set misclassification rate on MNIST for different methods using a 784-400-400-10
MLP. SGD (first column), Dropout and BBB numbers are quoted from [BCKW15]. For our
implementation of SGD (fourth column), SGLD and distilled SGLD, we report the mean misclassification
rate over 10 runs and its standard error.
size as the teacher (2-10-2), but this is too simple a model to capture the complexity of the predictive
distribution (which is an average over models). In (e), the student has a larger hidden layer (2-100-2);
this works better. However, we get best results using a two hidden layer model (2-10-10-2), as
shown in (f).
In Table 2, we show the KL divergence between the HMC distribution (which we consider as ground
truth) and the various approximations mentioned above. We computed this by comparing the
probability distributions pointwise on a 2d grid. The numbers match the qualitative results shown in
Figure 1.
3.2 MNIST classification
Now we consider the MNIST digit classification problem, which has N = 60k examples, 10
classes, and D = 784 features. The only preprocessing we do is divide the pixel values by 126
(as in [BCKW15]). We train only on 50K datapoints and use the remaining 10K for tuning
hyper-parameters. This means our results are not strictly comparable to a lot of published work, which
uses the whole dataset for training; however, the difference is likely to be small.
Following [BCKW15], we use an MLP with 2 hidden layers with 400 hidden units per layer, ReLU
activations, and softmax outputs; we denote this by 784-400-400-10. This model has 500k parameters.
We first fit this model by SGD, using these hyper-parameters: fixed learning rate of η_t = 5 × 10⁻⁶,
prior precision λ = 1, minibatch size M = 100, number of iterations T = 1M. As shown in
Table 3, our final error rate on the test set is 1.536%, which is a bit lower than the SGD number
reported in [BCKW15], perhaps due to the slightly different training/validation configuration.
Next we fit this model by SGLD, using these hyper-parameters: fixed learning rate of η_t = 4 × 10⁻⁶,
thinning interval τ = 100, burn-in iterations B = 1000, prior precision λ = 1, minibatch size
M = 100. As shown in Table 3, our final error rate on the test set is about 1.271%, which is better
than the SGD, dropout and BBB results from [BCKW15].⁵
Finally, we consider using distillation, where the teacher is an SGLD MC approximation of the
posterior predictive. We use the same 784-400-400-10 architecture for the student as well as the
teacher. We generate data for the student by adding Gaussian noise (with standard deviation of
0.001) to randomly sampled training points.⁶ We use a constant learning rate of ρ = 0.005, a batch
size of M = 100, a prior precision of 0.001 (for the student) and train for T = 1M iterations. We
obtain a test error of 1.307%, which is very close to that obtained with SGLD (see Table 4).
⁵ We only show the BBB results with the same Gaussian prior that we use. Performance of BBB can be
improved using other priors, such as a scale mixture of Gaussians, as shown in [BCKW15]. Our approach
could probably also benefit from such a prior, but we did not try this.
⁶ In the future, we would like to consider more sophisticated data perturbations, such as elastic distortions.
SGD
-0.0613 ? 0.0002
SGLD
-0.0419 ? 0.0002
Distilled SGLD
-0.0502 ? 0.0007
Table 4: Log likelihood per test example on MNIST. We report the mean over 10 trials ? one
standard error.
Method                          Avg. test log likelihood
PBP (as reported in [HLA15])    -2.574 ± 0.089
VI (as reported in [HLA15])     -2.903 ± 0.071
SGD                             -2.7639 ± 0.1527
SGLD                            -2.306 ± 0.1205
SGLD distilled                  -2.350 ± 0.0762

Table 5: Log likelihood per test example on the Boston housing dataset. We report the mean over
20 trials ± one standard error.
We also report the average test log-likelihood of SGD, SGLD and distilled SGLD in Table 4. The
log-likelihood is equivalent to the logarithmic scoring rule [Bic07] used in assessing the calibration
of probabilistic models. The logarithmic rule is a strictly proper scoring rule, meaning that the
score is uniquely maximized by predicting the true probabilities. From Table 4, we see that both
SGLD and distilled SGLD achieve higher scores than SGD, and therefore produce better calibrated
predictions.
Note that the SGLD results were obtained by averaging predictions from about 10,000 models sampled
from the posterior, whereas distillation produces a single neural network that approximates the
average prediction of these models, i.e. distillation reduces both storage and test time costs of SGLD
by a factor of 10,000, without sacrificing much accuracy. In terms of training time, SGD took 1.3
ms, SGLD took 1.6 ms and distilled SGLD took 3.2 ms per iteration. In terms of memory, distilled
SGLD requires only twice as much as SGD or SGLD during training, and the same as SGD during
testing.
3.3 Toy 1d regression
We start with a toy 1d regression problem, in order to visually illustrate the performance of different
methods. We use the same data and model as [HLA15]. In particular, we use N = 20 points in
D = 1 dimensions, sampled from the function y = x³ + ε_n, where ε_n ~ N(0, 9). We fit this data
with an MLP with 10 hidden units and ReLU activations. For SGLD, we use S = 2000 samples.
For distillation, the teacher uses the same architecture as the student.
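The training data can be regenerated in a few lines; the input range below is an assumption, since only the functional form and noise variance are specified:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-4.0, 4.0, size=20)          # input range assumed
y = x ** 3 + rng.normal(0.0, 3.0, size=20)   # eps_n ~ N(0, 9) => std 3
```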
The results are shown in Figure 2. We see that SGLD is a better approximation to the "true" (HMC)
posterior predictive density than the plug-in SGD approximation (which has no predictive uncertainty),
and the VI approximation of [Gra11]. Finally, we see that distilling SGLD incurs little loss
in accuracy, but saves a lot computationally.
3.4 Boston housing
Finally, we consider a larger regression problem, namely the Boston housing dataset, which was
also used in [HLA15]. This has N = 506 data points (456 training, 50 testing), with D = 13
dimensions. Since this data set is so small, we repeated all experiments 20 times, using different
train/test splits.
Following [HLA15], we use an MLP with 1 layer of 50 hidden units and ReLU activations. First
we use SGD, with these hyper-parameters⁷: minibatch size M = 1, noise precision λ_n = 1.25,
prior precision λ = 1, number of trials 20, constant learning rate η_t = 1e-6, number of iterations
T = 170K. As shown in Table 5, we get an average log likelihood of -2.7639.
Next we fit the model using SGLD. We use an initial learning rate of η_0 = 1e-5, which we reduce
by a factor of 0.5 every 80K iterations; we use 500K iterations, a burn-in of 10K, and a thinning
⁷ We choose all hyper-parameters using cross-validation, whereas [HLA15] performs posterior inference on
the noise and prior precisions, and uses Bayesian optimization to choose the remaining hyper-parameters.
Figure 2: Predictive distribution for different methods on a toy 1d regression problem. (a) PBP of
[HLA15]. (b) HMC. (c) VI method of [Gra11]. (d) SGD. (e) SGLD. (f) Distilled SGLD. Error bars
denote 3 standard deviations. (Figures a-d kindly provided by the authors of [HLA15]. We replace
their term "BP" (backprop) with "SGD" to avoid confusion.)
interval of 10. As shown in Table 5, we get an average log likelihood of -2.306, which is better
than SGD.
Finally, we distill our SGLD model. The student architecture is the same as the teacher. We use the
following teacher hyper-parameters: prior precision λ = 2.5; initial learning rate of η_0 = 1e-5,
which we reduce by a factor of 0.5 every 80K iterations. For the student, we use generated training
data with Gaussian noise with standard deviation 0.05, we use a prior precision of γ = 0.001, and an
initial learning rate of ρ_0 = 1e-2, which we reduce by 0.8 after every 5e3 iterations. As shown
in Table 5, we get an average log likelihood of -2.350, which is only slightly worse than SGLD,
and much better than SGD. Furthermore, both SGLD and distilled SGLD are better than the PBP
method of [HLA15] and the VI method of [Gra11].
4 Conclusions and future work
We have shown a very simple method for "being Bayesian" about neural networks (and other kinds
of models), that seems to work better than recently proposed alternatives based on EP [HLA15] and
VB [Gra11, BCKW15].
There are various things we would like to do in the future: (1) Show the utility of our model in
an end-to-end task, where predictive uncertainty is useful (such as with contextual bandits or active
learning). (2) Consider ways to reduce the variance of the algorithm, perhaps by keeping a running
minibatch of parameters uniformly sampled from the posterior, which can be done online using
reservoir sampling. (3) Explore more intelligent data generation methods for training the student.
(4) Investigate whether our method is able to reduce the prevalence of confident false predictions on
adversarially generated examples, such as those discussed in [SZS+14].
Acknowledgements
We thank José Miguel Hernández-Lobato, Julien Cornebise, Jonathan Huang, George Papandreou,
Sergio Guadarrama and Nick Johnston.
References
[AKW12]
S. Ahn, A. Korattikara, and M. Welling. Bayesian Posterior Sampling via Stochastic Gradient
Fisher Scoring. In ICML, 2012.
[ASW14]
Sungjin Ahn, Babak Shahbaba, and Max Welling. Distributed stochastic gradient MCMC. In
ICML, 2014.
[BCKW15] C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra. Weight uncertainty in neural networks. In ICML, 2015.
[BCNM06] Cristian Bucila, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In KDD,
2006.
[Bic07]
J Eric Bickel. Some comparisons among quadratic, spherical, and logarithmic scoring rules. Decision Analysis, 4(2):49?65, 2007.
[CFG14]
Tianqi Chen, Emily B Fox, and Carlos Guestrin. Stochastic Gradient Hamiltonian Monte Carlo.
In ICML, 2014.
[DFB+ 14] N Ding, Y Fang, R Babbush, C Chen, R Skeel, and H Neven. Bayesian sampling using stochastic
gradient thermostats. In NIPS, 2014.
[GG15]
Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model
uncertainty in deep learning. 6 June 2015.
[Gra11]
Alex Graves. Practical variational inference for neural networks. In NIPS, 2011.
[HLA15]
J. Hernández-Lobato and R. Adams. Probabilistic backpropagation for scalable learning of
Bayesian neural networks. In ICML, 2015.
[HVD14]
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. In
NIPS Deep Learning Workshop, 2014.
[KW14]
Diederik P Kingma and Max Welling. Stochastic gradient VB and the variational auto-encoder.
In ICLR, 2014.
[Nea11]
Radford Neal. MCMC using hamiltonian dynamics. In Handbook of Markov chain Monte Carlo.
Chapman and Hall, 2011.
[PT13]
Sam Patterson and Yee Whye Teh. Stochastic gradient riemannian langevin dynamics on the
probability simplex. In NIPS, 2013.
[RBK+ 14] Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and
Yoshua Bengio. FitNets: Hints for thin deep nets. Arxiv, 19 2014.
[RMW14]
D. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference
in deep generative models. In ICML, 2014.
[SG05]
Edward Snelson and Zoubin Ghahramani. Compact approximations to bayesian predictive distributions. In ICML, 2005.
[SZS+ 14]
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In ICLR, 2014.
[WT11]
Max Welling and Yee W Teh. Bayesian learning via stochastic gradient Langevin dynamics. In
ICML, 2011.
5,486 | 5,966 | GP Kernels for Cross-Spectrum Analysis
Kyle Ulrich¹, David E. Carlson³, Kafui Dzirasa², Lawrence Carin¹
¹ Department of Electrical and Computer Engineering, Duke University
² Department of Psychiatry and Behavioral Sciences, Duke University
³ Department of Statistics, Columbia University
{kyle.ulrich, kafui.dzirasa, lcarin}@duke.edu
david.edwin.carlson@gmail.com
Abstract
Multi-output Gaussian processes provide a convenient framework for multi-task
problems. An illustrative and motivating example of a multi-task problem is
multi-region electrophysiological time-series data, where experimentalists are interested in both power and phase coherence between channels. Recently, Wilson
and Adams (2013) proposed the spectral mixture (SM) kernel to model the spectral density of a single task in a Gaussian process framework. In this paper, we
develop a novel covariance kernel for multiple outputs, called the cross-spectral
mixture (CSM) kernel. This new, flexible kernel represents both the power and
phase relationship between multiple observation channels. We demonstrate the
expressive capabilities of the CSM kernel through implementation of a Bayesian
hidden Markov model, where the emission distribution is a multi-output Gaussian process with a CSM covariance kernel. Results are presented for measured
multi-region electrophysiological data.
1 Introduction
Gaussian process (GP) models have become an important component of the machine learning literature. They have provided a basis for non-linear multivariate regression and classification tasks, and
have enjoyed much success in a wide variety of applications [16].
A GP places a prior distribution over latent functions, rather than model parameters. In the sense
that these functions are defined for any number of sample points and sample positions, as well
as any general functional form, GPs are nonparametric. The properties of the latent functions are
defined by a positive definite covariance kernel that controls the covariance between the function
at any two sample points. Recently, the spectral mixture (SM) kernel was proposed by Wilson and
Adams [24] to model a spectral density with a scale-location mixture of Gaussians. This flexible
and interpretable class of kernels is capable of recovering any composition of stationary kernels
[27, 9, 13]. The SM kernel has been used for GP regression of a scalar output (i.e., single function, or
observation "task"), achieving impressive results in extrapolating atmospheric CO2 concentrations
[24]; image inpainting [25]; and feature extraction from electrophysiological signals [21].
However, the SM kernel is not defined for multiple outputs (multiple correlated functions). Multioutput GPs intersect with the field of multi-task learning [4], where solving similar problems jointly
allows for the transfer of statistical strength between problems, improving learning performance
when compared to learning all tasks individually. In this paper, we consider neuroscience applications where low-frequency (< 200 Hz) extracellular potentials are simultaneously recorded from
implanted electrodes in multiple brain regions of a mouse [6]. These signals are known as local field
potentials (LFPs) and are often highly correlated between channels. Inferring and understanding
that interdependence is biologically significant.
A multi-output GP can be thought of as a standard GP (all observations are jointly normal) where the
covariance kernel is a function of both the input space and the output space (see [2] and references
therein for a comprehensive review); here ?input space? means the points at which the functions are
sampled (e.g., time), and the ?output space? may correspond to different brain regions. A particular
positive definite form of this multi-output covariance kernel is the sum of separable (SoS) kernels,
or the linear model of coregionalization (LMC) in the geostatistics literature [10], where a separable
kernel is represented by the product of separate kernels for the input and output spaces.
While extending the SM kernel to the multi-output setting via the LMC framework (i.e., the SM-LMC kernel) provides a powerful modeling framework, the SM-LMC kernel does not intuitively
represent the data. Specifically, the SM-LMC kernel encodes the cross-amplitude spectrum (square
root of the cross power spectral density) between every pair of channels, but provides no cross-phase information. Together, the cross-amplitude and cross-phase spectra form the cross-spectrum,
defined as the Fourier transform of the cross-covariance between the pair of channels.
Motivated by the desire to encode the full cross-spectra into the covariance kernel, we design a novel
kernel termed the cross-spectral mixture (CSM) kernel, which provides an intuitive representation
of the power and phase dependencies between multiple outputs. The need for embedding the full
cross-spectrum into the covariance kernel is illustrated by a recent surge in neuroscience research
discovering that LFP interdependencies between regions exhibit phase synchrony patterns that are
dependent on frequency band [11, 17, 18].
The remainder of the paper is organized as follows. Section 2 provides a summary of GP regression
models for vector-valued data, and Section 3 introduces the SM, SM-LMC, and novel CSM covariance kernels. In Section 4, the CSM kernel is incorporated in a Bayesian hidden Markov model
(HMM) [14] with a GP emission distribution as a demonstration of its utility in hierarchical modeling. Section 5 provides details on inverting the Bayesian HMM with variational inference, as well
as details on a fast, novel GP fitting process that approximates the CSM kernel by its representation
in the spectral domain. Section 6 analyzes the performance of this approximation and presents results for the CSM kernel in the neuroscience application, considering measured multi-region LFP
data from the brain of a mouse. We conclude in Section 7 by discussing how this novel kernel can
trivially be extended to any time-series application where GPs and the cross-spectrum are of interest.
2 Review of Multi-Output Gaussian Process Regression
A multi-output regression task estimates samples from $C$ output channels, $y_n = [y_{n1},\dots,y_{nC}]^T$, corresponding to the $n$-th input point $x_n$ (e.g., the $n$-th temporal sample). An unobserved latent function $f(x) = [f_1(x),\dots,f_C(x)]^T$ is responsible for generating the observations, such that $y_n \sim \mathcal{N}(f(x_n), H^{-1})$, where $H = \mathrm{diag}(\gamma_1,\dots,\gamma_C)$ is the precision of additive Gaussian noise.
A GP prior on the latent function is formalized by $f(x) \sim \mathcal{GP}(m(x), K(x,x'))$ for arbitrary input $x$, where the mean function $m(x)\in\mathbb{R}^C$ is set to equal 0 without loss of generality, and the covariance function $(K(x,x'))_{c,c'} = k^{c,c'}(x,x') = \mathrm{cov}(f_c(x), f_{c'}(x'))$ creates dependencies between observations at input points $x$ and $x'$, as observed on channels $c$ and $c'$. In general, the input space $x$ could be vector valued, but for simplicity we here assume it to be scalar, consistent with our motivating neuroscience application in which $x$ corresponds to time.
A convenient representation for multi-output kernel functions is to separate the kernel into the product of a kernel for the input space and a kernel for the interactions between the outputs. This is known as a separable kernel. A sum of separable kernels (SoS) representation [2] is given by
$$k^{c,c'}(x,x') = \sum_{q=1}^{Q} b_q(c,c')\,k_q(x,x'), \qquad \text{or} \qquad K(x,x') = \sum_{q=1}^{Q} B_q\,k_q(x,x'), \qquad (1)$$
where $k_q(x,x')$ is the input space kernel for component $q$, $b_q(c,c')$ is the $q$-th output interaction kernel, and $B_q \in \mathbb{R}^{C\times C}$ is a positive semi-definite output kernel matrix. Note that we have a discrete set of $C$ output spaces, $c \in \{1,\dots,C\}$, where the input space $x$ is continuous, and discretely sampled arbitrarily in experiments. The SoS formulation is also known as the linear model of coregionalization (LMC) [10] and $B_q$ is termed the coregionalization matrix. When $Q = 1$, the LMC reduces to the intrinsic coregionalization model (ICM) [2], and when $\mathrm{rank}(B_q)$ is restricted to equal 1, the LMC reduces to the semiparametric latent factor model (SLFM) [19].
Any finite number of latent functional evaluations $f = [f_1(x),\dots,f_C(x)]^T$ at locations $x = [x_1,\dots,x_N]^T$ has a multivariate normal distribution $\mathcal{N}(f; 0, K)$, such that $K$ is formed through the block partitioning
$$K = \begin{bmatrix} k^{1,1}(x,x) & \cdots & k^{1,C}(x,x) \\ \vdots & \ddots & \vdots \\ k^{C,1}(x,x) & \cdots & k^{C,C}(x,x) \end{bmatrix} = \sum_{q=1}^{Q} B_q \otimes k_q(x,x), \qquad (2)$$
where each $k^{c,c'}(x,x)$ is an $N\times N$ matrix and $\otimes$ symbolizes the Kronecker product.
A vector-valued dataset consists of observations $y = \mathrm{vec}([y_1,\dots,y_N]^T) \in \mathbb{R}^{CN}$ at the respective locations $x = [x_1,\dots,x_N]^T$, such that the first $N$ elements of $y$ are from channel 1 up to the last $N$ elements belonging to channel $C$. Since both the likelihood $p(y|f,x)$ and distribution over latent functions $p(f|x)$ are Gaussian, the marginal likelihood is conveniently represented by
$$p(y|x) = \int p(y|f,x)\,p(f|x)\,df = \mathcal{N}(0,\Sigma), \qquad \Sigma = K + H^{-1}\otimes I_N, \qquad (3)$$
where all possible functions $f$ have been marginalized out.
Each input-space covariance kernel is defined by a set of hyperparameters, $\theta$. This conditioning was removed for notational simplicity, but will henceforth be included in the notation. For example, if the squared exponential kernel is used, then $k_{SE}(x,x';\theta) = \exp(-\frac{1}{2}\|x-x'\|^2/\ell^2)$, defined by a single hyperparameter $\theta = \{\ell\}$. To fit a GP to the dataset, the hyperparameters are typically chosen to maximize the marginal likelihood in (3) via gradient ascent.
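For concreteness, the following minimal NumPy sketch assembles the sum-of-separable covariance of (2)-(3) with a squared exponential input kernel and evaluates the Gaussian log marginal likelihood; the numerical values (lengthscale, coregionalization matrix, noise precisions) are illustrative assumptions, not settings used in this paper.

    import numpy as np

    def k_se(x, ell):
        # Squared exponential input-space kernel: k_SE(x, x') = exp(-0.5 (x - x')^2 / ell^2)
        d = x[:, None] - x[None, :]
        return np.exp(-0.5 * d ** 2 / ell ** 2)

    def sos_covariance(x, Bs, ells, noise_prec):
        # Sigma = sum_q B_q kron k_q(x, x) + H^{-1} kron I_N, as in Eqs. (2)-(3)
        N = len(x)
        K = sum(np.kron(B, k_se(x, ell)) for B, ell in zip(Bs, ells))
        return K + np.kron(np.diag(1.0 / np.asarray(noise_prec)), np.eye(N))

    def log_marginal_likelihood(y, Sigma):
        # log N(y; 0, Sigma) computed via a Cholesky factorization
        L = np.linalg.cholesky(Sigma)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * len(y) * np.log(2 * np.pi)

    # Toy usage: C = 2 channels, N = 50 points, Q = 1 separable component
    x = np.linspace(0.0, 1.0, 50)
    B = np.array([[1.0, 0.8], [0.8, 1.0]])    # positive semi-definite coregionalization matrix
    Sigma = sos_covariance(x, [B], [0.1], noise_prec=[10.0, 10.0])
    y = np.random.default_rng(0).multivariate_normal(np.zeros(2 * 50), Sigma)
    print(log_marginal_likelihood(y, Sigma))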
3 Expressive Kernels in the Spectral Domain
This section first introduces the spectral mixture (SM) kernel [24] as well as a multi-output extension
of the SM kernel within the LMC framework. While the SM-LMC model is capable of representing complex spectral relationships between channels, it does not intuitively model the cross-phase
spectrum between channels. We propose a novel kernel known as the cross-spectral mixture (CSM)
kernel that provides both the cross-amplitude and cross-phase spectra of multi-channel observations.
Detailed derivations of each of these kernels are found in the Supplemental Material.
3.1 The Spectral Mixture Kernel
A spectral Gaussian (SG) kernel is defined by an amplitude spectrum with a single Gaussian distribution reflected about the origin,
$$S_{SG}(\omega;\theta) = \frac{1}{2}\big[\mathcal{N}(\omega;-\mu,\sigma) + \mathcal{N}(\omega;\mu,\sigma)\big], \qquad (4)$$
where $\theta = \{\mu,\sigma\}$ are the kernel hyperparameters, $\mu$ represents the peak frequency, and the variance $\sigma$ is a scale parameter that controls the spread of the spectrum around $\mu$. This spectrum is a function of angular frequency. The Fourier transform of (4) results in the stationary, positive definite autocovariance function
$$k_{SG}(\tau;\theta) = \exp\Big(-\frac{1}{2}\sigma\tau^2\Big)\cos(\mu\tau), \qquad (5)$$
where stationarity implies dependence on input domain differences, $k(\tau;\theta) = k(x,x';\theta)$ with $\tau = x - x'$. The SG kernel may also be derived by considering a latent signal $f(x) = \sqrt{2}\cos(\omega(x+\phi))$ with frequency uncertainty $\omega \sim \mathcal{N}(\mu,\sigma)$ and phase offset $\omega\phi$. The kernel is the auto-covariance function for $f(x)$, such that $k_{SG}(\tau;\theta) = \mathrm{cov}(f(x), f(x+\tau))$. When computing the auto-covariance, the frequency $\omega$ is marginalized out, providing the kernel in (5) that includes all frequencies in the spectral domain with probability 1.
A weighted, linear combination of SG kernels gives the spectral mixture (SM) kernel [24],
$$k_{SM}(\tau;\theta) = \sum_{q=1}^{Q} a_q\,k_{SG}(\tau;\theta_q), \qquad S_{SM}(\omega;\theta) = \sum_{q=1}^{Q} a_q\,S_{SG}(\omega;\theta_q), \qquad (6)$$
where $\theta_q = \{a_q, \mu_q, \sigma_q\}$ and $\theta = \{\theta_q\}$ has $3Q$ degrees of freedom. The SM kernel may be derived as the Fourier transform of the spectral density $S_{SM}(\omega;\theta)$ or as the auto-covariance of latent functions $f(x) = \sum_{q=1}^{Q}\sqrt{2a_q}\cos(\omega_q(x+\phi_q))$ with uncertainty in angular frequency $\omega_q \sim \mathcal{N}(\mu_q,\sigma_q)$.
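A direct transcription of (5) and (6) into code is short; the component values below are illustrative assumptions only.

    import numpy as np

    def k_sg(tau, mu, sigma):
        # Spectral Gaussian kernel, Eq. (5): exp(-0.5 sigma tau^2) cos(mu tau)
        return np.exp(-0.5 * sigma * tau ** 2) * np.cos(mu * tau)

    def k_sm(tau, a, mu, sigma):
        # Spectral mixture kernel, Eq. (6): weighted sum of Q SG components
        return sum(a_q * k_sg(tau, m_q, s_q) for a_q, m_q, s_q in zip(a, mu, sigma))

    tau = np.linspace(-2.0, 2.0, 401)
    # Two components centered at 4 Hz and 5 Hz (angular frequency mu = 2 pi f)
    k = k_sm(tau, a=[1.0, 0.5], mu=[2 * np.pi * 4, 2 * np.pi * 5], sigma=[1.0, 1.0])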
[Figure 1: left and center panels plot $f_1(x)$ and $f_2(x)$ against time; the right panel plots cross-amplitude and cross-phase against frequency (3-6 Hz).]
Figure 1: Latent functions drawn for two channels $f_1(x)$ (blue) and $f_2(x)$ (red) using the CSM kernel (left) and rank-1 SM-LMC kernel (center). The functions are comprised of two SG components centered at 4 and 5 Hz. For the CSM kernel, we set the phase shift $\varphi_{c',2} = \pi$. Right: the cross-amplitude (purple) and cross-phase (green) spectra between $f_1(x)$ and $f_2(x)$ are shown for the CSM kernel (solid) and SM-LMC kernel (dashed). The ability to tune phase relationships is beneficial for kernel design and interpretation.
The moniker for the SM kernel in (6) reflects the mixture of Gaussian components that define the
spectral density of the kernel. The SM kernel is able to represent any stationary covariance kernel
given large enough Q; to name a few, this includes any combination of squared exponential, Matérn,
rational quadratic, or periodic kernels [9, 16, 24].
3.2 The Cross-Spectral Mixture Kernel
A multi-output version of the SM kernel uses the SG kernel directly within the LMC framework:
$$K_{SM\text{-}LMC}(\tau;\theta) = \sum_{q=1}^{Q} B_q\,k_{SG}(\tau;\theta_q), \qquad (7)$$
where $Q$ SG kernels are shared among the outputs via the coregionalization matrices $\{B_q\}_{q=1}^{Q}$. A generalized, non-stationary version of this SM-LMC kernel was proposed in [23] using the Gaussian process regression network (GPRN) [26]. The marginal distribution for any single channel is simply a Gaussian process with a SM covariance kernel. While this formulation is capable of providing a full cross-amplitude spectrum between two channels, it contains no information about a cross-phase spectrum. Specifically, each channel is merely a weighted sum of $\sum_q R_q$ latent functions, where $R_q = \mathrm{rank}(B_q)$. Whereas these functions are shared exactly across channels, our novel CSM kernel shares phase-shifted versions of these latent functions across channels.
Definition 3.1. The cross-spectral mixture (CSM) kernel takes the form
$$k^{c,c'}_{CSM}(\tau;\theta) = \sum_{q=1}^{Q}\sum_{r=1}^{R_q} a^r_{cq}\,a^r_{c'q}\,\exp\Big(-\frac{1}{2}\sigma_q\tau^2\Big)\cos\big(\mu_q\tau + \varphi^r_{c'q} - \varphi^r_{cq}\big), \qquad (8)$$
where $\theta = \{\mu_q, \sigma_q, \{a^r_q, \phi^r_q, \phi^r_{1q} = 0\}_{r=1}^{R_q}\}_{q=1}^{Q}$ has $2Q + \sum_{q=1}^{Q} R_q(2C-1)$ degrees of freedom, and $a^r_{cq}$ and $\phi^r_{cq}$ respectively represent the amplitude and shift in the input space for latent functions associated with channel $c$. In the LMC framework, the CSM kernel is
$$K_{CSM}(\tau;\theta) = \mathrm{Re}\Big\{\sum_{q=1}^{Q} \widetilde{B}_q\,\widetilde{k}_{SG}(\tau;\theta_q)\Big\}, \qquad \widetilde{B}_q = \sum_{r=1}^{R_q} \beta^r_q(\beta^r_q)^*,$$
$$\widetilde{k}_{SG}(\tau;\theta_q) = \exp\Big(-\frac{1}{2}\sigma_q\tau^2 + j\mu_q\tau\Big), \qquad \beta^r_{cq} = a^r_{cq}\exp(-j\varphi^r_{cq}),$$
where $\widetilde{k}_{SG}(\tau;\theta_q)$ is phasor notation of the SG kernel, $\widetilde{B}_q$ is rank-$R_q$, $\{\beta^r_{cq}\}$ are complex scalar coefficients encoding amplitude and phase, and $\varphi^r_{cq} \triangleq \mu_q\phi^r_{cq}$ is an alternative phase representation. We use complex notation where $j = \sqrt{-1}$, $\mathrm{Re}\{\cdot\}$ returns the real component of its argument, and $\beta^*$ represents the complex conjugate of $\beta$.
Both the CSM and SM-LMC kernels force the marginal distribution of data from a single channel to be a Gaussian process with a SM covariance kernel. The CSM kernel is derived in the Supplemental Material by considering functions represented by phase-shifted sinusoidal signals, $f_c(x) = \sum_{q=1}^{Q}\sum_{r=1}^{R_q}\sqrt{2a^r_{cq}}\cos(\omega^r_q(x + \phi^r_{cq}))$, where each $\omega^r_q \overset{iid}{\sim} \mathcal{N}(\mu_q,\sigma_q)$. Computing the cross-covariance function $\mathrm{cov}(f_c(x), f_{c'}(x+\tau))$ provides the CSM kernel.
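The closed form (8) translates directly into code. The sketch below evaluates the CSM cross-covariance between two channels using the input-space shifts $\phi^r_{cq}$; the rank-1, two-component parameters in the usage lines loosely mirror Figure 1 but are assumptions for illustration.

    import numpy as np

    def k_csm(tau, c, cp, A, Phi, mu, sigma):
        # CSM cross-covariance between channels c and cp, Eq. (8).
        # A[q, r, c] holds the amplitudes a^r_{cq}; Phi[q, r, c] holds the
        # input-space shifts phi^r_{cq}; mu[q], sigma[q] parameterize component q.
        Q, R, _ = A.shape
        k = np.zeros_like(tau)
        for q in range(Q):
            env = np.exp(-0.5 * sigma[q] * tau ** 2)
            for r in range(R):
                k += (A[q, r, c] * A[q, r, cp] * env
                      * np.cos(mu[q] * tau + mu[q] * (Phi[q, r, cp] - Phi[q, r, c])))
        return k

    tau = np.linspace(-1.0, 1.0, 401)
    mu = 2 * np.pi * np.array([4.0, 5.0])      # components at 4 Hz and 5 Hz
    sigma = np.array([1.0, 1.0])
    A = np.ones((2, 1, 2))                     # Q = 2, rank R = 1, C = 2 channels
    Phi = np.zeros((2, 1, 2))
    Phi[1, 0, 1] = np.pi / mu[1]               # pi phase shift on the second component
    k12 = k_csm(tau, 0, 1, A, Phi, mu, sigma)  # cross-covariance of channels 1 and 2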
A comparison between draws from Gaussian processes with CSM and SM-LMC kernels is shown in Figure 1. The utility of the CSM kernel is clearly illustrated by its ability to encode phase information, as well as its powerful functional form of the full cross-spectrum (both amplitude and phase). The amplitude function $A_{c,c'}(\omega)$ and phase function $\psi_{c,c'}(\omega)$ are obtained by representing the cross-spectrum in phasor notation, i.e., $\rho_{c,c'}(\omega;\theta) = \sum_q (\widetilde{B}_q)_{c,c'}\,S_{SG}(\omega;\theta_q) = A_{c,c'}(\omega)\exp(j\psi_{c,c'}(\omega))$. Interestingly, while the CSM and SM-LMC kernels have identical marginal amplitude spectra for shared $\{\mu_q, \sigma_q, a_q\}$, their cross-amplitude spectra differ due to the inherent destructive interference of the CSM kernel (see Figure 1, right).
4 Multi-Channel HMM Analysis
Neuroscientists are interested in examining how the network structure of the brain changes as animals undergo a task, or various levels of arousal [15]. The LFP signal is a modality that allows
researchers to explore this network structure. In the model provided in this section, we cluster segments of the LFP signal into discrete "brain states" [21]. Each brain state is represented by a unique cross-spectrum provided by the CSM kernel. The use of the full cross-spectrum to define brain states is supported by previous work discovering that 1) the power spectral density of LFP signals indicates
various levels of arousal states in mice [7, 21], and 2) frequency-dependent phase synchrony patterns
change as animals undergo different conditions in a task [11, 17, 18] (see Figure 2).
The vector-valued observations from $C$ channels are segmented into $W$ contiguous, non-overlapping windows. The windows are common across channels, such that the $C$-channel data for window $w \in \{1,\dots,W\}$ are represented by $y^w_n = [y^w_{n1},\dots,y^w_{nC}]^T$ at sample location $x^w_n$. Given data, each window consists of $N_w$ temporal samples, but the model is defined for any set of sample locations.
We model the observations $\{y^w_n\}$ as emissions from a hidden Markov model (HMM) with $L$ hidden, discrete states. State assignments are represented by latent variables $\eta_w \in \{1,\dots,L\}$ for each window $w \in \{1,\dots,W\}$. In general, $L$ is a set upper bound on the number of states (brain states [21], or "clusters"), but the model can shrink down and infer the number of states needed to fit the data. This is achieved by defining the dynamics of the latent states according to a Bayesian HMM [14]:
$$\eta_1 \sim \mathrm{Categorical}(\pi_0), \qquad \eta_w \sim \mathrm{Categorical}(\pi_{\eta_{w-1}})\ \ \forall w \ge 2, \qquad \pi_0, \pi_\ell \sim \mathrm{Dirichlet}(\alpha),$$
where the initial state assignment is drawn from a categorical distribution with probability vector $\pi_0$ and all subsequent state assignments are drawn from the transition vector $\pi_{\eta_{w-1}}$. Here, $\pi_{\ell h}$ is the probability of transitioning from state $\ell$ to state $h$. The vectors $\{\pi_0, \pi_1, \dots, \pi_L\}$ are independently drawn from symmetric Dirichlet distributions centered around $\alpha = [1/L,\dots,1/L]$ to impose sparsity on transition probabilities. In effect, this allows the model to learn the number of states needed for the data (i.e., fewer than $L$) [3].
Each cluster $\ell \in \{1,\dots,L\}$ is assigned GP parameters $\theta_\ell$. The latent cluster assignment $\eta_w$ for window $w$ indicates which set of GP parameters controls the emission distribution of the HMM:
$$y^w_n \sim \mathcal{N}\big(f_w(x_n), H^{-1}_{\eta_w}\big), \qquad f_w(x) \sim \mathcal{GP}\big(0, K(x,x';\theta_{\eta_w})\big), \qquad (9)$$
where $(K(x,x';\theta_\ell))_{c,c'} = k^{c,c'}_{CSM}(x,x';\theta_\ell)$ is the CSM kernel, and the cluster-dependent precision $H_{\eta_w} = \mathrm{diag}(\gamma_{\eta_w})$ generates independent Gaussian observation noise. In this way, each window $w$ is modeled as a stochastic process with a multi-channel cross-spectrum defined by $\theta_{\eta_w}$.
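A minimal generative sketch of this hierarchical model follows; for brevity it substitutes simple diagonal matrices for the CSM kernel matrices of (9), and all sizes are toy assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    L, W, N, C = 3, 20, 50, 2                  # states, windows, samples per window, channels

    # Sparse symmetric Dirichlet prior on the initial and transition vectors
    alpha = np.full(L, 1.0 / L)
    pi0 = rng.dirichlet(alpha)
    Pi = np.stack([rng.dirichlet(alpha) for _ in range(L)])   # Pi[l, h] = P(next = h | current = l)

    # Stand-in per-state covariances; in the model these would be the CSM kernel
    # matrices K(x, x'; theta_l) plus independent observation noise, Eq. (9)
    Sigmas = [np.eye(N * C) * (l + 1.0) for l in range(L)]

    states, windows = [], []
    s = rng.choice(L, p=pi0)
    for w in range(W):
        states.append(s)
        windows.append(rng.multivariate_normal(np.zeros(N * C), Sigmas[s]))
        s = rng.choice(L, p=Pi[s])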
[Figure 2: left panel plots raw LFP potential vs. time (sec) for BLA and IL Cortex; center and right panels plot the cross-amplitude and cross-phase (lag in rad) spectra vs. frequency (Hz), with delta, theta, alpha, and beta bands indicated.]
Figure 2: A short segment of LFP data recorded from the basolateral amygdala and infralimbic cortex is shown on the left. The cross-amplitude and phase spectra are produced using Welch's averaged periodogram method [22] for several consecutive 5 second windows of LFP data. Frequency dependent phase synchrony lags are consistently present in the cross-phase spectrum, motivating the CSM kernel. This frequency dependency aligns with preconceived notions of bands, or brain waves (e.g., 8-12 Hz alpha waves).
5 Inference
w T
A convenient notation vectorizes all observations within a window, y w = vec([y w
1 , . . . , y Nw ] ),
w
where vec(A) is the vectorization of matrix A; i.e., the first Nw elements of y are observations
from channel 1, up to the last Nw elements of y w belonging to channel C. Because samples are
obtained on an evenly spaced temporal grid, we fix Nw = N and align relative sample locations
within a window to an oracle xw = x = [x1 , . . . , xN ]T for all w.
The model in Section 4 generates the set of observations $Y = \{y^w\}_{w=1}^{W}$ at aligned sample locations $x$ given kernel hyperparameters $\Theta = \{\theta_\ell, \gamma_\ell\}_{\ell=1}^{L}$ and model variables $\Omega = \{\{\pi_\ell\}_{\ell=0}^{L}, \{\eta_w\}_{w=1}^{W}\}$. The latent variables $\Omega$ are inverted using mean-field variational inference [3], obtaining an approximate posterior distribution $q(\Omega) = q(\eta_{1:W})\prod_{\ell=0}^{L}\mathrm{Dir}(\pi_\ell;\hat{\alpha}_\ell)$. The approximate posterior is chosen to minimize the KL divergence to the true posterior distribution $p(\Omega|Y,\Theta,x)$ using the standard variational EM method detailed in Chapter 3 of [3].
During each iteration of the variational EM algorithm, the kernel hyperparameters $\Theta$ are chosen to maximize the expected marginal log-likelihood $\mathcal{Q} = \sum_{w=1}^{W}\sum_{\ell=1}^{L} q(\eta_w{=}\ell)\log\mathcal{N}(y^w; 0, \Sigma_\ell)$ via gradient ascent, where $q(\eta_w{=}\ell)$ is the marginal posterior probability that window $w$ is assigned to brain state $\ell$, and $\Sigma_\ell = \mathrm{Re}\{\widetilde\Sigma_\ell\}$ is the CSM kernel matrix for state $\ell$ with the complex form $\widetilde\Sigma_\ell = \sum_q \widetilde{B}^\ell_q \otimes \widetilde{k}_{SG}(x,x;\theta_\ell) + H^{-1}_\ell \otimes I_N$. Performing gradient ascent requires the derivatives $\frac{\partial\mathcal{Q}}{\partial\theta_j} = \frac{1}{2}\sum_{w,\ell} q(\eta_w{=}\ell)\,\mathrm{tr}\big((\zeta_{\ell w}\zeta_{\ell w}^T - \Sigma_\ell^{-1})\,\frac{\partial\Sigma_\ell}{\partial\theta_j}\big)$, where $\zeta_{\ell w} = \Sigma_\ell^{-1} y^w$ [16]. A naïve implementation of this gradient requires the inversion of $\Sigma_\ell$, which has complexity $O(N^3C^3)$ and storage requirements $O(N^2C^2)$, since a simple method to invert a sum of Kronecker products does not exist.
A common trick for GPs with evenly spaced samples (e.g., a temporal grid) is to use the discrete Fourier transform (DFT) to approximate the inverse of $\Sigma_\ell$ by viewing this as an approximately circulant matrix [5, 12]. These methods can speed up inference because circulant matrices are diagonalizable by the DFT coefficient matrix. Adjusting these methods to the multi-output formulation, we show how the DFT of the marginal covariance matrices retains the cross-spectrum information.
Proposition 5.1. Let $y^w \sim \mathcal{N}(0, \Sigma_{\eta_w})$ represent the marginal likelihood of circularly-symmetric [8] real-valued observations in window $w$, and denote the concatenation of the DFT of each channel as $z^w = (I_C \otimes U)^* y^w$, where $U$ is the $N\times N$ unitary DFT matrix. Then, $z^w$ is shown in the Supplemental Material to have the complex normal distribution [8]:
$$z^w \sim \mathcal{CN}(0, 2S_{\eta_w}), \qquad S_\ell = \Delta^{-1}\sum_{q=1}^{Q} \widetilde{B}^\ell_q \otimes W^\ell_q + H^{-1}_\ell \otimes I_N, \qquad (10)$$
where $\Delta = x_{i+1} - x_i$ is the uniform grid spacing, and $W^\ell_q \approx \mathrm{diag}([S_{SG}(\omega;\theta^\ell_q),\, 0])$ is approximately diagonal. The spectral density $S_{SG}(\omega;\theta) = [S_{SG}(\omega_1;\theta),\dots,S_{SG}(\omega_{\lfloor N/2\rfloor+1};\theta)]$ is found via (4) at angular frequencies $\omega = \frac{2\pi}{N\Delta}\,[0, 1, \dots, \frac{N}{2}]$, and $0 = [0,\dots,0]$ is a row vector of $\frac{N}{2}-1$ zeros.
The hyperparameters of the CSM kernels $\Theta$ may now be optimized from the expected marginal log-likelihood of $Z = \{z^w\}_{w=1}^{W}$ instead of $Y$. Conceptually, the only difference during the fitting process is that, with the latter, derivatives of the covariance kernel are used, while, with the former, derivatives of the power spectral density are used. Computationally, this method improves the naïve $O(N^3C^3)$ complexity of fitting the standard CSM kernel to $O(NC^3)$ complexity. Memory requirements are also reduced from $O(N^2C^2)$ to $O(NC^2)$. The reason for this improvement is that $S_\ell$ is now represented as $N$ independent $C\times C$ blocks, reducing the inversion of $S_\ell$ to inverting a permuted block-diagonal matrix.
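The following sketch illustrates the computational point of Proposition 5.1: after a unitary DFT of each channel, the likelihood factors over $N$ frequency bins with $C\times C$ blocks. Conjugate symmetry of the real-valued DFT and edge-bin corrections are deliberately glossed over, so this is a structural illustration rather than a faithful implementation of (10).

    import numpy as np

    def sg_density(omega, mu, sigma):
        # S_SG(omega; theta) = 0.5 [ N(omega; -mu, sigma) + N(omega; mu, sigma) ], Eq. (4)
        g = lambda m: np.exp(-(omega - m) ** 2 / (2.0 * sigma)) / np.sqrt(2.0 * np.pi * sigma)
        return 0.5 * (g(-mu) + g(mu))

    def dft_loglik(Y, Bs, mus, sigmas, noise_prec, delta):
        # Y: N x C real window sampled on a uniform grid with spacing delta.
        # After a unitary DFT per channel, the covariance decouples into N
        # independent C x C blocks, so the cost drops from O(N^3 C^3) to O(N C^3).
        N, C = Y.shape
        Z = np.fft.fft(Y, axis=0, norm="ortho")
        omega = 2.0 * np.pi * np.abs(np.fft.fftfreq(N, d=delta))   # angular frequencies
        ll = 0.0
        for i in range(N):
            S = sum(B * sg_density(omega[i], mu, sig)
                    for B, mu, sig in zip(Bs, mus, sigmas)) / delta
            S = S + np.diag(1.0 / np.asarray(noise_prec))
            z = Z[i]
            quad = (z.conj() @ np.linalg.solve(2.0 * S, z)).real
            ll += -quad - np.linalg.slogdet(2.0 * S)[1] - C * np.log(np.pi)
        return ll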
6 Experiments
Section 6.1 demonstrates the performance of the CSM kernel and the accuracy of the DFT approximation. In Section 6.2, the DFT approximation for the CSM kernel is used in a Bayesian HMM
framework to cluster time-varying multi-channel LFP data based on the full cross-spectrum; the
HMM states here correspond to states of the brain during LFP recording.
Table 1: The mean and standard deviation of the difference between the AIC value of a given model and the AIC value of the rank-2 CSM model. Lower values are better.

Rank  Model    Δ AIC
1     SE-LMC   4770 (993)
1     SM-LMC   512 (190)
1     CSM      109 (110)
2     SE-LMC   5180 (1120)
2     SM-LMC   325 (167)
2     CSM      0 (0)
3     SE-LMC   5550 (1240)
3     SM-LMC   412 (184)
3     CSM      204 (71.7)

[Figure 3: KL divergence vs. series length (seconds), with one curve each for $\tilde\mu = 0.5$, 1, and 3 Hz.]
Figure 3: Time-series data is drawn from a Gaussian process with a known CSM covariance kernel, where the domain is restricted to a fixed number of seconds. A Gaussian process is then fitted to this data using the DFT approximation. The KL-divergence of the fitted marginal likelihood from the true marginal likelihood is shown.

6.1 Performance and Inference Analysis
The performance of the CSM kernel is compared to the SM-LMC kernel and SE-LMC (squared exponential) kernel. Each of these models allows Q = 20, and the rank of the coregionalization matrices is varied from rank-1 to rank-3. For a given rank, the CSM kernel always obtains the largest marginal likelihood for a window of LFP data, and the marginal likelihood always increases for increasing rank. To penalize the number of kernel parameters (e.g., a rank-3, Q = 20 CSM kernel for 7 channels has 827 free parameters to optimize), the Akaike information criterion (AIC) is used for model selection [1]. For this reason, we do not test rank greater than 3. Table 1 shows that a rank-2 CSM kernel is selected using this criterion, followed by a rank-1 CSM kernel. To show that the rank-2 CSM kernel is consistently selected as the preferred model, we report means and standard deviations of AIC value differences across 30 different randomly selected 3-second windows of LFP data.
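The selection criterion and the parameter count quoted above are easy to reproduce; the extra $C$ noise precisions in the count below are an assumption made here so the total matches the quoted 827.

    import numpy as np

    def aic(log_likelihood, num_params):
        # Akaike information criterion [1]; smaller values are preferred
        return 2.0 * num_params - 2.0 * log_likelihood

    def csm_num_params(Q, R, C):
        # Degrees of freedom of a rank-R, Q-component CSM kernel (Definition 3.1),
        # plus C per-channel noise precisions (an assumption, made so the count
        # matches the 827 quoted in the text)
        return 2 * Q + Q * R * (2 * C - 1) + C

    print(csm_num_params(Q=20, R=3, C=7))   # prints 827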
Next, we provide numerical results for the conditions required when using the DFT approximation in (10). This allows one to define details of a particular application in order to determine if the DFT approximation to the CSM kernel is appropriate. A CSM kernel is defined for two outputs with a single Gaussian component, $Q = 1$. The mean frequency and variance for this component are set to push the limits of the application. For example, with LFP data, low frequency content is of interest, namely greater than 1 Hz; therefore, we test values of $\tilde\mu_1 \in \{\frac{1}{2}, 1, 3\}$ Hz. We anticipate variances at these frequencies to be around $\tilde\sigma_1 = 1\ \mathrm{Hz}^2$. A conversion to angular frequency gives $\mu_1 = 2\pi\tilde\mu_1$ and $\sigma_1 = 4\pi^2\tilde\sigma_1$. The covariance matrix $\Sigma$ in (3) is formed using these parameters, a fixed noise variance, and $N$ observations on a time grid with sampling rate of 200 Hz. Data $y$ are drawn from the marginal likelihood with covariance $\Sigma$.
A new CSM kernel is fit to $y$ using the DFT approximation, providing an estimate $\hat\Sigma$. The KL divergence of the fitted marginal likelihood from the true marginal likelihood is
$$\mathrm{KL}\big(p(y|\Sigma)\,\|\,p(y|\hat\Sigma)\big) = \frac{1}{2}\Big[\log\frac{|\hat\Sigma|}{|\Sigma|} - N + \mathrm{tr}(\hat\Sigma^{-1}\Sigma)\Big],$$
where $|\cdot|$ and $\mathrm{tr}(\cdot)$ are the determinant and trace operators, respectively. Computing $\mathrm{KL}(p(y|\Sigma)\,\|\,p(y|\hat\Sigma))$ for various values of $\tilde\mu_1$ and $N$ provides the results in Figure 3. This plot shows that the DFT approximation struggles to resolve low frequency components unless the series length is sufficiently long. Due to the approximation error, when using the DFT approximation on LFP data we a priori filter out frequencies below 1.5 Hz and perform analyses with a series length of 3 seconds. This ensures the DFT approximation represents the true covariance matrix. The following application of the CSM kernel uses these settings.
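The KL expression above is straightforward to evaluate for the zero-mean Gaussian case; a sketch:

    import numpy as np

    def gauss_kl(Sigma, Sigma_hat):
        # KL( N(0, Sigma) || N(0, Sigma_hat) ) for zero-mean Gaussians
        N = Sigma.shape[0]
        logdet = np.linalg.slogdet(Sigma_hat)[1] - np.linalg.slogdet(Sigma)[1]
        return 0.5 * (logdet - N + np.trace(np.linalg.solve(Sigma_hat, Sigma)))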
6.2 Including the CSM Kernel in a Bayesian Hierarchical Model
We analyze 12 hours of LFP data of a mouse transitioning between different stages of sleep [7, 21].
Observations were recorded simultaneously from 4 channels [6], high-pass filtered at 1.5 Hz, and
subsampled to 200 Hz. Using 3 second windows provides N = 600 and W = 14, 400. The HMM
was implemented with the number of kernel components Q = 15 and the number of states L = 7.
[Figure 4: upper-left panels plot cross-spectral amplitude and phase vs. frequency (Hz) for channel pairs among BasalAmy, DLS, DMS, and DHipp; upper-right panels plot amplitude and phase for states 1-7; the bottom panel plots state assignments over 160 minutes for the CSM kernel and for Dzirasa et al.]
Figure 4: A subset of results from the Bayesian HMM analysis of brain states. In the upper left, the full cross-spectrum for an arbitrary state (state 7) is plotted. In the upper right, the amplitude (top) and phase (bottom) functions for the cross-spectrum between the Dorsomedial Striatum (DMS) and Hippocampus (DHipp) are shown for all seven states. On the bottom, the maximum likelihood state assignments are shown and compared to the state assignments from [7]. The same colors between the CSM state assignments and the phase and amplitude functions correspond to the same state. These colors are aligned to the [7] states, but there is no explicit relationship between the colors of the two state sequences.
This was chosen because sleep staging tasks categorize as many as seven states: various levels of
rapid eye movement, slow wave sleep, and wake [20]. Although rigorous model selection on L
is necessary to draw scientific conclusions from the results, the purpose of this experiment is to
illustrate the utility of the CSM kernel in this application.
An illustrative subset of the results is shown in Figure 4. The full cross-spectrum is shown for
a single state (state 7), and the cross-spectrum between the Dorsomedial Striatum and the Dorsal
Hippocampus are shown for all states. Furthermore, we show the progression of these brain state
assignments over 3 hours and compare them to states from the method of [7], where statistics of the
Hippocampus spectral density were clustered in an ad hoc fashion. To the best of our knowledge,
this method represents the most relevant and accurate results for sleep staging from LFP signals in
the neuroscience literature. From these results, it is apparent that our clusters pick up sub-states of
[7]. For instance, states 3, 6, and 7 all appear with high probability when the method from [7] infers
state 3. Observing the cross-phase function of sub-state 7 reveals striking differences from other
states in the theta wave (4-7 Hz) and the alpha wave (8-15 Hz). This cross-phase function is nearly
identical for states 2 and 5, implying that significant differences in the cross-amplitude spectrum
may have played a role in identifying the difference between these two brain states.
Many more of these interesting details exist due to the expressive nature of the CSM kernel. As a
full interpretation of the cross-spectrum results is not the focus of this work, we contend that the
CSM kernel has the potential to have a tremendous impact in fields such as neuroscience, where the
dynamics of cross-spectrum relationships of LFP signals are of great interest.
7 Conclusion
This work introduces the cross-spectral mixture kernel as an expressive kernel capable of extracting
patterns for multi-channel observations. Combined with the powerful nonparametric representation
of a Gaussian process, the CSM kernel expresses a functional form for every pairwise cross-spectrum
between channels. This is a novel approach that merges Gaussian processes in the machine learning
community to standard signal processing techniques. We believe the CSM kernel has the potential
to impact a broad array of disciplines since the kernel can trivially be extended to any time-series
application where Gaussian processes and the cross-spectrum are of interest.
Acknowledgments
The research reported here was funded in part by ARO, DARPA, DOE, NGA and ONR.
References
[1] H. Akaike. A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6):716-723, 1974.
[2] M. A. Alvarez, L. Rosasco, and N. D. Lawrence. Kernels for vector-valued functions: a review. Foundations and Trends in Machine Learning, 4(3):195-266, 2012.
[3] M. J. Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, University College London.
[4] R. Caruana. Multitask learning. Machine Learning, 28(1):41-75, 1997.
[5] C. R. Dietrich and G. N. Newsam. Fast and exact simulation of stationary Gaussian processes through circulant embedding of the covariance matrix. SIAM Journal on Scientific Computing, 18(4):1088-1107, 1997.
[6] K. Dzirasa, R. Fuentes, S. Kumar, J. M. Potes, and M. A. L. Nicolelis. Chronic in vivo multi-circuit neurophysiological recordings in mice. Journal of Neuroscience Methods, 195(1):36-46, 2011.
[7] K. Dzirasa, S. Ribeiro, R. Costa, L. M. Santos, S. C. Lin, A. Grosmark, T. D. Sotnikova, R. R. Gainetdinov, M. G. Caron, and M. A. L. Nicolelis. Dopaminergic control of sleep-wake states. The Journal of Neuroscience, 26(41):10577-10589, 2006.
[8] R. G. Gallager. Principles of digital communication. pages 229-232, 2008.
[9] M. Gönen and E. Alpaydın. Multiple kernel learning algorithms. JMLR, 12:2211-2268, 2011.
[10] P. Goovaerts. Geostatistics for Natural Resources Evaluation. Oxford University Press, 1997.
[11] G. G. Gregoriou, S. J. Gotts, H. Zhou, and R. Desimone. High-frequency, long-range coupling between prefrontal and visual cortex during attention. Science, 324(5931):1207-1210, 2009.
[12] M. Lázaro-Gredilla, J. Quiñonero-Candela, C. E. Rasmussen, and A. R. Figueiras-Vidal. Sparse spectrum Gaussian process regression. JMLR, (11):1865-1881, 2010.
[13] J. R. Lloyd, D. Duvenaud, R. Grosse, J. B. Tenenbaum, and Z. Ghahramani. Automatic construction and natural-language description of nonparametric regression models. AAAI, 2014.
[14] D. J. C. MacKay. Ensemble learning for hidden Markov models. Technical report, 1997.
[15] D. Pfaff, A. Ribeiro, J. Matthews, and L. Kow. Concepts and mechanisms of generalized central nervous system arousal. ANYAS, 2008.
[16] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. 2006.
[17] P. Sauseng and W. Klimesch. What does phase information of oscillatory brain activity tell us about cognitive processes? Neuroscience and Biobehavioral Reviews, 32:1001-1013, 2008.
[18] C. M. Sweeney-Reed, T. Zaehle, J. Voges, F. C. Schmitt, L. Buentjen, K. Kopitzki, C. Esslinger, H. Hinrichs, H. J. Heinze, R. T. Knight, and A. Richardson-Klavehn. Corticothalamic phase synchrony and cross-frequency coupling predict human memory formation. eLIFE, 2014.
[19] Y. W. Teh, M. Seeger, and M. I. Jordan. Semiparametric latent factor models. AISTATS, 10:333-340, 2005.
[20] M. A. Tucker, Y. Hirota, E. J. Wamsley, H. Lau, A. Chaklader, and W. Fishbein. A daytime nap containing solely non-REM sleep enhances declarative but not procedural memory. Neurobiology of Learning and Memory, 86(2):241-7, 2006.
[21] K. Ulrich, D. E. Carlson, W. Lian, J. S. Borg, K. Dzirasa, and L. Carin. Analysis of brain states from multi-region LFP time-series. NIPS, 2014.
[22] P. D. Welch. The use of fast Fourier transform for the estimation of power spectra: A method based on time averaging over short, modified periodograms. IEEE Transactions on Audio and Electroacoustics, 15(2):70-73, 1967.
[23] A. G. Wilson. Covariance kernels for fast automatic pattern discovery and extrapolation with Gaussian processes. PhD thesis, University of Cambridge, 2014.
[24] A. G. Wilson and R. P. Adams. Gaussian process kernels for pattern discovery and extrapolation. ICML, 2013.
[25] A. G. Wilson, E. Gilboa, A. Nehorai, and J. P. Cunningham. Fast kernel learning for multidimensional pattern extrapolation. NIPS, 2014.
[26] A. G. Wilson and D. A. Knowles. Gaussian process regression networks. ICML, 2012.
[27] Z. Yang, A. J. Smola, L. Song, and A. G. Wilson. À la carte - learning fast kernels. AISTATS, 2015.
5,487 | 5,967 | End-to-end Learning of LDA by Mirror-Descent Back
Propagation over a Deep Architecture
Jianshu Chen? , Ji He? , Yelong Shen? , Lin Xiao? , Xiaodong He? , Jianfeng Gao? ,
Xinying Song? and Li Deng?
?
Microsoft Research, Redmond, WA 98052, USA,
{jianshuc,yeshen,lin.xiao,xiaohe,jfgao,xinson,deng}@microsoft.com
?
Department of Electrical Engineering, University of Washington, Seattle, WA 98195, USA,
jvking@uw.edu
Abstract
We develop a fully discriminative learning approach for supervised Latent Dirichlet Allocation (LDA) model using Back Propagation (i.e., BP-sLDA), which maximizes the posterior probability of the prediction variable given the input document. Different from traditional variational learning or Gibbs sampling approaches, the proposed learning method applies (i) the mirror descent algorithm
for maximum a posterior inference and (ii) back propagation over a deep architecture together with stochastic gradient/mirror descent for model parameter estimation, leading to scalable and end-to-end discriminative learning of the model. As
a byproduct, we also apply this technique to develop a new learning method for
the traditional unsupervised LDA model (i.e., BP-LDA). Experimental results on
three real-world regression and classification tasks show that the proposed methods significantly outperform previous supervised topic models and neural networks, and are on par with deep neural networks.
1 Introduction
Latent Dirichlet Allocation (LDA) [5], among various forms of topic models, is an important probabilistic generative model for analyzing large collections of text corpora. In LDA, each document is
modeled as a collection of words, where each word is assumed to be generated from a certain topic
drawn from a topic distribution. The topic distribution can be viewed as a latent representation of
the document, which can be used as a feature for prediction purpose (e.g., sentiment analysis). In
particular, the inferred topic distribution is fed into a separate classifier or regression model (e.g.,
logistic regression or linear regression) to perform prediction. Such a separate learning structure
usually significantly restricts the performance of the algorithm. For this purpose, various supervised
topic models have been proposed to model the documents jointly with the label information. In
[4], variational methods was applied to learn a supervised LDA (sLDA) model by maximizing the
lower bound of the joint probability of the input data and the labels. The DiscLDA method developed in [15] learns the transformation matrix from the latent topic representation to the output in a
discriminative manner, while learning the topic to word distribution in a generative manner similar
to the standard LDA. In [26], max margin supervised topic models are developed for classification and regression, which are trained by optimizing the sum of the variational bound for the log
marginal likelihood and an additional term that characterizes the prediction margin. These methods
successfully incorporate the information from both the input data and the labels, and showed better
performance in prediction compared to the vanilla LDA model.
One challenge in LDA is that the exact inference is intractable, i.e., the posterior distribution of the
topics given the input document cannot be evaluated explicitly. For this reason, various approximate
[Figure 1: plate diagram with nodes α → θd → zd,n → wd,n (plate over N words, nested in a plate over D documents), a response yd depending on θd, U, and γ, and topic-word parameters φk (plate over K topics) with smoothing parameter β.]
Figure 1: Graphical representation of the supervised LDA model. Shaded nodes are observables.
inference methods are proposed, such as variational learning [4, 5, 26] and Gibbs sampling [9, 27],
for computing the approximate posterior distribution of the topics. In this paper, we will show that,
although the full posterior probability of the topic distribution is difficult, its maximum a posteriori
(MAP) inference, as a simplified problem, is a convex optimization problem when the Dirichlet parameter satisfies certain conditions, which can be solved efficiently by the mirror descent algorithm
(MDA) [2, 18, 21]. Indeed, Sontag and Roy [19] pointed out that the MAP inference problem of
LDA in this situation is polynomial-time and can be solved by an exponentiated gradient method,
which shares the same form as our mirror-descent algorithm with constant step-size. Nevertheless, different from [19], which studied the inference problem alone, our focus in this paper is to integrate back propagation with the mirror-descent algorithm to perform fully discriminative training of
supervised topic models, as we proceed to explain below.
Among the aforementioned methods, one training objective of the supervised LDA model is to maximize the joint likelihood of the input and the output variables [4]. Another variant is to maximize
the sum of the log likelihood (or its variational bound) and a prediction margin [26, 27]. Moreover,
the DiscLDA optimizes part of the model parameters by maximizing the marginal likelihood of the
input variables, and optimizes the other part of the model parameters by maximizing the conditional likelihood. For this reason, DiscLDA is not a fully discriminative training of all the model
parameters. In this paper, we propose a fully discriminative training of all the model parameters by
maximizing the posterior probability of the output given the input document. We will show that the
discriminative training can be performed in a principled manner by naturally integrating the backpropagation with the MDA-based exact MAP inference. To our best knowledge, this paper is the
first work to perform a fully end-to-end discriminative training of supervised topic models. Discriminative training of generative models is widely used and usually outperforms standard generative
training in prediction tasks [3, 7, 12, 14, 25]. As pointed out in [3], discriminative training increases
the robustness against the mismatch between the generative model and the real data. Experimental
results on three real-world tasks also show the superior performance of discriminative training.
In addition to the aforementioned related studies on topic models [4, 15, 26, 27], there has been
another stream of work that applied empirical risk minimization to graphical models such as Markov
Random Field and nonnegative matrix factorization [10, 20]. Specifically, in [20], an approximate
inference algorithm, belief propagation, is used to compute the belief of the output variables, which
is further fed into a decoder to produce the prediction. The approximate inference and the decoder
are treated as an entire black-box decision rule, which is tuned jointly via back propagation. Our
work is different from the above studies in that we use an MAP inference based on optimization
theory to motivate the discriminative training from a principled probabilistic framework.
2
Smoothed Supervised LDA Model
We consider the smoothed supervised LDA model in Figure 1. Let K be the number of topics,
N be the number of words in each document, V be the vocabulary size, and D be the number of
documents in the corpus. The generative process of the model in Figure 1 can be described as:
1. For each document d, choose the topic proportions according to a Dirichlet distribution:
?d ? p(?d |?) = Dir(?), where ? is a K ? 1 vector consisting of nonnegative components.
2. Draw each column k of a V ? K matrix independently from an exchangeable Dirichlet
distribution: k ? Dir( ) (i.e., ? p( | )), where > 0 is the smoothing parameter.
3. To generate each word wd,n :
   (a) Choose a topic $z_{d,n} \sim p(z_{d,n}|\theta_d) = \mathrm{Multinomial}(\theta_d)$.¹
   (b) Choose a word $w_{d,n} \sim p(w_{d,n}|z_{d,n},\Phi) = \mathrm{Multinomial}(\phi_{z_{d,n}})$.
4. Choose the $C\times 1$ response vector: $y_d \sim p(y_d|\theta_d, U, \gamma)$.
   (a) In regression, $p(y_d|\theta_d, U, \gamma) = \mathcal{N}(U\theta_d, \gamma^{-1})$, where $U$ is a $C\times K$ matrix consisting of regression coefficients.
   (b) In multi-class classification, $p(y_d|\theta_d, U, \gamma) = \mathrm{Multinomial}\big(\mathrm{Softmax}(\gamma U\theta_d)\big)$, where the softmax function is defined as $\mathrm{Softmax}(x)_c = \frac{e^{x_c}}{\sum_{c'=1}^{C} e^{x_{c'}}}$, $c = 1,\dots,C$.
Therefore, the entire model can be described by the following joint probability
$$p(\Phi|\beta)\prod_{d=1}^{D}\Big[\underbrace{p(y_d|\theta_d,U,\gamma)\cdot p(\theta_d|\alpha)\cdot p(w_{d,1:N}|z_{d,1:N},\Phi)\cdot p(z_{d,1:N}|\theta_d)}_{\triangleq\, p(y_d,\theta_d,w_{d,1:N},z_{d,1:N}\,|\,\Phi,U,\alpha,\gamma)}\Big] \qquad (1)$$
where $w_{d,1:N}$ and $z_{d,1:N}$ denote all the words and the associated topics, respectively, in the $d$-th document. Note that the model in Figure 1 is slightly different from the one proposed in [4], where the response variable $y_d$ in Figure 1 is coupled with $\theta_d$ instead of $z_{d,1:N}$ as in [4]. Blei and McAuliffe also pointed out this choice as an alternative in [4]. This modification will lead to a differentiable end-to-end cost trainable by back propagation with superior prediction performance.
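To make the generative process of Section 2 concrete, here is a minimal sampler for one document with a regression response; the sizes and hyperparameter values are toy assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    K, V, N, C = 5, 100, 80, 1                 # topics, vocabulary size, words per doc, response dim
    alpha, beta, gamma = np.full(K, 0.5), 0.1, 10.0

    Phi = rng.dirichlet(np.full(V, beta), size=K).T   # V x K matrix; each column ~ Dir(beta)
    U = rng.normal(size=(C, K))                       # regression coefficients

    theta = rng.dirichlet(alpha)                      # topic proportions of one document
    z = rng.choice(K, size=N, p=theta)                # topic assignment of each word
    words = np.array([rng.choice(V, p=Phi[:, k]) for k in z])
    y = rng.normal(U @ theta, np.sqrt(1.0 / gamma))   # response y ~ N(U theta, 1/gamma)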
To develop a fully discriminative training method for the model parameters $\Phi$ and $U$, we follow the argument in [3], which states that the discriminative training is also equivalent to maximizing the joint likelihood of a new model family with an additional set of parameters:
$$\arg\max_{\Phi,U,\widetilde\Phi}\; p(\Phi|\beta)\,p(\widetilde\Phi|\beta)\prod_{d=1}^{D} p(y_d|w_{d,1:N},\Phi,U,\alpha,\gamma)\prod_{d=1}^{D} p(w_{d,1:N}|\widetilde\Phi,\alpha) \qquad (2)$$
where $p(w_{d,1:N}|\widetilde\Phi,\alpha)$ is obtained by marginalizing $p(y_d,\theta_d,w_{d,1:N},z_{d,1:N}|\Phi,U,\alpha,\gamma)$ in (1) and replacing $\Phi$ with $\widetilde\Phi$. The above problem (2) decouples into
$$\arg\max_{\Phi,U}\Big[\ln p(\Phi|\beta) + \sum_{d=1}^{D}\ln p(y_d|w_{d,1:N},\Phi,U,\alpha,\gamma)\Big] \qquad (3)$$
$$\arg\max_{\widetilde\Phi}\Big[\ln p(\widetilde\Phi|\beta) + \sum_{d=1}^{D}\ln p(w_{d,1:N}|\widetilde\Phi,\alpha)\Big] \qquad (4)$$
which are the discriminative learning problem of supervised LDA (Eq. (3)) and the unsupervised learning problem of LDA (Eq. (4)), respectively. We will show that both problems can be solved in a unified manner using a new MAP inference and back propagation.
3 Maximum A Posteriori (MAP) Inference
We first consider the inference problem in the smoothed LDA model. For the supervised case, the
main objective is to infer yd given the words wd,1:N in each document d, i.e., computing
Z
p(yd |wd,1:N , , U, ?, ) =
p(yd |?d , U, )p(?d |wd,1:N , , ?)d?d
(5)
?d
where the probability $p(y_d|\theta_d, U, \gamma)$ is known (e.g., multinomial or Gaussian for classification and regression problems, respectively; see Section 2). The main challenge is to evaluate $p(\theta_d|w_{d,1:N}, \Phi, \alpha)$, i.e., to infer the topic proportions given each document, which is also the key inference problem in the unsupervised LDA model. However, it is well known that exact evaluation of the posterior probability $p(\theta_d|w_{d,1:N}, \Phi, \alpha)$ is intractable [4, 5, 9, 15, 26, 27]. For this reason, various approximate inference methods, such as variational inference [4, 5, 15, 26] and Gibbs sampling [9, 27],
¹ We represent all multinomial variables by one-hot vectors that have a single component equal to one, at the position determined by the multinomial variable, and all other components equal to zero.
have been proposed to compute the approximate posterior probability. In this paper, we take an alternative approach to inference; given each document $d$, we only seek a point (MAP) estimate of $\theta_d$, instead of its full (approximate) posterior probability. The major motivation is that, although the full posterior probability of $\theta_d$ is difficult to compute, its MAP estimate, as a simplified problem, is more tractable (and it is a convex problem under certain conditions). Furthermore, with the MAP estimate of $\theta_d$, we can infer the prediction variable $y_d$ according to the following approximation of (5):
$$p(y_d|w_{d,1:N}, \Phi, U, \alpha, \gamma) = \mathbb{E}_{\theta_d|w_{d,1:N}}\big[p(y_d|\theta_d, U, \gamma)\big] \approx p\big(y_d\,\big|\,\hat{\theta}_{d|w_{d,1:N}}, U, \gamma\big) \tag{6}$$
where $\mathbb{E}_{\theta_d|w_{d,1:N}}$ denotes the conditional expectation with respect to $\theta_d$ given $w_{d,1:N}$, and the expectation is sampled by the MAP estimate $\hat{\theta}_{d|w_{d,1:N}}$ of $\theta_d$ given $w_{d,1:N}$, defined as
$$\hat{\theta}_{d|w_{d,1:N}} = \arg\max_{\theta_d}\; p(\theta_d|w_{d,1:N}, \Phi, \alpha, \gamma) \tag{7}$$
The approximation (6) becomes more precise as $p(\theta_d|w_{d,1:N}, \Phi, \alpha, \gamma)$ becomes more concentrated around $\hat{\theta}_{d|w_{d,1:N}}$. Experimental results on several real datasets (Section 5) show that the approximation (6) provides excellent prediction performance.
Using Bayes' rule, $p(\theta_d|w_{d,1:N}, \Phi, \alpha) = p(\theta_d|\alpha)\, p(w_{d,1:N}|\theta_d, \Phi)/p(w_{d,1:N}|\Phi, \alpha)$, and the fact that $p(w_{d,1:N}|\Phi, \alpha)$ is independent of $\theta_d$, we obtain the equivalent form of (7) as
$$\hat{\theta}_{d|w_{d,1:N}} = \arg\max_{\theta_d \in \mathcal{P}_K}\; \big[ \ln p(\theta_d|\alpha) + \ln p(w_{d,1:N}|\theta_d, \Phi) \big] \tag{8}$$
where $\mathcal{P}_K = \{\theta \in \mathbb{R}^K : \theta_j \ge 0,\ \sum_{j=1}^{K} \theta_j = 1\}$ denotes the $(K-1)$-dimensional probability simplex, $p(\theta_d|\alpha)$ is the Dirichlet distribution, and $p(w_{d,1:N}|\theta_d, \Phi)$ can be computed by integrating $p(w_{d,1:N}, z_{d,1:N}|\theta_d, \Phi) = \prod_{n=1}^{N} p(w_{d,n}|z_{d,n}, \Phi)\, p(z_{d,n}|\theta_d)$ over $z_{d,1:N}$, which leads to (derived in Section A of the supplementary material)
$$p(w_{d,1:N}|\theta_d, \Phi) = \prod_{v=1}^{V} \Big( \sum_{j=1}^{K} \Phi_{vj}\, \theta_{d,j} \Big)^{x_{d,v}} = p(x_d|\theta_d, \Phi) \tag{9}$$
where $x_{d,v}$ denotes the term frequency of the $v$-th word (in the vocabulary) inside the $d$-th document, and $x_d$ denotes the $V$-dimensional bag-of-words (BoW) vector of the $d$-th document. Note that $p(w_{d,1:N}|\theta_d, \Phi)$ depends on $w_{d,1:N}$ only via the BoW vector $x_d$, which is a sufficient statistic. Therefore, we use $p(x_d|\theta_d, \Phi)$ and $p(w_{d,1:N}|\theta_d, \Phi)$ interchangeably from now on. Substituting the expression of the Dirichlet distribution and (9) into (8), we get
expression of Dirichlet distribution and (9) into (8), we get
?
?
??d|wd,1:N = arg max xTd ln( ?d ) + (? 1)T ln ?d
?d 2PK
?
?
= arg min
xTd ln( ?d ) (? 1)T ln ?d
(10)
?d 2PK
where we dropped the terms independent of $\theta_d$, and $1$ denotes an all-one vector. Note that when $\alpha \ge 1$ ($\alpha > 1$), the optimization problem (10) is (strictly) convex, and it is non-convex otherwise.
3.1  Mirror Descent Algorithm for MAP Inference
An efficient approach to solving the constrained optimization problem (10) is the mirror descent algorithm (MDA) with the Bregman divergence chosen to be the generalized Kullback-Leibler divergence [2, 18, 21]. Specifically, let $f(\theta_d)$ denote the cost function in (10); then the MDA updates the MAP estimate of $\theta_d$ iteratively according to:
$$\theta_{d,\ell} = \arg\min_{\theta_d \in \mathcal{P}_K}\; \Big\{ f(\theta_{d,\ell-1}) + [\nabla_{\theta_d} f(\theta_{d,\ell-1})]^T (\theta_d - \theta_{d,\ell-1}) + \frac{1}{T_{d,\ell}}\, \Psi(\theta_d, \theta_{d,\ell-1}) \Big\} \tag{11}$$
where $\theta_{d,\ell}$ denotes the estimate of $\theta_d$ at the $\ell$-th iteration, $T_{d,\ell}$ denotes the step-size of MDA, and $\Psi(x, y)$ is the Bregman divergence, chosen to be $\Psi(x, y) = x^T \ln(x/y) - 1^T x + 1^T y$. The argmin in (11) can be solved in closed form (see Section B of the supplementary material) as
$$\theta_{d,\ell} = \frac{1}{C_\theta}\, \theta_{d,\ell-1} \odot \exp\Big( T_{d,\ell} \Big[ \Phi^T \frac{x_d}{\Phi\theta_{d,\ell-1}} + \frac{\alpha - 1}{\theta_{d,\ell-1}} \Big] \Big), \quad \ell = 1, \dots, L, \qquad \theta_{d,0} = \frac{1}{K}\, 1 \tag{12}$$
[Figure 2: Layered deep architecture for computing $p(y_d|w_{d,1:N}, \Phi, U, \alpha, \gamma)$, built from stacked mirror descent cells followed by a normalization step, where $(\cdot)/(\cdot)$ denotes element-wise division, $\odot$ denotes the Hadamard product, and $\exp(\cdot)$ denotes the element-wise exponential.]
where $C_\theta$ is a normalization factor such that $\theta_{d,\ell}$ adds up to one, $\odot$ denotes the Hadamard product, $L$ is the number of MDA iterations, and the divisions in (12) are element-wise operations. Note that the recursion (12) naturally enforces each $\theta_{d,\ell}$ to be on the probability simplex. The MDA step-size $T_{d,\ell}$ can be either constant, i.e., $T_{d,\ell} = T$, or adaptive over iterations and samples, determined by line search (see Section C of the supplementary material). The computational cost of (12) is low since most computations are sparse matrix operations. For example, although $\Phi\theta_{d,\ell-1}$ in (12) is by itself a dense matrix-vector multiplication, we only need to evaluate its elements at the positions where the corresponding elements of $x_d$ are nonzero, because all other elements of $x_d/(\Phi\theta_{d,\ell-1})$ are known to be zero. Overall, the computational complexity of each iteration of (12) is $O(n_{\mathrm{Tok}} \cdot K)$, where $n_{\mathrm{Tok}}$ denotes the number of unique tokens in the document. In practice, we only use a small number of iterations $L$ in (12) and use $\theta_{d,L}$ to approximate $\hat{\theta}_{d|w_{d,1:N}}$, so that (6) becomes
$$p(y_d|w_{d,1:N}, \Phi, U, \alpha, \gamma) \approx p(y_d|\theta_{d,L}, U, \gamma) \tag{13}$$
In summary, the inference of $\theta_d$ and $y_d$ can be implemented by the layered architecture in Figure 2, where the top layer infers $y_d$ using (13) and the MDA layers infer $\theta_d$ iteratively using (12). Figure 2 also implies that the MDA layers act as a feature extractor, generating the MAP estimate $\theta_{d,L}$ for the output layer. Our end-to-end learning strategy, developed in the next section, jointly learns the model parameter $U$ at the output layer and the model parameter $\Phi$ at the feature-extractor layers to maximize the posterior of the prediction variable given the input document.
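To make the recursion (12) concrete, here is a minimal NumPy sketch of the $L$-layer MDA inference of $\theta_d$. A constant step-size is assumed for simplicity (the paper also allows line search), and the function name is our own.

```python
import numpy as np

def mda_map_inference(x, Phi, alpha, step=0.01, L=10):
    """MAP inference of theta_d via the mirror-descent recursion (12).
    x: V-dim bag-of-words counts; Phi: V x K topic matrix; alpha: Dirichlet prior."""
    V, K = Phi.shape
    theta = np.full(K, 1.0 / K)              # theta_{d,0} = (1/K) * 1
    nz = x > 0                               # only nonzero counts contribute
    for _ in range(L):
        ratio = x[nz] / (Phi[nz, :] @ theta)  # x_d / (Phi theta) on the support
        grad = Phi[nz, :].T @ ratio + (alpha - 1.0) / theta
        theta = theta * np.exp(step * grad)   # multiplicative (exponentiated) update
        theta = theta / theta.sum()           # normalization by C_theta
    return theta
```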
4  Learning by Mirror-Descent Back Propagation
We now consider the supervised learning problem (3) and the unsupervised learning problem (4),
respectively, using the developed MDA-based MAP inference. We first consider the supervised
learning problem. With (13), the discriminative learning problem (3) can be approximated by
"
#
D
X
arg min
ln p( | )
ln p(yd |?d,L , U, )
(14)
,U
d=1
which can be solved by stochastic mirror descent (SMD). Note that the cost function in (14) depends on $U$ explicitly through $p(y_d|\theta_{d,L}, U, \gamma)$, which can be computed directly from its definition in Section 2. On the other hand, the cost function in (14) depends on $\Phi$ implicitly through $\theta_{d,L}$. From Figure 2, we observe that $\theta_{d,L}$ not only depends on $\Phi$ explicitly (as indicated in the MDA block on the right-hand side of Figure 2) but also depends on $\Phi$ implicitly via $\theta_{d,L-1}$, which in turn depends on $\Phi$ both explicitly and implicitly (through $\theta_{d,L-2}$), and so on. That is, the cost function depends on $\Phi$ in a layered manner. Therefore, we devise a back propagation procedure to efficiently compute its gradient with respect to $\Phi$ according to the mirror-descent graph in Figure 2, which back propagates the error signal through the MDA blocks at the different layers. The gradient formula and the implementation details of the learning algorithm can be found in Sections C and D of the supplementary material.
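As an illustration of the SMD step on $\Phi$ (whose columns live on the simplex), mirror descent with the KL Bregman divergence reduces to an exponentiated-gradient update followed by column normalization. The gradient itself would come from back propagation through the MDA layers (Sections C and D of the supplementary material); here we simply take it as an input, and the function name is our own.

```python
import numpy as np

def smd_step_Phi(Phi, grad_Phi, step):
    """One stochastic mirror-descent step on Phi. The exponentiated-gradient
    update keeps each column of Phi on the probability simplex."""
    Phi_new = Phi * np.exp(-step * grad_Phi)             # multiplicative update
    return Phi_new / Phi_new.sum(axis=0, keepdims=True)  # renormalize columns
```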
For the unsupervised learning problem (4), the gradient of $\ln p(\tilde{\Phi}|\beta)$ with respect to $\tilde{\Phi}$ assumes the same form as that of $\ln p(\Phi|\beta)$. Moreover, it can be shown that the gradient of $\ln p(w_{d,1:N}|\tilde{\Phi}, \alpha)$ with respect to $\tilde{\Phi}$ can be expressed as (see Section E of the supplementary material):
$$\frac{\partial \ln p(w_{d,1:N}|\tilde{\Phi}, \alpha)}{\partial \tilde{\Phi}} = \mathbb{E}_{\theta_d|x_d}\Big[ \frac{\partial}{\partial \tilde{\Phi}} \ln p(x_d|\theta_d, \tilde{\Phi}) \Big] \overset{(a)}{\approx} \frac{\partial}{\partial \tilde{\Phi}} \ln p(x_d|\theta_{d,L}, \tilde{\Phi}) \tag{15}$$
where $p(x_d|\theta_d, \tilde{\Phi})$ assumes the same form as (9), except that $\Phi$ is replaced by $\tilde{\Phi}$. The expectation is evaluated with respect to the posterior probability $p(\theta_d|w_{d,1:N}, \tilde{\Phi}, \alpha)$ and is sampled by the MAP estimate of $\theta_d$ in step (a); $\theta_{d,L}$ is an approximation of $\hat{\theta}_{d|w_{d,1:N}}$ computed via (12) and Figure 2.
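The inner gradient in (15) follows directly from (9): since $\ln p(x_d|\theta_d, \tilde{\Phi}) = \sum_v x_{d,v} \ln(\tilde{\Phi}\theta_d)_v$, we have $\partial/\partial\tilde{\Phi}_{vk} = x_{d,v}\,\theta_{d,k}/(\tilde{\Phi}\theta_d)_v$. A small sketch of this computation follows; the helper name and NumPy usage are our own assumptions.

```python
import numpy as np

def grad_ln_px_wrt_Phi(x, Phi, theta):
    """Gradient of ln p(x_d | theta_d, Phi) from Eq. (9):
    d/dPhi_{vk} = x_v * theta_k / (Phi theta)_v, nonzero only where x_v > 0."""
    denom = Phi @ theta                       # (Phi theta)_v for each word v
    grad = np.zeros_like(Phi)
    nz = x > 0
    grad[nz, :] = (x[nz] / denom[nz])[:, None] * theta[None, :]
    return grad
```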
5  Experiments

5.1  Description of Datasets and Baselines
We evaluated our proposed supervised learning (denoted BP-sLDA) and unsupervised learning (denoted BP-LDA) methods on three real-world datasets. The first dataset is a large-scale dataset built on Amazon movie reviews (AMR) [16]. The dataset consists of 7.9 million movie reviews (1.48 billion words) from Amazon, written by 889,176 users, on a total of 253,059 movies. For text preprocessing, we removed punctuation and lowercased capital letters. A vocabulary of size 5,000 was built by selecting the most frequent words. (In another setup, we keep the full vocabulary of 701K words.) As in [24], we shifted the review scores so that they have zero mean. The task is formulated as a regression problem, where we seek to predict the rating score using the text of the review. Second, we consider a multi-domain sentiment (MultiSent) classification task [6], which contains a total of 342,104 reviews on 25 types of products, such as apparel, electronics, kitchen and housewares. The task is formulated as a binary classification problem: predict the polarity (positive or negative) of each review. Likewise, we preprocessed the text by removing punctuation and lowercasing capital letters, and built a vocabulary of size 1,000 from the most frequent words. In addition, we also conducted a second binary text classification experiment on a large-scale proprietary dataset for business-centric applications (1.2M documents and a vocabulary size of 128K).
The baseline algorithms we considered include Gibbs sampling (Gibbs-LDA) [17], logistic/linear regression on bag-of-words, supervised LDA (sLDA) [4], and MedLDA [26], which are implemented in either C++ or Java; our proposed algorithms are implemented in C#.² For BP-LDA and Gibbs-LDA, we first train the models in an unsupervised manner, and then generate the per-document topic proportions $\theta_d$ as features in the inference step, on top of which we train a linear (logistic) regression model for the regression (classification) tasks.
5.2  Prediction Performance
We first evaluate the prediction performance of our models and compare them with traditional (supervised) topic models. Since training the baseline topic models takes much longer than BP-sLDA and BP-LDA (see Figure 5), we compare their performance on two smaller datasets, namely a subset (79K documents) of AMR (randomly sampled from the 7.9 million reviews) and the MultiSent dataset (342K documents), all evaluated with 5-fold cross validation. For AMR
regression, we use the predictive $R^2$ to measure the prediction performance, defined as $\mathrm{pR}^2 = 1 - \big(\sum_d (y_d^o - \hat{y}_d^o)^2\big)\big/\big(\sum_d (y_d^o - \bar{y}^o)^2\big)$, where $y_d^o$ denotes the label of the $d$-th document in the heldout (out-of-fold) set during the 5-fold cross validation, $\bar{y}^o$ is the mean of all $y_d^o$ in the heldout set, and $\hat{y}_d^o$ is the predicted value. The $\mathrm{pR}^2$ scores of different models with varying numbers of topics
are shown in Figure 3(a). Note that the BP-sLDA model outperforms the other baselines by a large margin. Moreover, the unsupervised BP-LDA model outperforms the unsupervised LDA model trained by Gibbs sampling (Gibbs-LDA). Second, on the MultiSent binary classification task, we use the area under the operating curve (AUC) of the probability of correct positives versus the probability of false positives as our performance metric, shown in Figure 3(b). It again shows that BP-sLDA outperforms the other methods and that BP-LDA outperforms the Gibbs-LDA model.
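For reference, the $\mathrm{pR}^2$ metric defined above is straightforward to compute on a heldout fold. This helper is our own illustration, not part of the paper's tooling.

```python
import numpy as np

def predictive_r2(y_true, y_pred):
    """pR^2 = 1 - sum_d (y_d - yhat_d)^2 / sum_d (y_d - ybar)^2 on heldout data."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```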
Next, we compare our BP-sLDA model with other strong discriminative models (such as neural networks) by conducting two large-scale experiments: (i) a regression task on the full AMR dataset (7.9M documents) and (ii) a binary classification task on the proprietary business-centric dataset (1.2M documents). For the large-scale AMR regression, we can see that $\mathrm{pR}^2$ improves significantly compared
² A third-party implementation is available online at https://github.com/jvking/bp-lda.
[Figure 3: Prediction performance on the AMR regression task (measured in $\mathrm{pR}^2$) and the MultiSent classification task (measured in AUC), as a function of the number of topics, for BP-sLDA, linear/logistic regression, MedLDA, sLDA, BP-LDA, and Gibbs-LDA. Panels: (a) AMR regression task (79K); (b) MultiSent classification task; (c) MultiSent task (zoomed in). Higher scores are better for both metrics, with a perfect value of one.]
Table 1: $\mathrm{pR}^2$ (in percentage) on the full AMR data (7.9M documents). The standard deviations in parentheses are obtained from 5-fold cross validation.

| Number of topics              | 5          | 10         | 20         | 50         | 100        | 200        |
| Linear Regression (voc5K)     | 38.4 (0.1), independent of the number of topics                        |
| Neural Network (voc5K)        | 59.0 (0.1) | 61.4 (0.1) | 62.3 (0.4) | 63.5 (0.7) | 63.1 (0.8) | 63.5 (0.4) |
| BP-sLDA (α = 1.001, voc5K)    | 54.7 (0.1) | 61.0 (0.1) | 69.1 (0.2) | 74.7 (0.3) | 74.3 (2.4) | 78.3 (1.1) |
| BP-sLDA (α = 0.5, voc5K)      | 53.3 (2.8) | 65.3 (0.3) | 57.0 (0.2) | 61.3 (0.3) | 67.1 (0.1) | 74.5 (0.2) |
| BP-sLDA (α = 0.1, voc5K)      | 54.5 (1.2) | 56.1 (0.1) | 58.4 (0.1) | 64.1 (0.1) | 70.6 (0.3) | 75.7 (0.2) |
| Linear Regression (voc701K)   | 41.5 (0.2), independent of the number of topics                        |
| BP-sLDA (α = 1.001, voc701K)  | 69.8 (0.2) | 74.3 (0.3) | 78.5 (0.2) | 83.6 (0.6) | 80.1 (0.9) | 84.7 (2.8) |
to the best results on the 79K dataset shown in Figure 3(a), and also significantly outperforms the neural network models with the same number of model parameters. Moreover, the best deep neural network (200 × 200 hidden layers) gives a $\mathrm{pR}^2$ of 76.2% (±0.6%), which is worse than the 78.3% of BP-sLDA. In addition, BP-sLDA also significantly outperforms Gibbs-sLDA [27], Spectral-sLDA [24], and the Hybrid method (Gibbs-sLDA initialized with Spectral-sLDA) [24], whose $\mathrm{pR}^2$ scores (reported in [24]) are between 10% and 20% for 5 to 10 topics (and deteriorate when the topic number is increased further). The results therein were obtained under the same setting as this paper. To further demonstrate the superior performance of BP-sLDA in the large-vocabulary scenario, we trained BP-sLDA on the full-vocabulary (701K) AMR data and show the results in Table 1, which are even better than in the 5K-vocabulary case. Finally, for the binary text classification task on the proprietary dataset, the AUCs are given in Table 2, where BP-sLDA (200 topics) achieves 31% and 18% relative improvements over logistic regression and the neural network, respectively. Moreover, on this task, BP-sLDA is also on par with the best DNN (a larger model consisting of 200 × 200 hidden units with dropout), which achieves an AUC of 93.60.
5.3  Analysis and Discussion
We now analyze the influence of different hyperparameters on the prediction performance. Note from Figure 3(a) that, when we increase the number of topics, the $\mathrm{pR}^2$ score of BP-sLDA first improves and then slightly deteriorates after it goes beyond 20 topics. This is most likely caused by overfitting on the small dataset (79K documents), because the BP-sLDA models trained on the full 7.9M dataset produce much higher $\mathrm{pR}^2$ scores (Table 1) than those on the 79K dataset and keep improving as the model size (number of topics) increases. To understand the influence of the mirror descent steps on the prediction performance, we plot in Figure 4(a) the $\mathrm{pR}^2$ scores of BP-sLDA on the 7.9M AMR dataset for different numbers of mirror-descent steps $L$. When $L$ increases, for small models ($K = 5$ and $K = 20$) the $\mathrm{pR}^2$ score remains the same, and for a larger model ($K = 100$) the $\mathrm{pR}^2$ score first improves and then remains the same. One explanation for this phenomenon is that larger $K$ implies that the inference problem (10) becomes an optimization problem of higher dimension, which requires more mirror descent iterations. Moreover, the mirror-descent back propagation, as an end-to-end training of the prediction output, compensates for the imperfection caused by the limited number of inference steps, which makes the performance insensitive to $L$ once it is large enough. In Figure 4(b), we plot the percentage of the dominant topics (those which together add up to 90% of the probability mass) on AMR, which shows that BP-sLDA learns sparse topic distributions even when $\alpha = 1.001$ and obtains sparser topic distributions with smaller $\alpha$ (i.e., 0.5 and 0.1).
Table 2: AUC (in percentage) on the business-centric proprietary data (1.2M documents, 128K vocabulary). The standard deviations in parentheses are obtained from five random initializations.

| Number of topics     | 5            | 10           | 20           | 50           | 100          | 200          |
| Logistic Regression  | 90.56 (0.00), independent of the number of topics                                        |
| Neural Network       | 90.95 (0.07) | 91.25 (0.05) | 91.32 (0.23) | 91.54 (0.11) | 91.90 (0.05) | 91.98 (0.05) |
| BP-sLDA              | 92.02 (0.02) | 92.21 (0.03) | 92.35 (0.07) | 92.58 (0.03) | 92.82 (0.07) | 93.50 (0.06) |
[Figure 4: Analysis of the behaviors of the BP-sLDA and BP-LDA models. Panels: (a) influence of the number of MDA iterations (layers) $L$ on $\mathrm{pR}^2$ for 5, 20, and 100 topics; (b) sparsity of the topic distribution, shown as the percentage of dominant topics versus the number of topics for BP-sLDA, BP-LDA, and Gibbs-LDA with $\alpha \in \{1.001, 0.5, 0.1\}$; (c) negative per-word log-likelihoods versus the number of topics for BP-LDA and Gibbs-LDA.]
In Figure 4(c), we evaluate the per-word log-likelihoods of the unsupervised models on the AMR dataset using the method in [23]. The per-word log-likelihood of BP-LDA with $\alpha = 1.001$ is worse than that of the $\alpha = 0.5$ and $\alpha = 0.1$ cases and of Gibbs-LDA, although its prediction performance is better. This suggests the importance of the Dirichlet prior in text modeling [1, 22] and a potential tradeoff between text modeling performance and prediction performance.
5.4  Efficiency in Computation Time

To compare the efficiency of the algorithms, we show the training time of the different models on the AMR dataset (79K and 7.9M) in Figure 5, which shows that our algorithm scales well with respect to increasing model size (number of topics) and increasing number of data samples.

[Figure 5: Training time (in hours, log scale) on the AMR dataset versus the number of topics, for sLDA (79K), BP-sLDA (79K), MedLDA (79K), and BP-sLDA (7.9M). Tested on an Intel Xeon E5-2680 2.80 GHz.]

6  Conclusion

We have developed novel learning approaches for supervised LDA models, using MAP inference and mirror-descent back propagation, which leads to an end-to-end discriminative training. We evaluate the prediction performance of the model on three real-world regression and classification tasks. The results show that the discriminative training significantly improves the performance of the supervised LDA model relative to previous learning methods.
Future work includes (i) exploring faster algorithms for the MAP inference (e.g., accelerated mirror descent), (ii) developing semi-supervised learning of LDA using the framework from [3], and (iii) learning $\alpha$ from data. Finally, also note that the layered architecture in Figure 2 can be viewed as a deep feedforward neural network [11] with structure designed from the topic model in Figure 1. This opens up a new direction of combining the strengths of both generative models and neural networks to develop new deep learning models that are scalable, interpretable, and have high prediction performance for text understanding and information retrieval [13].
References
[1] A. Asuncion, M. Welling, P. Smyth, and Y. W. Teh. On smoothing and inference for topic models. In Proc. UAI, pages 27-34, 2009.
[2] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167-175, 2003.
[3] C. M. Bishop and J. Lasserre. Generative or discriminative? Getting the best of both worlds. Bayesian Statistics, 8:3-24, 2007.
[4] D. M. Blei and J. D. McAuliffe. Supervised topic models. In Proc. NIPS, pages 121-128, 2007.
[5] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. JMLR, 3:993-1022, 2003.
[6] J. Blitzer, M. Dredze, and F. Pereira. Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proc. ACL, volume 7, pages 440-447, 2007.
[7] G. Bouchard and B. Triggs. The tradeoff between generative and discriminative classifiers. In Proc. COMPSTAT, pages 721-728, 2004.
[8] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159, Jul. 2011.
[9] T. L. Griffiths and M. Steyvers. Finding scientific topics. Proc. of the National Academy of Sciences, pages 5228-5235, 2004.
[10] J. R. Hershey, J. L. Roux, and F. Weninger. Deep unfolding: Model-based inspiration of novel deep architectures. arXiv:1409.2574, 2014.
[11] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Process. Mag., 29(6):82-97, 2012.
[12] A. Holub and P. Perona. A discriminative framework for modelling object classes. In Proc. IEEE CVPR, volume 1, pages 664-671, 2005.
[13] P.-S. Huang, X. He, J. Gao, L. Deng, A. Acero, and L. Heck. Learning deep structured semantic models for web search using clickthrough data. In Proc. CIKM, pages 2333-2338, 2013.
[14] S. Kapadia. Discriminative Training of Hidden Markov Models. PhD thesis, University of Cambridge, 1998.
[15] S. Lacoste-Julien, F. Sha, and M. I. Jordan. DiscLDA: Discriminative learning for dimensionality reduction and classification. In Proc. NIPS, pages 897-904, 2008.
[16] J. J. McAuley and J. Leskovec. From amateurs to connoisseurs: Modeling the evolution of user expertise through online reviews. In Proc. WWW, pages 897-908, 2013.
[17] A. K. McCallum. MALLET: A machine learning for language toolkit. http://mallet.cs.umass.edu, 2002.
[18] A. S. Nemirovsky and D. B. Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley, New York, 1983.
[19] D. Sontag and D. Roy. Complexity of inference in latent Dirichlet allocation. In Proc. NIPS, pages 1008-1016, 2011.
[20] V. Stoyanov, A. Ropson, and J. Eisner. Empirical risk minimization of graphical model parameters given approximate inference, decoding, and model structure. In Proc. AISTATS, pages 725-733, 2011.
[21] P. Tseng. On accelerated proximal gradient methods for convex-concave optimization. SIAM Journal on Optimization, 2008.
[22] H. M. Wallach, D. M. Mimno, and A. McCallum. Rethinking LDA: Why priors matter. In Proc. NIPS, pages 1973-1981, 2009.
[23] H. M. Wallach, I. Murray, R. Salakhutdinov, and D. Mimno. Evaluation methods for topic models. In Proc. ICML, pages 1105-1112, 2009.
[24] Y. Wang and J. Zhu. Spectral methods for supervised topic models. In Proc. NIPS, pages 1511-1519, 2014.
[25] O. Yakhnenko, A. Silvescu, and V. Honavar. Discriminatively trained Markov model for sequence classification. In Proc. IEEE ICDM, 2005.
[26] J. Zhu, A. Ahmed, and E. P. Xing. MedLDA: Maximum margin supervised topic models. JMLR, 13(1):2237-2278, 2012.
[27] J. Zhu, N. Chen, H. Perkins, and B. Zhang. Gibbs max-margin topic models with data augmentation. JMLR, 15(1):1073-1110, 2014.
Particle Gibbs for Infinite Hidden Markov Models
Nilesh Tripuraneni*
University of Cambridge
nt357@cam.ac.uk
Shixiang Gu*
University of Cambridge
MPI for Intelligent Systems
sg717@cam.ac.uk
Hong Ge
University of Cambridge
hg344@cam.ac.uk
Zoubin Ghahramani
University of Cambridge
zoubin@eng.cam.ac.uk
Abstract
Infinite Hidden Markov Models (iHMMs) are an attractive, nonparametric generalization of the classical Hidden Markov Model which can automatically infer the number of hidden states in the system. However, due to the infinite-dimensional nature of the transition dynamics, performing inference in the iHMM is difficult. In this paper, we present an infinite-state Particle Gibbs (PG) algorithm to resample state trajectories for the iHMM. The proposed algorithm uses an efficient proposal optimized for iHMMs and leverages ancestor sampling to improve the mixing of the standard PG algorithm. Our algorithm demonstrates significant convergence improvements on synthetic and real-world data sets.
1  Introduction
Hidden Markov Models (HMMs) are among the most widely adopted latent-variable models used to model time-series datasets in the statistics and machine learning communities. They have also been successfully applied in a variety of domains including genomics, language, and finance, where sequential data naturally arises [Rabiner, 1989; Bishop, 2006].
One possible disadvantage of the finite-state-space HMM framework is that one must specify the number of latent states $K$ a priori. Standard model selection techniques can be applied to the finite-state-space HMM, but they bear a high computational overhead since they require the repeated training and exploration of many HMMs of different sizes.
Bayesian nonparametric methods offer an attractive alternative to this problem by adapting their effective model complexity to fit the data. In particular, Beal et al. [2001] constructed an HMM over a countably infinite state space using a Hierarchical Dirichlet Process (HDP) prior over the rows of the transition matrix. Various approaches have been taken to perform full posterior inference over the latent states, the transition and emission distributions, and the hyperparameters, since it is impossible to directly apply the forward-backward algorithm due to the infinite-dimensional size of the state space. The original Gibbs sampling approach proposed in Teh et al. [2006] suffered from slow mixing due to the strong correlations between nearby time steps often present in time-series data [Scott, 2002]. However, Van Gael et al. [2008] introduced a set of auxiliary slice variables to dynamically "truncate" the state space to be finite (referred to as beam sampling), allowing them to use dynamic programming to jointly resample the latent states, thus circumventing the problem. Despite the power of the beam-sampling scheme, Fox et al. [2008] found that applying the beam sampler to the (sticky) iHMM resulted in slow mixing relative to an inexact, blocked sampler, due to the introduction of the auxiliary slice variables in the sampler.
*equal contribution.
The main contributions of this paper are to derive an infinite-state PG algorithm for the iHMM using the stick-breaking construction for the HDP, and to construct an optimal importance proposal to efficiently resample its latent state trajectories. The proposed algorithm is compared to existing state-of-the-art inference algorithms for iHMMs, and empirical evidence suggests that the infinite-state PG algorithm consistently outperforms its alternatives. Furthermore, by construction the time complexity of the proposed algorithm is $O(TNK)$, where $T$ denotes the length of the sequence, $N$ denotes the number of particles in the PG sampler, and $K$ denotes the number of "active" states in the model. Despite the simplicity of the sampler, we find in a variety of synthetic and real-world experiments that these particle methods dramatically improve convergence of the sampler, while being more scalable.
We first define the iHMM and sticky iHMM in Section 2, and review the Dirichlet Process (DP) and Hierarchical Dirichlet Process (HDP) in the appendix. We then describe our MCMC sampling scheme in Section 3. In Section 4 we present our results on a variety of synthetic and real-world datasets.
2  Model and Notation

2.1  Infinite Hidden Markov Models
We can formally define the iHMM (we review the theory of the HDP in the appendix) as follows:
$$\beta \sim \mathrm{GEM}(\gamma), \qquad \pi_j|\beta \overset{iid}{\sim} \mathrm{DP}(\alpha, \beta), \qquad \phi_j \overset{iid}{\sim} H, \quad j = 1, \dots, \infty \tag{1}$$
$$s_t|s_{t-1} \sim \mathrm{Cat}(\cdot|\pi_{s_{t-1}}), \qquad y_t|s_t \sim f(\cdot|\phi_{s_t}), \quad t = 1, \dots, T.$$
Here $\beta$ is the shared DP measure defined on the integers $\mathbb{Z}$, $s_{1:T} = (s_1, \dots, s_T)$ are the latent states of the iHMM, $y_{1:T} = (y_1, \dots, y_T)$ are the observed data, and $\phi_j$ parametrizes the emission distribution $f$. Usually $H$ and $f$ are chosen to be conjugate to simplify the inference. $\beta_{k'}$ can be interpreted as the prior mean for transition probabilities into state $k'$, with $\alpha$ governing the variability of the prior mean across the rows of the transition matrix. The hyperparameter $\gamma$ controls how concentrated or diffuse the probability mass of $\beta$ will be over the states of the transition matrix. To connect the HDP with the iHMM, note that given a draw $G_k = \sum_{k'=1}^{\infty} \pi_{kk'}\, \delta_{\phi_{k'}}$ from the HDP, we identify $\pi_{kk'}$ with the transition probability from state $k$ to state $k'$, where the $\phi_{k'}$ parametrize the emission distributions.
Note that fixing $\beta = (\frac{1}{K}, \dots, \frac{1}{K}, 0, 0, \dots)$ implies that only transitions between the first $K$ states of the transition matrix are ever possible, leaving us with the finite Bayesian HMM. If we define a finite,
hierarchical Bayesian HMM by drawing
$$\beta \sim \mathrm{Dir}(\gamma/K, \dots, \gamma/K), \qquad \pi_k \sim \mathrm{Dir}(\alpha\beta) \tag{2}$$
with joint density over the latent/hidden states
$$p_{\pi}(s_{1:T}, y_{1:T}) = \prod_{t=1}^{T} \pi(s_t|s_{t-1})\, f_{\phi}(y_t|s_t)$$
then after taking $K \to \infty$, the hierarchical prior in Equation (2) approaches the HDP.
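For intuition, the following sketch samples from a truncated version of the generative process in (1), approximating $\pi_j \sim \mathrm{DP}(\alpha, \beta)$ by $\mathrm{Dir}(\alpha\beta)$ on a finite truncation. The Gaussian emission parameters mirror the synthetic experiments of Section 4; the function itself is our own illustration.

```python
import numpy as np

def sample_ihmm_truncated(T, K_trunc, gamma, alpha, seed=0):
    """Approximate draw from the iHMM of Eq. (1) with Gaussian emissions,
    using a K_trunc-state truncation of the stick-breaking construction."""
    rng = np.random.default_rng(seed)
    sticks = rng.beta(1.0, gamma, size=K_trunc)          # GEM(gamma) sticks
    beta = sticks * np.concatenate(([1.0], np.cumprod(1.0 - sticks)[:-1]))
    beta = beta / beta.sum()                             # renormalize the truncation
    pi = rng.dirichlet(alpha * beta, size=K_trunc)       # rows pi_j ~ Dir(alpha * beta)
    phi = rng.normal(0.0, 2.0, size=K_trunc)             # phi_j ~ H = N(0, 2^2)
    s = np.empty(T, dtype=int)
    y = np.empty(T)
    s[0] = rng.choice(K_trunc, p=beta)
    y[0] = rng.normal(phi[s[0]], 0.5)
    for t in range(1, T):
        s[t] = rng.choice(K_trunc, p=pi[s[t - 1]])       # latent transition
        y[t] = rng.normal(phi[s[t]], 0.5)                # emission
    return s, y
```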
[Figure 1: Graphical model for the sticky HDP-HMM (setting $\kappa = 0$ recovers the HDP-HMM).]
2.2  Prior and Emission Distribution Specification
The hyperparameter $\alpha$ governs the variability of the prior mean across the rows of the transition matrix, and $\gamma$ controls how concentrated or diffuse the probability mass of $\beta$ will be over the states of the transition matrix. However, in the HDP-HMM each row of the transition matrix is drawn as $\pi_j \sim \mathrm{DP}(\alpha, \beta)$; thus the HDP prior does not differentiate self-transitions from jumps between different states. This can be especially problematic in the non-parametric setting, since non-Markovian state persistence in the data can lead to the creation of unnecessary extra states and unrealistically rapid switching dynamics in our model. In Fox et al. [2008], this problem is addressed by including a self-transition bias parameter in the distribution of the transition probability vector $\pi_j$:
$$\pi_j \sim \mathrm{DP}\Big(\alpha + \kappa,\; \frac{\alpha\beta + \kappa\delta_j}{\alpha + \kappa}\Big) \tag{3}$$
to incorporate the prior belief that smooth, state-persistent dynamics are more probable. Such a construction only involves the introduction of one further hyperparameter $\kappa$, which controls the "stickiness" of the transition matrix (a similar self-transition bias was explored in Beal et al. [2001]).
For the standard iHMM, most approaches to inference have placed vague gamma hyperpriors on the hyperparameters $\gamma$ and $\alpha$, which can be resampled efficiently as in Teh et al. [2006]. Similarly, in the sticky iHMM, in order to maintain tractable resampling of the hyperparameters, Fox et al. [2008] chose to place vague gamma priors on $\gamma$ and $\alpha + \kappa$, and a beta prior on $\kappa/(\alpha + \kappa)$. In this work we follow Teh et al. [2006]; Fox et al. [2008] and place priors $\gamma \sim \mathrm{Gamma}(a_\gamma, b_\gamma)$, $\alpha + \kappa \sim \mathrm{Gamma}(a_s, b_s)$, and $\kappa/(\alpha + \kappa) \sim \mathrm{Beta}(a_\rho, b_\rho)$ on the hyperparameters.
We consider two conjugate emission models for the output states of the iHMM: a multinomial emission distribution for discrete data, and a normal emission distribution for continuous data. For discrete data we choose $\phi_k \sim \mathrm{Dir}(\alpha_\phi)$ with $f(\cdot \mid \phi_{s_t}) = \mathrm{Cat}(\cdot|\phi_k)$. For continuous data we choose $\phi_k = (\mu, \sigma^2) \sim \mathrm{NIG}(\mu_0, \lambda, \alpha_\sigma, \beta_\sigma)$ with $f(\cdot \mid \phi_{s_t}) = \mathcal{N}(\cdot|\phi_k = (\mu, \sigma^2))$.
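Under the normal-NIG pair, the marginal likelihood of a single observation under a brand-new state (needed for the optimal proposal of Section 3.2) is a Student-t density. The following helper sketches this standard conjugacy result; the parameter naming is our own.

```python
import numpy as np
from scipy import stats

def marginal_likelihood_new_state(y, mu0, lam, a, b):
    """Marginal p(y | s = new) for N(y | mu, sigma^2) with
    (mu, sigma^2) ~ NIG(mu0, lam, a, b): a Student-t with 2a degrees of freedom."""
    scale = np.sqrt(b * (lam + 1.0) / (a * lam))
    return stats.t.pdf(y, df=2.0 * a, loc=mu0, scale=scale)
```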
3  Posterior Inference for the iHMM

Let us first recall the collection of variables we need to sample: $\beta$ is the shared DP base measure, $(\pi_k)$ is the transition matrix acting on the latent states, and $\phi_k$ parametrizes the emission distribution $f$, $k = 1, \dots, K$. We can then resample the variables of the iHMM in a series of Gibbs steps:
Step 1: Sample $s_{1:T} \mid y_{1:T}, \phi_{1:K}, \beta, \pi_{1:K}$.
Step 2: Sample $\beta \mid s_{1:T}, \gamma$.
Step 3: Sample $\pi_{1:K} \mid \beta, \alpha, \kappa, s_{1:T}$.
Step 4: Sample $\phi_{1:K} \mid y_{1:T}, s_{1:T}, H$.
Step 5: Sample $(\alpha, \gamma, \kappa) \mid s_{1:T}, \beta, \pi_{1:K}$.
Due to the strongly correlated nature of time-series data, resampling the latent hidden states in Step 1 is often the most difficult step, since the other variables can be sampled via Gibbs steps once a sample of $s_{1:T}$ has been obtained. In the following section, we describe a novel, efficient sampler for the latent states $s_{1:T}$ of the iHMM, and refer the reader to the appendix and to Teh et al. [2006]; Fox et al. [2008] for a detailed discussion of the steps for sampling the variables $\beta, \alpha, \gamma, \kappa, \pi_{1:K}, \phi_{1:K}$.
3.1  Infinite-State Particle Gibbs Sampler
Within the Particle MCMC framework of Andrieu et al. [2010], Sequential Monte Carlo (or particle filtering) is used as a complex, high-dimensional proposal for the Metropolis-Hastings algorithm. The Particle Gibbs sampler is a conditional SMC algorithm resulting from clamping one particle to an a priori fixed trajectory. In particular, it is a transition kernel that has $p(s_{1:T}|y_{1:T})$ as its stationary distribution.
The key to constructing a generic, truncation-free sampler for the iHMM to resample the latent states $s_{1:T}$ is to note that the finite number of particles in the sampler are "localized" in the latent space to a finite subset of the infinite set of possible states. Moreover, they can only transition to finitely many new states as they are propagated through the forward pass. Thus the "infinite" measure $\beta$ and the "infinite" transition matrix $\pi$ only need to be instantiated to support the number of "active" states (defined as $\{1, \dots, K\}$) in the state space. In the particle Gibbs algorithm, if a particle transitions to a state outside the active set, the objects $\beta$ and $\pi$ can be lazily expanded via the stick-breaking constructions derived for both objects in Teh et al. [2006] and stated in equations (2), (4) and (5). Thus, due to the properties of both the stick-breaking construction and the PGAS kernel, this resampling procedure will leave the target distribution $p(s_{1:T}|y_{1:T})$ invariant. Below we first describe our infinite-state particle Gibbs algorithm for the iHMM and then detail our notation (we provide further background on SMC in our supplement):
Step 1: For iteration $t = 1$, initialize as:
  (a) sample $s_1^i \sim q_1(\cdot)$ for $i \in 1, \dots, N$.
  (b) initialize the weights $w_1^i = p(s_1^i)\, f(y_1|s_1^i)\,/\,q_1(s_1^i)$ for $i \in 1, \dots, N$.
Step 2: For iteration $t > 1$, use the reference trajectory $s'_{1:T}$ from iteration $t-1$, together with $\beta$, $\pi$, $\phi$, and $K$:
  (a) sample the index $a_{t-1}^i \sim \mathrm{Cat}(\cdot|W_{t-1}^{1:N})$ of the ancestor of particle $i$, for $i \in 1, \dots, N-1$.
  (b) sample $s_t^i \sim q_t(\cdot \mid s_{t-1}^{a_{t-1}^i})$ for $i \in 1, \dots, N-1$. If $s_t^i = K + 1$, then create a new state using the stick-breaking construction for the HDP:
    (i) Sample a new transition probability vector $\pi_{K+1} \sim \mathrm{Dir}(\alpha\beta)$.
    (ii) Use the stick-breaking construction to iteratively expand $\beta \leftarrow [\beta, \beta_{K+1}]$ as
$$\beta'_{K+1} \overset{iid}{\sim} \mathrm{Beta}(1, \gamma), \qquad \beta_{K+1} = \beta'_{K+1} \textstyle\prod_{\ell=1}^{K} (1 - \beta'_{\ell}).$$
    (iii) Expand the transition probability vectors $\pi_k$, $k = 1, \dots, K+1$, to include transitions to the $(K+1)$-st state via the HDP stick-breaking construction as
$$\pi_j \leftarrow [\pi_{j1}, \pi_{j2}, \dots, \pi_{j,K+1}], \qquad \forall j = 1, \dots, K+1,$$
where
$$\pi'_{j,K+1} \sim \mathrm{Beta}\Big(\alpha_0\, \beta_{K+1},\; \alpha_0 \big(1 - \textstyle\sum_{\ell=1}^{K+1} \beta_\ell\big)\Big), \qquad \pi_{j,K+1} = \pi'_{j,K+1} \textstyle\prod_{\ell=1}^{K} (1 - \pi'_{j\ell}).$$
    (iv) Sample a new emission parameter $\phi_{K+1} \sim H$.
  (c) compute the ancestor weights $\tilde{w}^i_{t-1|T} = w^i_{t-1}\, \pi(s'_t|s^i_{t-1})$ and resample $a_t^N$ as $\mathbb{P}(a_t^N = i) \propto \tilde{w}^i_{t-1|T}$.
  (d) recompute and normalize the particle weights using
$$w_t(s_t^i) = \pi(s_t^i \mid s_{t-1}^{a_{t-1}^i})\, f(y_t \mid s_t^i)\,/\,q_t(s_t^i \mid s_{t-1}^{a_{t-1}^i}), \qquad W_t(s_t^i) = w_t(s_t^i) \Big/ \sum_{i=1}^{N} w_t(s_t^i).$$
Step 3: Sample $k$ with $\mathbb{P}(k = i) \propto w_T^i$ and return $s^{\star}_{1:T} = s^k_{1:T}$.
In the particle Gibbs sampler, at each step $t$ a weighted particle system $\{s_t^i, w_t^i\}_{i=1}^{N}$ serves as an empirical point-mass approximation to the distribution $p(s_{1:T})$, with the variables $a_t^i$ denoting the "ancestor" particles of $s_t^i$. Here we have used $\pi(s_t|s_{t-1})$ to denote the latent transition distribution, $f(y_t|s_t)$ the emission distribution, and $p(s_1)$ the prior over the initial state $s_1$.
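A minimal sketch of the lazy expansion in Step 2(b) follows, under the common bookkeeping where the last entry of $\beta$, and of each row of $\pi$, stores the remaining stick mass; the exact data layout and function name are our own assumptions.

```python
import numpy as np

def expand_new_state(beta, pi, phi, alpha, gamma, sample_H, rng):
    """Stick-break a new state K+1. beta: length K+1 with beta[-1] the leftover
    mass; pi: (K, K+1) with each row ending in leftover mass; phi: length K."""
    b = rng.beta(1.0, gamma)                        # break the remaining beta stick
    beta_new, beta_rest = b * beta[-1], (1.0 - b) * beta[-1]
    beta = np.concatenate([beta[:-1], [beta_new, beta_rest]])
    # split each row's leftover transition mass into (new state, new leftover)
    u = rng.beta(alpha * beta_new, alpha * beta_rest, size=pi.shape[0])
    new_col = (u * pi[:, -1])[:, None]
    rest_col = ((1.0 - u) * pi[:, -1])[:, None]
    pi = np.hstack([pi[:, :-1], new_col, rest_col])
    pi = np.vstack([pi, rng.dirichlet(alpha * beta)])  # pi_{K+1} ~ Dir(alpha * beta)
    phi = np.append(phi, sample_H(rng))                # fresh emission parameter
    return beta, pi, phi
```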
3.2  More Efficient Importance Proposal $q_t(\cdot)$
In the PG algorithm described above, we have a choice of the importance sampling density $q_t(\cdot)$ to use at every time step. The simplest choice is to sample from the "prior", $q_t(\cdot|s_{t-1}^{a_{t-1}^i}) = \pi(s_t^i|s_{t-1}^{a_{t-1}^i})$, which can lead to satisfactory performance when the observations are not too informative and the dimension of the latent variables is not too large. However, using the prior as the importance proposal in particle MCMC is known to be suboptimal. In order to improve the mixing rate of the sampler, it is desirable to sample from the partial "posterior", $q_t(\cdot \mid s_{t-1}^{a_{t-1}^i}) \propto \pi(s_t^i|s_{t-1}^{a_{t-1}^i})\, f(y_t|s_t^i)$, whenever possible.
In general, sampling from the "posterior", $q_t(\cdot \mid s_{t-1}^{a_{t-1}^n}) \propto \pi(s_t^n|s_{t-1}^{a_{t-1}^n})\, f(y_t|s_t^n)$, may be impossible, but in the iHMM we can show that it is analytically tractable. To see this, note that we have lazily represented $\pi(\cdot|s_{t-1}^n)$ as a finite vector $[\pi_{s_{t-1}^n, 1:K}, \pi_{s_{t-1}^n, K+1}]$. Moreover, we can easily evaluate the likelihood $f(y_t^n|s_t^n, \phi_{1:K})$ for all $s_t^n \in 1, \dots, K$. However, if $s_t^n = K + 1$, we need to compute $f(y_t^n|s_t^n = K+1) = \int f(y_t^n|s_t^n = K+1, \phi)\, H(\phi)\, d\phi$. If $f$ and $H$ are conjugate, we can analytically compute the marginal likelihood of the $(K+1)$-st state, but this can also be approximated by Monte Carlo sampling for non-conjugate likelihoods; see Neal [2000] for a more detailed discussion of this argument. Thus, we can compute $p(y_t|s_{t-1}^n) = \sum_{k=1}^{K+1} \pi(k \mid s_{t-1}^n)\, f(y_t \mid \phi_k)$ for each particle $s_t^n$, where $n \in 1, \dots, N-1$.
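A sketch of the resulting "posterior" proposal for one particle with Gaussian emissions follows; `marg_lik_new` stands for the marginal likelihood of the $(K+1)$-st state discussed above, and all names here are our own.

```python
import numpy as np

def posterior_proposal(s_prev, y_t, pi, phi, sigma, marg_lik_new):
    """q_t(k) proportional to pi(k | s_prev) * f(y_t | phi_k) over k = 1..K+1,
    where the last entry of pi[s_prev] is the mass on a brand-new state."""
    K = len(phi)
    lik = np.exp(-0.5 * ((y_t - phi) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)
    probs = np.append(pi[s_prev, :K] * lik, pi[s_prev, K] * marg_lik_new)
    return probs / probs.sum()
```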
We investigate the impact of "posterior" vs. "prior" proposals in Figure 5. Based on the convergence of the number of states and of the joint log-likelihood, we can see that sampling from the "posterior" improves the mixing of the sampler. Indeed, we see from the "prior" sampling experiments that increasing the number of particles from $N = 10$ to $N = 50$ does seem to marginally improve the mixing of the sampler, but we have found $N = 10$ particles sufficient to obtain good results. However, we found no appreciable gain when increasing the number of particles from $N = 10$ to $N = 50$ when sampling from the "posterior", and omitted those curves for clarity. It is worth noting that the PG sampler (with ancestor resampling) still performs reasonably even when sampling from the "prior".
3.3  Improving Mixing via Ancestor Resampling
It has been recognized that the mixing properties of the PG kernel can be poor due to path degeneracy [Lindsten et al., 2014]. A variant of PG presented in Lindsten et al. [2014] attempts to address this problem for any non-Markovian state-space model with a modification: resample a new value for the variable $a_t^N$ in an "ancestor sampling" step at every time step, which can significantly improve the mixing of the PG kernel with little extra computation in the case of Markovian systems.
To understand ancestor sampling, for $t \ge 2$ consider the reference trajectory $s'_{t:T}$ ranging from the current time step $t$ to the final time $T$. Now, artificially assign a candidate history to this partial path by connecting $s'_{t:T}$ to one of the other particles' histories up until that point, $\{s_{1:t-1}^i\}_{i=1}^{N}$, which can be achieved by simply assigning a new value to the variable $a_t^N \in 1, \dots, N$. To do this, we first compute the weights:
$$\tilde{w}^i_{t-1|T} \propto w^i_{t-1}\, \frac{p_T(s_{1:t-1}^i, s'_{t:T}|y_{1:T})}{p_{t-1}(s_{1:t-1}^i|y_{1:T})}, \qquad i = 1, \dots, N \tag{4}$$
Then $a_t^N$ is sampled according to $\mathbb{P}(a_t^N = i) \propto \tilde{w}^i_{t-1|T}$. Remarkably, this ancestor sampling step leaves the density $p(s_{1:T} \mid y_{1:T})$ invariant, as shown in Lindsten et al. [2014] for arbitrary non-Markovian state-space models. However, since the infinite HMM is Markovian, we can show that the computation of the ancestor sampling weights simplifies to
$$\tilde{w}^i_{t-1|T} = w^i_{t-1}\, \pi(s'_t|s^i_{t-1}) \tag{5}$$
Note that the ancestor sampling step does not change the $O(TNK)$ time complexity of the infinite-state PG sampler.
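The Markovian simplification (5) makes the ancestor-sampling step a one-liner; a sketch with our own naming:

```python
import numpy as np

def ancestor_sample(w_prev, pi, s_prev, s_ref_t, rng):
    """Resample a_t^N per Eq. (5): weights w_{t-1}^i * pi(s'_t | s_{t-1}^i).
    s_prev: array of all particle states at t-1; s_ref_t: reference state s'_t."""
    w_as = w_prev * pi[s_prev, s_ref_t]
    return rng.choice(len(w_prev), p=w_as / w_as.sum())
```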
3.4  Resampling $\beta$, $\pi$, $\phi$, $\alpha$, $\gamma$, and $\kappa$

Our resampling scheme for $\beta$, $\pi$, $\phi$, $\alpha$, $\gamma$, and $\kappa$ follows directly from the schemes in Fox et al. [2008]; Teh et al. [2006]. We present a review of their methods and related work in the appendix for completeness.
4  Empirical Study
In the following experiments we explore the performance of the PG sampler on both the iHMM and the sticky iHMM. Note that throughout this section we have only taken $N = 10$ and $N = 50$ particles for the PG sampler, which has time complexity $O(TNK)$ when sampling from the "posterior", compared to the $O(TK^2)$ time complexity of the beam sampler. For completeness, we also compare to the Gibbs sampler, which has been shown to perform worse than the beam sampler [Van Gael et al., 2008] due to strong correlations in the latent states.
4.1  Convergence on Synthetic Data
To study the mixing properties of the PG sampler on the iHMM and sticky iHMM, we consider two synthetic examples with strongly positively correlated latent states. First, as in Van Gael et al.
[Figure 2: Comparing the performance of the PG sampler, the PG sampler on the sticky iHMM (PG-S), the beam sampler, and the Gibbs sampler on inferring data from a 4-state strongly correlated HMM. Left: number of "active" states $K$ vs. iterations. Right: joint log-likelihood vs. iterations. (Best viewed in color.)]

[Figure 3: Learned latent transition matrices for the PG sampler and the beam sampler vs. ground truth (the transition matrix for the Gibbs sampler is omitted for clarity). PG correctly recovers the strongly correlated self-transition matrix, while the beam sampler supports extra "spurious" states in the latent space.]
[2008], we generate sequences of length 4000 from a 4-state HMM with self-transition probability 0.75 and the residual probability mass distributed uniformly over the remaining states, where the emission distributions are taken to be normal with fixed standard deviation 0.5 and emission means of $-2.0, -0.5, 1.0, 4.0$ for the 4 states. The base distribution $H$ for the iHMM is taken to be normal with mean 0 and standard deviation 2, and we initialized the sampler with $K = 10$ "active" states.
In the 4-state case, we see in Figure 2 that the PG sampler applied to both the iHMM and the sticky iHMM converges to the "true" value of $K = 4$ much more quickly than both the beam sampler and the Gibbs sampler, uncovering the model dimensionality and the structure of the transition matrix by more rapidly eliminating spurious "active" states from the space, as evidenced by the learned transition matrix plots in Figure 3. Moreover, as evidenced by the joint log-likelihood in Figure 2, we see that the PG sampler applied to both the iHMM and the sticky iHMM converges quickly to a good mode, while the beam sampler has not fully converged within 1000 iterations and the Gibbs sampler is performing poorly.
To further explore the mixing of the PG sampler vs. the beam sampler, we consider a similar inference problem on synthetic data over a larger state space. We generate sequences of length 4000 from a 10-state HMM with self-transition probability 0.75 and the residual probability mass distributed uniformly over the remaining states, and take the emission distributions to be normal with fixed standard deviation 0.5 and means equally spaced 2.0 apart between $-10$ and $10$. The base distribution $H$ for the iHMM is also taken to be normal with mean 0 and standard deviation 2. The samplers were initialized with $K = 3$ and $K = 30$ states to explore the convergence and robustness of the infinite-state PG sampler vs. the beam sampler.
[Figure 4: Comparing the performance of the PG sampler vs. the beam sampler on inferring data from a 10-state strongly correlated HMM with different initializations. Left: number of "active" states $K$ from different initial $K$ vs. iterations. Right: joint log-likelihood from different initial $K$ vs. iterations.]

[Figure 5: Influence of the "posterior" vs. "prior" proposal and of the number of particles in the PG sampler on the iHMM. Left: number of "active" states $K$ for different initial $K$, numbers of particles, and "prior"/"posterior" proposals vs. iterations. Right: joint log-likelihood for different initial $K$, numbers of particles, and "prior"/"posterior" proposals vs. iterations.]
As observed in Figure 4, the PG sampler applied to the iHMM and sticky iHMM converges far more quickly, from both the "small" and "large" initializations of $K = 3$ and $K = 30$ "active" states, to the true value of $K = 10$ hidden states, as well as converging more quickly in joint log-likelihood. Indeed, as noted in Fox et al. [2008], the introduction of the extra slice variables in the beam sampler can inhibit its mixing: for the beam sampler to consider transitions with low prior probability, one must also have sampled an unlikely corresponding slice variable so as not to have truncated that state out of the space. This can become particularly problematic if one needs to consider several such transitions in succession. We believe this provides evidence that the infinite-state Particle Gibbs sampler presented here, which does not introduce extra slice variables, mixes better than beam sampling in the iHMM.
4.2  Ion Channel Recordings
For our first real dataset, we investigate the behavior of the PG sampler and the beam sampler on an ion channel recording. In particular, we consider a 1 MHz recording from Rosenstein et al. [2013] of a single alamethicin channel, previously investigated in Palla et al. [2014]. We subsample the time series by a factor of 100, truncate it to length 2000, and further log-transform and normalize it.
We ran both the beam and PG samplers on the iHMM for 1000 iterations (until we observed convergence in the joint log-likelihood). Due to the large fluctuations in the observed time series, the beam sampler infers the number of "active" hidden states to be $K = 5$, while the PG sampler infers it to be $K = 4$. However, in Figure 6 we see that the beam sampler infers a solution for the latent states that rapidly oscillates between a subset of likely states during temporal regions which intuitively seem better explained by a single state. In contrast, the PG sampler has converged to a mode which better represents the latent transition dynamics and only infers "extra" states in the regions of large fluctuation. Indeed, this suggests that the beam sampler is mixing worse than the PG sampler.
[Figure 6: Left: observations colored by an inferred latent state trajectory using beam sampling inference. Right: observations colored by an inferred latent state trajectory using PG inference.]
4.3  Alice in Wonderland Data
For our next example we consider the task of predicting sequences of letters taken from Alice's Adventures in Wonderland. We trained an iHMM on the first 1000 characters of the first chapter of the book, and tested on the 4000 subsequent characters from the same chapter, using a multinomial emission model for the iHMM.
Once again, we see that the PG sampler applied to the iHMM/sticky iHMM converges quickly in joint log-likelihood to a mode where it stably learns a value of $K \approx 10$, as evidenced in Figure 7. Though the performance of the PG and beam samplers appears roughly comparable here, we would like to highlight two observations. First, the value of $K$ inferred by the PG sampler quickly converges independently of the initialization of $K$, as shown in the rightmost panel of Figure 7. However, the beam sampler's estimate of the number of active states $K$ still appears to be decreasing and fluctuates more rapidly than for both the iHMM and the sticky iHMM, as evidenced by the error bars in the middle plot, in addition to being quite sensitive to the initialization of $K$, as shown in the rightmost plot. Based on the previous synthetic experiment (Section 4.1) and this result, we suspect that although both the beam sampler and the PG sampler quickly converge to good solutions as evidenced by the training joint log-likelihood, the beam sampler is learning a transition matrix with unnecessary spurious "active" states. Next we calculate the predictive log-likelihood on the Alice
[Figure 7: Left: joint log-likelihood vs. iterations for the PG sampler ($N = 10$ and $N = 50$) and the beam sampler. Middle: convergence of the number of "active" states for the iHMM and sticky iHMM under the PG sampler and the beam sampler. Right: trace plots of the number of states for different initializations of $K$.]
Next we calculate the predictive log-likelihood of the Alice in Wonderland test data averaged over 2500 different realizations, and find that the infinite-state PG sampler with N = 10 particles achieves a predictive log-likelihood of −5918.4 ± 123.8 while the beam sampler achieves a predictive log-likelihood of −6099.0 ± 106.0, showing that the PG sampler applied to the iHMM and sticky iHMM learns hyperparameter and latent variable values that obtain better predictive performance on the held-out dataset. We note that in this experiment as well, we have only found it necessary to take N = 10 particles for the PG sampler to achieve good mixing and empirical performance, although increasing the number of particles to N = 50 does improve the convergence of the sampler in this instance. Given that the PG sampler has a time complexity of O(TNK) for a single pass, while the beam sampler (and truncated methods) have a time complexity of O(TK²) for a single pass, we believe that the PG sampler is a competitive alternative to the beam sampler for the iHMM.
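As a rough, purely illustrative comparison of these costs (the sizes below are hypothetical, chosen only to mirror the scale of the experiments above), note that the two per-sweep complexities cross over at K = N: the PG cost grows linearly in K while the beam cost grows quadratically:

    # Hypothetical per-sweep cost comparison: PG is O(T*N*K), beam is O(T*K^2).
    T = 2000      # sequence length, as in the synthetic series above
    N = 10        # number of particles used by the PG sampler

    for K in (4, 10, 25, 50):      # number of "active" hidden states
        pg_cost = T * N * K        # particle Gibbs with ancestor sampling
        beam_cost = T * K * K      # beam / truncated samplers
        print(f"K={K:3d}  PG ~ {pg_cost:,}  beam ~ {beam_cost:,}")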
5 Discussions and Conclusions
In this work we derive a new inference algorithm for the iHMM using the particle MCMC framework based on the stick-breaking construction for the HDP. We also develop an efficient proposal inside PG, optimized for iHMMs, to efficiently resample the latent state trajectories. The proposed algorithm is empirically compared to existing state-of-the-art inference algorithms for iHMMs, and shown to be promising because it converges more quickly and robustly to the true number of states, in addition to obtaining better predictive performance on several synthetic and real-world datasets. Moreover, we argued that the PG sampler proposed here is a competitive alternative to the beam sampler, since the time complexity of the particle samplers presented is O(TNK) versus the O(TK²) of the beam sampler.
Another advantage of the proposed method is the simplicity of the PG algorithm, which doesn't require truncation or the introduction of auxiliary variables, also making the algorithm easily adaptable to challenging inference tasks. In particular, the PG sampler can be directly applied to the sticky HDP-HMM with DP emission model considered in Fox et al. [2008], for which no truncation-free sampler exists. We leave this development and application as an avenue for future work.
References
Andrieu, C., Doucet, A., and Holenstein, R. (2010). Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(3):269–342.
Beal, M. J., Ghahramani, Z., and Rasmussen, C. E. (2001). The infinite hidden Markov model. In Advances in Neural Information Processing Systems, pages 577–584.
Bishop, C. M. (2006). Pattern Recognition and Machine Learning, volume 4. Springer, New York.
Fox, E. B., Sudderth, E. B., Jordan, M. I., and Willsky, A. S. (2008). An HDP-HMM for systems with state persistence. In Proceedings of the 25th International Conference on Machine Learning, pages 312–319. ACM.
Lindsten, F., Jordan, M. I., and Schön, T. B. (2014). Particle Gibbs with ancestor sampling. The Journal of Machine Learning Research, 15(1):2145–2184.
Neal, R. M. (2000). Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9:249–265.
Palla, K., Knowles, D. A., and Ghahramani, Z. (2014). A reversible infinite HMM using normalised random measures. arXiv preprint arXiv:1403.4206.
Rabiner, L. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286.
Rosenstein, J. K., Ramakrishnan, S., Roseman, J., and Shepard, K. L. (2013). Single ion channel recordings with CMOS-anchored lipid membranes. Nano Letters, 13(6):2682–2686.
Scott, S. L. (2002). Bayesian methods for hidden Markov models. Journal of the American Statistical Association, 97(457).
Teh, Y. W., Jordan, M. I., Beal, M. J., and Blei, D. M. (2006). Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581.
Van Gael, J., Saatci, Y., Teh, Y. W., and Ghahramani, Z. (2008). Beam sampling for the infinite hidden Markov model. In Proceedings of the International Conference on Machine Learning, volume 25.
Sparse Local Embeddings for Extreme Multi-label Classification
Kush Bhatia†, Himanshu Jain‡, Purushottam Kar†,♯, Manik Varma†, and Prateek Jain†
† Microsoft Research, India
‡ Indian Institute of Technology Delhi, India
♯ Indian Institute of Technology Kanpur, India
{t-kushb,prajain,manik}@microsoft.com
himanshu.j689@gmail.com, purushot@cse.iitk.ac.in
(This work was done while P.K. was a postdoctoral researcher at Microsoft Research India.)
Abstract
The objective in extreme multi-label learning is to train a classifier that can automatically tag a novel data point with the most relevant subset of labels from an
extremely large label set. Embedding based approaches attempt to make training
and prediction tractable by assuming that the training label matrix is low-rank and
reducing the effective number of labels by projecting the high dimensional label
vectors onto a low dimensional linear subspace. Still, leading embedding approaches have been unable to deliver high prediction accuracies, or scale to large
problems as the low rank assumption is violated in most real world applications.
In this paper we develop the SLEEC classifier to address both limitations. The
main technical contribution in SLEEC is a formulation for learning a small ensemble of local distance preserving embeddings which can accurately predict infrequently occurring (tail) labels. This allows SLEEC to break free of the traditional
low-rank assumption and boost classification accuracy by learning embeddings
which preserve pairwise distances between only the nearest label vectors.
We conducted extensive experiments on several real-world, as well as benchmark, data sets and compared our method against state-of-the-art methods for extreme multi-label classification. Experiments reveal that SLEEC can make significantly more accurate predictions than the state-of-the-art methods, including both
embedding-based (by as much as 35%) as well as tree-based (by as much as 6%)
methods. SLEEC can also scale efficiently to data sets with a million labels which
are beyond the pale of leading embedding methods.
1 Introduction
In this paper we develop SLEEC (Sparse Local Embeddings for Extreme Classification), an extreme
multi-label classifier that can make significantly more accurate and faster predictions, as well as
scale to larger problems, as compared to state-of-the-art embedding based approaches.
eXtreme Multi-label Learning (XML) addresses the problem of learning a classifier that can automatically tag a data point with the most relevant subset of labels from a large label set. For instance,
there are more than a million labels (categories) on Wikipedia and one might wish to build a classifier that annotates a new article or web page with the subset of most relevant Wikipedia categories.
It should be emphasized that multi-label learning is distinct from multi-class classification where the
aim is to predict a single mutually exclusive label.
Challenges: XML is a hard problem that involves learning with hundreds of thousands, or even millions, of labels, features and training points. Although some of these problems can be ameliorated using a label hierarchy, such hierarchies are unavailable in many applications [1, 2]. In this setting,
an obvious baseline is thus provided by the 1-vs-All technique which seeks to learn an independent classifier per label. As expected, this technique is infeasible due to the prohibitive training and
prediction costs given the large number of labels.
Embedding-based approaches: A natural way of overcoming the above problem is to reduce the effective number of labels. Embedding based approaches try to do so by projecting label vectors onto a low dimensional space, based on an assumption that the label matrix is low-rank. More specifically, given a set of $n$ training points $\{(x_i, y_i)\}_{i=1}^n$ with $d$-dimensional feature vectors $x_i \in \mathbb{R}^d$ and $L$-dimensional label vectors $y_i \in \{0, 1\}^L$, state-of-the-art embedding approaches project the label vectors onto a lower, $\hat{L}$-dimensional linear subspace as $z_i = U y_i$. Regressors are then trained to predict $z_i$ as $V x_i$. Labels for a novel point $x$ are predicted by post-processing $y = U^{\dagger} V x$, where $U^{\dagger}$ is a decompression matrix which lifts the embedded label vectors back to the original label space.
Embedding methods mainly differ in the choice of their compression and decompression techniques
such as compressed sensing [3], Bloom filters [4], SVD [5], landmark labels [6, 7], output codes [8],
etc. The state-of-the-art LEML algorithm [9] directly optimizes for $U^{\dagger}$, $V$ using a regularized
least squares objective. Embedding approaches have many advantages including simplicity, ease of
implementation, strong theoretical foundations, the ability to handle label correlations, as well as
adapt to online and incremental scenarios. Consequently, embeddings have proved to be the most
popular approach for tackling XML problems [6, 7, 10, 4, 11, 3, 12, 9, 5, 13, 8, 14].
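To make this compress-regress-decompress pipeline concrete, the sketch below shows the generic flow these methods share; it is illustrative only (a truncated SVD stands in for whichever compression a particular method actually learns, and ridge regression for its regressor):

    import numpy as np

    # Generic low-rank embedding pipeline (illustrative; not any one published method).
    # Y: L x n label matrix, X: d x n feature matrix, L_hat: embedding dimension.
    def train_low_rank(Y, X, L_hat, reg=1e-2):
        U, _, _ = np.linalg.svd(Y, full_matrices=False)
        U = U[:, :L_hat]                 # L x L_hat label-space basis
        Z = U.T @ Y                      # L_hat x n embeddings, z_i = U^T y_i
        d = X.shape[0]                   # ridge regressors so that Z ~ V X
        V = Z @ X.T @ np.linalg.inv(X @ X.T + reg * np.eye(d))
        return U, V

    def predict_low_rank(U, V, x):
        return U @ (V @ x)               # decompress: y ~ U (V x)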
Embedding approaches also have limitations: they are slow at training and prediction even for small embedding dimensions $\hat{L}$. For instance, on WikiLSHTC [15, 16], a Wikipedia based challenge data set, LEML with $\hat{L} = 500$ takes roughly 12 hours to train even with early termination, whereas prediction takes nearly 300 milliseconds per test point. In fact, for text applications with $\hat{d}$-sparse feature vectors such as WikiLSHTC (where $\hat{d} = 42 \ll \hat{L} = 500$), LEML's prediction time $\Theta(\hat{L}(\hat{d} + L))$ can be an order of magnitude more than even 1-vs-All's prediction time $O(\hat{d} L)$.
More importantly, the critical assumption made by embedding methods, that the training label matrix is low-rank, is violated in almost all real world applications. Figure 1(a) plots the approximation error in the label matrix as $\hat{L}$ is varied on the WikiLSHTC data set. As is clear, even with a 500-dimensional subspace the label matrix still has 90% approximation error. This happens primarily due to the presence of hundreds of thousands of "tail" labels (Figure 1(b)) which occur in at most 5 data points each and, hence, cannot be well approximated by any linear low dimensional basis.
The SLEEC approach: Our algorithm SLEEC extends embedding methods in multiple ways to address these limitations. First, instead of globally projecting onto a linear low-rank subspace, SLEEC learns embeddings $z_i$ which non-linearly capture label correlations by preserving the pairwise distances between only the closest (rather than all) label vectors, i.e. $d(z_i, z_j) \approx d(y_i, y_j)$ only if $i \in \mathrm{kNN}(j)$, where $d$ is a distance metric. Regressors $V$ are trained to predict $z_i = V x_i$. We propose a novel formulation for learning such embeddings that can be formally shown to consistently preserve nearest neighbours in the label space. We build an efficient pipeline for training these embeddings which can be orders of magnitude faster than state-of-the-art embedding methods.
During prediction, rather than using a decompression matrix, SLEEC uses a k-nearest neighbour (kNN) classifier in the embedding space, thus leveraging the fact that nearest neighbours have been preserved during training. Thus, for a novel point $x$, the predicted label vector is obtained using $y = \sum_{i:\, V x_i \in \mathrm{kNN}(V x)} y_i$. The use of a kNN classifier is well motivated as kNN outperforms discriminative methods in acutely low training data regimes [17], as is the case with tail labels.
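A minimal sketch of this prediction rule (brute-force neighbour search, shown only for exposition):

    import numpy as np

    def sleec_predict(V, X_train, Y_train, x, k=10):
        """Predict labels for x by summing the label vectors of the k nearest
        training embeddings, i.e. y = sum over {i : Vx_i in kNN(Vx)} of y_i."""
        Z_train = V @ X_train                        # L_hat x n training embeddings
        z = V @ x                                    # embedding of the test point
        dists = np.linalg.norm(Z_train - z[:, None], axis=0)
        nn = np.argsort(dists)[:k]                   # k nearest neighbours
        return Y_train[:, nn].sum(axis=1)            # aggregate neighbour labels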
The superiority of SLEEC's proposed embeddings over traditional low-rank embeddings can be seen by looking at Figure 1, which shows that the relative approximation error in learning SLEEC's embeddings is significantly smaller as compared to the low-rank approximation error. Moreover, we also find that SLEEC can improve the prediction accuracy of state-of-the-art embedding methods by as much as 35% (absolute) on the challenging WikiLSHTC data set. SLEEC also significantly outperforms methods such as WSABIE [13] which also use kNN classification in the embedding space but learn their embeddings using the traditional low-rank assumption.
Clustering based speedup: However, kNN classifiers are known to be slow at prediction. SLEEC therefore clusters the training data into $C$ clusters, learning a separate embedding per cluster and performing kNN classification within the test point's cluster alone.
[Figure 1 plots omitted: (a) Approximation Error vs. Approximation Rank (100 to 500) for Global SVD, Local SVD and the SLEEC NN objective; (b) Active Documents (log scale, 1e0 to 1e5) vs. Label ID for WikiLSHTC; (c) Precision@1 (75 to 90) vs. Number of Clusters (2 to 10) for SLEEC and LocalLEML on Wiki10.]
Figure 1: (a) Error $\|Y - Y_{\hat{L}}\|_F^2 / \|Y\|_F^2$ in approximating the label matrix $Y$. Global SVD denotes the error incurred by computing the rank-$\hat{L}$ SVD of $Y$. Local SVD computes the rank-$\hat{L}$ SVD of $Y$ within each cluster. SLEEC NN objective denotes SLEEC's objective function. Global SVD incurs 90% error and the error is decreasing at most linearly as well. (b) The number of documents in which each label is present for the WikiLSHTC data set. There are about 300K labels which are present in < 5 documents, lending it a "heavy tailed" distribution. (c) Precision@1 accuracy of SLEEC and LocalLEML on the Wiki-10 data set as we vary the number of clusters.
This allows SLEEC to be more than two orders of magnitude faster at prediction than LEML and other embedding methods on the WikiLSHTC data. In fact, SLEEC also scales well to the Ads1M data set involving a million labels, which is beyond the pale of leading embedding methods. Moreover, the clustering trick does not significantly benefit other state-of-the-art methods (see Figure 1(c)), thus indicating that SLEEC's embeddings are key to its performance boost.
Since clustering can be unstable in large dimensions, SLEEC compensates by learning a small ensemble where each individual learner is generated by a different random clustering. This was empirically found to help tackle instabilities of clustering and significantly boost prediction accuracy with only linear increases in training and prediction time. For instance, on WikiLSHTC, SLEEC's prediction accuracy was 55% with an 8 millisecond prediction time whereas LEML could only manage 20% accuracy while taking 300 milliseconds for prediction per test point.
Tree-based approaches: Recently, tree based methods [1, 15, 2] have also become popular for XML as they enjoy significant accuracy gains over the existing embedding methods. For instance, FastXML [15] can achieve a prediction accuracy of 49% on WikiLSHTC using a 50 tree ensemble. However, using SLEEC, we are now able to extend embedding methods to outperform tree ensembles, achieving 49.8% with 2 learners and 55% with 10. Thus, SLEEC obtains the best of both worlds: it achieves the highest prediction accuracies across all methods on even the most challenging data sets, while retaining all the benefits of embeddings and eschewing the disadvantages of large tree ensembles such as large model size and lack of theoretical understanding.
2 Method
Let $D = \{(x_1, y_1), \ldots, (x_n, y_n)\}$ be the given training data set, $x_i \in \mathcal{X} \subseteq \mathbb{R}^d$ be the input feature vector, $y_i \in \mathcal{Y} \subseteq \{0, 1\}^L$ be the corresponding label vector, and $y_{ij} = 1$ iff the $j$-th label is turned on for $x_i$. Let $X = [x_1, \ldots, x_n]$ be the data matrix and $Y = [y_1, \ldots, y_n]$ be the label matrix. Given $D$, the goal is to learn a multi-label classifier $f: \mathbb{R}^d \to \{0, 1\}^L$ that accurately predicts the label vector for a given test point. Recall that in XML settings, $L$ is very large and is of the same order as $n$ and $d$, ruling out several standard approaches such as 1-vs-All.
Our algorithm is an embedding-style algorithm, i.e., during training we map the label vectors yi to
b
b
b
L-dimensional
vectors zi ? RL and learn a set of regressors V ? RL?d s.t. zi ? V xi , ?i. During
the test phase, for an unseen point x, we first compute its embedding V x and then perform kNN
over the set [V x1 , V x2 , . . . , V xn ]. To scale our algorithm, we perform a clustering of all the training
points and apply the above mentioned procedures in each of the cluster separately. Below, we first
discuss our method to compute the embeddings zi s and the regressors V . Section 2.2 then discusses
our approach for scaling the method to large data sets.
2.1 Learning Embeddings
As mentioned earlier, our approach is motivated by the fact that a typical real-world data set tends to have a large number of tail labels that ensure that the label matrix $Y$ cannot be well-approximated using a low-dimensional linear subspace (see Figure 1). However, $Y$ can still be accurately modeled using a low-dimensional non-linear manifold. That is, instead of preserving distances (or inner products) of a given label vector to all the training points, we attempt to preserve the distance to only a few nearest neighbors.
Algorithm 1 SLEEC: Train Algorithm
Require: $D = \{(x_1, y_1), \ldots, (x_n, y_n)\}$, embedding dimensionality $\hat{L}$, no. of neighbors $\bar{n}$, no. of clusters $C$, regularization parameters $\lambda, \mu$, $L_1$ smoothing parameter $\rho$
1: Partition $X$ into $Q^1, \ldots, Q^C$ using k-means
2: for each partition $Q^j$ do
3:   Form $\Omega$ using the $\bar{n}$ nearest neighbors of each label vector $y_i \in Q^j$
4:   $[U\ \Sigma] \leftarrow \mathrm{SVP}(P_\Omega(Y^{j\top} Y^j), \hat{L})$
5:   $Z^j \leftarrow U \Sigma^{1/2}$
6:   $V^j \leftarrow \mathrm{ADMM}(X^j, Z^j, \lambda, \mu, \rho)$
7:   $Z^j = V^j X^j$
8: end for
9: Output: $\{(Q^1, V^1, Z^1), \ldots, (Q^C, V^C, Z^C)\}$

Algorithm 2 SLEEC: Test Algorithm
Require: test point $x$, no. of NN $\bar{n}$, no. of desired labels $p$
1: $Q^\tau$: partition closest to $x$
2: $z \leftarrow V^\tau x$
3: $N_z \leftarrow \bar{n}$ nearest neighbors of $z$ in $Z^\tau$
4: $P_x \leftarrow$ empirical label distribution for points $\in N_z$
5: $y_{\mathrm{pred}} \leftarrow \mathrm{Top}_p(P_x)$

Sub-routine 3 SLEEC: SVP
Require: observations $G$, index set $\Omega$, dimensionality $\hat{L}$
1: $M := 0$, $\eta = 1$
2: repeat
3:   $\hat{M} \leftarrow M + \eta(G - P_\Omega(M))$
4:   $[U\ \Sigma] \leftarrow \mathrm{TopEigenDecomp}(\hat{M}, \hat{L})$
5:   $\Sigma_{ii} \leftarrow \max(0, \Sigma_{ii}), \forall i$
6:   $M \leftarrow U \cdot \Sigma \cdot U^\top$
7: until convergence
8: Output: $U$, $\Sigma$

Sub-routine 4 SLEEC: ADMM
Require: data matrix $X$, embeddings $Z$, regularization parameters $\lambda, \mu$, smoothing parameter $\rho$
1: $\beta := 0$, $\alpha := 0$
2: repeat
3:   $Q \leftarrow (Z + \rho(\alpha - \beta)) X^\top$
4:   $V \leftarrow Q (X X^\top (1 + \rho) + \lambda I)^{-1}$
5:   $\theta \leftarrow (V X + \beta)$
6:   $\alpha_i = \mathrm{sign}(\theta_i) \cdot \max(0, |\theta_i| - \mu/\rho), \forall i$
7:   $\beta \leftarrow \beta + V X - \alpha$
8: until convergence
9: Output: $V$
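Putting Algorithm 1 together in code, a deliberately simplified driver might look as follows. Here `build_omega`, `P_omega`, `svp` and `admm` are hypothetical helper names corresponding to the sub-routines above (sketches of each appear later in this section), and k-means is taken from scikit-learn:

    import numpy as np
    from sklearn.cluster import KMeans

    def sleec_train(X, Y, n_clusters, L_hat, n_bar, lam, mu, rho):
        """Simplified sketch of Algorithm 1. X is d x n, Y is L x n."""
        assign = KMeans(n_clusters=n_clusters).fit_predict(X.T)      # step 1
        models = []
        for c in range(n_clusters):
            idx = np.where(assign == c)[0]
            Xc, Yc = X[:, idx], Y[:, idx]
            omega = build_omega(Yc, n_bar)                           # step 3
            U, S = svp(P_omega(Yc.T @ Yc, omega), omega, L_hat)      # step 4
            Zc = (U * np.sqrt(S)).T                  # step 5: U*Sigma^(1/2), L_hat x n_c
            Vc = admm(Xc, Zc, lam, mu, rho)                          # step 6
            models.append((idx, Vc, Vc @ Xc))        # step 7: store V X as embeddings
        return models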
Specifically, we wish to find an $\hat{L}$-dimensional embedding matrix $Z = [z_1, \ldots, z_n] \in \mathbb{R}^{\hat{L} \times n}$ which minimizes the following objective:
$$\min_{Z \in \mathbb{R}^{\hat{L} \times n}} \; \|P_\Omega(Y^\top Y) - P_\Omega(Z^\top Z)\|_F^2 + \lambda \|Z\|_1, \qquad (1)$$
where the index set $\Omega$ denotes the set of neighbors that we wish to preserve, i.e., $(i, j) \in \Omega$ iff $j \in N_i$. $N_i$ denotes a set of nearest neighbors of $i$. We select $N_i = \arg\max_{S, |S| \le \bar{n}} \sum_{j \in S} (y_i^\top y_j)$, which is the set of $\bar{n}$ points with the largest inner products with $y_i$. $|N_i|$ is always chosen large enough so that distances (inner products) to a few far away points are also preserved while optimizing for our objective function. This prohibits non-neighboring points from entering the immediate neighborhood of any given point. $P_\Omega: \mathbb{R}^{n \times n} \to \mathbb{R}^{n \times n}$ is defined as:
$$(P_\Omega(Y^\top Y))_{ij} = \begin{cases} \langle y_i, y_j \rangle, & \text{if } (i, j) \in \Omega, \\ 0, & \text{otherwise.} \end{cases} \qquad (2)$$
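A small sketch of how $\Omega$ and the operator $P_\Omega$ of Eq. (2) might be materialised (brute force, purely for exposition; `n_bar` is the per-point neighbourhood size):

    import numpy as np

    def build_omega(Y, n_bar):
        """Boolean n x n mask: Omega[i, j] = True iff j is among the n_bar points
        whose label vectors have the largest inner products with y_i."""
        G = Y.T @ Y                                # label Gram matrix <y_i, y_j>
        n = G.shape[0]
        omega = np.zeros((n, n), dtype=bool)
        for i in range(n):
            omega[i, np.argsort(-G[i])[:n_bar]] = True
        return omega

    def P_omega(M, omega):
        return np.where(omega, M, 0.0)             # zero entries outside Omega, Eq. (2)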
Also, we add $L_1$ regularization, $\|Z\|_1 = \sum_i \|z_i\|_1$, to the objective function to obtain sparse embeddings. Sparse embeddings have three key advantages: a) they reduce prediction time, b) they reduce the size of the model, and c) they avoid overfitting. Now, given the embeddings $Z = [z_1, \ldots, z_n] \in \mathbb{R}^{\hat{L} \times n}$, we wish to learn a multi-regression model to predict the embeddings $Z$ using the input features. That is, we require that $Z \approx V X$ where $V \in \mathbb{R}^{\hat{L} \times d}$. Combining the two formulations and adding an $L_2$-regularization for $V$, we get:
$$\min_{V \in \mathbb{R}^{\hat{L} \times d}} \; \|P_\Omega(Y^\top Y) - P_\Omega(X^\top V^\top V X)\|_F^2 + \lambda \|V\|_F^2 + \mu \|V X\|_1. \qquad (3)$$
Note that the above problem formulation is somewhat similar to a few existing methods for nonlinear dimensionality reduction that also seek to preserve distances to a few near neighbors [18, 19].
However, in contrast to our approach, these methods do not have a direct out of sample generalization, do not scale well to large-scale data sets, and lack rigorous generalization error bounds.
Optimization: We first note that optimizing (3) is a significant challenge as the objective function is non-convex as well as non-differentiable. Furthermore, our goal is to perform optimization for data sets where $L, n, d \gg 100{,}000$. To this end, we divide the optimization into two phases. We first learn embeddings $Z = [z_1, \ldots, z_n]$ and then learn regressors $V$ in the second stage. That is, $Z$ is obtained by directly solving (1) but without the $L_1$ penalty term:
$$\min_{Z \in \mathbb{R}^{\hat{L} \times n}} \|P_\Omega(Y^\top Y) - P_\Omega(Z^\top Z)\|_F^2 \;\equiv\; \min_{M \succeq 0,\ \mathrm{rank}(M) \le \hat{L}} \|P_\Omega(Y^\top Y) - P_\Omega(M)\|_F^2, \qquad (4)$$
where $M = Z^\top Z$. Next, $V$ is obtained by solving the following problem:
$$\min_{V \in \mathbb{R}^{\hat{L} \times d}} \|Z - V X\|_F^2 + \lambda \|V\|_F^2 + \mu \|V X\|_1. \qquad (5)$$
Note that the $Z$ matrix obtained using (4) need not be sparse. However, we store and use $V X$ as our embeddings, so that sparsity is still maintained.
Optimizing (4): Note that even the simplified problem (4) is an instance of the popular low-rank matrix completion problem and is known to be NP-hard in general. The main challenge arises due to the non-convex rank constraint on $M$. However, using the Singular Value Projection (SVP) method [20], a popular matrix completion method, we can guarantee convergence to a local minimum. SVP is a simple projected gradient descent method where the projection is onto the set of low-rank matrices. That is, the $t$-th step update for SVP is given by:
$$M_{t+1} = P_{\hat{L}}\left(M_t + \eta P_\Omega(Y^\top Y - M_t)\right), \qquad (6)$$
where $M_t$ is the $t$-th step iterate, $\eta > 0$ is the step-size, and $P_{\hat{L}}(M)$ is the projection of $M$ onto the set of rank-$\hat{L}$ positive semi-definite (PSD) matrices. Note that while the set of rank-$\hat{L}$ PSD matrices is non-convex, we can still project onto this set efficiently using the eigenvalue decomposition of $M$. That is, let $M = U_M \Sigma_M U_M^\top$ be the eigenvalue decomposition of $M$. Then,
$$P_{\hat{L}}(M) = U_M(1{:}r) \cdot \Sigma_M(1{:}r) \cdot U_M(1{:}r)^\top,$$
where $r = \min(\hat{L}, \hat{L}^+)$ and $\hat{L}^+$ is the number of positive eigenvalues of $M$. $\Sigma_M(1{:}r)$ denotes the top-$r$ eigenvalues of $M$ and $U_M(1{:}r)$ denotes the corresponding eigenvectors.
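A direct, unoptimised transcription of this iteration (dense eigendecomposition for clarity; a practical implementation would exploit the structure discussed next):

    import numpy as np

    def svp(G, omega, L_hat, eta=1.0, n_iters=100):
        """SVP for Eq. (4): fit a rank-L_hat PSD matrix M to the observed
        entries G = P_Omega(Y^T Y). Returns the top eigenpairs (U, S) of M."""
        n = G.shape[0]
        M = np.zeros((n, n))
        for _ in range(n_iters):
            M_hat = M + eta * np.where(omega, G - M, 0.0)   # gradient step, Eq. (6)
            M_hat = (M_hat + M_hat.T) / 2                   # symmetrise (Omega need not be)
            w, U = np.linalg.eigh(M_hat)                    # ascending eigenvalues
            w, U = w[-L_hat:], U[:, -L_hat:]                # keep top-L_hat eigenpairs
            w = np.maximum(w, 0.0)                          # clip onto the PSD cone
            M = (U * w) @ U.T                               # projection P_L_hat(M_hat)
        return U, w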
While the above update restricts the rank of all intermediate iterates $M_t$ to be at most $\hat{L}$, computing a rank-$\hat{L}$ eigenvalue decomposition can still be fairly expensive for large $n$. However, by using special structure in the update (6), one can significantly reduce the eigenvalue decomposition's computational complexity as well. In general, the eigenvalue decomposition can be computed in time $O(\hat{L}\tau)$ where $\tau$ is the time complexity of computing a matrix-vector product. Now, for the SVP update (6), the matrix has the special structure $\hat{M} = M_t + \eta P_\Omega(Y^\top Y - M_t)$. Hence $\tau = O(n\hat{L} + n\bar{n})$ where $\bar{n} = |\Omega|/n$ is the average number of neighbors preserved by SLEEC. Hence, the per-iteration time complexity reduces to $O(n\hat{L}^2 + n\hat{L}\bar{n})$, which is linear in $n$, assuming $\bar{n}$ is nearly constant.
Optimizing (5): (5) contains an $L_1$ term which makes the problem non-smooth. Moreover, as the $L_1$ term involves both $V$ and $X$, we cannot directly apply the standard prox-function based algorithms. Instead, we use the ADMM method to optimize (5). See Sub-routine 4 for the updates and [21] for a detailed derivation of the algorithm.
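Sub-routine 4 transcribes into code almost line for line; the following is an unoptimised sketch, where `alpha` is the auxiliary variable standing in for $VX$ and `beta` the scaled dual:

    import numpy as np

    def admm(X, Z, lam, mu, rho, n_iters=100):
        """ADMM sketch for Eq. (5): min_V ||Z - VX||_F^2 + lam*||V||_F^2 + mu*||VX||_1."""
        d, n = X.shape
        alpha = np.zeros((Z.shape[0], n))
        beta = np.zeros_like(alpha)
        XXt = X @ X.T
        for _ in range(n_iters):
            Q = (Z + rho * (alpha - beta)) @ X.T
            V = Q @ np.linalg.inv(XXt * (1 + rho) + lam * np.eye(d))
            theta = V @ X + beta
            alpha = np.sign(theta) * np.maximum(0.0, np.abs(theta) - mu / rho)
            beta = beta + V @ X - alpha
        return V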
Generalization Error Analysis: Let $P$ be a fixed (but unknown) distribution over $\mathcal{X} \times \mathcal{Y}$. Let each training point $(x_i, y_i) \in D$ be sampled i.i.d. from $P$. Then, the goal of our non-linear embedding method (3) is to learn an embedding matrix $A = V^\top V$ that preserves the nearest neighbors (in terms of label distance/intersection) of any $(x, y) \sim P$. The above requirements can be formulated as the following stochastic optimization problem:
$$\min_{A \succeq 0,\ \mathrm{rank}(A) \le k} \; \mathcal{L}(A) = \mathop{\mathbb{E}}_{(x,y),(\tilde{x},\tilde{y}) \sim P} \, \ell(A; (x, y), (\tilde{x}, \tilde{y})), \qquad (7)$$
where the loss function $\ell(A; (x, y), (\tilde{x}, \tilde{y})) = g(\langle \tilde{y}, y \rangle)\left(\langle \tilde{y}, y \rangle - \tilde{x}^\top A x\right)^2$, and $g(\langle \tilde{y}, y \rangle) = \mathbb{I}[\langle \tilde{y}, y \rangle \ge \tau]$, where $\mathbb{I}[\cdot]$ is the indicator function. Hence, a loss is incurred only if $y$ and $\tilde{y}$ have a large inner product. For an appropriate selection of the neighborhood selection operator $\Omega$, (3) indeed minimizes a regularized empirical estimate of the loss function (7), i.e., it is a regularized ERM w.r.t. (7).
We now show that the optimal solution $\hat{A}$ to (3) indeed minimizes the loss (7) up to an additive approximation error. The existing techniques for analyzing excess risk in stochastic optimization require the empirical loss function to be decomposable over the training set, and as such do not apply to (3), which contains loss-terms with two training points. Still, using techniques from the AUC maximization literature [22], we can provide interesting excess risk bounds for Problem (7).
Theorem 1. With probability at least $1 - \delta$ over the sampling of the dataset $D$, the solution $\hat{A}$ to the optimization problem (3) satisfies
$$\mathcal{L}(\hat{A}) \;\le\; \inf_{A^* \in \mathcal{A}} \mathcal{L}(A^*) \;+\; \underbrace{C \bar{L}^2 \left\{ 1 + r^2 + \|A^*\|_F^2 R^2 \right\} \sqrt{\frac{1}{n} \log \frac{4}{\delta}}}_{\text{E-Risk}(n)},$$
where $\hat{A}$ is the minimizer of (3), $r = \bar{L}/\lambda$, and $\mathcal{A} := \{ A \in \mathbb{R}^{d \times d} : A \succeq 0,\ \mathrm{rank}(A) \le \hat{L} \}$.
See Appendix A for a proof of the result. Note that the generalization error bound is independent of both $d$ and $L$, which is critical for extreme multi-label classification problems with large $d, L$. In fact, the error bound is only dependent on $\bar{L} \ll L$, which is the average number of positive labels per data point. Moreover, our bound also provides a way to compute the best regularization parameter $\lambda$ that minimizes the error bound. However, in practice, we set $\lambda$ to be a fixed constant.
Theorem 1 only preserves the population neighbors of a test point. Theorem 7, given in Appendix A, extends Theorem 1 to ensure that the neighbors in the training set are also preserved. We would also like to stress that our excess risk bound is universal and hence holds even if $\hat{A}$ does not minimize (3), i.e., $\mathcal{L}(\hat{A}) \le \mathcal{L}(A^*) + \text{E-Risk}(n) + (\hat{\mathcal{L}}(\hat{A}) - \hat{\mathcal{L}}(A^*))$, where $\hat{\mathcal{L}}$ denotes the regularized empirical loss and E-Risk$(n)$ is given in Theorem 1.
2.2 Scaling to Large-scale Data sets
For large-scale data sets, one might require the embedding dimension $\hat{L}$ to be fairly large (say a few hundred), which might make computing the updates (6) infeasible. Hence, to scale to such large data sets, SLEEC clusters the given datapoints into smaller local regions. Several text-based data sets indeed reveal that there exist small local regions in the feature-space where the number of points as well as the number of labels is reasonably small. Hence, we can train our embedding method over such local regions without significantly sacrificing overall accuracy.
We would like to stress that despite clustering datapoints in homogeneous regions, the label matrix of
any given cluster is still not close to low-rank. Hence, applying a state-of-the-art linear embedding
method, such as LEML, to each cluster is still significantly less accurate when compared to our
method (see Figure 1). Naturally, one can cluster the data set into an extremely large number of
regions, so that eventually the label matrix is low-rank in each cluster. However, increasing the
number of clusters beyond a certain limit might decrease accuracy as the error incurred during the
cluster assignment phase itself might nullify the gain in accuracy due to better embeddings. Figure 1
illustrates this phenomenon where increasing the number of clusters beyond a certain limit in fact
decreases accuracy of LEML.
Algorithm 1 provides pseudo-code of our training algorithm. We first cluster the datapoints into $C$ partitions. Then, for each partition we learn a set of embeddings using Sub-routine 3 and then compute the regression parameters $V^\tau$, $1 \le \tau \le C$, using Sub-routine 4. For a given test point $x$, we first find the appropriate cluster $\tau$. Then, we find the embedding $z = V^\tau x$. The label vector is then predicted using kNN in the embedding space. See Algorithm 2 for more details.
Owing to the curse of dimensionality, clustering turns out to be quite unstable for data sets with large $d$ and in many cases leads to some drop in prediction accuracy. To safeguard against such instability, we use an ensemble of models generated using different sets of clusters. We use different initialization points in our clustering procedure to obtain different sets of clusters. Our empirical results demonstrate that using such ensembles leads to a significant increase in the accuracy of SLEEC (see Figure 2) and also leads to stable solutions with small variance (see Table 4).
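Prediction with such an ensemble then simply aggregates per-learner label scores; a hedged sketch, where `sleec_predict_single` is a hypothetical helper returning a length-$L$ score vector for one learner as per Algorithm 2:

    import numpy as np

    def sleec_predict_ensemble(learners, x, k=10, top_p=5):
        # Sum the per-learner score vectors, each learner built from its own
        # random clustering, and return the top-p scoring label indices.
        scores = sum(sleec_predict_single(m, x, k) for m in learners)
        return np.argsort(-scores)[:top_p]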
3 Experiments
Experiments were carried out on some of the largest XML benchmark data sets demonstrating that
SLEEC could achieve significantly higher prediction accuracies as compared to the state-of-the-art.
It is also demonstrated that SLEEC could be faster at training and prediction than leading embedding
techniques such as LEML.
6
30
0
SLEEC
FastXML
LocalLEML?Ens
5
Model Size (GB)
(a)
10
50
SLEEC
FastXML
LocalLEML?ENS
40
30
0
5
10
Number of Learners
(b)
Wiki10 [L= 30K, d = 101K, n = 14K]
90
Precision@1
50
40
WikiLSHTC [L= 325K, d = 1.61M, n = 1.77M]
60
Precision@1
Precision@1
WikiLSHTC [L= 325K, d = 1.61M, n = 1.77M]
60
15
80
SLEEC
FastXML
LocalLEML?Ens
70
60
0
5
10
Number of Learners
15
(c)
Figure 2: Variation in Precision@1 accuracy with model size and the number of learners on large-scale data
sets. Clearly, SLEEC achieves better accuracy than FastXML and LocalLEML-Ensemble at every point of the
curve. For WikiLSTHC, SLEEC with a single learner is more accurate than LocalLEML-Ensemble with even
15 learners. Similarly, SLEEC with 2 learners achieves more accuracy than FastXML with 50 learners.
Data sets: Experiments were carried out on multi-label data sets including Ads1M [15] (1M labels), Amazon [23] (670K labels), WikiLSHTC (320K labels), DeliciousLarge [24] (200K labels) and Wiki10 [25] (30K labels). All the data sets are publicly available except Ads1M, which is proprietary and is included here to test the scaling capabilities of SLEEC.
Unfortunately, most of the existing embedding techniques do not scale to such large data sets. We therefore also present comparisons on publicly available small data sets such as BibTeX [26], MediaMill [27], Delicious [28] and EURLex [29]. (Table 2 in the appendix lists their statistics.)
Baseline algorithms: This paper's primary focus is on comparing SLEEC to state-of-the-art methods which can scale to the large data sets, such as embedding based LEML [9] and tree based FastXML [15] and LPSR [2]. Naïve Bayes was used as the base classifier in LPSR, as was done in [15]. Techniques such as CS [3], CPLST [30], ML-CSSP [7] and 1-vs-All [31] could only be trained on the small data sets given standard resources. Comparisons between SLEEC and such techniques are therefore presented in the supplementary material. The implementations of LEML and FastXML were provided by the authors. We implemented the remaining algorithms and ensured that the published results could be reproduced and were verified by the authors wherever possible.
Hyper-parameters: Most of SLEEC's hyper-parameters were kept fixed, including the number of clusters in a learner ($\lfloor N_{\mathrm{Train}}/6000 \rfloor$), the embedding dimension (100 for the small data sets and 50 for the large), the number of learners in the ensemble (15), and the parameters used for optimizing (3). The remaining two hyper-parameters, the $k$ in kNN and the number of neighbours considered during SVP, were both set by limited validation on a validation set.
The hyper-parameters for all the other algorithms were set using fine grained validation on each data set so as to achieve the highest possible prediction accuracy for each method. In addition, all the embedding methods were allowed a much larger embedding dimension (0.8L) than SLEEC (100) to give them as much opportunity as possible to outperform SLEEC.
Evaluation Metrics: We evaluated algorithms using metrics that have been widely adopted for XML and ranking tasks. Precision at k (P@k) is one such metric, which counts the fraction of correct predictions in the top k scoring labels in $\hat{y}$, and it has been widely utilized [1, 3, 15, 13, 2, 9]. We use the ranking measure nDCG@k as another evaluation metric. We refer the reader to the supplementary material (Appendix B.1 and Tables 5 and 6) for further descriptions of the metrics and results.
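For reference, both metrics are straightforward to compute from a score vector; a sketch using the standard definitions, where `y_true` is the binary ground-truth label vector and `scores` the predicted label scores:

    import numpy as np

    def precision_at_k(y_true, scores, k):
        top = np.argsort(-scores)[:k]            # top-k scoring labels
        return y_true[top].sum() / k

    def ndcg_at_k(y_true, scores, k):
        top = np.argsort(-scores)[:k]
        dcg = (y_true[top] / np.log2(np.arange(2, k + 2))).sum()
        n_rel = min(k, int(y_true.sum()))        # best achievable DCG normaliser
        idcg = (1.0 / np.log2(np.arange(2, n_rel + 2))).sum()
        return dcg / idcg if idcg > 0 else 0.0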
Results on large data sets with more than 100K labels: Table 1a compares SLEEC's prediction accuracy, in terms of P@k (k = {1, 3, 5}), to all the leading methods that could be trained on five such data sets. SLEEC could improve over the leading embedding method, LEML, by as much as 35% and 15% in terms of P@1 and P@5 on WikiLSHTC. Similarly, SLEEC outperformed LEML by 27% and 22% in terms of P@1 and P@5 on the Amazon data set, which also has many tail labels. The gains on the other data sets are consistent, but smaller, as the tail label problem is not so acute. SLEEC also outperforms the leading tree method, FastXML, by 6% in terms of both P@1 and P@5 on WikiLSHTC and Wiki10 respectively. This demonstrates the superiority of SLEEC's overall pipeline constructed using local distance preserving embeddings followed by kNN classification.
SLEEC also has better scaling properties as compared to all other embedding methods. In particular, apart from LEML, no other embedding approach could scale to the large data sets, and even LEML could not scale to Ads1M with a million labels. In contrast, a single SLEEC learner could be learnt on WikiLSHTC in 4 hours on a single core and already gave a ~20% improvement in P@1 over LEML (see Figure 2 for the variation in accuracy vs. the number of SLEEC learners). In fact, SLEEC's training
Table 1: Precision accuracies. (a) Large-scale data sets: Our proposed method SLEEC is as much as 35% more accurate in terms of P@1 and 22% in terms of P@5 than LEML, a leading embedding method. Other embedding based methods do not scale to the large-scale data sets; we compare against them on small-scale data sets in Table 3. SLEEC is also 6% more accurate (w.r.t. P@1 and P@5) than FastXML, a state-of-the-art tree method. "-" indicates LEML could not be run with the standard resources. (b) Small-scale data sets: SLEEC consistently outperforms state-of-the-art approaches. WSABIE, which also uses a kNN classifier on its embeddings, is significantly less accurate than SLEEC on all the data sets, showing the superiority of our embedding learning algorithm.
(a) Large-scale data sets (P@1 / P@3 / P@5):

Data set         SLEEC              LEML               FastXML            LPSR-NB
Wiki10           85.54/73.59/63.10  73.50/62.38/54.30  82.56/66.67/56.70  72.71/58.51/49.40
Delicious-Large  47.03/41.67/38.88  40.30/37.76/36.66  42.81/38.76/36.34  18.59/15.43/14.07
WikiLSHTC        55.57/33.84/24.07  19.82/11.43/ 8.39  49.35/32.69/24.03  27.43/16.38/12.01
Amazon           35.05/31.25/28.56   8.13/ 6.83/ 6.03  33.36/29.30/26.12  28.65/24.88/22.37
Ads-1m           21.84/14.30/11.01  -                  23.11/13.86/10.12  17.08/11.38/ 8.83

(b) Small-scale data sets (P@1 / P@3 / P@5):

Data set   SLEEC              LEML               FastXML            WSABIE             OneVsAll
BibTex     65.57/40.02/29.30  62.53/38.40/28.21  63.73/39.00/28.54  54.77/32.38/23.98  61.83/36.44/26.46
Delicious  68.42/61.83/56.80  65.66/60.54/56.08  69.44/63.62/59.10  64.12/58.13/53.64  65.01/58.90/53.26
MediaMill  87.09/72.44/58.45  84.00/67.19/52.80  84.24/67.39/53.14  81.29/64.74/49.82  83.57/65.50/48.57
EurLEX     80.17/65.39/53.75  61.28/48.66/39.91  68.69/57.73/48.00  70.87/56.62/46.20  74.96/62.92/53.42
time on WikiLSHTC was comparable to that of tree based FastXML. FastXML trains 50 trees in 7 hours on a single core to achieve a P@1 of 49.37%, whereas SLEEC could achieve 49.98% by training 2 learners in 8 hours. Similarly, SLEEC's training time on Ads1M was 6 hours per learner on a single core.
SLEEC's predictions could also be up to 300 times faster than LEML's. For instance, on WikiLSHTC, SLEEC made predictions in 8 milliseconds per test point as compared to LEML's 279. SLEEC therefore brings the prediction time of embedding methods much closer to that of tree based methods (FastXML took 0.5 milliseconds per test point on WikiLSHTC) and within the acceptable limit of most real world applications.
Effect of clustering and multiple learners: As mentioned in the introduction, other embedding methods could also be extended by clustering the data and then learning a local embedding in each cluster. Ensembles could also be learnt from multiple such clusterings. We extend LEML in such a fashion, and refer to it as LocalLEML, by using exactly the same 300 clusters per learner in the ensemble as used in SLEEC for a fair comparison. As can be seen in Figure 2, SLEEC significantly outperforms LocalLEML, with a single SLEEC learner being much more accurate than an ensemble of even 10 LocalLEML learners. Figure 2 also demonstrates that SLEEC's ensemble can be much more accurate at prediction as compared to the tree based FastXML ensemble (the same plot is also presented in the appendix depicting the variation in accuracy with model size in RAM rather than the number of learners in the ensemble). The figure also demonstrates that very few SLEEC learners need to be trained before accuracy starts saturating. Finally, Table 4 shows that the variance in SLEEC's prediction accuracy (w.r.t. different cluster initializations) is very small, indicating that the method is stable even when clustering in more than a million dimensions.
Results on small data sets: Table 3, in the appendix, compares the performance of SLEEC to several popular methods including embeddings, trees, kNN and 1-vs-All SVMs. Even though the tail label problem is not acute on these data sets, and SLEEC was restricted to a single learner, SLEEC's predictions could be significantly more accurate than all the other methods (except on Delicious, where SLEEC was ranked second). For instance, SLEEC could outperform the closest competitor on EurLex by 3% in terms of P@1. Particularly noteworthy is the observation that SLEEC outperformed WSABIE [13], which performs kNN classification on linear embeddings, by as much as 10% on multiple data sets. This demonstrates the superiority of SLEEC's local distance preserving embeddings over the traditional low-rank embeddings.
Acknowledgments
We are grateful to Abhishek Kadian for helping with the experiments. Himanshu Jain is supported by a Google India PhD Fellowship at IIT Delhi.
References
[1] R. Agrawal, A. Gupta, Y. Prabhu, and M. Varma. Multi-label learning with millions of labels: Recommending advertiser bid phrases for web pages. In WWW, pages 13–24, 2013.
[2] J. Weston, A. Makadia, and H. Yee. Label partitioning for sublinear ranking. In ICML, 2013.
[3] D. Hsu, S. Kakade, J. Langford, and T. Zhang. Multi-label prediction via compressed sensing. In NIPS, 2009.
[4] M. Cissé, N. Usunier, T. Artières, and P. Gallinari. Robust bloom filters for large multilabel classification tasks. In NIPS, pages 1851–1859, 2013.
[5] F. Tai and H.-T. Lin. Multi-label classification with principal label space transformation. In Workshop Proceedings of Learning from Multi-label Data, 2010.
[6] K. Balasubramanian and G. Lebanon. The landmark selection method for multiple output prediction. In ICML, 2012.
[7] W. Bi and J. T.-Y. Kwok. Efficient multi-label classification with many labels. In ICML, 2013.
[8] Y. Zhang and J. G. Schneider. Multi-label output codes using canonical correlation analysis. In AISTATS, pages 873–882, 2011.
[9] H.-F. Yu, P. Jain, P. Kar, and I. S. Dhillon. Large-scale multi-label learning with missing labels. In ICML, 2014.
[10] Y.-N. Chen and H.-T. Lin. Feature-aware label space dimension reduction for multi-label classification. In NIPS, pages 1538–1546, 2012.
[11] C.-S. Feng and H.-T. Lin. Multi-label classification with error-correcting codes. JMLR, 20, 2011.
[12] S. Ji, L. Tang, S. Yu, and J. Ye. Extracting shared subspace for multi-label classification. In KDD, 2008.
[13] J. Weston, S. Bengio, and N. Usunier. WSABIE: Scaling up to large vocabulary image annotation. In IJCAI, 2011.
[14] Z. Lin, G. Ding, M. Hu, and J. Wang. Multi-label classification via feature-aware implicit label space encoding. In ICML, pages 325–333, 2014.
[15] Y. Prabhu and M. Varma. FastXML: a fast, accurate and stable tree-classifier for extreme multi-label learning. In KDD, pages 263–272, 2014.
[16] Wikipedia dataset for the 4th large scale hierarchical text classification challenge, 2014.
[17] A. Ng and M. Jordan. On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. In NIPS, 2002.
[18] K. Q. Weinberger and L. K. Saul. An introduction to nonlinear dimensionality reduction by maximum variance unfolding. In AAAI, pages 1683–1686, 2006.
[19] B. Shaw and T. Jebara. Minimum volume embedding. In AISTATS, pages 460–467, 2007.
[20] P. Jain, R. Meka, and I. S. Dhillon. Guaranteed rank minimization via singular value projection. In NIPS, pages 937–945, 2010.
[21] P. Sprechmann, R. Litman, T. B. Yakar, A. Bronstein, and G. Sapiro. Efficient supervised sparse analysis and synthesis operators. In NIPS, 2013.
[22] P. Kar, K. B. Sriperumbudur, P. Jain, and H. Karnick. On the generalization ability of online learning algorithms for pairwise loss functions. In ICML, 2013.
[23] J. Leskovec and A. Krevl. SNAP Datasets: Stanford large network dataset collection, 2014.
[24] R. Wetzker, C. Zimmermann, and C. Bauckhage. Analyzing social bookmarking systems: A del.icio.us cookbook. In Mining Social Data (MSoDa) Workshop Proceedings, ECAI, pages 26–30, July 2008.
[25] A. Zubiaga. Enhancing navigation on Wikipedia with social tags, 2009.
[26] I. Katakis, G. Tsoumakas, and I. Vlahavas. Multilabel text classification for automated tag suggestion. In Proceedings of the ECML/PKDD 2008 Discovery Challenge, 2008.
[27] C. Snoek, M. Worring, J. van Gemert, J.-M. Geusebroek, and A. Smeulders. The challenge problem for automated detection of 101 semantic concepts in multimedia. In ACM Multimedia, 2006.
[28] G. Tsoumakas, I. Katakis, and I. Vlahavas. Effective and efficient multilabel classification in domains with large number of labels. In ECML/PKDD, 2008.
[29] E. L. Mencía and J. Fürnkranz. Efficient pairwise multilabel classification for large-scale problems in the legal domain. In ECML/PKDD, 2008.
[30] Y.-N. Chen and H.-T. Lin. Feature-aware label space dimension reduction for multi-label classification. In NIPS, pages 1538–1546, 2012.
[31] B. Hariharan, S. V. N. Vishwanathan, and M. Varma. Efficient max-margin multi-label classification with applications to zero-shot learning. ML, 2012.
Q-Learning with Hidden-Unit Restarting
Charles W. Anderson
Department of Computer Science
Colorado State University
Fort Collins, CO 80523
Abstract
Platt's resource-allocation network (RAN) (Platt, 1991a, 1991b)
is modified for a reinforcement-learning paradigm and to "restart"
existing hidden units rather than adding new units. After restarting, units continue to learn via back-propagation. The resulting
restart algorithm is tested in a Q-learning network that learns to
solve an inverted pendulum problem. Solutions are found faster on
average with the restart algorithm than without it.
1
Introduction
The goal of supervised learning is the discovery of a compact representation that
generalizes well. Such representations are typically found by incremental, gradient-based search, such as error back-propagation. However, in the early stages of learning a control task, we are more concerned with fast learning than a compact representation. This implies a local representation with the extreme being the memorization of each experience. An initially local representation is also advantageous
when the learning component is operating in parallel with a conventional, fixed
controller. A learning experience should not generalize widely; the conventional
controller should be preferred for inputs that have not yet been experienced.
Platt's resource-allocation network (RAN) (Platt, 1991a, 1991b) combines gradient
search and memorization. RAN uses locally tuned (gaussian) units in the hidden
layer. The weight vector of a gaussian unit is equal to the input vector for which the
unit produces its maximal response. A new unit is added when the network's error
magnitude is large and the new unit's radial domain would not significantly overlap
domains of existing units. Platt demonstrated RAN on the supervised learning task
of predicting values in the Mackey-Glass time series.
We have integrated Platt's ideas with the reinforcement-learning algorithm called
Q-learning (Watkins, 1989). One major modification is that the network has a
fixed number of hidden units, all in a single-layer, all of which are trained on every
step. Rather than adding units, the least useful hidden unit is selected and its
weights are set to new values, then continue the gradient-based search. Thus, the
unit's search is restarted. The temporal-difference errors control restart events in a
fashion similar to the way supervised errors control RAN's addition of new units.
The motivation for starting with all units present is that in a parallel implementation, the computation time for a layer of one unit is roughly the same as that for
a layer with all of the units. All units are trained from the start. Any that fail to
learn anything useful are re-allocated when needed.
Here the Q-learning algorithm with restarts is applied to the problem of learning
to balance a simulated inverted pendulum. In the following sections, the inverted
pendulum problem and Watkins's Q-learning algorithm are described. Then the
details of the restart algorithm are given and results of applying the algorithm to
the inverted pendulum problem are summarized.
2
Inverted Pendulum
The inverted pendulum is a classic example of an inherently unstable system. The
problem can be used to study the difficult credit assignment problem that arises
when performance feedback is provided only by a failure signal. This problem has
often used to test new approaches to learning control (from early work by Widrow
and Smith, 1964, to recent studies such as Jordan and Jacobs, 1990, and Whitley,
Dominic, Das, and Anderson, 1993). It involves a pendulum hinged to the top
of a wheeled cart that travels along a track of limited length. The pendulum is
constrained to move within the vertical plane. The state is specified by the position
and velocity of the cart and the angle between the pendulum and vertical and the
angular velocity of the pendulum.
The only information regarding the goal of the task is provided by the failure signal,
or reinforcement, $r_t$, which signals either the pendulum falling past ±12° or the cart
hitting the bounds of the track at ±1 m. The state at time $t$ of the pendulum is
presented to the network as a vector, Xt, of the four state variables scaled to be
between 0 and 1.
For further details of this problem and other reinforcement learning approaches to
this problem, see Barto, Sutton, and Anderson (1983) and Anderson (1987).
3
Q-Learning
The objective of many control problems is to optimize a performance measure over
time. For the inverted pendulum problem, we define a reinforcement signal to be -1
when the pendulum angle or the cart position exceed their bounds, and 0 otherwise.
The objective is to maximize the sum of this reinforcement signal over time.
If we had complete knowledge of state transition probabilities we could apply dynamic programming to find the sequence of pushes that maximize the sum of reinforcements. Reinforcement learning algorithms have been devised to learn control
strategies when such knowledge is not available. In fact, Watkins has shown that one
form of his Q-learning algorithm converges to the dynamic programming solution
(Watkins, 1989; Watkins and Dayan, 1992).
The essence of Q-learning is the learning and use of a Q function, Q(x, a), that is
a prediction of a weighted sum of future reinforcement given that action a is taken
when the controlled system is in a state represented by x. This is analogous to the
value function in dynamic programming. Specifically, the objective of Q-learning is
to form the following approximation:
$$Q(x_t, a_t) \approx \sum_{k=0}^{\infty} \gamma^k r_{t+k+1},$$
where $0 < \gamma < 1$ is a discount rate and $r_t$ is the reinforcement received at time $t$.
Watkins (1989) presents a number of algorithms for adjusting the parameters of Q.
Here we focus on using error back-propagation to train a neural network to learn the Q function. For Q-learning, the following temporal-difference error (Sutton, 1988)
$$e_t = r_{t+1} + \gamma \max_{a_{t+1}} \left[ Q(x_{t+1}, a_{t+1}) \right] - Q(x_t, a_t)$$
is derived by using $\max_{a_{t+1}} [Q(x_{t+1}, a_{t+1})]$ as an approximation to $\sum_{k=0}^{\infty} \gamma^k r_{t+k+2}$. See (Barto, Bradtke, and Singh, 1991) for further discussion of the relationships between reinforcement learning and dynamic programming.
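As a concrete illustration, here is a minimal sketch of this error signal; the callable q, the action set, and all names are assumptions for illustration, not code from the paper:

```python
def td_error(q, x_t, a_t, r_t1, x_t1, actions=(-10.0, 10.0), gamma=0.9):
    """Temporal-difference error e_t = r_{t+1} + gamma * max_a Q(x_{t+1}, a) - Q(x_t, a_t).

    q is any callable estimating future discounted reinforcement; the
    +/-10 push actions and gamma = 0.9 match the pendulum task below.
    """
    best_next = max(q(x_t1, a) for a in actions)
    return r_t1 + gamma * best_next - q(x_t, a_t)
```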
4
Q-Learning Network
For the inverted pendulum experiments reported here, a neural network with a
single hidden layer was used to learn the Q( x, a) function. As shown in Figure 1,
the network has four inputs for the four state variables of the inverted pendulum,
and two outputs corresponding to the two possible actions for this problem, similar
to Lin (1992). In addition to the weights shown, w and v, the two units in the
output layer each have a single weight with a constant input of 0.5.
The activation function of the hidden units is the approximate gaussian function used by Platt. Let $d_j$ be the squared distance between the current input vector, $x$, and the weights in hidden unit $j$:
$$d_j = \sum_{i=1}^{4} (x_i - w_{j,i})^2.$$
Here $x_i$ is the $i$th component of $x$ at the current time. The output, $y_j$, of hidden unit $j$ is an approximately gaussian function of $d_j$ if $d_j < \rho$, and zero otherwise,
where $\rho$ controls the radius of the region in which the unit's output is nonzero. Unlike Platt, $\rho$ is constant and equal for all units.

Figure 1: Q-Learning Network
The output units calculate weighted sums of the hidden unit outputs and the
constant input. The output values are the current estimates of $Q(x_t, -10)$ and $Q(x_t, 10)$, which are predictions of future reinforcement given the current observed
state of the inverted pendulum and assuming a particular action will be applied in
that state.
The action applied at each step is selected as the one corresponding to the larger of $Q(x_t, -10)$ and $Q(x_t, 10)$. To explore the effects of each action, the action with the lower Q value is applied with a probability that decreases with time:
$$a_t = \begin{cases} 10, & \text{with probability } p; \\ -10, & \text{with probability } 1 - p, \end{cases} \qquad p = \begin{cases} 1 - 0.5\lambda^t, & \text{if } Q(x_t, 10) > Q(x_t, -10); \\ 0.5\lambda^t, & \text{otherwise.} \end{cases}$$
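A small sketch of this decaying exploration rule, assuming $\lambda = 0.99999$ as reported below; the callable q and the seeded random generator are illustrative:

```python
import numpy as np

def select_action(q, x_t, t, lam=0.99999, rng=np.random.default_rng(0)):
    """Pick the push with the larger Q value with probability 1 - 0.5 * lam**t,
    otherwise the other push; exploration decays slowly with time t."""
    greedy, other = (10.0, -10.0) if q(x_t, 10.0) > q(x_t, -10.0) else (-10.0, 10.0)
    return greedy if rng.random() < 1.0 - 0.5 * lam ** t else other
```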
To update all weights, error back-propagation is applied at each step using the following temporal-difference error
$$e_t = \begin{cases} \gamma \max_{a_{t+1}} \left[ Q(x_{t+1}, a_{t+1}) \right] - Q(x_t, a_t), & \text{if failure does not occur on step } t + 1, \\ r_{t+1} - Q(x_t, a_t), & \text{if failure occurs on step } t + 1. \end{cases}$$
Note that $r_t = 0$ for all non-failure steps and drops out of the first expression.
Weights are updated by the following equations, assuming Unit $j$ is the output unit corresponding to the action taken, and all variables are for the current time $t$:
$$\Delta w_{k,i} = \beta_h \, e \, y_k \, v_{j,k} \, \frac{x_i - w_{k,i}}{\rho}, \qquad \Delta v_{j,k} = \beta \, e \, y_k.$$
In all experiments, $\rho = 2$, $\lambda = 0.99999$, and $\gamma = 0.9$. Values of $\beta$ and $\beta_h$ are discussed in Section 6.
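To make the network and its updates concrete, here is a minimal sketch; the exponential form of the approximate gaussian, the bias handling, and the update details are assumptions layered on the reconstructed equations above, not the paper's exact implementation:

```python
import numpy as np

class QNet:
    """Sketch of the Q-network: 4 inputs, 20 locally tuned hidden units,
    and 2 linear outputs (one per push)."""

    def __init__(self, n_hidden=20, rho=2.0, rng=np.random.default_rng(0)):
        self.rho = rho
        self.w = rng.uniform(0.0, 1.0, (n_hidden, 4))  # hidden-unit centers
        self.v = np.zeros((2, n_hidden))               # output weights
        self.b = np.zeros(2)                           # weights on the constant 0.5 input

    def hidden(self, x):
        d = ((x - self.w) ** 2).sum(axis=1)   # squared distances d_j
        y = np.exp(-d / self.rho)             # approximate gaussian response (assumed form)
        y[d >= self.rho] = 0.0                # zero outside the radius rho
        return y

    def q(self, x):
        return self.v @ self.hidden(x) + 0.5 * self.b

    def update(self, x, action_index, e, beta=0.05, beta_h=1.0):
        y = self.hidden(x)
        j = action_index                      # output unit for the action taken
        self.v[j] += beta * e * y             # Delta v_{j,k} = beta * e * y_k
        self.b[j] += beta * e * 0.5
        for k in np.flatnonzero(y):           # active hidden units pulled toward x
            self.w[k] += beta_h * e * y[k] * self.v[j, k] * (x - self.w[k]) / self.rho
```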
5
Restart Algorithm
After weights are modified by back-propagation, conditions for a restart are checked.
If conditions are met, a unit is restarted, and processing continues with the next
time step. Conditions and primary steps of the restart algorithm appear below as
the numbered equations.
5.1
When to Restart
Several conditions must be met before a restart is performed. First, the magnitude of the error, $e_t$, must be larger than usual. To detect this, exponentially-weighted averages of the mean, $\mu$, and variance, $\sigma^2$, of $e_t$ are maintained and used to calculate a normalized error, $e'_t$:
$$e'_t = e_t - \frac{\mu_t}{1 - \kappa^t}, \qquad \mu_{t+1} = \kappa \mu_t + (1 - \kappa) e_t, \qquad \sigma^2_{t+1} = \kappa \sigma^2_t + (1 - \kappa) e_t^2.$$
For our experiments, $\kappa = 0.99$.
Now we can state the first restart condition. A restart is considered on steps for which the magnitude of the error is greater than 0.01 and greater than a constant factor of the error's standard deviation, i.e., whenever
$$|e'_t| > 0.01 \quad \text{and} \quad |e'_t| > \alpha \sqrt{\frac{\sigma_t^2}{1 - \kappa^t}}. \tag{1}$$
Of a small number of tested values, $\alpha = 0.2$ resulted in the best performance.
Before choosing a unit to restart for this step, we determine whether or not the current input vector is already "covered" by a unit. Assuming $y_j$ is the output of Unit $j$ for the current input vector, the restart procedure is continued only if
$$y_j < 0.5, \quad \text{for } j = 1, \ldots, 20. \tag{2}$$

5.2
Which Unit to Restart
As stated by Mozer and Smolensky (1989), ideally we would choose the least useful
unit as the one that results in the largest error when removed from the network.
For the Q-network, this requires the removal of one unit at a time, making multiple
attempts to balance the pendulum, and determining which unit when removed
results in the shortest balancing times. Rather than following this computationally
expensive procedure, we simply took the sum of the magnitudes of a hidden unit's
output weights as a measure of it's utility. This is one of several utility measures
suggested by Mozer and Smolensky and others (e.g., Klopf and Gose, 1969).
After a unit is restarted, it may require further learning experience to acquire a
useful function in the network. The amount of learning experience is defined as a
sum of magnitudes of the error et. The sum of error magnitudes since Unit j was
restarted is given by $c_j$. Once this sum surpasses a maximum, $c_{\max}$, the unit is again eligible for restarting. Thus, Unit $j$ is restarted when
$$|v_{1,j}| + |v_{2,j}| = \min_{k \in \{1, \ldots, 20\}} \left( |v_{1,k}| + |v_{2,k}| \right) \tag{3}$$
and
$$c_j > c_{\max}. \tag{4}$$
Without a detailed search, a value of $c_{\max} = 10$ was found to result in good performance.

5.3
New Weights for Restarted Unit
Say Unit $j$ is restarted. Its input weights are set equal to the current input vector, $x$, the one for which the output of the network was in error. One of the two output weights of Unit $j$ is also modified. The output weight through which Unit $j$ modifies the output of the unit corresponding to the action actually taken is set equal to the error, $e_t$. The other output weight is not modified:
$$w_{j,i} = x_i, \quad \text{for } i = 1, \ldots, 4, \tag{5}$$
$$v_{k,j} = e_t, \quad \text{where } k = \begin{cases} 1, & \text{if } a_t = -10; \\ 2, & \text{if } a_t = 10. \end{cases} \tag{6}$$
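Putting conditions (1)-(6) together, a sketch of one restart check might look as follows, reusing the QNet sketch above; the bookkeeping dictionary and the way conditions (3) and (4) interact reflect one reading of the text, not a verified reimplementation:

```python
import numpy as np

def maybe_restart(net, x, e, action_index, stats, alpha=0.2, kappa=0.99, c_max=10.0):
    """One restart check; `stats` holds the running mean/variance of e_t
    and the per-unit error mass c_j since each unit's last restart."""
    t = stats["t"]
    denom = 1.0 - kappa ** t if t > 0 else 1.0   # bias correction, guarded at t = 0
    e_norm = e - stats["mu"] / denom
    if abs(e_norm) > 0.01 and abs(e_norm) > alpha * np.sqrt(stats["var"] / denom):
        y = net.hidden(x)
        if np.all(y < 0.5):                      # condition (2): input not covered
            utility = np.abs(net.v).sum(axis=0)  # |v_{1,j}| + |v_{2,j}| per unit
            eligible = stats["c"] > c_max        # condition (4)
            if eligible.any():
                cand = np.flatnonzero(eligible)
                j = cand[np.argmin(utility[cand])]  # condition (3): least useful
                net.w[j] = x                        # (5): memorize the input
                net.v[action_index, j] = e          # (6): output weight = error
                stats["c"][j] = 0.0
    # Update the running statistics for the next step.
    stats["mu"] = kappa * stats["mu"] + (1 - kappa) * e
    stats["var"] = kappa * stats["var"] + (1 - kappa) * e ** 2
    stats["c"] += abs(e)
    stats["t"] += 1
```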
6
Results
The pendulum is said to be balanced when 90,000 steps (1/2 hour of simulated
time) have elapsed without failure. After every failure, the pendulum is reset to the
center of the track with a zero angle (straight up) and zero velocities. Performance is
judged by the average number of failures before the pendulum is balanced. Averages
were taken over 30 runs. Each run consists of choosing initial values for the hidden
units' weights from a uniform distribution from 0 to 1, then training the net until
the pendulum is balanced for 90,000 steps or a maximum number of 50,000 failures
is reached.
To determine the effect of restarting, we compare the performance of the Q-learning algorithm with and without restarts. Back-propagation learning rates are given by $\beta$ for the output units and $\beta_h$ for the hidden units. $\beta$ and $\beta_h$ were optimized for the algorithm without restarts by testing a large number of values. The best values of those tried are $\beta = 0.05$ and $\beta_h = 1.0$. These values were used for both algorithms. A small number of values for the additional restart parameters were tested, so the restart algorithm is not optimized for this problem.
Figure 2 is a graph of the number of steps between failures versus the number of
failures. Each algorithm was initialized with the same hidden unit weights. Without
restarts the pendulum is balanced for this run after 6,879 failures. With restarts it
is balanced after 3,415 failures.
The performances of the algorithms were averaged over 30 runs giving the following
results. The restart algorithm balanced the pendulum in all 30 runs, within an
Figure 2: Learning curves of balancing time versus failures (steps between failures, plotted on a logarithmic scale and averaged over bins of 100 failures), with and without restarts.
average of 3,303 failures. The algorithm without restarts was unsuccessful within
50,000 failures for two of the 30 runs. Not counting the unsuccessful runs, this
algorithm balanced the pendulum within an average of 4,923 failures. Considering
the unsuccessful runs, this average is 7,928 failures.
In studying the timing of restarts, we observe that initially the number of restarts
is small, due to the high variance of et in the early stages of learning. During later
stages, we see that a single unit might be restarted many times (15 to 20) before it
becomes more useful (at least according to our measure) than some other unit.
7
Conclusion
This first test of an algorithm for restarting hidden units in a reinforcement-learning
paradigm led to a decrease in learning time for this task. However, much work
remains in studying the effects of each step of the restart procedure. Many alternatives exist, most significantly in the method for determining the utility of hidden
units. A significant extension of this algorithm would be to consider units with
variable-width domains, as in Platt's RAN algorithm.
Acknowledgements
The work was supported in part by the National Science Foundation through Grant
IRI-9212191 and by Colorado State University through Faculty Research Grant 138592.
References
C. W. Anderson. (1987). Strategy learning with multilayer connectionist representations. Technical Report TR87-509.3, GTE Laboratories, Waltham, MA,
1987. Corrected version of article that was published in Proceedings of the
Fourth International Workshop on Machine Learning, pp. 103-114, June, 1987.
A. G. Barto, S. J. Bradtke, and S. P. Singh. (1991). Real-time learning and
control using asynchronous dynamic programming. Technical Report 91-57,
Department of Computer Science, University of Massachusetts, Amherst, MA,
Aug.
A. G. Barto, R. S. Sutton, and C. W. Anderson. (1983). Neuronlike elements that
can solve difficult learning control problems. IEEE Transactions on Systems,
Man, and Cybernetics, 13:835-846. Reprinted in J. A. Anderson and E. Rosenfeld, Neurocomputing: Foundations of Research, MIT Press, Cambridge, MA,
1988.
M. I. Jordan and R. A. Jacobs. (1990). Learning to control an unstable system with
forward modeling. In D. S. Touretzky, editor, Advances in Neural Information
Processing Systems, volume 2, pages 324-331. Morgan Kaufmann, San Mateo,
CA.
A. H. Klopf and E. Gose. (1969). An evolutionary pattern recognition network.
IEEE Transactions on Systems, Science, and Cybernetics, 15:247-250.
L.-J. Lin. (1992). Self-improving reactive agents based on reinforcement learning,
planning, and teaching. Machine Learning, 8(3/4):293-321.
M. C. Mozer and P. Smolensky. (1989). Skeletonization: A technique for trimming
the fat from a network via relevance assessment. In D. S. Touretzky, editor,
Advances in Neural Information Systems, volume 1, pages 107-115. Morgan
Kaufmann, San Mateo, CA, 1989.
J. C. Platt. (1991a). Learning by combining memorization and gradient descent.
In R. P. Lippmann, J. E. Moody, and D. S. Touretzky, editors, Advances in
Neural Information Processing Systems 3, pages 714-720. Morgan Kaufmann
Publishers, San Mateo, CA.
J. C. Platt. (1991b). A resource-allocating network for function interpolation. Neural Computation, 3:213-225.
R. S. Sutton. (1988). Learning to predict by the method of temporal differences.
Machine Learning, 3:9-44.
C. J. C. H. Watkins. (1989). Learning with Delayed Rewards. PhD thesis, Cambridge University Psychology Department.
C. J. C. H. Watkins and P. Dayan. (1992). Q-learning. Machine Learning, 8(3/4):279-292.
D. Whitley, S. Dominic, R. Das, and C. Anderson. (1993). Genetic reinforcement
learning for neurocontrol problems. Machine Learning, to appear.
B. Widrow and F. W. Smith. (1964). Pattern-recognizing control systems. In Proceedings of the 1963 Computer and Information Sciences (COINS) Symposium,
pages 288-317, Washington, DC. Spartan.
5,491 | 5,970 | Robust Spectral Inference for Joint Stochastic Matrix
Factorization
David Mimno
Dept. of Information Science
Cornell University
Ithaca, NY 14850
mimno@cornell.edu
Moontae Lee, David Bindel
Dept. of Computer Science
Cornell University
Ithaca, NY 14850
{moontae,bindel}@cs.cornell.edu
Abstract
Spectral inference provides fast algorithms and provable optimality for latent topic
analysis. But for real data these algorithms require additional ad-hoc heuristics,
and even then often produce unusable results. We explain this poor performance
by casting the problem of topic inference in the framework of Joint Stochastic
Matrix Factorization (JSMF) and showing that previous methods violate the theoretical conditions necessary for a good solution to exist. We then propose a novel
rectification method that learns high quality topics and their interactions even on
small, noisy data. This method achieves results comparable to probabilistic techniques in several domains while maintaining scalability and provable optimality.
1
Introduction
Summarizing large data sets using pairwise co-occurrence frequencies is a powerful tool for data
mining. Objects can often be better described by their relationships than their inherent characteristics. Communities can be discovered from friendships [1], song genres can be identified
from co-occurrence in playlists [2], and neural word embeddings are factorizations of pairwise co-occurrence information [3, 4]. Recent Anchor Word algorithms [5, 6] perform spectral inference on
co-occurrence statistics for inferring topic models [7, 8]. Co-occurrence statistics can be calculated
using a single parallel pass through a training corpus. While these algorithms are fast, deterministic,
and provably guaranteed, they are sensitive to observation noise and small samples, often producing
effectively useless results on real documents that present no problems for probabilistic algorithms.
We cast this general problem of learning overlapping latent clusters as Joint-Stochastic Matrix Factorization (JSMF), a subset of non-negative matrix factorization that contains topic modeling as a special case. We explore the conditions necessary for inference from co-occurrence statistics and show that the Anchor Words algorithms necessarily violate such conditions. Then we propose a rectified algorithm that matches the performance of probabilistic inference, even on small and noisy datasets, without losing efficiency and provable guarantees. Validating on both real and synthetic data, we demonstrate that our rectification not only produces better clusters, but also, unlike previous work, learns meaningful cluster interactions.

Figure 1: 2D visualizations show the low-quality convex hull found by Anchor Words [6] (left) and a better convex hull (middle) found by discovering anchor words on a rectified space (right).
Let the matrix $C$ represent the co-occurrence of pairs drawn from $N$ objects: $C_{ij}$ is the joint probability $p(X_1 = i, X_2 = j)$ for a pair of objects $i$ and $j$. Our goal is to discover $K$ latent clusters by approximately decomposing $C \approx BAB^T$. $B$ is the object-cluster matrix, in which each column corresponds to a cluster and $B_{ik} = p(X = i \mid Z = k)$ is the probability of drawing an object $i$ conditioned on the object belonging to the cluster $k$; and $A$ is the cluster-cluster matrix, in which $A_{kl} = p(Z_1 = k, Z_2 = l)$ represents the joint probability of pairs of clusters. We call the matrices $C$ and $A$ joint-stochastic (i.e., $C \in \mathcal{JS}_N$, $A \in \mathcal{JS}_K$) due to their correspondence to joint distributions; $B$ is column-stochastic. Example applications are shown in Table 1.
Table 1: JSMF applications, with anchor-word equivalents.
Domain        Object    Cluster        Basis
Document      Word      Topic          Anchor Word
Image         Pixel     Segment        Pure Pixel
Network       User      Community      Representative
Legislature   Member    Party/Group    Partisan
Playlist      Song      Genre          Signature Song

Anchor Word algorithms [5, 6] solve JSMF problems using a separability assumption: each topic contains at least one "anchor" word that has non-negligible probability exclusively in that topic. The algorithm uses the co-occurrence
patterns of the anchor words as a summary basis for the co-occurrence patterns of all other words.
The initial algorithm [5] is theoretically sound but unable to produce a column-stochastic word-topic
matrix B due to unstable matrix inversions. A subsequent algorithm [6] fixes negative entries in B,
but still produces large negative entries in the estimated topic-topic matrix A. As shown in Figure 3,
the proposed algorithm infers valid topic-topic interactions.
2
Requirements for Factorization
In this section we review the probabilistic and statistical structures of JSMF and then define geometric structures of co-occurrence matrices required for successful factorization. $C \in \mathbb{R}^{N \times N}$ is a joint-stochastic matrix constructed from $M$ training examples, each of which contains some subset of $N$ objects. We wish to find $K \ll N$ latent clusters by factorizing $C$ into a column-stochastic matrix $B \in \mathbb{R}^{N \times K}$ and a joint-stochastic matrix $A \in \mathbb{R}^{K \times K}$, satisfying $C \approx BAB^T$.
Probabilistic structure. Figure 2 shows the event space of our model. The distribution $A$ over pairs of clusters is generated first from a stochastic process with a hyperparameter $\alpha$. If the $m$-th training example contains a total of $n_m$ objects, our model views the example as consisting of all possible $n_m(n_m - 1)$ pairs of objects.1 For each of these pairs, cluster assignments are sampled from the selected distribution ($(z_1, z_2) \sim A$). Then an actual object pair is drawn with respect to the corresponding cluster assignments ($x_1 \sim B_{z_1}$, $x_2 \sim B_{z_2}$). Note that this process does not explain how each training example is generated from a model, but shows how our model understands the objects in the training examples.

Figure 2: The JSMF event space differs from LDA's. JSMF deals only with pairwise co-occurrence events and does not generate observations/documents.
Following [5, 6], our model views $B$ as a set of parameters rather than random variables.2 The primary learning task is to estimate $B$; we then estimate $A$ to recover the hyperparameter $\alpha$. Due to the conditional independence $X_1 \perp X_2 \mid (Z_1 \text{ or } Z_2)$, the factorization $C \approx BAB^T$ is equivalent to
$$p(X_1, X_2 \mid A; B) = \sum_{z_1} \sum_{z_2} p(X_1 \mid Z_1; B)\, p(Z_1, Z_2 \mid A)\, p(X_2 \mid Z_2; B).$$
Under the separability assumption, each cluster $k$ has a basis object $s_k$ such that $p(X = s_k \mid Z = k) > 0$ and $p(X = s_k \mid Z \neq k) = 0$. In matrix terms, we assume the submatrix of $B$ comprised of
1
Due to the bag-of-words assumption, every object can pair with any other object in that example, except
itself. One implication of our work is better understanding the self-co-occurrences, the diagonal entries in the
co-occurrence matrix.
2
In LDA, each column of $B$ is generated from a known distribution $B_k \sim \mathrm{Dir}(\alpha)$.
the rows with indices $S = \{s_1, \ldots, s_K\}$ is diagonal. As these rows form a non-negative basis for the row space of $B$, the assumption implies $\mathrm{rank}^+(B) = K = \mathrm{rank}(B)$.3 Providing identifiability
to the factorization, this assumption becomes crucial for inference of both B and A. Note that JSMF
factorization is unique up to column permutation, meaning that no specific ordering exists among
the discovered clusters, equivalent to probabilistic topic models (see the Appendix).
Statistical structure. Let $f(\alpha)$ be a (known) distribution of distributions from which a cluster distribution is sampled for each training example. Saying $W_m \sim f(\alpha)$, we have $M$ i.i.d. samples $\{W_1, \ldots, W_M\}$ which are not directly observable. Defining the posterior cluster-cluster matrix $A^*_M = \frac{1}{M} \sum_{m=1}^{M} W_m W_m^T$ and the expectation $A^* = E[W_m W_m^T]$, Lemma 2.2 in [5] showed that4
$$A^*_M \longrightarrow A^* \quad \text{as } M \longrightarrow \infty. \tag{1}$$
Denote the posterior co-occurrence for the $m$-th training example by $C^*_m$ and all examples by $C^*$. Then $C^*_m = B W_m W_m^T B^T$, and $C^* = \frac{1}{M} \sum_{m=1}^{M} C^*_m$. Thus
$$C^* = B \left( \frac{1}{M} \sum_{m=1}^{M} W_m W_m^T \right) B^T = B A^*_M B^T. \tag{2}$$
Denote the noisy observation for the $m$-th training example by $C_m$, and all examples by $C$. Let $W = [W_1 | \ldots | W_M]$ be a matrix of topics. We will construct $C_m$ so that $E[C \mid W]$ is an unbiased estimator of $C^*$. Thus as $M \to \infty$,
$$C \longrightarrow E[C] = C^* = B A^*_M B^T \longrightarrow B A^* B^T. \tag{3}$$
Geometric structure. Though the separability assumption allows us to identify $B$ even from the noisy observation $C$, we need to thoroughly investigate the structure of cluster interactions. This is because it will eventually be related to how much useful information the co-occurrence between corresponding anchor bases contains, enabling us to best use our training data. Say $\mathcal{DNN}_n$ is the set of $n \times n$ doubly non-negative matrices: entrywise non-negative and positive semidefinite (PSD).

Claim $A^*_M, A^* \in \mathcal{DNN}_K$ and $C^* \in \mathcal{DNN}_N$.
Proof Take any vector $y \in \mathbb{R}^K$. As $A^*_M$ is defined as a sum of outer-products,
$$y^T A^*_M y = \frac{1}{M} \sum_{m=1}^{M} y^T W_m W_m^T y = \frac{1}{M} \sum_{m=1}^{M} (W_m^T y)^T (W_m^T y) = \frac{1}{M} \sum_{m=1}^{M} (\text{non-negative}) \geq 0. \tag{4}$$
Thus $A^*_M \in \mathcal{PSD}_K$. In addition, $(A^*_M)_{kl} = p(Z_1 = k, Z_2 = l) \geq 0$ for all $k, l$. Proving $A^* \in \mathcal{DNN}_K$ is analogous by the linearity of expectation. Relying on double non-negativity of $A^*_M$, Equation (3) implies not only the low-rank structure of $C^*$, but also double non-negativity of $C^*$ by a similar proof (see the Appendix).
The Anchor Word algorithms in [5, 6] consider neither double non-negativity of cluster interactions
nor its implication on co-occurrence statistics. Indeed, the empirical co-occurrence matrices collected from limited data are generally indefinite and full-rank, whereas the posterior co-occurrences
must be positive semidefinite and low-rank. Our new approach will efficiently enforce double nonnegativity and low-rankness of the co-occurrence matrix C based on the geometric property of its
posterior behavior. We will later clarify how this process substantially improves the quality of the
clusters and their interactions by eliminating noises and restoring missing information.
3
Rectified Anchor Words Algorithm
In this section, we describe how to estimate the co-occurrence matrix C from the training data, and
how to rectify C so that it is low-rank and doubly non-negative. We then decompose the rectified
$C'$ in a way that preserves the doubly non-negative structure in the cluster interaction matrix.
3
$\mathrm{rank}^+(B)$ means the non-negative rank of the matrix $B$, whereas $\mathrm{rank}(B)$ means the usual rank.
4
This convergence is not trivial, while $\frac{1}{M} \sum_{m=1}^{M} W_m \to E[W_m]$ as $M \to \infty$ by the Central Limit Theorem.
Generating co-occurrence C. Let $H_m$ be the vector of object counts for the $m$-th training example, and let $p_m = B W_m$ where $W_m$ is the document's latent topic distribution. Then $H_m$ is assumed to be a sample from a multinomial distribution $H_m \sim \mathrm{Multi}(n_m, p_m)$ where $n_m = \sum_{i=1}^{N} H_m^{(i)}$, and recall $E[H_m] = n_m p_m = n_m B W_m$ and $\mathrm{Cov}(H_m) = n_m \left( \mathrm{diag}(p_m) - p_m p_m^T \right)$. As in [6], we generate the co-occurrence for the $m$-th example by
$$C_m = \frac{H_m H_m^T - \mathrm{diag}(H_m)}{n_m (n_m - 1)}. \tag{5}$$
The diagonal penalty in Eq. 5 cancels out the diagonal matrix term in the variance-covariance matrix, making the estimator unbiased. Putting $d_m = n_m(n_m - 1)$, that is,
$$E[C_m \mid W_m] = \frac{1}{d_m} \left( E[H_m H_m^T] - \mathrm{diag}(E[H_m]) \right) = \frac{1}{d_m} \left( E[H_m] E[H_m]^T + \mathrm{Cov}(H_m) - \mathrm{diag}(E[H_m]) \right) = B (W_m W_m^T) B^T = C^*_m.$$
Thus $E[C \mid W] = C^*$ by the linearity of expectation.
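A minimal sketch of this construction, assuming each training example arrives as a dense count vector; at scale one would exploit sparsity rather than form dense outer products:

```python
import numpy as np

def cooccurrence(count_vectors, n_vocab):
    """Unbiased joint-stochastic co-occurrence matrix, following Eq. (5):
    C_m = (H_m H_m^T - diag(H_m)) / (n_m (n_m - 1)), averaged over examples."""
    C = np.zeros((n_vocab, n_vocab))
    M = 0
    for h in count_vectors:
        n = h.sum()
        if n < 2:
            continue  # an example needs at least one pair of objects
        C += (np.outer(h, h) - np.diag(h)) / (n * (n - 1))
        M += 1
    return C / M
```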
Rectifying co-occurrence C. While $C$ is an unbiased estimator for $C^*$ in our model, in reality the two matrices often differ due to a mismatch between our model assumptions and the data5 or due to error in estimation from limited data. The computed $C$ is generally full-rank with many negative eigenvalues, causing a large approximation error. As the posterior co-occurrence $C^*$ must be low-rank, doubly non-negative, and joint-stochastic, we propose two rectification methods: Diagonal Completion (DC) and Alternating Projection (AP). DC modifies only diagonal entries so that $C$ becomes low-rank, non-negative, and joint-stochastic, while AP modifies every entry and enforces the same properties as well as positive semi-definiteness. As our empirical results strongly favor alternating projection, we defer the details of diagonal completion to the Appendix.
Based on the desired property of the posterior co-occurrence $C^*$, we seek to project our estimator $C$ onto the set of joint-stochastic, doubly non-negative, low-rank matrices. Alternating projection methods like Dykstra's algorithm [9] allow us to project onto an intersection of finitely many convex sets using projections onto each individual set in turn. In our setting, we consider the intersection of three sets of symmetric $N \times N$ matrices: the elementwise non-negative matrices $\mathcal{NN}_N$, the normalized matrices $\mathcal{NOR}_N$ whose entry sum is equal to 1, and the positive semi-definite matrices with rank $K$, $\mathcal{PSD}_{NK}$. We project onto these three sets as follows:
$$\Pi_{\mathcal{PSD}_{NK}}(C) = U \Lambda^+_K U^T, \qquad \Pi_{\mathcal{NOR}_N}(C) = C + \frac{1 - \sum_{i,j} C_{ij}}{N^2} \mathbf{1}\mathbf{1}^T, \qquad \Pi_{\mathcal{NN}_N}(C) = \max\{C, 0\},$$
where $C = U \Lambda U^T$ is an eigendecomposition and $\Lambda^+_K$ is the matrix $\Lambda$ modified so that all negative eigenvalues and any but the $K$ largest positive eigenvalues are set to zero. Truncated eigendecompositions can be computed efficiently, and the other projections are likewise efficient. While $\mathcal{NN}_N$ and $\mathcal{NOR}_N$ are convex, $\mathcal{PSD}_{NK}$ is not. However, [10] show that alternating projection with a non-convex set still works under certain conditions, guaranteeing local convergence. Thus iterating the three projections in turn until convergence rectifies $C$ into the desired space. We will show how to satisfy such conditions and examine the convergence behavior in Section 5.
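The three projections translate directly into code; the following sketch uses a full eigendecomposition for clarity, whereas a truncated solver would be used at scale:

```python
import numpy as np

def rectify_ap(C, K, n_iter=150):
    """Alternating projection onto PSD_NK, NOR_N, and NN_N in turn."""
    N = C.shape[0]
    for _ in range(n_iter):
        # Project onto PSD matrices of rank at most K (truncated eigendecomposition).
        vals, vecs = np.linalg.eigh((C + C.T) / 2.0)
        top = np.argsort(vals)[-K:]
        vals = np.clip(vals[top], 0.0, None)
        C = (vecs[:, top] * vals) @ vecs[:, top].T
        # Project onto matrices whose entries sum to one.
        C = C + (1.0 - C.sum()) / (N * N)
        # Project onto entrywise non-negative matrices.
        C = np.maximum(C, 0.0)
    return C
```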
Selecting basis S. The first step of the factorization is to select the subset S of objects that satisfy
the separability assumption. We want the K best rows of the row-normalized co-occurrence matrix
C so that all other rows lie nearly in the convex hull of the selected rows. [6] use the GramSchmidt process to select anchors, which computes pivoted QR decomposition, but did not utilize the
sparsity of C. To scale beyond small vocabularies, they use random projections that approximately
preserve `2 distances between rows of C. For all experiments we use a new pivoted QR algorithm
(see the Appendix) that exploits sparsity instead of using random projections, and thus preserves
deterministic inference.6
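A sketch of the greedy pivoted-QR selection on a dense row-normalized matrix; the sparse variant used in the experiments follows the same deflation idea:

```python
import numpy as np

def select_anchors(C_bar, K):
    """Greedy pivoted-QR anchor selection: repeatedly pick the row with the
    largest residual norm, then deflate its direction from all rows."""
    Q = np.array(C_bar, dtype=float)
    anchors = []
    for _ in range(K):
        norms = (Q * Q).sum(axis=1)
        s = int(np.argmax(norms))
        anchors.append(s)
        q = Q[s] / np.sqrt(norms[s])   # unit vector of the chosen row
        Q -= np.outer(Q @ q, q)        # Gram-Schmidt deflation
    return anchors
```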
Recovering object-cluster B. After finding the set of basis objects $S$, we can infer each entry of $B$ by Bayes' rule as in [6]. Let $\{p(Z_1 = k \mid X_1 = i)\}_{k=1}^{K}$ be the coefficients that reconstruct the $i$-th row of $C$ in terms of the basis rows corresponding to $S$. Since $B_{ik} = p(X_1 = i \mid Z_1 = k)$,
5
There is no reason to expect real data to be generated from topics, much less exactly K latent topics.
6
To effectively use random projections, it is necessary to either find proper dimensions based on multiple trials or perform low-dimensional random projection multiple times [25] and merge the resulting anchors.
we can use the corpus frequencies $p(X_1 = i) = \sum_j C_{ij}$ to estimate $B_{ik} \propto p(Z_1 = k \mid X_1 = i)\, p(X_1 = i)$. Thus the main task for this step is to solve simplex-constrained QPs to infer a set of such coefficients for each object. We use an exponentiated gradient algorithm to solve the problem, similar to [6]. Note that this step can be efficiently done in parallel for each object.
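For one object, the simplex-constrained QP can be solved with multiplicative updates; the step size and iteration count below are illustrative, not tuned values from the paper:

```python
import numpy as np

def simplex_coeffs(c_i, C_S, n_iter=500, eta=50.0):
    """Exponentiated gradient for min_p ||c_i - p @ C_S||^2 with p on the
    probability simplex; one such problem is solved per object (row).

    c_i: one row of the co-occurrence matrix, shape (N,).
    C_S: the K anchor rows, shape (K, N).
    """
    K = C_S.shape[0]
    p = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        grad = 2.0 * (C_S @ (p @ C_S - c_i))  # gradient of the squared loss
        grad -= grad.min()                    # constant shift; cancels after normalizing
        p *= np.exp(-eta * grad)
        p /= p.sum()                          # multiplicative update stays on the simplex
    return p
```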
Recovering cluster-cluster A. [6] recovered $A$ by minimizing $\|C - BAB^T\|_F$; but the inferred $A$ generally has many negative entries, failing to model the probabilistic interaction between topics. While we can further project $A$ onto the joint-stochastic matrices, this produces a large approximation error.

Figure 3: The algorithm of [6] (first panel) produces negative cluster co-occurrence probabilities. A probabilistic reconstruction alone (this paper & [5], second panel) removes negative entries but has no off-diagonals and does not sum to one. Trying after rectification (this paper, third panel) produces a valid joint-stochastic matrix.
We consider an alternate recovery method that again leverages the
separability assumption. Let $C_{SS}$ be the submatrix whose rows and columns correspond to the selected objects $S$, and let $D$ be the diagonal submatrix $B_{S}$ of rows of $B$ corresponding to $S$. Then
$$C_{SS} = D A D^T = D A D \implies A = D^{-1} C_{SS} D^{-1}. \tag{6}$$
This approach efficiently recovers a cluster-cluster matrix $A$ mostly based on the co-occurrence information between corresponding anchor bases, and produces no negative entries due to the stability of diagonal matrix inversion. Note that the principal submatrices of a PSD matrix are also PSD; hence, if $C \in \mathcal{PSD}_N$ then $C_{SS}, A \in \mathcal{PSD}_K$. Thus, not only is the recovered $A$ an unbiased estimator for $A^*_M$, but it is also now doubly non-negative, as $A^*_M \in \mathcal{DNN}_K$ after the rectification.7
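Equation (6) is a few lines of code; this sketch assumes the k-th entry of `anchors` corresponds to the k-th column of B, as produced by the algorithm:

```python
import numpy as np

def recover_A(C, B, anchors):
    """A = D^{-1} C_SS D^{-1} as in Eq. (6), where D holds the anchor rows
    of B (diagonal under the separability assumption)."""
    S = np.asarray(anchors)
    C_SS = C[np.ix_(S, S)]
    d = B[S, np.arange(len(S))]       # D's diagonal entries B_{s_k, k}
    return C_SS / np.outer(d, d)      # elementwise division applies D^{-1} twice
```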
4
Experimental Results
Our Rectified Anchor Words algorithm with alternating projection fixes many problems in the baseline Anchor Words algorithm [6] while matching the performance of Gibbs sampling [11] and maintaining spectral inference?s determinism and independence from corpus size. We evaluate direct
measurement of matrix quality as well as indicators of topic utility. We use two text datasets:
NIPS full papers and New York Times news articles.8 We eliminate a minimal list of 347 English stop words and prune rare words based on tf-idf scores and remove documents with fewer
than five tokens after vocabulary curation. We also prepare two non-textual item-selection datasets:
users? movie reviews from the Movielens 10M Dataset,9 and music playlists from the complete
Yes.com dataset.10 We perform similar vocabulary curation and document tailoring, with the exception of frequent stop-object elimination. Playlists often contain the same songs multiple times,
but users are unlikely to review the same movies more than once, so we augment the movie dataset
so that each review contains $2 \times (\text{stars})$ movies, based on the half-scaled rating information that varies from 0.5 stars to 5 stars. Statistics of our datasets are shown in Table 2.
We run DC 30 times for each experiment, randomly permuting the order of objects and using the median results to minimize the effect of different orderings. We also run 150 iterations of AP, alternating $\mathcal{PSD}_{NK}$, $\mathcal{NOR}_N$, and $\mathcal{NN}_N$ in turn. For probabilistic Gibbs sampling, we use Mallet with the standard option, doing 1,000 iterations. All metrics are evaluated against the original $C$, not against the rectified $C'$, whereas we use $B$ and $A$ inferred from the rectified $C'$.

Table 2: Statistics of four datasets.

Dataset    M        N    Avg. Len
NIPS       1,348    5k   380.5
NYTimes    269,325  15k  204.9
Movies     63,041   10k  142.8
Songs      14,653   10k  119.2
7
We later realized that essentially the same approach was previously tried in [5], but it was not able to generate a valid topic-topic matrix, as shown in the middle panel of Figure 3.
8
https://archive.ics.uci.edu/ml/datasets/Bag+of+Words
9
http://grouplens.org/datasets/movielens
10
http://www.cs.cornell.edu/~shuochen/lme
Qualitative results. Although [6] report comparable results to probabilistic algorithms for LDA,
the algorithm fails under many circumstances. The algorithm prefers rare and unusual anchor words
that form a poor basis, so topic clusters consist of the same high-frequency terms repeatedly, as
shown in the upper third of Table 3. In contrast, our algorithm with AP rectification successfully learns themes similar to the probabilistic algorithm. One can also verify that cluster interactions given in the third panel of Figure 3 explain how the five topics correlate with each other.
Similar to [12], we visualize the five anchor words in the co-occurrence space after 2D PCA of $C$. Each panel in Figure 1 shows a 2D embedding of the NIPS vocabulary as blue dots and five selected anchor words in red. The first plot shows standard anchor words and the original co-occurrence space. The second plot shows anchor words selected from the rectified space overlaid on the original co-occurrence space. The third plot shows the same anchor words as the second plot overlaid on the AP-rectified space. The rectified anchor words provide better coverage on both spaces, explaining why we are able to achieve reasonable topics even with K = 5.

Table 3: Each line is a topic from NIPS (K = 5). Previous work simply repeats the most frequent words in the corpus five times.

Arora et al. 2013 (Baseline)
neuron layer hidden recognition signal cell noise
neuron layer hidden cell signal representation noise
neuron layer cell hidden signal noise dynamic
neuron layer cell hidden control signal noise
neuron layer hidden cell signal recognition noise

This paper (AP)
neuron circuit cell synaptic signal layer activity
control action dynamic optimal policy controller reinforcement
recognition layer hidden word speech image net
cell field visual direction image motion object orientation
gaussian noise hidden approximation matrix bound examples

Probabilistic LDA (Gibbs)
neuron cell visual signal response field activity
control action policy optimal reinforcement dynamic robot
recognition image object feature word speech features
hidden net layer dynamic neuron recurrent noise
gaussian approximation matrix bound component variables
Rectification also produces better clusters in the non-textual movie dataset. Each cluster is notably
more genre-coherent and year-coherent than the clusters from the original algorithm. When K = 15,
for example, we verify a cluster of Walt Disney 2D Animations mostly from the 1990s and a cluster
of Fantasy movies represented by Lord of the Rings films, similar to clusters found by probabilistic
Gibbs sampling. The Baseline algorithm [6] repeats Pulp Fiction and Silence of the Lambs 15 times.
Quantitative results. We measure the intrinsic quality of inference and summarization with respect to the JSMF objectives as well as the extrinsic quality of resulting topics. Lines correspond to
four methods: Baseline for the algorithm in the previous work [6] without any rectification, DC for Diagonal Completion, AP for Alternating Projection, and Gibbs for Gibbs sampling.
Anchor objects should form a good basis for the remaining objects. We measure Recovery error $\frac{1}{N} \sum_{i=1}^{N} \big\| \bar{C}_i - \sum_{k=1}^{K} p(Z_1 = k \mid X_1 = i)\, \bar{C}_{s_k} \big\|_2$ with respect to the original $C$ matrix, not the
rectified matrix. AP reduces error in almost all cases and is more effective than DC. Although
we expect error to decrease as we increase the number of clusters K, reducing recovery error for
a fixed K by choosing better anchors is extremely difficult: no other subset selection algorithm
[13] decreased error by more than 0.001. A good
matrix factorization should have small elementwise Approximation error $\|C - BAB^T\|_F$. DC and AP preserve more of the information in the original matrix $C$ than the Baseline method, especially when $K$ is small.11 We expect nontrivial interactions between clusters, even when we do not explicitly model them as in [14]. Greater diagonal Dominancy $\frac{1}{K} \sum_{k=1}^{K} p(Z_2 = k \mid Z_1 = k)$ indicates lower correlation between clusters.12
AP and Gibbs results are similar. We do not report held-out probability because we find that relative
results are determined by user-defined smoothing parameters [12, 24].
Specificity $\frac{1}{K} \sum_{k=1}^{K} \mathrm{KL}\left( p(X \mid Z = k) \,\|\, p(X) \right)$ measures how much each cluster is distinct from the corpus distribution. When anchors produce a poor basis, the conditional distribution of clusters given objects becomes uniform, making $p(X \mid Z)$ similar to $p(X)$.
11
In the NYTimes corpus, $10^{-2}$ is a large error: each element is around $10^{-9}$ due to the number of normalized entries.
12
Dominancy in the Songs corpus lacks any Baseline results at $K > 10$ because dominancy is undefined if an algorithm picks a song that occurs at most once in each playlist as a basis object. In this case, the original construction of $C_{SS}$, and hence of $A$, has a zero diagonal element, making dominancy NaN.
Figure 4: Experimental results on the four real datasets (NIPS, NYTimes, Movies, Songs), measuring Recovery, Approximation, Dominancy, Specificity, Dissimilarity, and Coherence. The x-axis indicates $\log K$, where $K$ varies by 5 up to 25 topics and by 25 up to 100 or 150 topics. Whereas the Baseline algorithm largely fails with small $K$ and does not infer quality $B$ and $A$ even with large $K$, Alternating Projection (AP) not only finds better basis vectors (Recovery), but also shows stable behavior comparable to probabilistic inference (Gibbs) in every metric.
Inter-topic Dissimilarity
counts the average number of objects in each cluster that do not occur in any other cluster?s top
20 objects. Our experiments validate that AP and Gibbs yield comparably specific and distinct
topics, while Baseline and DC simply repeat the corpus distribution as in Table 3. Coherence $\frac{1}{K} \sum_{k=1}^{K} \sum_{x_1 \neq x_2 \in \mathrm{Top}_k} \log \frac{D_2(x_1, x_2) + \epsilon}{D_1(x_2)}$ penalizes topics that assign high probability (rank > 20) to words that do not occur together frequently. AP produces results close to Gibbs sampling, and far from the Baseline and DC. While this metric correlates with human evaluation of clusters [15], "worse" coherence can actually be better because the metric does not penalize repetition [12].
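A sketch of this coherence computation over the top-20 words of each topic; the smoothing constant eps stands in for the $\epsilon$ above and is an assumption:

```python
import numpy as np

def coherence(D1, D2, top_words, eps=1.0):
    """Average topic coherence: sum over word pairs in each topic's top list
    of log((D2(x1, x2) + eps) / D1(x2)), then mean over topics.

    D1[x] counts documents containing word x; D2[x1, x2] counts documents
    containing both x1 and x2."""
    scores = []
    for top in top_words:
        s = 0.0
        for i, x1 in enumerate(top):
            for x2 in top[:i]:
                s += np.log((D2[x1, x2] + eps) / D1[x2])
        scores.append(s)
    return float(np.mean(scores))
```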
In semi-synthetic experiments [6] AP matches Gibbs sampling and outperforms the Baseline, but
the discrepancies in topic quality metrics are smaller than in the real experiments (see Appendix).
We speculate that semi-synthetic data is more ?well-behaved? than real data, explaining why issues
were not recognized previously.
5
Analysis of Algorithm
Why does AP work? Before rectification, diagonals of the empirical C matrix may be far from
correct. Bursty objects yield diagonal entries that are too large; extremely rare objects that occur
at most once per document yield zero diagonals. Rare objects are problematic in general: the corresponding rows in the C matrix are sparse and noisy, and these rows are likely to be selected by
the pivoted QR. Because rare objects are likely to be anchors, the matrix CSS is likely to be highly
diagonally dominant, and provides an uninformative picture of topic correlations. These problems
are exacerbated when K is small relative to the effective rank of C, so that an early choice of a poor
anchor precludes a better choice later on; and when the number of documents M is small, in which
case the empirical C is relatively sparse and is strongly affected by noise. To mitigate this issue,
[24] run exhaustive grid search to find document frequency cutoffs to get informative anchors. As
model performance is inconsistent for different cutoffs and search requires cross-validation for each
case, it is nearly impossible to find good heuristics for each dataset and number of topics.
Fortunately, a low-rank PSD matrix cannot have too many diagonally-dominant rows, since this violates the low rank property. Nor can it have diagonal entries that are small relative to off-diagonals,
since this violates positive semi-definiteness. Because the anchor word assumption implies that
non-negative rank and ordinary rank are the same, the AP algorithm ideally does not remove the
information we wish to learn; rather, 1) the low-rank projection in AP suppresses the influence of
small numbers of noisy rows associated with rare words which may not be well correlated with the
others, and 2) the PSD projection in AP recovers missing information in diagonals. (As illustrated
in the Dominancy panel of the Songs corpus in Figure 4, AP shows valid dominancies even after
K > 10 in contrast to the Baseline algorithm.)
Why does AP converge? AP enjoys local linear convergence [10] if 1) the initial $C$ is near the convergence point $C'$, 2) $\mathcal{PSD}_{NK}$ is super-regular at $C'$, and 3) strong regularity holds at $C'$. For the first condition, recall that we rectified $C'$ by pushing $C$ toward $C^*$, which is the ideal convergence point inside the intersection. Since $C \to C^*$ as shown in (5), $C$ is close to $C'$ as desired. The prox-regular sets13 are subsets of super-regular sets, so prox-regularity of $\mathcal{PSD}_{NK}$ at $C'$ is sufficient for the second condition. For permutation-invariant $\mathcal{M} \subseteq \mathbb{R}^N$, the spectral set of symmetric matrices is defined as $\lambda^{-1}(\mathcal{M}) = \{X \in \mathbf{S}^N : (\lambda_1(X), \ldots, \lambda_N(X)) \in \mathcal{M}\}$, and $\lambda^{-1}(\mathcal{M})$ is prox-regular if and only if $\mathcal{M}$ is prox-regular [16, Th. 2.4]. Let $\mathcal{M}$ be $\{x \in \mathbb{R}^N_+ : |\mathrm{supp}(x)| = K\}$. Since each element in $\mathcal{M}$ has exactly $K$ positive components and all others are zero, $\lambda^{-1}(\mathcal{M}) = \mathcal{PSD}_{NK}$. By the definition of $\mathcal{M}$ and $K < N$, $P_{\mathcal{M}}$ is locally unique almost everywhere, satisfying the second condition almost surely. (As the intersection of the convex set $\mathcal{PSD}_N$ and the smooth manifold of rank-$K$ matrices, $\mathcal{PSD}_{NK}$ is a smooth manifold almost everywhere.)
Checking the third condition a priori is challenging, but we expect noise in the empirical $C$ to prevent an irregular solution, following the argument of Numerical Example 9 in [10]. We expect AP to converge locally linearly, and we can verify local convergence of AP in practice. Empirically, the ratio of average distances between two iterations is always $\leq 0.9794$ on the NYTimes dataset (see the Appendix), and other datasets were similar. Note again that our rectified $C'$ is a result of pushing the empirical $C$ toward the ideal $C^*$. Because the approximation factors of [6] are all computed based on how far $C$ and its co-occurrence shape could be from $C^*$'s, all provable guarantees of [6] hold better with our rectified $C'$.
6
Related and Future Work
JSMF is a specific structure-preserving Non-negative Matrix Factorization (NMF) performing spectral inference. [17, 18] exploit a similar separable structure for NMF problems. To tackle hyperspectral unmixing problems, [19, 20] assume pure pixels, a separability equivalent in computer vision. In more general NMF without such structures, RESCAL [21] studies a tensorial extension of similar factorization and SymNMF [22] infers $BB^T$ rather than $BAB^T$. For topic modeling, [23] performs spectral inference on the third-moment tensor, assuming topics are uncorrelated.
several recent developments. [24] proposes two regularization methods for recovering better B.
[12] nonlinearly projects co-occurrence to low-dimensional space via t-SNE and achieves better
anchors by finding the exact anchors in that space. [25] performs multiple random projections to
low-dimensional spaces and recovers approximate anchors efficiently by divide-and-conquer strategy. In addition, our work also opens several promising research directions. How exactly do anchors
found in the rectified C 0 form better bases than ones found in the original space C? Since now the
topic-topic matrix A is again doubly non-negative and joint-stochastic, can we learn super-topics in
a multi-layered hierarchical model by recursively applying JSMF to topic-topic co-occurrence A?
Acknowledgments
This research is supported by NSF grant HCC:Large-0910664. We thank Adrian Lewis for valuable
discussions on AP convergence.
13
A set $\mathcal{M}$ is prox-regular if $P_{\mathcal{M}}$ is locally unique.
References
[1] Alan Mislove, Bimal Viswanath, Krishna P. Gummadi, and Peter Druschel. You are who you know: Inferring user profiles in Online Social Networks. In Proceedings of the 3rd ACM International Conference on Web Search and Data Mining (WSDM'10), New York, NY, February 2010.
[2] Shuo Chen, J. Moore, D. Turnbull, and T. Joachims. Playlist prediction via metric embedding. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), pages 714-722, 2012.
[3] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for word representation. In EMNLP, 2014.
[4] Omer Levy and Yoav Goldberg. Neural word embedding as implicit matrix factorization. In NIPS, 2014.
[5] S. Arora, R. Ge, and A. Moitra. Learning topic models - going beyond SVD. In FOCS, 2012.
[6] Sanjeev Arora, Rong Ge, Yonatan Halpern, David Mimno, Ankur Moitra, David Sontag, Yichen Wu, and Michael Zhu. A practical algorithm for topic modeling with provable guarantees. In ICML, 2013.
[7] T. Hofmann. Probabilistic latent semantic analysis. In UAI, pages 289-296, 1999.
[8] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, pages 993-1022, 2003. Preliminary version in NIPS 2001.
[9] James P. Boyle and Richard L. Dykstra. A method for finding projections onto the intersection of convex sets in Hilbert spaces. In Advances in Order Restricted Statistical Inference, volume 37 of Lecture Notes in Statistics, pages 28-47. Springer New York, 1986.
[10] Adrian S. Lewis, D. R. Luke, and Jérôme Malick. Local linear convergence for alternating and averaged nonconvex projections. Foundations of Computational Mathematics, 9:485-513, 2009.
[11] T. L. Griffiths and M. Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences, 101:5228-5235, 2004.
[12] Moontae Lee and David Mimno. Low-dimensional embeddings for interpretable anchor-based topic inference. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1319-1328. Association for Computational Linguistics, 2014.
[13] Mary E. Broadbent, Martin Brown, Kevin Penner, I. Ipsen, and R. Rehman. Subset selection algorithms: Randomized vs. deterministic. SIAM Undergraduate Research Online, 3:50-71, 2010.
[14] D. Blei and J. Lafferty. A correlated topic model of science. Annals of Applied Statistics, pages 17-35, 2007.
[15] David Mimno, Hanna Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. Optimizing semantic coherence in topic models. In EMNLP, 2011.
[16] A. Daniilidis, A. S. Lewis, J. Malick, and H. Sendov. Prox-regularity of spectral functions and spectral sets. Journal of Convex Analysis, 15(3):547-560, 2008.
[17] Christian Thurau, Kristian Kersting, and Christian Bauckhage. Yes we can: simplex volume maximization for descriptive web-scale matrix factorization. In CIKM'10, pages 1785-1788, 2010.
[18] Abhishek Kumar, Vikas Sindhwani, and Prabhanjan Kambadur. Fast conical hull algorithms for near-separable non-negative matrix factorization. CoRR, 2012.
[19] José M. P. Nascimento and José M. Bioucas Dias. Vertex component analysis: A fast algorithm to unmix hyperspectral data. IEEE Transactions on Geoscience and Remote Sensing, pages 898-910, 2005.
[20] Cécile Gomez, H. Le Borgne, Pascal Allemand, Christophe Delacourt, and Patrick Ledru. N-FindR method versus independent component analysis for lithological identification in hyperspectral imagery. International Journal of Remote Sensing, 28(23):5315-5338, 2007.
[21] Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 809-816. ACM, 2011.
[22] Da Kuang, Haesun Park, and Chris H. Q. Ding. Symmetric nonnegative matrix factorization for graph clustering. In SDM. SIAM / Omnipress, 2012.
[23] Anima Anandkumar, Dean P. Foster, Daniel Hsu, Sham Kakade, and Yi-Kai Liu. A spectral algorithm for latent Dirichlet allocation. In Advances in Neural Information Processing Systems 25, pages 926-934, 2012.
[24] Thang Nguyen, Yuening Hu, and Jordan Boyd-Graber. Anchors regularized: Adding robustness and extensibility to scalable topic-modeling algorithms. In Association for Computational Linguistics, 2014.
[25] Tianyi Zhou, Jeff A. Bilmes, and Carlos Guestrin. Divide-and-conquer learning by anchoring a conical hull. In Advances in Neural Information Processing Systems 27, pages 1242-1250, 2014.
9
Space-Time Local Embeddings
Ke Sun¹* Jun Wang² Alexandros Kalousis³,¹ Stéphane Marchand-Maillet¹
¹ Viper Group, Computer Vision and Multimedia Laboratory, University of Geneva
(sunk.edu@gmail.com, Stephane.Marchand-Maillet@unige.ch)
² Expedia, Switzerland (jwang1@expedia.com)
³ Business Informatics Department, University of Applied Sciences, Western Switzerland
(Alexandros.Kalousis@hesge.ch)
Abstract
Space-time is a profound concept in physics. This concept was shown to be
useful for dimensionality reduction. We present basic definitions with interesting counter-intuitions. We give theoretical propositions to show that space-time
is a more powerful representation than Euclidean space. We apply this concept
to manifold learning for preserving local information. Empirical results on nonmetric datasets show that more information can be preserved in space-time.
1 Introduction
As a simple and intuitive representation, the Euclidean space $\mathbb{R}^d$ has been widely used in various
learning tasks. In dimensionality reduction, $n$ given high-dimensional points in $\mathbb{R}^D$, or their pairwise (dis-)similarities, are usually represented as a corresponding set of points in $\mathbb{R}^d$ ($d < D$).
The representation power of $\mathbb{R}^d$ is limited. Some of its limitations are listed next.
• The maximum number of points which can share a common nearest neighbor is limited (2 for $\mathbb{R}$; 5 for $\mathbb{R}^2$) [1, 2], while such centralized structures do exist in real data.
• $\mathbb{R}^d$ can at most embed $d+1$ points with uniform pair-wise similarities. It is hard to model pair-wise relationships with less variance.
• Even if $d$ is large enough, $\mathbb{R}^d$ as a metric space must satisfy the triangle inequality, and therefore must admit transitive similarities [2], meaning that a neighbor's neighbor should also be nearby. Such relationships can be violated on real data, e.g. social networks.
• The Gram matrix of $n$ real vectors must be positive semi-definite (p.s.d.). Therefore $\mathbb{R}^d$ cannot faithfully represent the negative eigen-spectrum of input similarities, which was discovered to be meaningful [3].
To tackle the above limitations of Euclidean embeddings, a commonly-used method is to impose a
statistical mixture model. Each embedding point is a random point over several candidate locations
w.r.t. some mixture weights. These candidate locations can be in the same $\mathbb{R}^d$ [4], which allows an
embedding point to jump across a long distance through a "statistical worm-hole". Or they can be
in $m$ independent copies of $\mathbb{R}^d$ [2, 5], resulting in $m$ different views of the input data.
Another approach beyond Euclidean embeddings is to change the embedding destination to a curved
space $\mathcal{M}^d$. This $\mathcal{M}^d$ can be a Riemannian manifold [6] with a positive definite metric or, equivalently, a curved surface embedded in a Euclidean space [7, 8]. To learn such an embedding requires
a closed-form expression of the distance measure. This $\mathcal{M}^d$ can also be semi-Riemannian [9] with
an indefinite metric. This semi-Riemannian representation, under the names "pseudo-Euclidean
space", "Minkowski space", or, more conveniently, "space-time", was shown [3, 7, 10–12] to be a
powerful representation for non-metric datasets. In these works, an embedding is obtained through
a spectral decomposition of a "pseudo-Gram" matrix, which is computed based on some input data.
On the other hand, manifold learning methods [4, 13, 14] are capable of learning a p.s.d. kernel Gram matrix that encapsulates useful information into a narrow band of its eigen-spectrum.
Usually, local neighborhood information is more strongly preserved as compared to non-local information [4, 15], so that the input information is unfolded in a non-linear manner to achieve the
desired compactness.
* Corresponding author
The present work advocates the space-time representation. Section 2 introduces the basic concepts.
Section 3 gives several simple propositions that describe the representation power of space-time. As
novel contributions, section 4 applies the space-time representation to manifold learning. Section 5
shows that using the same number of parameters, more information can be preserved by such embeddings as compared to Euclidean embeddings. This leads to new data visualization techniques.
Section 6 concludes and discusses possible extensions.
2 Space-time
The fundamental measurements in geometry are established by the concept of a metric [6]. Intuitively, it is a locally- or globally-defined inner product. The metric of a Euclidean space $\mathbb{R}^d$ is
everywhere the identity: the inner product between any two vectors $y_1$ and $y_2$ is $\langle y_1, y_2\rangle = y_1^T I_d y_2$,
where $I_d$ is the $d \times d$ identity matrix. A space-time $\mathbb{R}^{d_s,d_t}$ is a $(d_s + d_t)$-dimensional real vector
space, where $d_s \ge 0$, $d_t \ge 0$, and the metric is
\[ M = \begin{pmatrix} I_{d_s} & 0 \\ 0 & -I_{d_t} \end{pmatrix}. \tag{1} \]
This metric is not trivial. It is semi-Riemannian with a background in physics [9]. A point in $\mathbb{R}^{d_s,d_t}$
is called an event, denoted by $y = (y^1, \dots, y^{d_s}, y^{d_s+1}, \dots, y^{d_s+d_t})^T$. The first $d_s$ dimensions
are space-like, where the measurements are exactly the same as in a Euclidean space. The last $d_t$
dimensions are time-like, which cause counter-intuitions. In accordance with the metric $M$ in eq. (1),
for all $y_1, y_2 \in \mathbb{R}^{d_s,d_t}$,
\[ \langle y_1, y_2\rangle = \sum_{l=1}^{d_s} y_1^l y_2^l - \sum_{l=d_s+1}^{d_s+d_t} y_1^l y_2^l. \tag{2} \]
In analogy to using inner products to define distances, the following definition gives a dissimilarity
measure between two events in $\mathbb{R}^{d_s,d_t}$.
Definition 1. The space-time interval, or shortly interval, between any two events $y_1$ and $y_2$ is
\[ c(y_1, y_2) = \langle y_1, y_1\rangle + \langle y_2, y_2\rangle - 2\langle y_1, y_2\rangle = \sum_{l=1}^{d_s} (y_1^l - y_2^l)^2 - \sum_{l=d_s+1}^{d_s+d_t} (y_1^l - y_2^l)^2. \tag{3} \]
The space-time interval $c(y_1, y_2)$ can be positive, zero or negative. With respect to a reference point
$y_0 \in \mathbb{R}^{d_s,d_t}$, the set $\{y : c(y, y_0) = 0\}$ is called a light cone. Figure 1a shows a light cone in
$\mathbb{R}^{2,1}$. Within the light cone, $c(y, y_0) < 0$, i.e., negative intervals occur; outside the light cone,
$c(y, y_0) > 0$. The following counter-intuitions help to establish the concept of space-time.
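Eqs. (2) and (3) translate directly into a few lines of NumPy; a minimal sketch (the function names are illustrative):

```python
import numpy as np

def st_inner(y1, y2, ds, dt):
    """Space-time inner product <y1, y2> of eq. (2)."""
    return y1[:ds] @ y2[:ds] - y1[ds:ds + dt] @ y2[ds:ds + dt]

def st_interval(y1, y2, ds, dt):
    """Space-time interval c(y1, y2) of eq. (3); can be negative."""
    d = y1 - y2
    return st_inner(d, d, ds, dt)   # c(y1, y2) = <y1-y2, y1-y2> by bilinearity

# Light-cone membership in R^{2,1}: c(y, y0) = 0 on the cone.
y0 = np.zeros(3)
y = np.array([1.0, 0.0, 1.0])           # unit space offset, unit time offset
print(st_interval(y, y0, ds=2, dt=1))   # 0.0: y lies on the light cone of y0
```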
A low-dimensional $\mathbb{R}^{d_s,d_t}$ can accommodate an arbitrarily large number of events sharing a common nearest neighbor. In $\mathbb{R}^{2,1}$, let $A = (0, 0, 1)$, and put $\{B_1, B_2, \dots\}$ evenly on the circle
$\{(y^1, y^2, 0) : (y^1)^2 + (y^2)^2 = 1\}$ at time 0. Then $A$ is the unique nearest neighbor of $B_1, B_2, \dots$.
A low-dimensional $\mathbb{R}^{d_s,d_t}$ can represent uniform pair-wise similarities between an arbitrarily large
number of points. In $\mathbb{R}^{1,1}$, the similarities within $\{A_i : A_i = (i, i)\}_{i=1}^n$ are uniform.
In $\mathbb{R}^{d_s,d_t}$, the triangle inequality is not necessarily satisfied. In $\mathbb{R}^{2,1}$, let $A = (-1, 0, 0)$, $B = (0, 0, 1)$, $C = (1, 0, 0)$. Then $c(A, C) > c(A, B) + c(B, C)$. The trick is that, as $B$'s absolute
time value increases, its intervals with all events at time 0 shrink. Correspondingly, similarity
measures in $\mathbb{R}^{d_s,d_t}$ can be non-transitive: the fact that $B$ is similar to $A$ and $C$ independently does
not necessarily mean that $A$ and $C$ are similar.
A neighborhood of $y_0 \in \mathbb{R}^{2,1}$ is $\{(y^1, y^2, y^3) : (y^1 - y_0^1)^2 + (y^2 - y_0^2)^2 - (y^3 - y_0^3)^2 \le \epsilon\}$, where $\epsilon \in \mathbb{R}$. This hyperboloid has infinite "volume", no matter how small $\epsilon$ is. Comparatively, a neighborhood
in $\mathbb{R}^d$ is much narrower, with an exponentially shrinking volume as its radius decreases.
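The first and third counter-intuitions above can be checked numerically; a short self-contained verification (variable names are illustrative):

```python
import numpy as np

def c(y1, y2, ds=2, dt=1):
    d = y1 - y2
    return d[:ds] @ d[:ds] - d[ds:] @ d[ds:]   # eq. (3) in R^{2,1}

# A shared nearest neighbor: A at time 1 above a circle of B's at time 0.
A = np.array([0.0, 0.0, 1.0])
B = [np.array([np.cos(t), np.sin(t), 0.0])
     for t in np.linspace(0.0, 2 * np.pi, 8, endpoint=False)]
print(max(c(A, b) for b in B))    # 0.0: A is at interval 0 from every B
print(c(B[0], B[1]))              # > 0: the B's are strictly farther from each other

# A violated triangle inequality: c(A, C) > c(A, B) + c(B, C).
A, Bm, C_ = np.array([-1.0, 0, 0]), np.array([0.0, 0, 1]), np.array([1.0, 0, 0])
print(c(A, C_), c(A, Bm) + c(Bm, C_))   # 4.0 vs 0.0
```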
[Figure 1 appears here: three panels showing (a) a light cone over space and time axes, (b) equal-interval contours (levels $c = -1, -0.5, 0, 0.5, 1$) around the origin, and (c) the sub-manifolds $g(K_n^{3,0})$ and $g(K_n^{2,1})$ of $\Delta_n$.]
Figure 1: (a) A space-time; (b) a space-time "compass" in $\mathbb{R}^{1,1}$. The colored lines show equal-interval contours with respect to the origin; (c) all possible embeddings in $\mathbb{R}^{2,1}$ (resp. $\mathbb{R}^3$) are
mapped to a sub-manifold of $\Delta_n$, as shown by the red (resp. blue) line. Dimensionality reduction
projects the input $p^\ast$ onto these sub-manifolds, e.g. by minimizing the KL divergence.
3 The representation capability of space-time
This section formally discusses some basic properties of $\mathbb{R}^{d_s,d_t}$ in relation to dimensionality reduction. We first build a tool to shift between two different representations of an embedding: a matrix
of $c(y_i, y_j)$ and a matrix of $\langle y_i, y_j\rangle$. From straightforward derivations, we have
Lemma 1. $C_n = \{C \in \mathbb{R}^{n \times n} : \forall i,\ C_{ii} = 0;\ \forall i \ne j,\ C_{ij} = C_{ji}\}$ and $K_n = \{K \in \mathbb{R}^{n \times n} : \forall i,\ \sum_{j=1}^n K_{ij} = 0;\ \forall i \ne j,\ K_{ij} = K_{ji}\}$ are two families of real symmetric matrices, with $\dim(C_n) = \dim(K_n) = n(n-1)/2$. A linear mapping from $C_n$ to $K_n$ and its inverse are given by
\[ K(C) = -\tfrac{1}{2}\big(I_n - \tfrac{1}{n}ee^T\big)\, C\, \big(I_n - \tfrac{1}{n}ee^T\big), \qquad C(K) = \mathrm{diag}(K)\, e^T + e\, \mathrm{diag}(K)^T - 2K, \tag{4} \]
where $e = (1, \dots, 1)^T$ and $\mathrm{diag}(K)$ means the diagonal entries of $K$ as a column vector.
$C_n$ and $K_n$ are the sets of interval matrices and "pseudo-Gram" matrices, respectively [3, 12]. In
particular, a p.s.d. $K \in K_n$ is a Gram matrix, and the corresponding $C(K)$ is a squared
distance matrix. The double centering mapping $K(C)$ is widely used to generate a (pseudo-)Gram
matrix from a dissimilarity matrix.
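Both directions of eq. (4) are one-liners; a minimal sketch with a roundtrip check (function names are illustrative):

```python
import numpy as np

def K_of_C(C):
    """Double centering of eq. (4): interval matrix C -> pseudo-Gram matrix K(C)."""
    n = C.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return -0.5 * J @ C @ J

def C_of_K(K):
    """Inverse mapping of eq. (4): pseudo-Gram matrix K -> interval matrix C(K)."""
    d = np.diag(K)[:, None]
    return d + d.T - 2.0 * K

rng = np.random.default_rng(0)
C = rng.standard_normal((5, 5)); C = C + C.T; np.fill_diagonal(C, 0.0)
print(np.allclose(C_of_K(K_of_C(C)), C))   # True: the maps invert each other on C_n
```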
Proposition 2. For every $C^\ast \in C_n$ there exist $n$ events in $\mathbb{R}^{d_s,d_t}$, with $d_s + d_t \le n-1$, whose intervals are $C^\ast$.
Proof. Given $C^\ast \in C_n$, $K^\ast = K(C^\ast)$ has the eigen-decomposition $K^\ast = \sum_{l=1}^{\mathrm{rank}(K^\ast)} \lambda_l^\ast v_l^\ast (v_l^\ast)^T$,
where $\mathrm{rank}(K^\ast) \le n-1$ and the $\{v_l^\ast\}$ are orthonormal. For each $l = 1, \dots, \mathrm{rank}(K^\ast)$, $\sqrt{|\lambda_l^\ast|}\, v_l^\ast$
gives the coordinates in one dimension, which is space-like if $\lambda_l^\ast > 0$ or time-like if $\lambda_l^\ast < 0$.
Remark 2.1. $\mathbb{R}^{d_s,d_t}$ ($d_s + d_t \ge n-1$) can represent any interval matrix $C^\ast \in C_n$, or equivalently,
any $K^\ast \in K_n$. Comparatively, $\mathbb{R}^d$ ($d \ge n-1$) can only represent $\{K \in K_n : K \succeq 0\}$.
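The constructive proof above is a small program: eigendecompose $K(C^\ast)$ and split the spectrum by sign. A minimal sketch (the tolerance and names are assumptions):

```python
import numpy as np

def spacetime_embedding(C):
    """Events realizing an interval matrix C (the construction in the proof of
    Proposition 2). Columns 0..ds-1 of Y are space-like, the remaining dt time-like."""
    n = C.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    lam, V = np.linalg.eigh(-0.5 * J @ C @ J)   # eigen-decompose K(C), eq. (4)
    keep = np.abs(lam) > 1e-9                   # drop numerically zero eigenvalues
    lam, V = lam[keep], V[:, keep]
    order = np.argsort(-lam)                    # positive (space-like) dimensions first
    lam, V = lam[order], V[:, order]
    Y = V * np.sqrt(np.abs(lam))                # one column of Y per dimension
    return Y, int((lam > 0).sum()), int((lam < 0).sum())
```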
A pair-wise distance matrix in $\mathbb{R}^d$ is invariant to rotations. In other words, the direction information
of a point cloud is completely discarded. In $\mathbb{R}^{d_s,d_t}$, some direction information is kept to distinguish
between space-like and time-like dimensions. As shown in fig. 1b, one can tell the direction in $\mathbb{R}^{1,1}$
by moving a point along the curve $\{(y^1)^2 + (y^2)^2 = 1\}$ and measuring its interval w.r.t. the origin.
Local embedding techniques often use similarity measures in a statistical simplex
$\Delta_n = \{p = (p_{ij}) : 1 \le i \le n;\ 1 \le j \le n;\ i < j;\ p_{ij} > 0;\ \sum_{i<j} p_{ij} = 1\}$. This $\Delta_n$ has one
less dimension than $C_n$ and $K_n$, so that $\dim(\Delta_n) = n(n-1)/2 - 1$. A mapping from $K_n$ (or $C_n$) to
$\Delta_n$ is given by
\[ p_{ij} \propto f(C_{ij}(K)), \tag{5} \]
where $f(\cdot)$ is a positive-valued, strictly monotonically decreasing function, so that a large probability
mass is assigned to a pair of events with a small interval. Proposition 2 trivially extends to
Proposition 3. For every $p^\ast \in \Delta_n$ there exist $n$ events in $\mathbb{R}^{d_s,d_t}$, with $d_s + d_t \le n-1$, whose similarities are $p^\ast$.
Remark 3.1. $\mathbb{R}^{d_s,d_t}$ ($d_s + d_t \ge n-1$) can represent any $n \times n$ symmetric positive similarities.
Typically in eq. (5) we have $f(x) = \exp(-x)$. The pre-image in $C_n$ of any given $p^\ast \in \Delta_n$ is
the curve $\{C^\ast + 2\lambda(ee^T - I_n) : \lambda \in \mathbb{R}\}$, where $C^\ast_{ij} = -\ln p^\ast_{ij}$ for $i \ne j$, and $2\lambda(ee^T - I_n)$ means
a uniform increment on the off-diagonal entries of $C^\ast$. By eq. (4), the corresponding curve in
$K_n$ is $K^\ast(\lambda) = K^\ast + \lambda\big(I_n - \tfrac{1}{n}ee^T\big)$, where $K^\ast(0) = K^\ast = K(C^\ast)$. Because
$I_n - \tfrac{1}{n}ee^T$ shares with $K^\ast$ a common eigenvector $e$ with zero eigenvalue, and its remaining eigenvalues are all 1, there exist orthonormal vectors $\{v_l^\ast\}_{l=1}^{n-1}$ and real numbers $\{\lambda_l^\ast\}_{l=1}^{\mathrm{rank}(K^\ast)}$ such that
$K^\ast = \sum_{l=1}^{\mathrm{rank}(K^\ast)} \lambda_l^\ast v_l^\ast (v_l^\ast)^T$ and $I_n - \tfrac{1}{n}ee^T = \sum_{l=1}^{n-1} v_l^\ast (v_l^\ast)^T$. Therefore
\[ K^\ast(\lambda) = \sum_{l=1}^{\mathrm{rank}(K^\ast)} (\lambda_l^\ast + \lambda)\, v_l^\ast (v_l^\ast)^T + \sum_{l=\mathrm{rank}(K^\ast)+1}^{n-1} \lambda\, v_l^\ast (v_l^\ast)^T. \tag{6} \]
Depending on $\lambda$, $K^\ast(\lambda)$ can be negative definite, positive definite, or somewhere in between. This
is summarized in the following theorem.
Theorem 4. If $f(x) = \exp(-x)$ in eq. (5), the pre-image in $K_n$ of any $p^\ast \in \Delta_n$ is a continuous curve
$\{K^\ast(\lambda) : \lambda \in \mathbb{R}\}$. There exist $\lambda_0, \lambda_1 \in \mathbb{R}$ such that $K^\ast(\lambda) \preceq 0$ for all $\lambda < \lambda_0$, $K^\ast(\lambda) \succeq 0$ for all $\lambda > \lambda_1$, and the number
of positive eigenvalues of $K^\ast(\lambda)$ increases monotonically with $\lambda$.
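Theorem 4 is easy to observe numerically, assuming $f(x) = \exp(-x)$: sweep $\lambda$ and count the positive eigenvalues of $K^\ast(\lambda)$. A minimal sketch (the random $p^\ast$ and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
P = rng.random((n, n)); P = (P + P.T) / 2; np.fill_diagonal(P, 0.0)
P /= P.sum()                                  # a point p* in the simplex (both orders)
C = -np.log(P + np.eye(n))                    # C*_ij = -ln p*_ij, zero diagonal
J = np.eye(n) - np.ones((n, n)) / n
K = -0.5 * J @ C @ J                          # K* = K(C*), eq. (4)
for lam in (-10.0, 0.0, 10.0):
    Kl = K + lam * J                          # K*(lambda), eq. (6)
    n_pos = int((np.linalg.eigvalsh(Kl) > 1e-9).sum())
    print(lam, n_pos, "positive eigenvalues") # count grows monotonically with lambda
```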
With enough dimensions, any $p^\ast \in \Delta_n$ can be perfectly represented in a space-only, or time-only, or space-time-mixed $\mathbb{R}^{d_s,d_t}$. There is no particular reason to favor a space-only model,
because the objective of dimensionality reduction is to get a compact model with a small number of dimensions, regardless of whether they are space-like or time-like. Formally, $K_n^{d_s,d_t} =
\{K^+ - K^- : \mathrm{rank}(K^+) \le d_s;\ \mathrm{rank}(K^-) \le d_t;\ K^+ \succeq 0;\ K^- \succeq 0\}$ is a low-rank subset of
$K_n$. In the domain $K_n$, dimensionality reduction based on the input $p^\ast$ finds some $\hat K \in K_n^{d_s,d_t}$, which is close to the curve $K^\ast(\lambda)$.
In the probability domain $\Delta_n$, the image of $K_n^{d_s,d_t}$ under some mapping $g : K_n \to \Delta_n$ is
$g(K_n^{d_s,d_t})$. As shown in fig. 1c, dimensionality reduction finds some $\hat p^{\,d_s,d_t} \in g(K_n^{d_s,d_t})$, so
that $\hat p^{\,d_s,d_t}$ is the closest point to $p^\ast$ w.r.t. some information-theoretic measure. The proximity
of $p^\ast$ to $\hat p^{\,d_s,d_t}$, i.e. its proximity to $g(K_n^{d_s,d_t})$, measures the quality of the model $\mathbb{R}^{d_s,d_t}$ as the
embedding target space, when the model scale or the number of dimensions is given.
We will investigate the latter approach, which depends on the choice of $d_s$, $d_t$, the mapping $g$, and
some proximity measure on $\Delta_n$. We will show that, with the same number of dimensions $d_s + d_t$,
the region $g(K_n^{d_s,d_t})$ with space-time-mixed dimensions is naturally close to certain input $p^\ast$.
4 Space-time local embeddings
We project a given similarity matrix $p^\ast \in \Delta_n$ to some $\hat K \in K_n^{d_s,d_t}$, or equivalently, to a set of
events $Y = \{y_i\}_{i=1}^n \subset \mathbb{R}^{d_s,d_t}$, so that $\langle y_i, y_j\rangle = \hat K_{ij}$ as in eq. (2) for all $i, j$, and the similarities
among these events resemble $p^\ast$. As discussed in section 3, a mapping $g : K_n \to \Delta_n$ helps transfer
$K_n^{d_s,d_t}$ into a sub-manifold of $\Delta_n$, so that the projection can be done inside $\Delta_n$. This mapping
expressed in the event coordinates is given by
\[ p_{ij}(Y) \propto \frac{\exp\big(\|y_i^t - y_j^t\|^2\big)}{1 + \|y_i^s - y_j^s\|^2}, \tag{7} \]
where $y^s = (y^1, \dots, y^{d_s})^T$, $y^t = (y^{d_s+1}, \dots, y^{d_s+d_t})^T$, and $\|\cdot\|$ denotes the 2-norm. For any pair
of events $y_i$ and $y_j$, $p_{ij}(Y)$ increases when their space coordinates move close, and/or when their
time coordinates move away. This agrees with the basic intuitions of space-time. For time-like dimensions, the heat kernel is used to make $p_{ij}(Y)$ sensitive to time variations. This helps to suppress
events with large absolute time values, which make the embedding less interpretable. For space-like
dimensions, the Student-t kernel, as suggested by t-SNE [13], is used, so that there could be more
"volume" to accommodate the often high-dimensional input data. Based on our experience, this
hybrid parametrization of $p_{ij}(Y)$ can better model real data as compared to alternative parametrizations. Similar to SNE [4] and t-SNE [13], an optimal embedding can be obtained by minimizing the
Kullback-Leibler (KL) divergence from the input $p^\ast$ to the output $p(Y)$, given by
\[ \mathrm{KL}(Y) = \sum_{i<j} p^\ast_{ij} \ln \frac{p^\ast_{ij}}{p_{ij}(Y)}. \tag{8} \]
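Eqs. (7) and (8) translate into a few lines of NumPy; a minimal sketch, assuming $p^\ast$ is stored as a symmetric, zero-diagonal matrix normalized over both orders of each pair (which leaves the divergence of eq. (8) unchanged). Names are illustrative:

```python
import numpy as np

def p_of_Y(Y, ds):
    """Output similarities p_ij(Y) of eq. (7), normalized over all pairs."""
    Ys, Yt = Y[:, :ds], Y[:, ds:]
    D2s = np.square(Ys[:, None, :] - Ys[None, :, :]).sum(-1)  # squared space distances
    D2t = np.square(Yt[:, None, :] - Yt[None, :, :]).sum(-1)  # squared time distances
    W = np.exp(D2t) / (1.0 + D2s)
    np.fill_diagonal(W, 0.0)
    return W / W.sum()

def kl_objective(P, Y, ds, eps=1e-12):
    """KL divergence of eq. (8); P symmetric, zero-diagonal, summing to 1."""
    Q = p_of_Y(Y, ds)
    mask = P > 0
    return float(np.sum(P[mask] * np.log(P[mask] / (Q[mask] + eps))))
```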
According to some straightforward derivations, its gradients are
\[ \frac{\partial\,\mathrm{KL}}{\partial y_i^t} = -2 \sum_{j \ne i} \big(p^\ast_{ij} - p_{ij}(Y)\big)\big(y_i^t - y_j^t\big), \tag{9} \]
\[ \frac{\partial\,\mathrm{KL}}{\partial y_i^s} = 2 \sum_{j \ne i} \frac{p^\ast_{ij} - p_{ij}(Y)}{1 + \|y_i^s - y_j^s\|^2}\big(y_i^s - y_j^s\big), \tag{10} \]
where $p^\ast_{ij} = p^\ast_{ji}$ and $p_{ij}(Y) = p_{ji}(Y)$ for all $i, j$. As an intuitive interpretation of a gradient descent
process w.r.t. eqs. (9) and (10): if $p_{ij}(Y) < p^\ast_{ij}$, i.e. $y_i$ and $y_j$ are put too far
from each other, then $y_i^s$ and $y_j^s$ attract, and $y_i^t$ and $y_j^t$ repel, so that their space-time
interval becomes shorter; if $p_{ij}(Y) > p^\ast_{ij}$, then $y_i$ and $y_j$ repel in space and attract in
time.
During gradient descent, $\{y_i^s\}$ are updated by the delta-bar-delta scheme as used in t-SNE [13],
where each scalar parameter has its own adaptive learning rate initialized to $\eta^s > 0$; $\{y_i^t\}$ are
updated based on one global adaptive learning rate initialized to $\eta^t > 0$. The learning of time
should be more cautious, because $p_{ij}(Y)$ is more sensitive to time variations by eq. (7). Therefore,
the ratio $\eta^t/\eta^s$ should be very small, e.g. 1/100.
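Putting eqs. (9)-(10) and the asymmetric learning rates together gives a compact trainer. This sketch uses fixed rates in place of the adaptive delta-bar-delta scheme (an assumption for brevity) but keeps the small $\eta^t/\eta^s$ ratio from the text; names are illustrative:

```python
import numpy as np

def kl_gradients(P, Y, ds):
    """Gradients of eqs. (9) and (10) w.r.t. the space and time coordinates."""
    Ys, Yt = Y[:, :ds], Y[:, ds:]
    D2s = np.square(Ys[:, None, :] - Ys[None, :, :]).sum(-1)
    D2t = np.square(Yt[:, None, :] - Yt[None, :, :]).sum(-1)
    W = np.exp(D2t) / (1.0 + D2s)
    np.fill_diagonal(W, 0.0)
    R = P - W / W.sum()                     # residuals p*_ij - p_ij(Y)
    Gt = -2.0 * np.einsum('ij,ijk->ik', R, Yt[:, None, :] - Yt[None, :, :])            # eq. (9)
    Gs = 2.0 * np.einsum('ij,ijk->ik', R / (1.0 + D2s), Ys[:, None, :] - Ys[None, :, :])  # eq. (10)
    return Gs, Gt

def train(P, n, ds, dt, iters=1000, eta_s=1.0, eta_t=0.01, seed=0):
    """Plain gradient descent with a much smaller time learning rate (eta_t/eta_s = 1/100)."""
    Y = 1e-2 * np.random.default_rng(seed).standard_normal((n, ds + dt))
    for _ in range(iters):
        Gs, Gt = kl_gradients(P, Y, ds)
        Y[:, :ds] -= eta_s * Gs
        Y[:, ds:] -= eta_t * Gt
    return Y
```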
5 Empirical results
Aiming at potential applications in data visualization and social network analysis, we compare
SNE [4], t-SNE [13], and the method proposed in section 4, denoted SNEST. They are based
on the same optimizer but correspond to different sub-manifolds of $\Delta_n$, as presented by the curves
in fig. 1c. Given different embeddings of the same dataset using the same number of dimensions,
we perform model selection based on the KL divergence, as explained at the end of section 3.
We generated a toy dataset SCHOOL, representing a school with two classes. Each class has 20
students standing evenly on a circle, where each student communicates with his or her 4 nearest
neighbours and with one teacher; each teacher communicates with all the students in the same class and with the
teacher of the other class. The input $p^\ast$ is distributed evenly over the pairs $(i, j)$ that are socially
connected.
NIPS22 contains a $4197 \times 3624$ author-document matrix from NIPS 1988 to 2009 [2]. After
discarding the authors who have only one NIPS paper, we get 1418 authors who co-authored
2121 papers. The co-authorship matrix is $CA \in \mathbb{R}^{1418 \times 1418}$, where $CA_{ij}$ denotes the number of papers that author $i$ co-authored
with author $j$. The input similarity $p^\ast$ is computed so that $p^\ast_{ij} \propto
CA_{ij}\,(1/\sum_j CA_{ij} + 1/\sum_i CA_{ij})$, where the number of co-authored papers is normalized by each
author's total number of papers. NIPS17 is built in the same way using only the first 17 volumes.
GrQc is an arXiv co-authorship graph [16] with 5242 nodes and 14496 edges. After removing
one isolated node, a matrix $CA \in \mathbb{R}^{5241 \times 5241}$ gives the numbers of co-authored papers between any two
authors who submitted to the general relativity and quantum cosmology category from January 1993
to April 2003. The input similarity $p^\ast$ satisfies $p^\ast_{ij} \propto CA_{ij}\,(1/\sum_j CA_{ij} + 1/\sum_i CA_{ij})$.
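The normalization used for NIPS17/NIPS22/GrQc is a one-liner on the co-authorship counts; a minimal sketch (names are illustrative, and the guard against zero row sums is an assumption):

```python
import numpy as np

def coauthor_similarities(CA):
    """Input p*: p*_ij proportional to CA_ij (1/sum_j CA_ij + 1/sum_i CA_ij),
    normalized to sum to 1 over all (ordered) pairs."""
    r = CA.sum(axis=1, keepdims=True).clip(min=1)   # each author's total paper count
    P = CA * (1.0 / r + 1.0 / r.T)                  # symmetric when CA is symmetric
    np.fill_diagonal(P, 0.0)
    return P / P.sum()
```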
W5000 is the semantic similarities among 5000 English words in $WS \in \mathbb{R}^{5000 \times 5000}$ [2, 17]. Each $WS_{ij}$
is an asymmetric non-negative similarity from word $i$ to word $j$. The input is normalized into a
probability vector $p^\ast$ so that $p^\ast_{ij} \propto WS_{ij}/\sum_j WS_{ij} + WS_{ji}/\sum_i WS_{ji}$. W1000 is built in the same way
using a subset of 1000 words.
Table 1 shows the KL divergence in eq. (8). In most cases, SNEST with a fixed number of free parameters has the lowest KL. On NIPS22, GrQc, W1000 and W5000, the embedding by SNEST in $\mathbb{R}^{2,1}$
is even better than SNE and t-SNE in $\mathbb{R}^4$, meaning that the embedding by SNEST is both compact
and faithful. This is in contrast to the mixture approach for visualization [2], which multiplies the
number of parameters to get a faithful representation.
Fixing the free parameters to two dimensions, t-SNE in $\mathbb{R}^2$ has the best overall performance, and
SNEST in $\mathbb{R}^{1,1}$ is worse. We also discovered that, using $d$ dimensions, $\mathbb{R}^{d-1,1}$ usually performs
better than alternative choices such as $\mathbb{R}^{d-2,2}$, which are not shown due to space limitation. A time-like dimension allows adaptation to non-metric data. The investigated similarities, however, are
Table 1: KL divergence of different embeddings. After repeated runs on different configurations for
each embedding, the minimal KL that we have achieved within 5000 epochs is shown. The bold
numbers show the winners among SNE, t-SNE and SNEST using the same number of parameters.

Method                   SCHOOL  NIPS17  NIPS22  GrQc   W1000  W5000
SNE in R^2               0.52    1.88    2.98    3.19   3.67   4.93
SNE in R^3               0.36    0.85    1.79    1.82   3.20   4.42
SNE in R^4               0.19    0.35    1.01    1.03   2.76   3.93
t-SNE in R^2             0.61    0.88    1.29    1.24   2.15   3.00
t-SNE in R^3             0.58    0.85    1.23    1.14   2.00   2.79
t-SNE in R^4             0.58    0.84    1.22    1.11   1.96   2.74
SNEST in R^{1,1}         0.43    0.91    1.62    2.34   2.59   3.64
SNEST in R^{2,1}         0.31    0.60    0.97    1.00   1.92   2.57
SNEST in R^{3,1}         0.29    0.54    0.93    0.88   1.79   2.39
[Figure 2 appears here: (a) the SCHOOL embedding over space axes with the teachers marked; (b) a contour plot of the similarity function of eq. (7) over space distance (x-axis) and time distance (y-axis).]
Figure 2: (a) The embedding of SCHOOL by SNEST in $\mathbb{R}^{2,1}$. The black (resp. colored) dots denote
the students (resp. teachers). The paper coordinates (resp. color) show the space (resp. time)
coordinates. The links mean social connections. (b) The contour of $\exp(\|y_i^t - y_j^t\|^2)/(1 + \|y_i^s - y_j^s\|^2)$ in eq. (7) as a
function of $\|y_i^s - y_j^s\|$ (x-axis) and $\|y_i^t - y_j^t\|$ (y-axis). The unit of the displayed levels is $10^{-3}$.
mainly space-like, in the sense that a random pair of people or words are more likely to be dissimilar
(space-like) rather than similar (time-like). According to our experience, on such datasets, good
performance is often achieved with mainly space-like dimensions mixed with a small number of
time dimensions, e.g. $\mathbb{R}^{2,1}$ or $\mathbb{R}^{3,1}$, as suggested by table 1.
To interpret the embeddings, fig. 2a presents the embedding of SCHOOL in $\mathbb{R}^{2,1}$, where the space
and time are represented by paper coordinates and three color levels, respectively. Each class is
embedded as a circle. The center of each class, the teacher, is lifted to a different time, so as to be
near to all students in the same class. One teacher being blue, while the other being red, creates a
"hyper-link" between the teachers, because their large time difference makes them nearby in $\mathbb{R}^{2,1}$.
Figures 3 and 4 show the embeddings of NIPS22 and W5000 in $\mathbb{R}^{2,1}$. Similar to the (t-)SNE
visualizations [2, 4, 13], it is easy to find close authors or words embedded nearby. The learned
$p(Y)$, however, is not equivalent to the visual proximity, because of the counter-intuitive time dimension. How much does the visual proximity reflect the underlying $p(Y)$? From the histogram
of the time coordinates, we see that the time values are in the narrow range $[-1.5, 1.5]$, while the
range of the space coordinates is at least 100 times larger. Figure 2b shows the similarity function
on the right-hand side of eq. (7) over an interesting range of $\|y_i^s - y_j^s\|$ and $\|y_i^t - y_j^t\|$. In this range,
large similarity values are very sensitive to space variations, and their red level curves are almost
vertical, meaning that the similarity information is largely carried by space coordinates. Therefore,
the visualization of neighborhoods is relatively accurate: visually nearby points are indeed similar;
proximity in a neighborhood is informative regarding $p(Y)$. On the other hand, small similarity values are less sensitive to space variations, and their blue level curves span a large distance in space,
meaning that the visual distance between dissimilar points is less informative regarding $p(Y)$. For
[Figure 3 appears here: a scatter of NIPS author names over the space coordinates, a color scale for the time axis, and an inset histogram of the time coordinates.]
Figure 3: An embedding of NIPS22 in $\mathbb{R}^{2,1}$. "Major authors" with at least 10 NIPS papers or with
a time value in the range $(-\infty, -1] \cup [1, \infty)$ are shown by their names. Other authors are shown
by small dots. The paper coordinates are in space-like dimensions. The positions of the displayed
names are adjusted up to a tiny radius to avoid text overlap. The color of each name represents the
time dimension. The font size is proportional to the absolute time value.
example, a visual distance of 165 with a time difference of 1 has roughly the same similarity as a
visual distance of 100 with no time difference. This is a matter of embedding dissimilar samples far
or very far and does not affect much the visual perception, which naturally requires less accuracy on
such samples. However, perception errors could still occur in these plots, although they are increasingly unlikely as the observation radius turns small. In viewing such visualizations, one must count
in the time represented by the colors and font sizes, and remember that a point with a large absolute
time value should be weighted higher in similarity judgment.
Consider the learning of $y_i$ by eq. (9): if the input $p^\ast_{ij}$ is larger than what can be faithfully modeled
in a space-only model, then $j$ will push $i$ to a different time. Therefore, the absolute value of time
is a significance measurement. By fig. 2a, the connection hubs, and points with remote connections,
are more likely to be at a different time. Emphasizing the embedding points with large absolute time
values helps the user to focus on important points. One can easily identify well-known authors and
popular words in figs. 3 and 4. This type of information is not discovered by traditional embeddings.
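This significance cue is trivial to compute from a learned embedding; a minimal sketch (the name and the max over time-like dimensions are illustrative choices):

```python
import numpy as np

def significant_points(Y, ds, top=20):
    """Rank embedded points by absolute time value: hubs and remotely-connected
    points tend to be pushed to a different time, so large |time| flags them."""
    t = np.abs(Y[:, ds:]).max(axis=1)   # largest |time| over the time-like dimensions
    return np.argsort(-t)[:top]         # indices of the most significant points
```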
6 Conclusions and Discussions
We advocate the use of the space-time representation for non-metric data. While previous works on
such embeddings [3, 12] compute an indefinite kernel by simple transformations of the input data,
we learn a low-rank indefinite kernel by manifold learning, trying to better preserve the neigh-
[Figure 4 appears here: a scatter of words over the space coordinates, a color scale for the time axis, and an inset histogram of the time coordinates.]
Figure 4: An embedding of W5000 in $\mathbb{R}^{2,1}$. Only a subset is shown for a clear visualization. The
position of each word represents its space coordinates up to tiny adjustments to avoid overlap. The
color of each word shows its time value. The font size represents the absolute time value.
bours [4]. We discovered that, using the same number of dimensions, certain input information is
better preserved in space-time than in Euclidean space. We built a space-time visualizer of non-metric
data, which automatically discovers important points.
To enhance the proposed visualization, an interactive interface can allow the user to select one reference point and show the true similarity values, e.g., by aligning other points so that the visual
distances correspond to the similarities. Proper constraints or regularization could be proposed, so
that the time values are discrete or sparse, and the resulting embedding can be more easily interpreted.
The proposed learning is on a sub-manifold $K_n^{d_s,d_t} \subset K_n$, or a corresponding sub-manifold
of $\Delta_n$. Another interesting sub-manifold of $K_n$ could be $\{K - tt^T : K \succeq 0;\ t \in \mathbb{R}^n\}$, which extends the
p.s.d. cone to any matrix in $K_n$ with a compact negative eigen-spectrum. It is possible to construct
a sub-manifold of $K_n$ so that the embedder can learn whether a dimension is space-like or time-like.
As another axis of future investigation, given the large family of manifold learners, there can be many
ways to project the input information onto these sub-manifolds. The proposed method SNEST is
based on the KL divergence in $\Delta_n$. Some immediate extensions can be based on other dissimilarity
measures in $K_n$ or $\Delta_n$. This could also be useful for faithful representations of graph datasets with
indefinite weights.
Acknowledgments
This work has been supported by the Department of Computer Science, University of Geneva, in
collaboration with Swiss National Science Foundation Project MAAYA (Grant number 144238).
References
[1] K. Zeger and A. Gersho. How many points in Euclidean space can have a common nearest neighbor? In International Symposium on Information Theory, page 109, 1994.
[2] L. van der Maaten and G. E. Hinton. Visualizing non-metric similarities in multiple maps. Machine Learning, 87(1):33–55, 2012.
[3] J. Laub and K. R. Müller. Feature discovery in non-metric pairwise data. JMLR, 5(Jul):801–818, 2004.
[4] G. E. Hinton and S. T. Roweis. Stochastic neighbor embedding. In NIPS 15, pages 833–840. MIT Press, 2003.
[5] J. Cook, I. Sutskever, A. Mnih, and G. E. Hinton. Visualizing similarity data with a mixture of maps. In AISTATS'07, pages 67–74, 2007.
[6] J. Jost. Riemannian Geometry and Geometric Analysis. Universitext. Springer, 6th edition, 2011.
[7] R. C. Wilson, E. R. Hancock, E. Pekalska, and R. P. W. Duin. Spherical embeddings for non-Euclidean dissimilarities. In CVPR'10, pages 1903–1910, 2010.
[8] D. Lunga and O. Ersoy. Spherical stochastic neighbor embedding of hyperspectral data. IEEE Transactions on Geoscience and Remote Sensing, 51(2):857–871, 2013.
[9] B. O'Neill. Semi-Riemannian Geometry With Applications to Relativity. Number 103 in Series: Pure and Applied Mathematics. Academic Press, 1983.
[10] L. Goldfarb. A unified approach to pattern recognition. Pattern Recognition, 17(5):575–582, 1984.
[11] E. Pekalska and R. P. W. Duin. The Dissimilarity Representation for Pattern Recognition: Foundations and Applications. World Scientific, 2005.
[12] J. Laub, J. Macke, K. R. Müller, and F. A. Wichmann. Inducing metric violations in human similarity judgements. In NIPS 19, pages 777–784. MIT Press, 2007.
[13] L. van der Maaten and G. E. Hinton. Visualizing data using t-SNE. JMLR, 9(Nov):2579–2605, 2008.
[14] N. D. Lawrence. Spectral dimensionality reduction via maximum entropy. In AISTATS'11, JMLR W&CP 15, pages 51–59, 2011.
[15] K. Q. Weinberger, F. Sha, and L. K. Saul. Learning a kernel matrix for nonlinear dimensionality reduction. In ICML'04, pages 839–846, 2004.
[16] J. Leskovec, J. Kleinberg, and C. Faloutsos. Graph evolution: Densification and shrinking diameters. ACM Transactions on Knowledge Discovery from Data, 1(1), 2007.
[17] D. L. Nelson, C. L. McEvoy, and T. A. Schreiber. The University of South Florida word association, rhyme, and word fragment norms. 1998. http://www.usf.edu/FreeAssociation.
5,493 | 5,972 | A Fast, Universal Algorithm
to Learn Parametric Nonlinear Embeddings
Miguel Á. Carreira-Perpiñán
EECS, University of California, Merced
http://eecs.ucmerced.edu

Max Vladymyrov
UC Merced and Yahoo Labs
maxv@yahoo-inc.com
Abstract
Nonlinear embedding algorithms such as stochastic neighbor embedding do dimensionality reduction by optimizing an objective function involving similarities
between pairs of input patterns. The result is a low-dimensional projection of each
input pattern. A common way to define an out-of-sample mapping is to optimize
the objective directly over a parametric mapping of the inputs, such as a neural
net. This can be done using the chain rule and a nonlinear optimizer, but is very
slow, because the objective involves a quadratic number of terms each dependent
on the entire mapping's parameters. Using the method of auxiliary coordinates,
we derive a training algorithm that works by alternating steps that train an auxiliary embedding with steps that train the mapping. This has two advantages: 1)
The algorithm is universal in that a specific learning algorithm for any choice of
embedding and mapping can be constructed by simply reusing existing algorithms
for the embedding and for the mapping. A user can then try possible mappings
and embeddings with less effort. 2) The algorithm is fast, and it can reuse N -body
methods developed for nonlinear embeddings, yielding linear-time iterations.
1 Introduction
Given a high-dimensional dataset Y_{D×N} = (y_1, ..., y_N) of N points in R^D, nonlinear embedding
algorithms seek to find low-dimensional projections X_{L×N} = (x_1, ..., x_N) with L < D by optimizing an objective function E(X) constructed using an N × N matrix of similarities W = (w_nm)
between pairs of input patterns (y_n, y_m). For example, the elastic embedding (EE) [5] optimizes:
E(X) = \sum_{n,m=1}^N w_{nm} \|x_n - x_m\|^2 + \lambda \sum_{n,m=1}^N \exp\left(-\|x_n - x_m\|^2\right), \quad \lambda > 0. \qquad (1)
Here, the first term encourages projecting similar patterns near each other, while the second term
repels all pairs of projections. Other algorithms of this type are stochastic neighbor embedding
(SNE) [15], t-SNE [27], neighbor retrieval visualizer (NeRV) [28] or the Sammon mapping [23],
as well as spectral methods such as metric multidimensional scaling and Laplacian eigenmaps [2]
(though our focus is on nonlinear objectives). Nonlinear embeddings can produce visualizations of
high-dimensional data that display structure such as manifolds or clustering, and have been used for
exploratory purposes and other applications in machine learning and beyond.
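For concreteness, here is a minimal NumPy sketch of eq. (1); the function name and the dense O(N^2) distance computation are our own illustration, not code from the paper:

import numpy as np

def ee_objective(X, W, lam):
    """Elastic embedding objective E(X) of eq. (1).
    X: (L, N) projections; W: (N, N) symmetric affinities; lam > 0."""
    sq = np.sum(X ** 2, axis=0)
    D2 = sq[:, None] + sq[None, :] - 2.0 * X.T @ X   # pairwise squared distances
    D2 = np.maximum(D2, 0.0)                          # guard against round-off
    attract = np.sum(W * D2)                          # pulls similar patterns together
    repel = lam * np.sum(np.exp(-D2))                 # repels all pairs of projections
    return attract + repel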
Optimizing nonlinear embeddings is difficult for three reasons: there are many parameters (NL);
the objective is very nonconvex, so gradient descent and other methods require many iterations; and
it involves O(N²) terms, so evaluating the gradient is very slow. Major progress in these problems
has been achieved in recent years. For the second problem, the spectral direction [29] is constructed
by "bending" the gradient using the curvature of the quadratic part of the objective (for EE, this is the
graph Laplacian L of W). This significantly reduces the number of iterations, while evaluating the
direction itself is about as costly as evaluating the gradient. For the third problem, N-body methods
such as tree methods [1] and fast multipole methods [11] approximate the gradient in O(N log N)
and O(N) for small dimensions L, respectively, and have made it possible to scale up embeddings to millions
of patterns [26, 31, 34].
Another issue that arises with nonlinear embeddings is that they do not define an "out-of-sample"
mapping F: R^D → R^L that can be used to project patterns not in the training set. There are two
basic approaches to define an out-of-sample mapping for a given embedding. The first one is a
variational argument, originally put forward for Laplacian eigenmaps [6] and also applied to the
elastic embedding [5]. The idea is to optimize the embedding objective for a dataset consisting of
the N training points and one test point, but keeping the training projections fixed. Essentially, this
constructs a nonparametric mapping implicitly defined by the training points Y and its projections
X, without introducing any assumptions. The mapping comes out in closed form for Laplacian
eigenmaps (a Nadaraya-Watson estimator) but not in general (e.g. for EE), in which case it needs
a numerical optimization. In either case, evaluating the mapping for a test point is O(N D), which
is slow and does not scale. (For spectral methods one can also use the Nyström formula [3], but
it does not apply to nonlinear embeddings, and is still O(ND) at test time.) The second approach
is to use a mapping F belonging to a parametric family F of mappings (e.g. linear or neural net),
which is fast at test time. Directly fitting F to (Y, X) is inelegant, since F is unrelated to the
embedding, and may not work well if the mapping cannot model the data well (e.g. if F is linear).
A better way is to involve F in the learning from the beginning, by replacing xn with F(yn ) in
the embedding objective function and optimizing it over the parameters of F. For example, for the
elastic embedding of (1) this means
P(F) = \sum_{n,m=1}^N w_{nm} \|F(y_n) - F(y_m)\|^2 + \lambda \sum_{n,m=1}^N \exp\left(-\|F(y_n) - F(y_m)\|^2\right). \qquad (2)
This will give better results because the only embeddings that are allowed are those that are realizable by a mapping F in the family F considered. Hence, the optimal F will exactly match the
embedding, which is still trying to optimize the objective E(X). This provides an intermediate
solution between the nonparametric mapping described above, which is slow at test time, and the
direct fit of a parametric mapping to the embedding, which is suboptimal. We will focus on this
approach, which we call parametric embedding (PE), following previous work [25].
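In code, a PE objective is simply the composition P(F) = E(F(Y)); a sketch for a linear map, reusing the hypothetical ee_objective function above:

def pe_objective_linear(A, Y, W, lam):
    # P(F) = E(F(Y)) for the linear map F(y) = Ay: project the inputs, then evaluate E.
    return ee_objective(A @ Y, W, lam)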
A long history of PEs exists, using unsupervised [14, 16–18, 24, 25, 32] or supervised [4, 9, 10,
13, 20, 22] embedding objectives, and using linear or nonlinear mappings (e.g. neural nets). Each
of these papers develops a specialized algorithm to learn the particular PE they define (= embedding objective and mapping family). Besides, PEs have also been used as regularization terms in
semisupervised classification, regression or deep learning [33].
Our focus in this paper is on optimizing an unsupervised parametric embedding defined by a given
embedding objective E(X), such as EE or t-SNE, and a given family for the mapping F, such as
linear or a neural net. The straightforward approach, used in all papers cited above, is to derive
a training algorithm by applying the chain rule to compute gradients over the parameters of F and
feeding them to a nonlinear optimizer (usually gradient descent or conjugate gradients). This has
three problems. First, a new gradient and optimization algorithm must be developed and coded for
each choice of E and F. For a user who wants to try different choices on a given dataset, this is
very inconvenient, and the power of nonlinear embeddings and unsupervised methods in general is
precisely as exploratory techniques to understand the structure in data, so a user needs to be able to
try multiple techniques. Ideally, the user should simply be able to plug different mappings F into any
embedding objective E, with minimal development work. Second, computing the gradient involves
O(N²) terms each depending on the entire mapping's parameters, which is very slow. Third, both
E and F must be differentiable for the chain rule to apply.
Here, we propose a new approach to optimizing parametric embeddings, based on the recently introduced method of auxiliary coordinates (MAC) [7, 8], that partially alleviates these problems. The
idea is to solve an equivalent, constrained problem by introducing new variables (the auxiliary coordinates). Alternating optimization over the coordinates and the mapping's parameters results in
a step that trains an auxiliary embedding with a "regularization" term, and a step that trains the
mapping by solving a regression, both of which can be solved by existing algorithms. Section 2
introduces important concepts and describes the chain-rule based optimization of parametric embeddings, section 3 applies MAC to parametric embeddings, and section 4 shows with different
combinations of embeddings and mappings that the resulting algorithm is very easy to construct,
including use of N -body methods, and is faster than the chain-rule based optimization.
[Figure 1 diagram (see caption below): the embedding space R^{L×N} with feasible regions Z_1, Z_2, Z_3, feasible embeddings F_1(Y), F_2(Y), the free embedding X*, the parametric embedding F*(Y) and the direct fit.]
Figure 1: Left: illustration of the feasible set {Z ∈ R^{L×N}: Z = F(Y) for F ∈ F} (grayed areas) of
embeddings that can be produced by the mapping family F. This corresponds to the feasible set of
the equality constraints in the MAC-constrained problem (4). A parametric embedding Z* = F*(Y)
is a feasible embedding with locally minimal value of E. A free embedding X* is a minimizer of
E and is usually not feasible. A direct fit F* (to the free embedding X*) is feasible but usually
not optimal. Right 3 panels: 2D embeddings of 3 objects from the COIL-20 dataset using a linear
mapping: a free embedding, its direct fit, and the parametric embedding (PE) optimized with MAC.
2 Free embeddings, parametric embeddings and chain-rule gradients
Consider a given nonlinear embedding objective function E(X) that takes an argument X ∈ R^{L×N}
and maps it to a real value. E(X) is constructed for a dataset Y ∈ R^{D×N} according to a particular
embedding model. We will use as running example the equations (1), (2) for the elastic embedding, which are simpler than for most other embeddings. We call free embedding X* the result
of optimizing E, i.e., a (local) optimizer of E. A parametric embedding (PE) objective function
for E using a family F of mappings F: R^D → R^L (for example, linear mappings), is defined as
P(F) = E(F(Y)), where F(Y) = (F(y_1), ..., F(y_N)), as in eq. (2) for EE. Note that, to simplify
the notation, we do not write explicitly the parameters of F. Thus, a specific PE can be defined by
any combination of embedding objective function E (EE, SNE, ...) and parametric mapping family
F (linear, neural net, ...). The result of optimizing P, i.e., a (local) optimizer of P, is a mapping F*
which we can apply to any input y ∈ R^D, not necessarily from among the training patterns. Finally,
we call direct fit the mapping resulting from fitting F to (Y, X*) by least-squares regression, i.e., to
map the input patterns to a free embedding. We have the following results.
Theorem 2.1. Let X* be a global minimizer of E. Then ∀F ∈ F: P(F) ≥ E(X*).
Proof. P(F) = E(F(Y)) ≥ E(X*).
Theorem 2.2 (Perfect direct fit). Let F* ∈ F. If F*(Y) = X* and X* is a global minimizer of E
then F* is a global minimizer of P.
Proof. Let F ∈ F with F ≠ F*. Then P(F) = E(F(Y)) ≥ E(X*) = E(F*(Y)) = P(F*).
Theorem 2.2 means that if the direct fit of F* to (Y, X*) has zero error, i.e., F*(Y) = X*, then
it is the solution of the parametric embedding, and there is no need to optimize P. Theorem 2.1
means that a PE cannot do better than a free embedding.¹ This is obvious in that a PE is not free but
constrained to use only embeddings that can be produced by a mapping in F , as illustrated in fig. 1.
A PE will typically worsen the free embedding: more powerful mapping families, such as neural
nets, will distort the embedding less than more restricted families, such as linear mappings. In this
sense, the free embedding can be seen as using as mapping family F a table (Y, X) with parameters
X. It represents the most flexible mapping, since every projection xn is a free parameter, but it can
only be applied to patterns in the training set Y. We will assume that the direct fit has a positive
error, i.e., the direct fit is not perfect, so that optimizing P is necessary.
Computationally, the complexity of the gradient of P(F) appears to be O(N²|F|), where |F| is
the number of parameters in F, because P(F) involves O(N²) terms, each dependent on all the
parameters of F (e.g. for linear F this would cost O(N²LD)). However, if manually simplified and
coded, the gradient can actually be computed in O(N²L + N|F|). For example, for the elastic
embedding with a linear mapping F(y) = Ay where A is of L × D, the gradient of eq. (2) is:

\frac{\partial P}{\partial A} = 2 \sum_{n,m=1}^N \left[ w_{nm} - \lambda \exp\left(-\|A y_n - A y_m\|^2\right) \right] (A y_n - A y_m)(y_n - y_m)^T \qquad (3)

and this can be computed in O(N²L + NDL) if we precompute X = AY and take common factors
of the summation over x_n and x_m. An automatic differentiation package may or may not be able to
realize these savings in general.

¹ By a continuity argument, theorem 2.2 carries over to the case where F* and X* = F*(Y) are local minimizers of P and E, respectively. However, theorem 2.2 would apply only locally, that is, P(F) ≥ E(X*) holds
locally but there may be mappings F with P(F) < E(X*) associated with another (lower) local minimizer of
E. However, the same intuition remains: we cannot expect a PE to improve over a good free embedding.
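A sketch of the manual simplification of eq. (3) just described, in our own illustrative NumPy code: caching X = AY and factoring the double sum brings the cost down to O(N²L + NDL):

import numpy as np

def pe_gradient_linear(A, Y, W, lam):
    """Gradient (3) of the linear-map PE objective, assuming W is symmetric."""
    X = A @ Y                                         # precompute the projections
    sq = np.sum(X ** 2, axis=0)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X.T @ X, 0.0)
    M = W - lam * np.exp(-D2)                         # effective pairwise weights
    d = M.sum(axis=1)
    # sum_{n,m} M_nm (x_n - x_m)(y_n - y_m)^T = 2 (X diag(d) - X M) Y^T for symmetric M
    return 4.0 * (X * d - X @ M) @ Y.T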
The obvious way to optimize P (F) is to compute the gradient wrt the parameters of F by applying
the chain rule (since P is a function of E and this is a function of the parameters of F), assuming
E and F are differentiable. While perfectly doable in theory, in practice this has several problems.
(1) Deriving, debugging and coding the gradient of P for a nonlinear F is cumbersome. One could
use automatic differentiation [12], but current packages can result in inefficient, non-simplified gradients in time and memory, and are not in widespread use in machine learning. Also, combining
autodiff with N-body methods seems difficult, because the latter require spatial data structures that
are effective for points in low dimension (no more than 3 as far as we know) and depend on the
actual point values. (2) The PE gradient may not benefit from special-purpose algorithms developed
for embeddings. For example, the spectral direction method [29] relies on special properties of the
free embedding Hessian which do not apply to the PE Hessian. (3) Given the gradient, one then has
to choose and possibly adapt a suitable nonlinear optimization method and set its parameters (line
search parameters, etc.) so that convergence is assured and the resulting algorithm is efficient. Simple choices such as gradient descent or conjugate gradients are usually not efficient, and developing
a good algorithm is a research problem in itself (as evidenced by the many papers that study specific combinations of embedding objective and parametric mapping). (4) Even having done all this,
the resulting algorithm will still be very slow because of the complexity of computing the gradient:
O(N²L + N|F|). It may be possible to approximate the gradient using N-body methods, but again
this would involve significant development effort. (5) As noted earlier, the chain rule only applies
if both E and F are differentiable. Finally, all of the above needs to be redone if we change the
mapping (e.g. from a neural net to a RBF network) or the embedding (e.g. from EE to t-SNE). We
now show how these problems can be addressed by using a different approach to the optimization.
3 Optimizing a parametric embedding using auxiliary coordinates
The PE objective function, e.g. (2), can be seen as a nested function where we first apply F and
then E. A recently proposed strategy, the method of auxiliary coordinates (MAC) [7, 8], can
be used to derive optimization algorithms for such nested systems. We write the nested problem
min P (F) = E(F(Y)) as the following, equivalent constrained optimization problem:
\min \tilde{P}(F, Z) = E(Z) \quad \text{s.t.} \quad z_n = F(y_n), \; n = 1, \dots, N \qquad (4)
where we have introduced an auxiliary coordinate z_n for each input pattern and a corresponding
equality constraint. z_n can be seen as the output of F (i.e., the low-dimensional projection) for y_n.
The optimization is now on an augmented space (F, Z) with NL extra parameters Z ∈ R^{L×N}, and
F ∈ F. The feasible set of the equality constraints is shown in fig. 1. We solve the constrained
problem (4) using a quadratic-penalty method (it is also possible to use the augmented Lagrangian
method), by optimizing the following unconstrained problem and driving μ → ∞:
\min P_Q(F, Z; \mu) = E(Z) + \frac{\mu}{2} \sum_{n=1}^N \|z_n - F(y_n)\|^2 = E(Z) + \frac{\mu}{2} \|Z - F(Y)\|^2. \qquad (5)
Under mild assumptions, the minima (Z*(μ), F*(μ)) trace a continuous path that converges to a
local optimum of P̃(F, Z) and hence of P(F) [7, 8]. Finally, we optimize P_Q using alternating
optimization over the coordinates and the mapping. This results in two steps:
Over F given Z: min_{F∈F} Σ_{n=1}^N ‖z_n − F(y_n)‖². This is a standard least-squares regression for
a dataset (Y, Z) using F, and can be solved using existing, well-developed code for many
families of mappings. For example, for a linear mapping F(y) = Ay we solve a linear
system A = ZY+ (efficiently done by caching Y+ in the first iteration and doing a matrix
multiplication in subsequent iterations); for a deep net, we can use stochastic gradient
descent with pretraining, possibly on a GPU; for a regression tree or forest, we can use any
tree-growing algorithm; etc. Also, note that if we want to have a regularization term R(F)
in the PE objective (e.g. for weight decay, or for model complexity), that term will appear
in the F step but not in the Z step. Hence, the training and regularization of the mapping
F is confined to the F step, given the inputs Y and current outputs Z. The mapping F
"communicates" with the embedding objective precisely through these low-dimensional
coordinates Z.
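For the linear case this F step reduces to one linear-algebra call; a sketch with the pseudoinverse caching mentioned above (names are ours):

import numpy as np

def f_step_linear(Y, Z, Ypinv=None):
    """F step for a linear map: least-squares fit A = Z Y^+."""
    if Ypinv is None:
        Ypinv = np.linalg.pinv(Y)   # Y^+ is constant across iterations, so cache it
    return Z @ Ypinv, Ypinv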
Over Z given F: min_Z E(Z) + (μ/2)‖Z − F(Y)‖². This is a regularized embedding, since E(Z) is
the original embedding objective function and ‖Z − F(Y)‖² is a quadratic regularization
term on Z, with weight μ/2, which encourages Z to be close to a given embedding F(Y). We
can reuse existing, well-developed code to learn the embedding E(Z) with simple modifications. For example, the gradient has an added term μ(Z − F(Y)); the spectral direction
now uses a curvature matrix L + (μ/2)I. The embedding "communicates" with the mapping
F through the outputs F(Y) (which are constant within the Z step), which gradually force
the embedding Z to agree with the output of a member of the family of mappings F.
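A sketch of the modified gradient used in the Z step for EE (exact O(N²L) form, with our own names; in the paper the free-embedding part would instead come from an N-body approximation):

import numpy as np

def z_step_gradient(Z, FY, W, lam, mu):
    """Gradient of E(Z) + (mu/2)||Z - F(Y)||_F^2 over Z, for the EE objective."""
    sq = np.sum(Z ** 2, axis=0)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * Z.T @ Z, 0.0)
    M = W - lam * np.exp(-D2)
    G_free = 4.0 * (Z * M.sum(axis=1) - Z @ M)   # gradient of the free embedding E(Z)
    return G_free + mu * (Z - FY)                # added term from the quadratic penalty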
Hence, the intricacies of nonlinear optimization (line search, method parameters, etc.) remain confined within the regression for F and within the embedding for Z, separately from each other.
Designing an optimization algorithm for an arbitrary combination of embedding and mapping is
simply achieved by alternately calling existing algorithms for the embedding and for the mapping.
Although we have introduced a large number of new parameters to optimize over, the N L auxiliary
coordinates Z, the cost of a MAC iteration is actually the same (asymptotically) as the cost of
computing the PE gradient, i.e., O(N²L + N|F|), where |F| is the number of parameters in F. In
the Z step, the objective function has O(N²) terms but each term depends only on 2 projections (z_n
and z_m, i.e., 2L parameters), hence it costs O(N²L). In the F step, the objective function has N
terms, each depending on the entire mapping's parameters, hence it costs O(N|F|).
Another advantage of MAC is that, because it does not use chain-rule gradients, it is even possible
to use something like a regression tree for F, which is not differentiable, and so the PE objective
function is not differentiable either. In MAC, we can use an algorithm to train regression trees within
the F step using as data (Y, Z), reducing the constraint error ‖Z − F(Y)‖² and the PE objective.
A final advantage is that we can benefit from recent work done on using N-body methods to reduce
the O(N²) complexity of computing the embedding gradient exactly to O(N log N) (using tree-based methods such as the Barnes-Hut algorithm; [26, 34]) or even O(N) (using fast multipole
methods; [31]), at a small approximation error. We can reuse such code as is, without any extra
work, to approximate the gradient of E(Z) and then add to it the exact gradient of the regularization
term ‖Z − F(Y)‖², which is already linear in N. Hence, each MAC iteration (Z and F steps) runs
in linear time on the sample size, and is thus scalable to larger datasets.
The problem of optimizing parametric embeddings is closely related to that of learning binary hashing for fast information retrieval using affinity-based loss functions [21]. The only difference is that
in binary hashing the mapping F (an L-bit hash function) maps a D-dimensional vector y ∈ R^D
to an L-dimensional binary vector z ∈ {0, 1}^L. The MAC framework can also be applied, and the
resulting algorithm alternates an F step that fits a classifier for each bit of the hash function, and a
Z step that optimizes a regularized binary embedding using combinatorial optimization.
Schedule of μ, initial Z and the path to a minimizer The MAC algorithm for parametric embeddings introduces no new optimization parameters except for the penalty parameter μ. The convergence theory of quadratic-penalty methods and MAC [7, 8, 19] tells us that convergence to a
local optimum is guaranteed if each iteration achieves sufficient decrease (always possible by running enough (Z,F) steps) and if μ → ∞. The latter condition ensures the equality constraints are
eventually satisfied. Mathematically, the minima (Z*(μ), F*(μ)) of P_Q as a function of μ ∈ [0, ∞)
trace a continuous path in the (Z, F) space that ends at a local minimum of the constrained problem (4) and thus of the parametric embedding objective function. Hence, our algorithm belongs to
the family of path-following methods, such as quadratic penalty, augmented Lagrangian, homotopy
and interior-point methods, widely regarded as effective with nonconvex problems.
In practice, one follows that path loosely, i.e., doing fast, inexact steps on Z and F for the current
value of μ and then increasing μ. How fast to increase μ depends on the particular problem;
typically, one multiplies μ by a factor of around 2. Increasing μ very slowly will follow the path
more closely, but the runtime will increase. Since μ does not appear in the F step, increasing μ is
best done within a Z step (i.e., we run several iterations over Z, increase μ, run several iterations
over Z, and then do an F step).
The starting point of the path is μ → 0⁺. Here, the Z step simply optimizes E(Z) and hence gives
us a free embedding (e.g. we just train an elastic embedding model on the dataset). The F step
then fits F to (Y, Z) and hence gives us the direct fit (which generally will have a positive error
‖Z − F(Y)‖²; otherwise we stop with an optimal PE). Thus, the beginning of the path is the direct
fit to the free embedding. As μ increases, we follow the path (Z*(μ), F*(μ)), and as μ → ∞, F
converges to a minimizer of the PE and Z converges to F(Y). Hence, the "lifetime" of the MAC
algorithm over the "time" μ starts with a free embedding and a direct fit which disagree with each
other, and progressively reduces the error in the F fit by increasing the error in the Z embedding,
until F(Y) and Z agree at an optimal PE.
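Putting the pieces together over this penalty path, a minimal MAC driver under simplifying assumptions (plain gradient steps in place of the spectral direction, fixed iteration counts, and the hypothetical f_step_linear and z_step_gradient sketches from above):

def mac_parametric_ee(Y, X_free, W, lam, mus, z_iters=40, step=0.01):
    """Quadratic-penalty MAC loop for a linear parametric EE (illustrative sketch)."""
    Z = X_free.copy()                    # start the path at the free embedding...
    A, Ypinv = f_step_linear(Y, Z)       # ...whose F step gives the direct fit
    for mu in mus:                       # increasing schedule, e.g. doubling mu
        FY = A @ Y                       # F(Y) stays constant within the Z step
        for _ in range(z_iters):         # Z step: regularized embedding
            Z = Z - step * z_step_gradient(Z, FY, W, lam, mu)
        A, Ypinv = f_step_linear(Y, Z, Ypinv)   # F step: least-squares regression
    return A, Z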
Although it is possible to initialize Z in a different way (e.g. random) and start with a large value of
μ, we find this converges to worse local optima than starting from a free embedding with a small μ.
Good local optima for the free embedding itself can be found by homotopy methods as well [5].
4 Experiments
Our experiments confirm that MAC finds optima as good as those of the conventional optimization based on chain-rule gradients, but that it is faster (particularly if using N-body methods). We
demonstrate this with different embedding objectives (the elastic embedding and t-SNE) and mappings (linear and neural net). We report on a representative subset of experiments.
Illustrative example The simple example of fig. 1 shows the different embedding types described
in the paper. We use the COIL-20 dataset, containing rotation sequences of 20 physical objects
every 5 degrees, each a grayscale image of 128 × 128 pixels, total N = 1,440 points in 16,384
dimensions; thus, each object traces a closed loop in pixel space. We produce 2D embeddings of 3
objects, using the elastic embedding (EE) [5]. The free embedding X* results from optimizing the
EE objective function (1), without any limitations on the low-dimensional projections. It gives the
best visualization of the data, but no out-of-sample mapping. We now seek a linear out-of-sample
mapping F. The direct fit fits a linear mapping to map the high-dimensional images Y to their
2D projections X* from the free embedding. The resulting predictions F(Y) give a quite distorted
representation of the data, because a linear mapping cannot realize the free embedding X with low
error. The parametric embedding (PE) finds the linear mapping F? that optimizes P (F), which
for EE is eq. (2). To optimize the PE, we used MAC (which was faster than gradient descent and
conjugate gradients). The resulting PE represents the data worse than the free embedding (since the
PE is constrained to produce embeddings that are realizable by a linear mapping), but better than the
direct fit, because the PE can search for embeddings that, while being realizable by a linear mapping,
produce a lower value of the EE objective function.
The details of the optimization are as follows. We preprocess the data using PCA projecting to 15
dimensions (otherwise learning a mapping would be trivial since there are more degrees of freedom
than there are points). The free embedding was optimized using the spectral direction [29] until
consecutive iterates differed by a relative error less than 10⁻³. We increased μ from 0.003 to 0.015
with a step of 0.001 (12 μ values) and did 40 iterations for each μ value. The Z step uses the spectral
direction, stopping when the relative error is less than 10⁻².
Cost of the iterations Fig. 2(left) shows, as a function of the number of data points N (using
a 3D Swissroll dataset), the time needed to compute the gradient of the PE objective (red curve)
and the gradient of the MAC Z and F steps (black and magenta, respectively, as well as their
sum in blue). We use t-SNE and a sigmoidal neural net with an architecture 3–100–500–2. We
approximate the Z gradient in O(N log N) using the Barnes-Hut method [26, 34]. The log-log plot
shows the asymptotic complexity to be quadratic for the PE gradient, but linear for the F step
and O(N log N) for the Z step. The PE gradient runs out of memory for large N.
Quality of the local optima For the same Swissroll dataset, fig. 2(right) shows, as a function of
the number of data points N , the final value of the PE objective function achieved by the chain-rule
CG optimization and by MAC, both using the same initialization. There is practically no difference between both optimization algorithms. We sometimes do find they converge to different local
optima, as in some of our other experiments.
Different embedding objectives and mapping families The goal of this experiment is to show
that we can easily derive a convergent, efficient algorithm for various combinations of embeddings
and mappings. We consider as embedding objective functions E(X) t-SNE and EE, and as mappings F a neural net and a linear mapping. We apply each combination to learn a parametric embedding for the MNIST dataset, containing N = 60,000 images of handwritten digits. Training
[Figure 2 plots (see caption below): log-log curves of runtime per iteration (seconds) and final PE objective P(F) against N, with curves labelled PE, MAC, Z step and F step.]
Figure 2: Runtime per iteration and final PE objective for a 3D Swissroll dataset, using as mapping
F a sigmoidal neural net with an architecture 3–100–500–2, for t-SNE. For PE, we give the runtime
needed to compute the gradient of the PE objective using CG with chain-rule gradients. For MAC,
we give the runtime needed to compute the (Z,F) steps, separately and together. The gradient of the
Z step is approximated with an N-body method. Errorbars over 5 randomly generated Swissrolls.
[Figure 3 plots (see caption below): 2D MNIST embeddings with digit-class legends 0–9 and panel titles "t-SNE neural net embedding" and "EE linear embedding", plus learning curves of P(F) against runtime (seconds) with curves labelled MAC, PE (minibatch), PE (batch) (top) and MAC, PE (bottom).]
Figure 3: MNIST dataset. Top: t-SNE with a neural net. Bottom: EE with a linear mapping. Left:
initial, free embedding (we show a sample of 5,000 points to avoid clutter). Middle: final parametric
embedding. Right: learning curves for MAC and chain-rule optimization. Each marker indicates
one iteration. For MAC, the solid markers indicate iterations where μ increased.
a nonlinear (free) embedding on a dataset of this size was very slow until the recent introduction
of N-body methods for t-SNE, EE and other methods [26, 31, 34]. We are the first to use N-body
methods for PEs, thanks to the decoupling between mapping and embedding introduced by MAC.
For each combination, we derive the MAC algorithm by reusing code available online: for the
EE and t-SNE (free) embeddings we use the spectral direction [29]; for the N -body methods to
approximate the embedding objective function gradient we use the fast multipole method for EE
[31] and the Barnes-Hut method for t-SNE [26, 34]; and for training a deep net we use unsupervised
pretraining and backpropagation [22, 25]. Fig. 3(left) shows the free embedding of MNIST obtained
with t-SNE and EE after 100 iterations of the spectral direction. To compute the Gaussian affinities
between pairs of points, we used entropic affinities with perplexity K = 30 neighbors [15, 30].
The optimization details are as follows. For the neural net, we replicated the setup of [25]. This uses
a neural net with an architecture (28 × 28)–500–500–2000–2, initialized with pretraining as described in [22] and [25]. For the chain-rule PE optimization we used the code from [25]. Because of
memory limitations, [25] actually solved an approximate version of the PE objective function, where
rather than using all N² pairwise point interactions, only BN interactions are used, corresponding
to using minibatches of B = 5,000 points. Therefore, the solution obtained is not a minimizer of the
PE objective, as can be seen from the higher objective value in fig. 3(bottom). However, we did also
solve the exact objective by using B = N (i.e., one minibatch containing the entire dataset). Each
minibatch was trained with 3 CG iterations and a total of 30 epochs.
For MAC, we used μ ∈ {10⁻⁷, 5×10⁻⁷, 10⁻⁶, 5×10⁻⁶, 10⁻⁵, 5×10⁻⁵}, optimizing until the objective
function decrease (before the Z step and after the F step) was less than a relative error of 10⁻³. The
rest of the optimization details concern the embedding and neural net, and are based on existing code.
The initialization for Z is the free embedding. The Z step (like the free embedding) uses the spectral
direction with a fixed step size of 0.05, using 10 iterations of linear conjugate gradients to solve
the linear system (L + (μ/2)I)P = −G, and using warm-start (i.e., initialized from the previous
iteration's direction). The gradient G of the free embedding is approximated in O(N log N) using
the Barnes-Hut method with accuracy θ = 1.5. Altogether one Z iteration took around 5 seconds.
We exit the Z step when the relative error between consecutive embeddings is less than 10⁻³. For
the F step we used stochastic gradient descent with minibatches of 100 points, step size 10⁻³ and
momentum rate 0.9, and trained for 5 epochs for the first 3 values of μ and for 3 epochs for the rest.
For the linear mapping F(y) = Ay, we implemented our own chain-rule PE optimizer with gradient descent and backtracking line search for 30 iterations. In MAC, we used 10 μ values spaced
logarithmically from 10⁻² to 10², optimizing at each μ value until the objective function decrease
was less than a relative error of 10⁻⁴. Both the Z step and the free embedding use the spectral
direction with a fixed step size of 0.01. We stop optimizing them when the relative error between
consecutive embeddings is less than 10⁻⁴. The gradient is approximated using fast multipole methods with accuracy p = 6 (the number of terms in the truncated series). In the F step, the linear
system to find A was solved using 10 iterations of linear conjugate gradients with warm start.
Fig. 3 shows the final parametric embeddings for MAC, neural-net t-SNE (top) and linear EE (bottom), and the learning curves (PE error P (F(Y)) over iterations). MAC is considerably faster than
the chain-rule optimization in all cases.
For the neural-net t-SNE, MAC is almost 5× faster than using minibatches (the approximate PE
objective) and 20× faster than the exact, batch mode. This is partly thanks to the use of N-body
methods in the Z step. The runtimes were (excluding the time taken by pretraining, 40′): MAC: 42′;
PE (minibatch): 3.36 h; PE (batch): 15 h; free embedding: 63′. Without using N-body methods,
MAC is 4× faster than PE (batch) and comparable to PE (minibatch). For the linear EE, the runtimes
were: MAC: 12.7′; PE: 63′; direct fit: 40′.
The neural-net t-SNE embedding preserves the overall structure of the free t-SNE embedding but
both embeddings do differ. For example, the free embedding creates small clumps of points and
the neural net, being a continuous mapping, tends to smooth them out. The linear EE embedding
distorts the free EE embedding considerably more than if using a neural net. This is because a linear
mapping has a much harder time at approximating the complex mapping from the high-dimensional
data into 2D that the free embedding implicitly demands.
5 Conclusion
In our view, the main advantage of using the method of auxiliary coordinates (MAC) to learn parametric embeddings is that it simplifies the algorithm development. One only needs to plug in existing
code for the embedding (with minor modifications) and the mapping. This is particularly useful to
benefit from complex, highly optimized code for specific problems, such as the N-body methods
we used here, or perhaps GPU implementations of deep nets and other machine learning models. In
many applications, the efficiency in programming an easy, robust solution is more valuable than the
speed of the machine. But, in addition, we find that the MAC algorithm can be considerably faster than the
chain-rule based optimization of the parametric embedding.
Acknowledgments
Work funded by NSF award IIS–1423515. We thank Weiran Wang for help with training the deep
net in the MNIST experiment.
References
[1] J. Barnes and P. Hut. A hierarchical O(N log N) force-calculation algorithm. Nature, 324, 1986.
[2] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15:1373–1396, 2003.
[3] Y. Bengio, O. Delalleau, N. Le Roux, J.-F. Paiement, P. Vincent, and M. Ouimet. Learning eigenfunctions links spectral embedding and kernel PCA. Neural Computation, 16:2197–2219, 2004.
[4] J. Bromley, J. W. Bentz, L. Bottou, I. Guyon, Y. LeCun, C. Moore, E. Säckinger, and R. Shah. Signature verification using a "siamese" time delay neural network. Int. J. Pattern Recognition and Artificial Intelligence, 5:669–688, 1993.
[5] M. Á. Carreira-Perpiñán. The elastic embedding algorithm for dimensionality reduction. ICML, 2010.
[6] M. Á. Carreira-Perpiñán and Z. Lu. The Laplacian Eigenmaps Latent Variable Model. AISTATS, 2007.
[7] M. Á. Carreira-Perpiñán and W. Wang. Distributed optimization of deeply nested systems. arXiv:1212.5921 [cs.LG], Dec. 24 2012.
[8] M. Á. Carreira-Perpiñán and W. Wang. Distributed optimization of deeply nested systems. AISTATS, 2014.
[9] A. Globerson and S. Roweis. Metric learning by collapsing classes. NIPS, 2006.
[10] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood components analysis. NIPS, 2005.
[11] L. Greengard and V. Rokhlin. A fast algorithm for particle simulations. J. Comp. Phys., 73, 1987.
[12] A. Griewank and A. Walther. Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation. SIAM Publ., second edition, 2008.
[13] R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality reduction by learning an invariant mapping. CVPR, 2006.
[14] X. He and P. Niyogi. Locality preserving projections. NIPS, 2004.
[15] G. Hinton and S. T. Roweis. Stochastic neighbor embedding. NIPS, 2003.
[16] D. Lowe and M. E. Tipping. Feed-forward neural networks and topographic mappings for exploratory data analysis. Neural Computing & Applications, 4:83–95, 1996.
[17] J. Mao and A. K. Jain. Artificial neural networks for feature extraction and multivariate data projection. IEEE Trans. Neural Networks, 6:296–317, 1995.
[18] R. Min, Z. Yuan, L. van der Maaten, A. Bonner, and Z. Zhang. Deep supervised t-distributed embedding. ICML, 2010.
[19] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, second edition, 2006.
[20] J. Peltonen and S. Kaski. Discriminative components of data. IEEE Trans. Neural Networks, 16, 2005.
[21] R. Raziperchikolaei and M. Á. Carreira-Perpiñán. Learning hashing with affinity-based loss functions using auxiliary coordinates. arXiv:1501.05352 [cs.LG], Jan. 21 2015.
[22] R. Salakhutdinov and G. Hinton. Learning a nonlinear embedding by preserving class neighbourhood structure. AISTATS, 2007.
[23] J. W. Sammon, Jr. A nonlinear mapping for data structure analysis. IEEE Trans. Computers, 18, 1969.
[24] Y. W. Teh and S. Roweis. Automatic alignment of local representations. NIPS, 2003.
[25] L. J. P. van der Maaten. Learning a parametric embedding by preserving local structure. AISTATS, 2009.
[26] L. J. P. van der Maaten. Barnes-Hut-SNE. Int. Conf. Learning Representations (ICLR), 2013.
[27] L. J. P. van der Maaten and G. E. Hinton. Visualizing data using t-SNE. JMLR, 9:2579–2605, 2008.
[28] J. Venna, J. Peltonen, K. Nybo, H. Aidos, and S. Kaski. Information retrieval perspective to nonlinear dimensionality reduction for data visualization. JMLR, 11:451–490, 2010.
[29] M. Vladymyrov and M. Á. Carreira-Perpiñán. Partial-Hessian strategies for fast learning of nonlinear embeddings. ICML, 2012.
[30] M. Vladymyrov and M. Á. Carreira-Perpiñán. Entropic affinities: properties and efficient numerical computation. ICML, 2013.
[31] M. Vladymyrov and M. Á. Carreira-Perpiñán. Linear-time training of nonlinear low-dimensional embeddings. AISTATS, 2014.
[32] A. R. Webb. Multidimensional scaling by iterative majorization using radial basis functions. Pattern Recognition, 28:753–759, 1995.
[33] J. Weston, F. Ratle, and R. Collobert. Deep learning via semi-supervised embedding. ICML, 2008.
[34] Z. Yang, J. Peltonen, and S. Kaski. Scalable optimization for neighbor embedding for visualization. ICML, 2013.
5,494 | 5,973 | Bayesian Manifold Learning:
The Locally Linear Latent Variable Model
Mijung Park, Wittawat Jitkrittum, Ahmad Qamar*,
Zoltán Szabó, Lars Buesing†, Maneesh Sahani
Gatsby Computational Neuroscience Unit
University College London
{mijung, wittawat, zoltan.szabo}@gatsby.ucl.ac.uk
atqamar@gmail.com, lbuesing@google.com, maneesh@gatsby.ucl.ac.uk
Abstract
We introduce the Locally Linear Latent Variable Model (LL-LVM), a probabilistic
model for non-linear manifold discovery that describes a joint distribution over observations, their manifold coordinates and locally linear maps conditioned on a set
of neighbourhood relationships. The model allows straightforward variational optimisation of the posterior distribution on coordinates and locally linear maps from
the latent space to the observation space given the data. Thus, the LL-LVM encapsulates the local-geometry preserving intuitions that underlie non-probabilistic
methods such as locally linear embedding (LLE). Its probabilistic semantics make
it easy to evaluate the quality of hypothesised neighbourhood relationships, select
the intrinsic dimensionality of the manifold, construct out-of-sample extensions
and to combine the manifold model with additional probabilistic models that capture the structure of coordinates within the manifold.
1 Introduction
Many high-dimensional datasets comprise points derived from a smooth, lower-dimensional manifold embedded within the high-dimensional space of measurements and possibly corrupted by noise.
For instance, biological or medical imaging data might reflect the interplay of a small number of latent processes that all affect measurements non-linearly. Linear multivariate analyses such as principal component analysis (PCA) or multidimensional scaling (MDS) have long been used to estimate
such underlying processes, but cannot always reveal low-dimensional structure when the mapping is
non-linear (or, equivalently, the manifold is curved). Thus, there has been substantial recent interest
in algorithms to identify non-linear manifolds in data.
Many more-or-less heuristic methods for non-linear manifold discovery are based on the idea of
preserving the geometric properties of local neighbourhoods within the data, while embedding, unfolding or otherwise transforming the data to occupy fewer dimensions. Thus, algorithms such as
locally-linear embedding (LLE) and Laplacian eigenmap attempt to preserve local linear relationships or to minimise the distortion of local derivatives [1, 2]. Others, like Isometric feature mapping
(Isomap) or maximum variance unfolding (MVU) preserve local distances, estimating global manifold properties by continuation across neighbourhoods before embedding to lower dimensions by
classical methods such as PCA or MDS [3]. While generally hewing to this same intuitive path, the
range of available algorithms has grown very substantially in recent years [4, 5].
* Current affiliation: Thread Genius
† Current affiliation: Google DeepMind
However, these approaches do not define distributions over the data or over the manifold properties.
Thus, they provide no measures of uncertainty on manifold structure or on the low-dimensional
locations of the embedded points; they cannot be combined with a structured probabilistic model
within the manifold to define a full likelihood relative to the high-dimensional observations; and they
provide only heuristic methods to evaluate the manifold dimensionality. As others have pointed out,
they also make it difficult to extend the manifold definition to out-of-sample points in a principled
way [6].
An established alternative is to construct an explicit probabilistic model of the functional relationship
between low-dimensional manifold coordinates and each measured dimension of the data, assuming
that the functions instantiate draws from Gaussian-process priors. The original Gaussian process
latent variable model (GP-LVM) required optimisation of the low-dimensional coordinates, and thus
still did not provide uncertainties on these locations or allow evaluation of the likelihood of a model
over them [7]; however a recent extension exploits an auxiliary variable approach to optimise a
more general variational bound, thus retaining approximate probabilistic semantics within the latent
space [8]. The stochastic process model for the mapping functions also makes it straightforward
to estimate the function at previously unobserved points, thus generalising out-of-sample with ease.
However, the GP-LVM gives up on the intuitive preservation of local neighbourhood properties that
underpin the non-probabilistic methods reviewed above. Instead, the expected smoothness or other
structure of the manifold must be defined by the Gaussian process covariance function, chosen a
priori.
Here, we introduce a new probabilistic model over high-dimensional observations, low-dimensional
embedded locations and locally-linear mappings between high and low-dimensional linear maps
within each neighbourhood, such that each group of variables is Gaussian distributed given the
other two. This locally linear latent variable model (LL-LVM) thus respects the same intuitions
as the common non-probabilistic manifold discovery algorithms, while still defining a full-fledged
probabilistic model. Indeed, variational inference in this model follows more directly and with fewer
separate bounding operations than the sparse auxiliary-variable approach used with the GP-LVM.
Thus, uncertainty in the low-dimensional coordinates and in the manifold shape (defined by the local
maps) is captured naturally. A lower bound on the marginal likelihood of the model makes it possible
to select between different latent dimensionalities and, perhaps most crucially, between different
definitions of neighbourhood, thus addressing an important unsolved issue with neighbourhood-defined algorithms. Unlike existing probabilistic frameworks with locally linear models such as
mixtures of factor analysers (MFA)-based and local tangent space analysis (LTSA)-based methods
[9, 10, 11], LL-LVM does not require an additional step to obtain the globally consistent alignment
of low-dimensional local coordinates.¹
This paper is organised as follows. In section 2, we introduce our generative model, LL-LVM, for
which we derive the variational inference method in section 3. We briefly describe out-of-sample
extension for LL-LVM and mathematically describe the dissimilarity between LL-LVM and GP-LVM at the end of section 3. In section 4, we demonstrate the approach on several real-world
problems.
Notation: In the following, a diagonal matrix with entries taken from the vector v is written diag(v).
The vector of n ones is 1_n and the n × n identity matrix is I_n. The Euclidean norm of a vector is
‖v‖, the Frobenius norm of a matrix is ‖M‖_F. The Kronecker delta is denoted by δ_ij (= 1 if i = j,
and 0 otherwise). The Kronecker product of matrices M and N is M ⊗ N. For a random vector w,
we denote the normalisation constant in its probability density function by Z_w. The expectation of
a random vector w with respect to a density q is ⟨w⟩_q.
2 The model: LL-LVM
Suppose we have n data points {y_1, ..., y_n} ⊂ R^{d_y}, and a graph G on nodes {1, ..., n} with edge
set E_G = {(i, j) | y_i and y_j are neighbours}. We assume that there is a low-dimensional (latent)
representation of the high-dimensional data, with coordinates {x_1, ..., x_n} ⊂ R^{d_x}, d_x < d_y. It will
be helpful to concatenate the vectors to form y = [y_1^T, ..., y_n^T]^T and x = [x_1^T, ..., x_n^T]^T.
¹ This is also true of one previous MFA-based method [12], which finds model parameters and global coordinates by variational methods similar to our own.
2
Figure 1: Locally linear mapping C_i for the ith data point transforms the tangent space, T_{x_i}M_x
at x_i in the low-dimensional space, to the tangent space, T_{y_i}M_y at the corresponding data point
y_i in the high-dimensional space. A neighbouring data point is denoted by y_j and the corresponding
latent variable by x_j.
Our key assumption is that the mapping between high-dimensional data and low-dimensional coordinates is locally linear (Fig. 1). The tangent spaces are approximated by {y_j − y_i}_{(i,j)∈E_G} and
{x_j − x_i}_{(i,j)∈E_G}, the pairwise differences between the ith point and neighbouring points j. The
matrix C_i ∈ R^{d_y × d_x} at the ith point linearly maps those tangent spaces as

    y_j − y_i ≈ C_i (x_j − x_i).    (1)
Under this assumption, we aim to find the distribution over the linear maps C = [C_1, ..., C_n] ∈
R^{d_y × n d_x} and the latent variables x that best describe the data likelihood given the graph G:

    log p(y|G) = log ∫∫ p(y, C, x|G) dx dC.    (2)

The joint distribution can be written in terms of priors on C, x and the likelihood of y as

    p(y, C, x|G) = p(y|C, x, G) p(C|G) p(x|G).    (3)
In the following, we highlight the essential components of the Locally Linear Latent Variable Model
(LL-LVM). Detailed derivations are given in the Appendix.
Adjacency matrix and Laplacian matrix. The edge set of G for n data points specifies an n × n
symmetric adjacency matrix G. We write η_ij for the (i, j)th element of G, which is 1 if y_j and
y_i are neighbours and 0 if not (including on the diagonal). The graph Laplacian matrix is then
L = diag(G 1_n) − G.
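To make this construction concrete, the sketch below (not from the paper; the helper name
knn_graph_laplacian is ours) builds a symmetric k-NN adjacency matrix G and the Laplacian
L = diag(G 1_n) − G with plain NumPy:

    import numpy as np

    def knn_graph_laplacian(Y, k):
        """Y: (n, dy) data matrix. Returns the 0/1 adjacency G and Laplacian L."""
        n = Y.shape[0]
        sq = np.sum(Y ** 2, axis=1)
        D2 = sq[:, None] + sq[None, :] - 2.0 * (Y @ Y.T)  # squared distances
        np.fill_diagonal(D2, np.inf)                      # no self-neighbours
        G = np.zeros((n, n))
        nbrs = np.argsort(D2, axis=1)[:, :k]              # k nearest per point
        G[np.repeat(np.arange(n), k), nbrs.ravel()] = 1.0
        G = np.maximum(G, G.T)                            # symmetrise eta_ij
        L = np.diag(G.sum(axis=1)) - G                    # L = diag(G 1_n) - G
        return G, L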
Prior on x. We assume that the latent variables are zero-centered with a bounded expected scale,
and that latent variables corresponding to neighbouring high-dimensional points are close (in Euclidean distance). Formally, the log prior on the coordinates is then

    log p({x_1, ..., x_n}|G, α) = −(1/2) Σ_{i=1}^n ( α ‖x_i‖² + Σ_{j=1}^n η_ij ‖x_i − x_j‖² ) − log Z_x,

where the parameter α controls the expected scale (α > 0). This prior can be written as a multivariate
normal distribution on the concatenated x:

    p(x|G, α) = N(0, Π), where Ω⁻¹ = 2L ⊗ I_{d_x}, Π⁻¹ = α I_{n d_x} + Ω⁻¹.
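As a sketch of how this prior precision is assembled in code (assuming the knn_graph_laplacian
helper above, and dense matrices, which is only practical for small n since Π⁻¹ is
(n d_x) × (n d_x)):

    import numpy as np

    def x_prior_precision(L, dx, alpha):
        """Pi^{-1} = alpha I_{n dx} + Omega^{-1}, with Omega^{-1} = 2 L kron I_dx."""
        n = L.shape[0]
        Omega_inv = 2.0 * np.kron(L, np.eye(dx))
        return alpha * np.eye(n * dx) + Omega_inv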
Prior on C. We assume that the linear maps corresponding to neighbouring points are similar in
terms of Frobenius norm (thus favouring a smooth manifold of low curvature). This gives

    log p({C_1, ..., C_n}|G) = −(ε/2) ‖Σ_{i=1}^n C_i‖_F² − (1/2) Σ_{i=1}^n Σ_{j=1}^n η_ij ‖C_i − C_j‖_F² − log Z_c
                             = −(1/2) Tr[ (ε JJ^⊤ + Ω⁻¹) C^⊤ C ] − log Z_c,    (4)

where J := 1_n ⊗ I_{d_x}. The second line corresponds to the matrix normal density, giving p(C|G) =
MN(C|0, I_{d_y}, (ε JJ^⊤ + Ω⁻¹)⁻¹) as the prior on C. In our implementation, we fix ε to a small
value², since the magnitude of the product C_i(x_i − x_j) is determined by optimising the hyperparameter α above.

² ε sets the scale of the average linear map, ensuring the prior precision matrix is invertible.
Figure 2: Graphical representation of the generative process in LL-LVM (variables C, G, x, y and
parameters V, α). Given a dataset, we construct a neighbourhood graph G. The distribution over the
latent variable x is controlled by the graph G as well as the parameter α. The distribution over the
linear map C is also governed by the graph G. The latent variable x and the linear map C together
determine the data likelihood.
Likelihood. Under the local-linearity assumption, we penalise the approximation error of Eq. (1),
which yields the log likelihood

    log p(y|C, x, V, G) = −(ε/2) ‖Σ_{i=1}^n y_i‖² − (1/2) Σ_{i=1}^n Σ_{j=1}^n η_ij (Δy_{j,i} − C_i Δx_{j,i})^⊤ V⁻¹ (Δy_{j,i} − C_i Δx_{j,i}) − log Z_y,    (5)

where Δy_{j,i} = y_j − y_i and Δx_{j,i} = x_j − x_i.³ Thus, y is drawn from a multivariate normal
distribution given by

    p(y|C, x, V, G) = N(μ_y, Σ_y),

with Σ_y⁻¹ = ε (1_n 1_n^⊤) ⊗ I_{d_y} + 2L ⊗ V⁻¹, μ_y = Σ_y e, and e = [e_1^⊤, ..., e_n^⊤]^⊤ ∈ R^{n d_y},
where e_i = −Σ_{j=1}^n η_ji V⁻¹ (C_j + C_i) Δx_{j,i}. For computational simplicity, we assume V⁻¹ = γ I_{d_y}.
The graphical representation of the generative process underlying the LL-LVM is given in Fig. 2.
3 Variational inference

Our goal is to infer the latent variables (x, C) as well as the parameters θ = {α, γ} in LL-LVM. We
infer them by maximising the lower bound L of the marginal likelihood of the observations

    log p(y|G, θ) ≥ ∫∫ q(C, x) log [ p(y, C, x|G, θ) / q(C, x) ] dx dC := L(q(C, x), θ).    (6)

Following the common treatment for computational tractability, we assume the posterior over (C, x)
factorises as q(C, x) = q(x) q(C) [13]. We maximise the lower bound w.r.t. q(C, x) and θ by the
variational expectation maximisation algorithm [14], which consists of (1) the variational expectation step for computing q(C, x) by

    q(x) ∝ exp[ ∫ q(C) log p(y, C, x|G, θ) dC ],    (7)
    q(C) ∝ exp[ ∫ q(x) log p(y, C, x|G, θ) dx ],    (8)

then (2) the maximisation step for estimating θ by θ̂ = argmax_θ L(q(C, x), θ).
Variational-E step. Computing q(x) from Eq. (7) requires rewriting the likelihood in Eq. (5) as a
quadratic function in x,

    p(y|C, x, θ, G) = (1/Z̃_x) exp[ −(1/2)(x^⊤ A x − 2 x^⊤ b) ],

where the normaliser Z̃_x has all the terms that do not depend on x from Eq. (5). Let L̃ :=
(ε 1_n 1_n^⊤ + 2γL). The matrix A := A_E^⊤ Σ_y⁻¹ A_E = [A_ij]_{i,j=1}^n ∈ R^{n d_x × n d_x}, where the
(i, j)th d_x × d_x block is A_ij = Σ_{p=1}^n Σ_{q=1}^n L̃(p, q) A_E(p, i)^⊤ A_E(q, j), and each (i, j)th
(d_y × d_x) block of A_E ∈ R^{n d_y × n d_x} is given by A_E(i, j) = −η_ij V⁻¹ (C_j + C_i) +
δ_ij Σ_k η_ik V⁻¹ (C_k + C_i). The vector b is defined as b = [b_1^⊤, ..., b_n^⊤]^⊤ ∈ R^{n d_x}, with the
component d_x-dimensional vectors given by b_i = Σ_{j=1}^n η_ij ( C_j^⊤ V⁻¹ (y_i − y_j) − C_i^⊤ V⁻¹ (y_j − y_i) ).
The likelihood combined with the prior on x gives us the Gaussian posterior over x (i.e., solving Eq. (7))

    q(x) = N(x|μ_x, Σ_x), where Σ_x⁻¹ = ⟨A⟩_{q(C)} + Π⁻¹, μ_x = Σ_x ⟨b⟩_{q(C)}.    (9)

³ The term centers the data and ensures the distribution can be normalised. It applies in a subspace orthogonal to that modelled by x and C and so its value does not affect the resulting manifold model.
Figure 3: A simulated example. A: 400 data points drawn from a Swiss Roll. B: true latent points (x)
in 2D used for generating the data. C: posterior mean of C and D: posterior mean of x after 50 EM
iterations given k = 9, which was chosen by maximising the lower bound across different k's
(k ranging over 6–11). E: average lower bounds as a function of k. Each point is an average across
10 random seeds.
Similarly, computing q(C) from Eq. (8) requires rewriting the likelihood in Eq. (5) as a quadratic
function in C,

    p(y|C, x, G, θ) = (1/Z̃_C) exp[ −(1/2) Tr( Γ C^⊤ C − 2 C^⊤ V⁻¹ H ) ],    (10)

where the normaliser Z̃_C has all the terms that do not depend on C from Eq. (5), and Γ := Q L̃ Q^⊤.
The matrix Q = [q_1, q_2, ..., q_n] ∈ R^{n d_x × n}, where the jth subvector of the ith column is
q_i(j) = η_ij V⁻¹ (x_i − x_j) + δ_ij Σ_k η_ik V⁻¹ (x_i − x_k) ∈ R^{d_x} (recall V⁻¹ = γ I). We define
H = [H_1, ..., H_n] ∈ R^{d_y × n d_x}, whose ith block is H_i = Σ_{j=1}^n η_ij (y_j − y_i)(x_j − x_i)^⊤.

The likelihood combined with the prior on C gives us the Gaussian posterior over C (i.e., solving
Eq. (8))

    q(C) = MN(μ_C, I, Σ_C), where Σ_C⁻¹ := ⟨Γ⟩_{q(x)} + ε JJ^⊤ + Ω⁻¹ and μ_C = V⁻¹ ⟨H⟩_{q(x)} Σ_C.    (11)

The expected values of A, b, Γ and H are given in the Appendix.
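A schematic rendering of the two E-step updates follows. This is a sketch rather than the released
implementation: expect_A, expect_b, expect_Gamma and expect_H stand in for the closed-form
expectations given in the Appendix, and J = 1_n ⊗ I_{d_x} as above.

    import numpy as np

    def update_qx(expect_A, expect_b, Pi_inv):
        """Eq. (9): Sigma_x^{-1} = <A> + Pi^{-1}, mu_x = Sigma_x <b>."""
        Sigma_x = np.linalg.inv(expect_A + Pi_inv)
        return Sigma_x @ expect_b, Sigma_x

    def update_qC(expect_Gamma, expect_H, Omega_inv, J, V_inv, eps):
        """Eq. (11): Sigma_C^{-1} = <Gamma> + eps J J^T + Omega^{-1}."""
        Sigma_C = np.linalg.inv(expect_Gamma + eps * (J @ J.T) + Omega_inv)
        mu_C = V_inv @ expect_H @ Sigma_C
        return mu_C, Sigma_C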
Variational-M step. We set the parameters by maximising L(q(C, x), θ) w.r.t. θ, which is split
into two terms based on dependence on each parameter: (1) the expected log-likelihood for updating
V by argmax_V E_{q(x)q(C)}[log p(y|C, x, V, G)]; and (2) the negative KL divergence between the prior
and the posterior on x for updating α by argmax_α E_{q(x)q(C)}[log p(x|G, α) − log q(x)]. The update
rules for each hyperparameter are given in the Appendix.

The full EM algorithm⁴ starts with an initial value of θ. In the E-step, given q(C), compute q(x)
as in Eq. (9). Likewise, given q(x), compute q(C) as in Eq. (11). The parameters θ are updated
in the M-step by maximising Eq. (6). The two steps are repeated until the variational lower bound
in Eq. (6) saturates. To give a sense of how the algorithm works, we visualise fitting results for
a simulated example in Fig. 3. Using the graph constructed from 3D observations given different
k, we run our EM algorithm. The posterior means of x and C given the optimal k chosen by the
maximum lower bound resemble the true manifolds in 2D and 3D spaces, respectively.
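In outline, the loop looks like the sketch below, where init_qC, e_step_x, e_step_C, m_step and
lower_bound are placeholders for an initialiser and for the updates in Eqs. (9), (11), the M-step,
and Eq. (6), respectively:

    def fit_ll_lvm(Y, G, dx, theta0, n_iters=50, tol=1e-4):
        theta, qC = theta0, init_qC(Y, G, dx)     # any sensible initialiser
        bound_old = float("-inf")
        for _ in range(n_iters):
            qx = e_step_x(Y, G, qC, theta)        # Eq. (9)
            qC = e_step_C(Y, G, qx, theta)        # Eq. (11)
            theta = m_step(Y, G, qx, qC)          # update alpha and gamma
            bound = lower_bound(Y, G, qx, qC, theta)  # Eq. (6)
            if bound - bound_old < tol:           # stop when the bound saturates
                break
            bound_old = bound
        return qx, qC, theta, bound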
Out-of-sample extension. In the LL-LVM model one can formulate a computationally efficient
out-of-sample extension technique as follows. Given n data points denoted by D = {y_1, ..., y_n},
the variational EM algorithm derived in the previous section converts D into the posterior q(x)q(C):
D ↦ q(x)q(C). Now, given a new high-dimensional data point y*, one can first find
the neighbourhood of y* without changing the current neighbourhood graph. Then, it is possible to compute the distributions over the corresponding locally linear map and latent variable
q(C*, x*) by simply performing the E-step given q(x)q(C) (freezing all other quantities the same),
as D ∪ {y*} ↦ q(x)q(C)q(x*)q(C*).
⁴ An implementation is available from http://www.gatsby.ucl.ac.uk/resources/lllvm.
Figure 4: Resolving short-circuiting problems using the variational lower bound. A: visualization of
400 samples drawn from a Swiss Roll in 3D space. Points 28 (red) and 29 (blue) are close to each
other (dotted grey) in 3D. B: visualization of the 400 samples on the latent 2D manifold; the
distance between points 28 and 29 is seen to be large. C: posterior mean of x with (lower bound
1119.4) and without (lower bound 1151.5) short-circuiting the 28th and the 29th data points in the
graph construction. LL-LVM achieves a higher lower bound when the shortcut is absent. The red
and blue parts are mixed in the resulting estimate in 2D space (right) when there is a shortcut. The
lower bound is obtained after 50 EM iterations.
Comparison to GP-LVM. A closely related probabilistic dimensionality reduction algorithm to
LL-LVM is GP-LVM [7]. GP-LVM defines the mapping from the latent space to the data space using Gaussian processes. The likelihood of the observations Y = [y_1, ..., y_{d_y}] ∈ R^{n × d_y} (y_k
is the vector formed by the kth element of all n high-dimensional vectors) given latent variables
X = [x_1, ..., x_{d_x}] ∈ R^{n × d_x} is defined by p(Y|X) = Π_{k=1}^{d_y} N(y_k|0, K_nn + β⁻¹ I_n), where
the (i, j)th element of the covariance matrix is of the exponentiated quadratic form: k(x_i, x_j) =
σ_f² exp[ −(1/2) Σ_{q=1}^{d_x} α_q (x_{i,q} − x_{j,q})² ] with smoothness-scale parameters {α_q} [8]. In LL-LVM, once
we integrate out C from Eq. (5), we also obtain the Gaussian likelihood given x,

    p(y|x, G, θ) = ∫ p(y|C, x, G, θ) p(C|G, θ) dC = (1/Z_y) exp[ −(1/2) y^⊤ K_LL⁻¹ y ].

In contrast to GP-LVM, the precision matrix K_LL⁻¹ = (2L ⊗ V⁻¹) − (W ⊗ V⁻¹) Λ (W^⊤ ⊗ V⁻¹)
depends on the graph Laplacian matrix through W and Λ. Therefore, in LL-LVM, the graph
structure directly determines the functional form of the conditional precision.
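For reference, the GP-LVM exponentiated-quadratic (ARD) covariance quoted above can be
computed as follows (a sketch; sigma_f2 and alphas correspond to the σ_f² and {α_q} of the text):

    import numpy as np

    def ard_kernel(X, sigma_f2, alphas):
        """X: (n, dx); alphas: (dx,) smoothness scales. Returns the (n, n) kernel."""
        diff = X[:, None, :] - X[None, :, :]                    # (n, n, dx)
        return sigma_f2 * np.exp(-0.5 * np.sum(alphas * diff ** 2, axis=-1))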
4 Experiments

4.1 Mitigating the short-circuit problem
Like other neighbour-based methods, LL-LVM is sensitive to misspecified neighbourhoods; the
prior, likelihood, and posterior all depend on the assumed graph. Unlike other methods, LL-LVM provides a natural way to evaluate possible short-circuits using the variational lower bound
of Eq. (6). Fig. 4 shows 400 samples drawn from a Swiss Roll in 3D space (Fig. 4A). Two points,
labelled 28 and 29, happen to fall close to each other in 3D, but are actually far apart on the latent (2D) surface (Fig. 4B). A k-nearest-neighbour graph might link these, distorting the recovered
coordinates. However, evaluating the model without this edge (the correct graph) yields a higher
variational bound (Fig. 4C). Although it is prohibitive to evaluate every possible graph in this way,
the availability of a principled criterion to test specific hypotheses is of obvious value.
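This hypothesis test amounts to fitting the model under each candidate graph and comparing
bounds; a sketch using the fit_ll_lvm skeleton introduced earlier:

    def prefer_graph(Y, G_a, G_b, dx, theta0):
        """Return the graph whose fitted variational lower bound is higher."""
        *_, bound_a = fit_ll_lvm(Y, G_a, dx, theta0)
        *_, bound_b = fit_ll_lvm(Y, G_b, dx, theta0)
        return G_a if bound_a >= bound_b else G_b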
In the following, we demonstrate LL-LVM on two real datasets: handwritten digits and climate data.
4.2 Modelling USPS handwritten digits

As a first real-data example, we test our method on a subset of 80 samples each of the digits
0, 1, 2, 3, 4 from the USPS digit dataset, where each digit is of size 16 × 16 (i.e., n = 400, d_y = 256).
We follow [7], and represent the low-dimensional latent variables in 2D.
Figure 5: USPS handwritten digit dataset described in section 4.2. A: mean (solid) and variance
(1 standard deviation shading) of the variational lower bound across 10 different random starts of
the EM algorithm with different k's (k = n/100, n/80, n/50, n/40). The highest lower bound is
achieved when k = n/80. B: the posterior mean of x in 2D. Each digit is colour coded. On the right
side are reconstructions of y* for randomly chosen query points x*. Using neighbouring y and
posterior means of C we can recover y* successfully (see text). C: fitting results by GP-LVM using
the same data. D: ISOMAP (k = 30) and E: LLE (k = 40). Using the extracted features (in 2D), we
evaluated a 1-NN classifier for digit identity with 10-fold cross-validation (the same data divided
into 10 training and test sets). The classification error is shown in F. LL-LVM features yield
comparably low error with GP-LVM and ISOMAP.
Fig. 5A shows variational lower bounds for different values of k, using 9 different EM initialisations.
The posterior mean of x obtained from LL-LVM using the best k is illustrated in Fig. 5B. Fig. 5B
also shows reconstructions of one randomly-selected example of each digit, using its 2D coordinates
x*, as well as the posterior mean coordinates x̂_i, tangent spaces Ĉ_i, and actual images y_i of its
k = n/80 closest neighbours. The reconstruction is based on the assumed tangent-space structure
of the generative model (Eq. (5)), that is:

    ŷ* = (1/k) Σ_{i=1}^k [ y_i + Ĉ_i (x* − x̂_i) ].

A similar process could be used to reconstruct digits at out-of-sample locations. Finally, we quantify the relevance
of the recovered subspace by computing the error incurred using a simple classifier to report digit
identity using the 2D features obtained by LL-LVM and various competing methods (Fig. 5C–F).
Classification with LL-LVM coordinates performs similarly to GP-LVM and ISOMAP (k = 30),
and outperforms LLE (k = 40).
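The reconstruction formula translates directly into code; in this sketch, mu_x and mu_C are the
stacked posterior means and nbr_idx indexes the k closest latent neighbours:

    import numpy as np

    def reconstruct(x_star, nbr_idx, Y, mu_x, mu_C, dx):
        """y* = (1/k) sum_i [ y_i + C_i (x* - x_i) ] over the k neighbours."""
        recs = [Y[i] + mu_C[:, i*dx:(i+1)*dx] @ (x_star - mu_x[i*dx:(i+1)*dx])
                for i in nbr_idx]
        return np.mean(recs, axis=0)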
4.3 Mapping climate data

In this experiment, we attempted to recover 2D geographical relationships between weather stations
from recorded monthly precipitation patterns. Data were obtained by averaging month-by-month
annual precipitation records from 2005–2014 at 400 weather stations scattered across the US (see
Fig. 6).⁵ Thus, the data set comprised 400 12-dimensional vectors. The goal of the experiment is to
recover the two-dimensional topology of the weather stations (as given by their latitude and longitude) using only these 12-dimensional climatic measurements.

⁵ The dataset is made available by the National Climatic Data Center at http://www.ncdc.noaa.gov/oa/climate/research/ushcn/. We use version 2.5 monthly data [15].
Figure 6: Climate modelling problem as described in section 4.3. Panels: (a) the 400 weather
stations (latitude roughly 30–45, longitude −120 to −70), and the projections obtained by (b) LLE,
(c) LTSA, (d) ISOMAP, (e) GP-LVM, and (f) LL-LVM. Each example corresponding to a weather
station is a 12-dimensional vector of monthly precipitation measurements. Using only the
measurements, the projection obtained from the proposed LL-LVM recovers the topological
arrangement of the stations to a large degree.
As before, we compare the projected
points obtained by LL-LVM with several widely used dimensionality reduction techniques. For the
graph-based methods LL-LVM, LTSA, ISOMAP, and LLE, we used 12-NN with Euclidean distance
to construct the neighbourhood graph.
The results are presented in Fig. 6. LL-LVM identified a more geographically-accurate arrangement
for the weather stations than the other algorithms. The fully probabilistic nature of LL-LVM and
GP-LVM allowed these algorithms to handle the noise present in the measurements in a principled
way. This contrasts with ISOMAP, which can be topologically unstable [16], i.e., vulnerable to short-circuit errors if the neighbourhood is too large. Perhaps coincidentally, LL-LVM also seems to
respect local geography more fully in places than does GP-LVM.
5 Conclusion
We have demonstrated a new probabilistic approach to non-linear manifold discovery that embodies the central notion that local geometries are mapped linearly between manifold coordinates and
high-dimensional observations. The approach offers a natural variational algorithm for learning,
quantifies local uncertainty in the manifold, and permits evaluation of hypothetical neighbourhood
relationships.
In the present study, we have described the LL-LVM model conditioned on a neighbourhood graph.
In principle, it is also possible to extend LL-LVM so as to construct a distance matrix as in [17], by
maximising the data likelihood. We leave this as a direction for future work.
Acknowledgments
The authors were funded by the Gatsby Charitable Foundation.
References

[1] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
[2] M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In NIPS, pages 585–591, 2002.
[3] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
[4] L. J. P. van der Maaten, E. O. Postma, and H. J. van den Herik. Dimensionality reduction: A comparative review, 2008. http://www.iai.uni-bonn.de/~jz/dimensionality_reduction_a_comparative_review.pdf.
[5] L. Cayton. Algorithms for manifold learning. Univ. of California at San Diego Tech. Rep., pages 1–17, 2005. http://www.lcayton.com/resexam.pdf.
[6] J. Platt. FastMap, MetricMap, and landmark MDS are all Nyström algorithms. In Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics, pages 261–268, 2005.
[7] N. Lawrence. Gaussian process latent variable models for visualisation of high dimensional data. In NIPS, pages 329–336, 2003.
[8] M. K. Titsias and N. D. Lawrence. Bayesian Gaussian process latent variable model. In AISTATS, pages 844–851, 2010.
[9] S. Roweis, L. Saul, and G. Hinton. Global coordination of local linear models. In NIPS, pages 889–896, 2002.
[10] M. Brand. Charting a manifold. In NIPS, pages 961–968, 2003.
[11] Y. Zhan and J. Yin. Robust local tangent space alignment. In NIPS, pages 293–301, 2009.
[12] J. Verbeek. Learning nonlinear image manifolds by global alignment of local linear models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(8):1236–1250, 2006.
[13] C. Bishop. Pattern Recognition and Machine Learning. Springer, New York, 2006.
[14] M. J. Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, Gatsby Unit, University College London, 2003.
[15] M. Menne, C. Williams, and R. Vose. The U.S. historical climatology network monthly temperature data, version 2.5. Bulletin of the American Meteorological Society, 90(7):993–1007, July 2009.
[16] M. Balasubramanian and E. L. Schwartz. The Isomap algorithm and topological stability. Science, 295(5552):7–7, January 2002.
[17] N. Lawrence. Spectral dimensionality reduction via maximum entropy. In AISTATS, pages 51–59, 2011.
Local Causal Discovery of Direct Causes and Effects
Tian Gao
Qiang Ji
Department of ECSE
Rensselaer Polytechnic Institute, Troy, NY 12180
{gaot, jiq}@rpi.edu
Abstract
We focus on the discovery and identification of direct causes and effects of a target
variable in a causal network. State-of-the-art causal learning algorithms generally
need to find the global causal structures in the form of complete partial directed
acyclic graphs (CPDAG) in order to identify direct causes and effects of a target
variable. While these algorithms are effective, it is often unnecessary and wasteful
to find the global structures when we are only interested in the local structure of
one target variable (such as class labels). We propose a new local causal discovery algorithm, called Causal Markov Blanket (CMB), to identify the direct causes
and effects of a target variable based on Markov Blanket Discovery. CMB is designed to conduct causal discovery among multiple variables, but focuses only on
finding causal relationships between a specific target variable and other variables.
Under standard assumptions, we show both theoretically and experimentally that
the proposed local causal discovery algorithm can obtain the comparable identification accuracy as global methods but significantly improve their efficiency, often
by more than one order of magnitude.
1 Introduction
Causal discovery is the process of identifying the causal relationships among a set of random variables.
It not only can aid predictions and classifications like feature selection [4], but can also help predict consequences of some given actions, facilitate counter-factual inference, and help explain the
underlying mechanisms of the data [13]. A lot of research efforts have been focused on predicting causality from observational data [13, 18]. They can be roughly divided into two sub-areas:
causal discovery between a pair of variables and among multiple variables. We focus on multivariate causal discovery, which searches for correlations and dependencies among variables in causal
networks [13]. Causal networks can be used for local or global causal prediction, and thus they can
be learned locally and globally. Many causal discovery algorithms for causal networks have been
proposed, and the majority of them belong to global learning algorithms as they seek to learn global
causal structures. The Spirtes-Glymour-Scheines (SGS) [18] and Peter-Clark (P-C) algorithm [19]
test for the existence of edges between every pair of nodes in order to first find the skeleton, or
undirected edges, of causal networks and then discover all the V-structures, resulting in a partially
directed acyclic graph (PDAG). The last step of these algorithms is then to orient the rest of edges
as much as possible using Meek rules [10] while maintaining consistency with the existing edges.
Given a causal network, causal relationships among variables can be directly read off the structure.
Due to the complexity of the P-C algorithm and unreliable high order conditional independence tests
[9], several works [23, 15] have incorporated the Markov Blanket (MB) discovery into the causal
discovery with a local-to-global approach. Growth and Shrink (GS) [9] algorithm uses the MBs
of each node to build the skeleton of a causal network, discover all the V-structures, and then use
the Meek rules to complete the global causal structure. The max-min hill climbing (MMHC) [23]
algorithm also finds MBs of each variable first, but then uses the MBs as constraints to reduce the
search space for the score-based standard hill climbing structure learning methods. In [15], authors
use Markov Blanket with Collider Sets (CS) to improve the efficiency of the GS algorithm by combining the spouse and V-structure discovery. All these local-to-global methods rely on the global
structure to find the causal relationships and require finding the MBs for all nodes in a graph, even
if the interest is the causal relationships between one target variable and other variables. Different MB discovery algorithms can be used and they can be divided into two different approaches:
non-topology-based and topology-based. Non-topology-based methods [5, 9], used by CS and GS
algorithms, greedily test the independence between each variable and the target by directly using the
definition of Markov Blanket. In contrast, more recent topology-based methods [22, 1, 11] aim to
improve the data efficiency while maintaining a reasonable time complexity by finding the parents
and children (PC) set first and then the spouses to complete the MB.
Local learning of causal networks generally aims to identify a subset of causal edges in a causal
network. The Local Causal Discovery (LCD) algorithm and its variants [3, 17, 7] aim to find causal edges
by testing the dependence/independence relationships among every four-variable set in a causal
network. Bayesian Local Causal Discovery (BLCD) [8] explores the Y-structures among MB nodes
to infer causal edges [6]. While LCD/BLCD algorithms aim to identify a subset of causal edges via
special structures among all variables, we focus on finding all the causal edges adjacent to one target
variable. In other words, we want to find the causal identities of each node, in terms of direct causes
and effects, with respect to one target node. We first use Markov Blankets to find the direct causes
and effects, and then propose a new Causal Markov Blanket (CMB) discovery algorithm, which
determines the exact causal identities of MB nodes of a target node by tracking their conditional
independence changes, without finding the global causal structure of a causal network. The proposed
CMB algorithm is a complete local discovery algorithm and can identify the same direct causes and
effects for a target variable as global methods under standard assumptions. CMB is more scalable
than global methods, more efficient than local-to-global methods, and is complete in identifying
direct causes and effects of one target while other local methods are not.
2 Background
We use V to represent the variable space, capital letters (such as X, Y) to represent variables, bold
letters (such as Z, MB) to represent variable sets, and |Z| to represent the size of set Z. X ⊥ Y
and X ⊥̸ Y represent independence and dependence between X and Y, respectively. We assume
readers are familiar with related concepts in causal network learning, and only review a few major
ones here. In a causal network or causal Bayesian network [13], nodes correspond to the random
ones here. In a causal network or causal Bayesian Network [13], nodes correspond to the random
variables in a variable set V. Two nodes are adjacent if they are connected by an edge. A directed
edge from node X to node Y , (X, Y ) ? V, indicates X is a parent or direct cause of Y and Y is
a child or direct effect of X [12]. Moreover, If there is a directed path from X to Y , then X is an
ancestor of Y and Y is a descendant of X. If nonadjacent X and Y have a common child, X and
Y are spouses. Three nodes X, Y , and Z form a V-structure [12] if Y has two incoming edges from
X and Z, forming X ? Y ? Z, and X is not adjacent to Z. Y is a collider in a path if Y has two
incoming edges in this path. Y with nonadjacent parents X and Z is an unshielded collider. A path
J from node X and Y is blocked [12] by a set of nodes Z, if any of following holds true: 1) there is
a non-collider node in J belonging to Z. 2) there is a collider node C on J such that neither C nor
any of its descendants belong to Z. Otherwise, J is unblocked or active.
A PDAG is a graph which may have both undirected and directed edges and has at most one edge
between any pair of nodes [10]. CPDAGs [2] represent Markov equivalence classes of DAGs, capturing the same conditional independence relationships with the same skeleton but potentially different
edge orientations. CPDAGs contain directed edges that have the same orientation for every DAG in
the equivalent class and undirected edges that have reversible orientations in the equivalent class.
Let G be the causal DAG of a causal network with variable set V and P be the joint probability distribution over variables in V. G and P satisfy the causal Markov condition [13] if and only if, ∀X ∈ V,
X is independent of the non-effects of X given its direct causes. The causal faithfulness condition [13]
states that G and P are faithful to each other if each and every independence and conditional independence entailed by P is present in G. It enables the recovery of G from sampled data of P. Another
widely-used assumption by existing causal discovery algorithms is causal sufficiency [12]. A set of
variables X ⊆ V is causally sufficient if no set of two or more variables in X shares a common
cause variable outside V. Without the causal sufficiency assumption, latent confounders between adjacent nodes would be modeled by bi-directed edges [24]. We also assume no selection bias [20] and
we can capture the same independence relationships among variables from the sampled data as the
ones from the entire population.
Many concepts and properties of a DAG hold in causal networks, such as d-separation and MB.
A Markov blanket [12] of a target variable T, MBT, in a causal network is the minimal set of
nodes conditioned on which all other nodes are independent of T, denoted as X ⊥ T | MBT,
∀X ∈ {V \ T} \ MBT. Given an unknown distribution P that satisfies the Markov condition with respect
to an unknown DAG G0 , Markov Blanket Discovery is the process used to estimate the MB of a
target node in G0 , from independently and identically distributed (i.i.d) data D of P . Under the
causal faithfulness assumption between G0 and P , the MB of a target node T is unique and is the
set of parents, children, and spouses of T (i.e., other parents of children of T ) [12]. In addition, the
parents and children set of T , PCT , is also unique. Intuitively, the MB can directly facilitate causal
discovery. If conditioning on the MB of a target variable T renders a variable X independent of
T , then X cannot be a direct cause or effect of T . From the local causal discovery point of view,
although MB may contain nodes with different causal relationships with the target, it is reasonable
to believe that we can identify their relationships exactly, up to the Markov equivalence, with further
tests.
Lastly, existing causal network learning algorithms all use three Meek rules [10], which we assume
the readers are familiar with, to orient as many edges as possible given all V-structures in PDAGs to
obtain CPDAG. The basic idea is to orient the edges so that 1) the edge directions do not introduce
new V-structures, 2) preserve the no-cycle property of a DAG, and 3) enforce 3-fork V-structures.
3 Local Causal Discovery of Direct Causes and Effects
Existing MB discovery algorithms do not directly offer the exact causal identities of the learned MB
nodes of a target. Although the topology-based methods can find the PC set of the target within
the MB set, they can only provide the causal identities of some children and spouses that form V-structures. Nevertheless, following existing works [4, 15], under standard assumptions, every PC
variable of a target can only be its direct cause or effect:
Theorem 1. Causality within an MB. Under the causal faithfulness, sufficiency, correct independence tests, and no selection bias assumptions, the parent and child nodes within a target's MB set
in a causal network contain all and only the direct causes and effects of the target variable.
The proof can be directly derived from the PC set definition of a causal network. Therefore, using
the topology-based MB discovery methods, if we can discover the exact causal identities of the PC
nodes within the MB, causal discovery of direct causes and effects of the target can therefore be
successfully accomplished.
Building on MB discovery, we propose a new local causal discovery algorithm, Causal Markov
Blanket (CMB) discovery as shown in Algorithm 1. It identifies the direct causes and effects of a
target variable without the need of finding the global structure or the MBs of all other variables in
a causal network. CMB has three major steps: 1) to find the MB set of the target and to identify
some direct causes and effects by tracking the independence relationship changes among a target?s
PC nodes before and after conditioning on the target node, 2) to repeat Step 1 but conditioned on
one PC node?s MB set, and 3) to repeat Step 1 and 2 with unidentified neighboring nodes as new
targets to identify more direct causes and effects of the original target.
Step 1: Initial identification. CMB first finds the MB nodes of a target T, MBT, using a topology-based MB discovery algorithm that also finds PCT. CMB then uses the CausalSearch subroutine,
shown in Algorithm 2, to get initial causal identities of variables in PCT by checking every
variable pair in PCT according to Lemma 1.
Lemma 1. Let (X, Y) ∈ PCT, the PC set of the target T ∈ V in a causal DAG. The independence
relationships between X and Y can be divided into the following four conditions:

C1: X ⊥ Y and X ⊥ Y | T; this condition cannot happen.
C2: X ⊥ Y and X ⊥̸ Y | T ⇒ X and Y are both parents of T.
C3: X ⊥̸ Y and X ⊥ Y | T ⇒ at least one of X and Y is a child of T.
C4: X ⊥̸ Y and X ⊥̸ Y | T ⇒ their identities are inconclusive and need further tests.
Algorithm 1 Causal Markov Blanket Discovery Algorithm

1:  Input: D: data; T: target variable
2:  Output: IDT: the causal identities of all nodes with respect to T
    {Step 1: Establish initial ID}
3:  IDT = zeros(|V|, 1);
4:  (MBT, PCT) ← FindMB(T, D);
5:  Z ← ∅;
6:  IDT ← CausalSearch(D, T, PCT, Z, IDT);
    {Step 2: Further test variables with idT = 4}
7:  for one X in each pair (X, Y) with idT = 4 do
8:      MBX ← FindMB(X, D);
9:      Z ← {MBX \ T} \ Y;
10:     IDT ← CausalSearch(D, T, PCT, Z, IDT);
11:     if no element of IDT is equal to 4, break;
12: for every pair of parents (X, Y) of T do
13:     if ∃Z s.t. (X, Z) and (Y, Z) are idT = 4 pairs then
14:         IDT(Z) = 1;
15: IDT(X) ← 3, ∀X with IDT(X) = 4;
    {Step 3: Resolve variable set with idT = 3}
16: for each X with idT = 3 do
17:     recursively find IDX, without going back to the already queried variables;
18:     update IDT according to IDX;
19:     if IDX(T) = 2 then
20:         IDT(X) = 1;
21:         for every Y in idT = 3 variable pairs (X, Y) do
22:             IDT(Y) = 2;
23:     if no element of IDT is equal to 3, break;
24: Return: IDT
Algorithm 2 CausalSearch Subroutine

1:  Input: D: data; T: target variable; PCT: the PC set of T; Z: the conditioned variable set; ID: current ID
2:  Output: IDT: the new causal identities of all nodes with respect to T
    {Step 1: Single PC}
3:  if |PCT| = 1 then
4:      IDT(PCT) ← 3;
    {Step 2: Check C2 & C3}
5:  for every X, Y ∈ PCT do
6:      if X ⊥ Y | Z and X ⊥̸ Y | T ∪ Z then
7:          IDT(X) ← 1; IDT(Y) ← 1;
8:      else if X ⊥̸ Y | Z and X ⊥ Y | T ∪ Z then
9:          if IDT(X) = 1 then
10:             IDT(Y) ← 2
11:         else if IDT(Y) ≠ 2 then
12:             IDT(Y) ← 3
13:         if IDT(Y) = 1 then
14:             IDT(X) ← 2
15:         else if IDT(X) ≠ 2 then
16:             IDT(X) ← 3
17:         add (X, Y) to pairs with idT = 3
18:     else
19:         if IDT(X) and IDT(Y) ∈ {0, 4} then
20:             IDT(X) ← 4; IDT(Y) ← 4
21:             add (X, Y) to pairs with idT = 4
    {Step 3: Identify idT = 3 pairs with known parents}
22: for every X such that IDT(X) = 1 do
23:     for every Y in idT = 3 variable pairs (X, Y) do
24:         IDT(Y) ← 2;
25: Return: IDT
C1 does not happen because the path X − T − Y is unblocked either not given T or given T, and
the unblocked path makes X and Y dependent on each other. C2 implies that X and Y form a
V-structure with T as the corresponding collider, such as node C in Figure 1a, which has two parents
A and B. C3 indicates that the paths between X and Y are blocked conditioned on T, which means
that either one of (X, Y) is a child of T and the other is a parent, or both of (X, Y) are children of
T. For example, nodes D and F in Figure 1a satisfy this condition with respect to E. C4 shows that
there may be another unblocked path from X to Y besides X − T − Y. For example, in Figure
1b, nodes D and C have multiple paths between them besides D − T − C. Further tests are needed
to resolve this case.
Notation-wise, we use IDT to represent the causal identities of all the nodes with respect to T,
IDT(X) as variable X's causal identity with respect to T, and the lower-case idT as the individual ID of a node
with respect to T. We also use IDX to represent the causal identities of nodes with respect to node X. To avoid
changing the already identified PCs, CMB establishes a priority system.¹ We use idT = 1 to
represent nodes that are parents of T, idT = 2 for children of T, idT = 3 to represent a pair of nodes that
cannot both be parents (and/or ambiguous pairs from Markov equivalent structures, to be discussed
at Step 2), and idT = 4 to represent inconclusiveness. A lower number id cannot be changed
into a higher number (shown by Lines 11–15 of Algorithm 2).

¹ Note that the identification number is slightly different from the condition number in Lemma 1.
Figure 1: (a) A sample causal network. (b) A sample network with C4 nodes. The only active path
between D and C conditioned on MBC \ {T, D} is D − T − C.
If a variable pair satisfies C2, they
will both be labeled as parents (Line 7 of Algorithm 2). If a variable pair satisfies C3, one of them
is labeled as idT = 2 only if the other variable within the pair is already identified as a parent;
otherwise, they are both labeled as idT = 3 (Lines 9–12 and 15–17 of Algorithm 2). If a PC node
remains inconclusive with idT = 0, it is labeled as idT = 4 in Line 20 of Algorithm 2. Note that
if T has only one PC node, it is labeled as idT = 3 (Line 4 of Algorithm 2). Non-PC nodes always
have idT = 0.
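The priority rule itself reduces to a one-line guard; a sketch (the helper name is hypothetical):

    def update_id(ID, node, new_id):
        """Assign new_id only if the node is unidentified (0) or is being
        promoted to a lower (more informative) number; parents (1) and
        children (2) are never demoted."""
        if ID[node] == 0 or new_id < ID[node]:
            ID[node] = new_id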
Step 2: Resolve idT = 4. Lemma 1 alone cannot identify the variable pairs in PCT with idT = 4
due to other possible unblocked paths, and we have to seek other information. Fortunately, by
definition, the MB set of one of the target's PC nodes can block all paths to that PC node.

Lemma 2. Let (X, Y) ∈ PCT, the PC set of the target T ∈ V in a causal DAG. The independence
relationships between X and Y, conditioned on the MB of X minus {Y, T}, MBX \ {Y, T}, can
be divided into the following four conditions:

C1: X ⊥ Y | MBX \ {Y, T} and X ⊥ Y | T ∪ MBX \ Y; this condition cannot happen.
C2: X ⊥ Y | MBX \ {Y, T} and X ⊥̸ Y | T ∪ MBX \ Y ⇒ X and Y are both parents of T.
C3: X ⊥̸ Y | MBX \ {Y, T} and X ⊥ Y | T ∪ MBX \ Y ⇒ at least one of X and Y is a child of T.
C4: X ⊥̸ Y | MBX \ {Y, T} and X ⊥̸ Y | T ∪ MBX \ Y ⇒ X and Y are directly connected.
C1–C3 are very similar to those in Lemma 1. C4 is true because, conditioned on T and the MB of X
minus Y, the only potentially unblocked paths between X and Y are X − T − Y and/or X − Y. If
C4 happens, then the path X − T − Y has no impact on the relationship between X and Y, and hence
X and Y must be directly connected. If X and Y are not directly connected, then the only potentially
unblocked path between X and Y is X − T − Y, and X and Y will be identified by Line 10 of
Algorithm 1 with idT ∈ {1, 2, 3}. For example, in Figure 1b, conditioned on MBC \ {T, D}, i.e.,
{A, B}, the only path between C and D is through T. However, if X and Y are directly connected,
they will remain with idT = 4 (such as node D and E from Figure 1b). In this case, X, Y , and
T form a fully connected clique, and edges among the variables that form a fully connected clique
can have many different orientation combinations without affecting the conditional independence
relationships. Therefore, this case needs further tests to ensure Meek rules are satisfied. The third
Meek rule (enforcing 3-fork V-structures) is first enforced by Line 14 of Algorithm 1. Then the rest
of idT = 4 nodes are changed to have idT = 3 by Line 15 of Algorithm 1 and to be further processed
(even though they could be both parents at the same time) using neighbor nodes' causal identities.
Therefore, Step 2 of Algorithm 1 makes all variable pairs with idT = 4 become identified either
as parents, children, or with idT = 3 after taking some neighbors' MBs into consideration. Note
that Step 2 of CMB only needs to find the MBs for a small subset of the PC variables (in fact only
one MB for each variable pair with idT = 4).
Step 3: Resolve idT = 3. After Step 2, some PC variables may still have idT = 3. This could
happen because of the existence of Markov equivalence structures. Below we show the condition
under which the CMB can resolve the causal identities of all PC nodes.
Lemma 3. The Identifiability Condition. For Algorithm 1 to fully identify all the causal relationships within the PC set of a target T, 1) T must have at least two nonadjacent parents, 2) one of T's
single ancestors must contain at least two nonadjacent parents, or 3) T must have 3 parents that form a
3-fork pattern as defined in the Meek rules.
We use single ancestors to represent ancestor nodes that do not have a spouse with a mutual child that
is also an ancestor of T. If the target does not meet any of the conditions in Lemma 3, C2 will never
be satisfied and all PC variables within a MB will have idT = 3. Without a single parent identified,
it is impossible to infer the identities of children nodes using C3. Therefore, all the identities of the
PC nodes are uncertain, even though the resulting structure could be a CPDAG.
Step 3 of CMB searches for a non-single ancestor of T to infer the causal directions. For each node
X with idT = 3, CMB tries to identify its local causal structure recursively. If X's PC nodes are
all identified, it would return to the target with the resolved identities; otherwise, it will continue
to search for a non-single ancestor of X. Note that CMB will not go back to already-searched
variables with unresolved PC nodes without providing new information. Step 3 of CMB checks the
identifiability condition for all the ancestors of the target. If a graph structure does not meet the
conditions of Lemma 3, the final IDT will contain some idT = 3, which indicates reversible edges
in CPDAGs. The causal graph found by CMB will be a PDAG after Step 2 of Algorithm 1, and
a CPDAG after Step 3 of Algorithm 1.
Case Study. The procedure using CMB to identify the direct causes and effects of E in Figure 1a
has the following 3 steps. Step 1: CMB finds the MB and PC set of E. The PC set contains nodes
D and F. Then, IDE(D) = 3 and IDE(F) = 3. Step 2: to resolve the variable pair D and F
with idE = 3, 1) CMB finds the PC set of D, containing C, E, and G. Their idD are all 3's, since
D contains only one parent. 2) To resolve IDD, CMB checks the causal identities of nodes C and G
(without going back to E). The PC set of C contains A, B, and D. CMB identifies IDC(A) = 1,
IDC(B) = 1, and IDC(D) = 2. Since C resolves all its PC nodes, CMB returns to node D
with IDD(C) = 1. 3) With the new parent C, IDD(G) = 2, IDD(E) = 2, and CMB returns to
node E with IDE(D) = 1. Step 3: since IDE(D) = 1, after resolving the pair with idE = 3,
IDE(F) = 2.
Theorem 2. The Soundness and Completeness of CMB Algorithm. If the identifiability condition
is satisfied, using a sound and complete MB discovery algorithm, CMB will identify the direct causes
and effects of the target under the causal faithfulness, sufficiency, correct independence tests, and
no selection bias assumptions.
Proof. A sound and complete MB discovery algorithm finds all and only the MB nodes of a target.
Using it and under the causal sufficiency assumption, the learned PC set contains all and only the
cause-effect variables by Theorem 1. When Lemma 3 is satisfied, all parent nodes are identifiable
through V-structure independence changes, either by Lemma 1 or by Lemma 2. Also since children
cannot be conditionally independent of another PC node given its MB minus the target node (C2),
all parents identified by Lemma 1 and 2 will be the true positive direct causes. Therefore, all and
only the true positive direct causes will be correctly identified by CMB. Since PC variables can only
be direct causes or direct effects, all and only the direct effects are identified correctly by CMB.
In the cases where CMB fails to identify all the PC nodes, global causal discovery methods cannot
identify them either. Specifically, structures failing to satisfy Lemma 3 can have different orientations on some edges while preserving the skeleton and V-structures, hence leading to Markov
equivalent structures. For the cases where T has all single ancestors, the edge directions among all
single ancestors can always be reversed without introducing new V-structures and DAG violations,
in which cases the Meek rules cannot identify the causal directions either. For the cases with fully
connected cliques, these fully connected cliques do not meet the nonadjacent-parents requirement
for the first Meek rule (no new V-structures), and the second Meek rule (preserving DAGs) can
always be satisfied within a clique by changing the direction of one edge. Since CMB orients the
3-fork V-structure in the third Meek rule correctly by Lines 12–14 of Algorithm 1, CMB can identify
the same structure as the global methods that use the Meek rules.
Theorem 3. Consistency between CMB and Global Causal Discovery Methods. For the same
DAG G, Algorithm 1 will correctly identify all the direct causes and effects of a target variable T
as the global and local-to-global causal discovery methods² that use the Meek rules [10], up to G's
CPDAG, under the causal faithfulness, sufficiency, correct independence tests, and no selection bias
assumptions.
Proof. It has been shown that causal methods using the Meek rules [10] can identify a causal
structure up to its CPDAG. Since the Meek rules cannot identify the structures that fail Lemma 3,
the global and local-to-global methods can only identify the same structures as CMB. Since CMB is
sound and complete in identifying these structures by Theorem 2, CMB will identify all direct causes
and effects up to G's CPDAG.
3.1 Complexity

The complexity of the CMB algorithm is dominated by the step of finding the MB, which can have
exponential complexity [1, 16]. All other steps of CMB are trivial in comparison. If we assume a
uniform distribution on the neighbor sizes in a network with N nodes, then the expected time
complexity of Step 1 of CMB is O((1/N) Σ_{i=1}^N 2^i) = O(2^N / N), while local-to-global methods
are O(2^N). In later steps, CMB also needs to find MBs for a small subset of nodes that includes
1) one node between every pair of nodes that meets C4, and 2) a subset of the target's neighboring
nodes that provide additional clues for the target. Let l be the total size of these nodes; then CMB
reduces the cost by N/l times asymptotically.
4 Experiments
We use benchmark causal learning datasets to evaluate the accuracy and efficiency of CMB against
the causal discovery algorithms discussed above (P-C, GS, MMHC, and CS) and the local causal discovery algorithm LCD2 [7]. Due to the page limit, we show the results of the causal algorithms on four
medium-to-large datasets: ALARM, ALARM3, CHILD3, and INSUR3. They contain 37 to 111
nodes. We use 1000 data samples for all datasets. For each global or local-to-global algorithm, we
find the global structure of a dataset and then extract causal identities of all nodes to a target node.
CMB finds causal identities of every variable with respect to the target directly. We repeat the discovery process for each node in the datasets, and compare the discovered causal identities of all the
algorithms to all the Markov equivalent structures with the known ground truth structure. We use the
edge scores [15] to measure the number of missing edges, extra edges, and reversed edges3 in each
node?s local causal structure and report average values along with its standard deviation, for all the
nodes in a dataset. We use the existing implementation [21] of HITON-MB discovery algorithm to
find the MB of a target variable for all the algorithms. We also use the existing implementations [21]
for P-C, MMHC, and LCD2 algorithms. We implement GS, CS, and the proposed CMB algorithms
in MATLAB on a machine with 2.66GHz CPU and 24GB memory. Following the existing protocol [15], we use the number of conditional independence tests needed (or scores computed for the
score-based search method MMHC) to find the causal structures given the MBs⁴, and the number
of times that MB discovery algorithms are invoked to measure the efficiency of various algorithms.
We also use mutual-information-based conditional independence tests with a standard significance
level of 0.02 for all the datasets without worrying about parameter tuning.
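As a sketch of such a test on discrete data (an illustrative standard G-test, not the implementation
from [21]), one can compare the G-statistic, which is 2N times the empirical conditional mutual
information, against a chi-square threshold at the stated 0.02 level:

    import numpy as np
    from scipy.stats import chi2

    def g_test_ci(data, x, y, z, alpha=0.02):
        """data: integer-coded (N, |V|) array; returns True if X indep Y given Z."""
        N = data.shape[0]
        zkeys = [tuple(r) for r in data[:, z]] if z else [()] * N
        g_stat, dof = 0.0, 0
        for key in set(zkeys):                      # stratify by values of Z
            mask = np.array([k == key for k in zkeys])
            tab = np.zeros((data[:, x].max() + 1, data[:, y].max() + 1))
            for xv, yv in zip(data[mask, x], data[mask, y]):
                tab[xv, yv] += 1                    # contingency table of X, Y
            rows, cols, tot = tab.sum(1), tab.sum(0), tab.sum()
            expct = np.outer(rows, cols) / max(tot, 1.0)
            nz = (tab > 0) & (expct > 0)
            g_stat += 2.0 * np.sum(tab[nz] * np.log(tab[nz] / expct[nz]))
            dof += max(np.sum(rows > 0) - 1, 0) * max(np.sum(cols > 0) - 1, 0)
        return g_stat <= chi2.ppf(1 - alpha, max(dof, 1))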
As shown in Table 1, CMB consistently outperforms the global discovery algorithms on benchmark
causal networks, and has comparable edge accuracy with local-to-global algorithms. Although CMB
makes slightly more total edge errors in ALARM and ALARM3 datasets than CS, CMB is the best
method on CHILD3 and INSUR3. Since LCD2 is an incomplete algorithm, it never finds extra or
reversed edges but misses the most amount of edges. Efficiency-wise, CMB can achieve more than
one order of magnitude speedup, sometimes two orders of magnitude as shown in CHILD3 and
INSUR3, than the global methods. Compared to local-to-global methods, CMB also can achieve
² We specify the global and local-to-global causal methods to be P-C [19], GS [9] and CS [15].
³ If an edge is reversible in the equivalence class of the original graph but not in the equivalence class of the learned graph, it is considered a reversed edge as well.
⁴ For global methods, it is the number of tests needed or scores computed given the moral graph of the global structure. For LCD2, it would be the total number of tests, since it does not use a moral graph or MBs.
Table 1: Performance of Various Causal Discovery Algorithms on Benchmark Networks
(edge errors: extra / missing / reversed / total; efficiency: number of independence tests or scores,
and number of MB-discovery invocations, where a dash means the method does not invoke MB discovery)

Dataset | Method | Extra       | Missing     | Reversed    | Total       | No. Tests     | No. MB
Alarm   | P-C    | 1.59±0.19   | 2.19±0.14   | 0.32±0.10   | 4.10±0.19   | 4.0e3±4.0e2   | –
Alarm   | MMHC   | 1.29±0.18   | 1.94±0.09   | 0.24±0.06   | 3.46±0.23   | 1.8e3±1.7e3   | 37±0
Alarm   | GS     | 0.39±0.44   | 0.87±0.48   | 1.13±0.23   | 2.39±0.44   | 586.5±72.2    | 37±0
Alarm   | CS     | 0.42±0.10   | 0.64±0.10   | 0.38±0.08   | 1.43±0.10   | 331.4±61.9    | 37±0
Alarm   | LCD2   | 0.00±0.00   | 2.49±0.00   | 0.00±0.0    | 2.49±0.00   | 1.4e3±0       | –
Alarm   | CMB    | 0.69±0.13   | 0.61±0.11   | 0.51±0.10   | 1.81±0.11   | 53.7±4.5      | 2.61±0.12
Alarm3  | P-C    | 3.71±0.57   | 2.21±0.25   | 1.37±0.04   | 7.30±0.68   | 1.6e4±4.0e2   | –
Alarm3  | MMHC   | 2.36±0.11   | 2.45±0.08   | 0.72±0.08   | 5.53±0.27   | 3.7e3±6.1e2   | 111±0
Alarm3  | GS     | 1.24±0.23   | 1.41±0.05   | 0.99±0.14   | 3.64±0.13   | 2.1e3±1.2e2   | 111±0
Alarm3  | CS     | 1.26±0.16   | 1.47±0.08   | 0.63±0.14   | 3.38±0.13   | 699.1±60.4    | 111±0
Alarm3  | LCD2   | 0.00±0.00   | 3.85±0.00   | 0.00±0.0    | 3.85±0.00   | 1.2e4±0       | –
Alarm3  | CMB    | 1.41±0.13   | 1.55±0.27   | 0.78±0.25   | 3.73±0.11   | 50.3±6.2      | 2.58±0.09
Child3  | P-C    | 4.32±0.68   | 2.69±0.08   | 0.84±0.10   | 7.76±0.98   | 8.3e4±2.9e3   | –
Child3  | MMHC   | 1.98±0.10   | 1.57±0.04   | 0.43±0.04   | 4.00±0.93   | 6.6e3±8.2e2   | 60±0
Child3  | GS     | 0.88±0.04   | 0.75±0.08   | 1.03±0.08   | 2.66±0.33   | 2.1e3±2.5e2   | 60±0
Child3  | CS     | 0.94±0.20   | 0.91±0.14   | 0.53±0.08   | 2.37±0.33   | 1.0e3±4.8e2   | 60±0
Child3  | LCD2   | 0.00±0.00   | 2.63±0.00   | 0.00±0.0    | 2.63±0.00   | 3.6e3±0       | –
Child3  | CMB    | 0.92±0.12   | 0.84±0.16   | 0.60±0.10   | 2.36±0.31   | 78.2±15.2     | 2.53±0.15
Insur3  | P-C    | 4.76±1.33   | 2.50±0.11   | 1.29±0.11   | 8.55±0.81   | 2.5e5±1.2e4   | –
Insur3  | MMHC   | 2.39±0.18   | 2.53±0.06   | 0.76±0.07   | 5.68±0.43   | 3.1e4±5.2e2   | 81±0
Insur3  | GS     | 1.94±0.06   | 1.44±0.05   | 1.19±0.10   | 4.57±0.33   | 4.5e4±2.2e3   | 81±0
Insur3  | CS     | 1.92±0.08   | 1.56±0.06   | 0.89±0.09   | 4.37±0.23   | 2.6e4±3.9e3   | 81±0
Insur3  | LCD2   | 0.00±0.00   | 5.03±0.00   | 0.00±0.0    | 5.03±0.00   | 6.6e3±0       | –
Insur3  | CMB    | 1.72±0.07   | 1.39±0.06   | 1.19±0.05   | 4.30±0.21   | 159.8±38.5    | 2.46±0.11
Compared to local-to-global methods, CMB can also achieve more than one order of magnitude
speedup on ALARM3, CHILD3, and INSUR3. In addition, on these datasets, CMB only invokes
MB discovery algorithms 2 to 3 times, drastically reducing the MB calls of local-to-global algorithms.
Since a comparison based on independence tests is unfair to LCD2, which does not use MB discovery
or moral graphs, we also compared the time efficiency of LCD2 and CMB: CMB is 5 times faster
than LCD2 on ALARM, 4 times faster on ALARM3 and CHILD3, and 8 times faster on INSUR3.
In practice, the performance of CMB depends on two factors: the accuracy of the independence tests
and of the MB discovery algorithm. First, independence tests may not always be accurate and could
introduce errors while checking the four conditions of Lemmas 1 and 2, especially with insufficient
data samples. Secondly, causal discovery performance depends heavily on the performance of the
MB discovery step, as errors there can propagate to later steps of CMB. Improvements in both areas
could further improve CMB's accuracy. Efficiency-wise, CMB's complexity can still be exponential
and is dominated by the MB discovery phase, and thus its worst-case complexity could be the same
as that of local-to-global approaches for some special structures.
5
Conclusion
We propose a new local causal discovery algorithm, CMB. We show that CMB identifies the
same causal structure as the global and local-to-global causal discovery algorithms under the same
identification condition, but at a fraction of the cost of the global and local-to-global approaches.
We further prove the soundness and completeness of CMB. Experiments on benchmark datasets
show the comparable accuracy and greatly improved efficiency of CMB for local causal discovery.
Possible future work could study relaxing the assumptions, especially the causal sufficiency
assumption, for example by using a procedure similar to the FCI algorithm and the improved CS
algorithm [14] to handle latent variables in CMB.
References
[1] Constantin F. Aliferis, Ioannis Tsamardinos, and Alexander Statnikov. HITON: a novel Markov blanket algorithm for optimal variable selection. In AMIA Annual Symposium Proceedings, 2003.
[2] David Maxwell Chickering. Optimal structure identification with greedy search. Journal of Machine Learning Research, 2002.
[3] Gregory F. Cooper. A simple constraint-based algorithm for efficiently mining observational databases for causal relationships. Data Mining and Knowledge Discovery, 1(2):203-224, 1997.
[4] Isabelle Guyon, André Elisseeff, and Constantin Aliferis. Causal feature selection. 2007.
[5] Daphne Koller and Mehran Sahami. Toward optimal feature selection. In ICML 1996, pages 284-292. Morgan Kaufmann, 1996.
[6] Subramani Mani, Constantin F. Aliferis, Alexander R. Statnikov, and MED NYU. Bayesian algorithms for causal data mining. In NIPS Causality: Objectives and Assessment, pages 121-136, 2010.
[7] Subramani Mani and Gregory F. Cooper. A study in causal discovery from population-based infant birth and death records. In Proceedings of the AMIA Symposium, page 315. American Medical Informatics Association, 1999.
[8] Subramani Mani and Gregory F. Cooper. Causal discovery using a Bayesian local causal discovery algorithm. Medinfo, 11(Pt 1):731-735, 2004.
[9] Dimitris Margaritis and Sebastian Thrun. Bayesian network induction via local neighborhoods. In Advances in Neural Information Processing Systems 12, pages 505-511. MIT Press, 1999.
[10] Christopher Meek. Causal inference and causal explanation with background knowledge. In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, pages 403-410. Morgan Kaufmann Publishers Inc., 1995.
[11] Teppo Niinimäki and Pekka Parviainen. Local structure discovery in Bayesian networks. In Proceedings of Uncertainty in Artificial Intelligence, Workshop on Causal Structure Learning, pages 634-643, 2012.
[12] Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, Inc., 2nd edition, 1988.
[13] Judea Pearl. Causality: Models, Reasoning and Inference, volume 29. Cambridge University Press, 2000.
[14] Jean-Philippe Pellet and André Elisseeff. Finding latent causes in causal networks: an efficient approach based on Markov blankets. In Advances in Neural Information Processing Systems, pages 1249-1256, 2009.
[15] Jean-Philippe Pellet and André Elisseeff. Using Markov blankets for causal structure learning. Journal of Machine Learning Research, 2008.
[16] Jose M. Peña, Roland Nilsson, Johan Björkegren, and Jesper Tegnér. Towards scalable and data-efficient learning of Markov boundaries. International Journal of Approximate Reasoning, 45(2):211-232, July 2007.
[17] Craig Silverstein, Sergey Brin, Rajeev Motwani, and Jeff Ullman. Scalable techniques for mining causal structures. Data Mining and Knowledge Discovery, 4(2-3):163-192, 2000.
[18] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search. The MIT Press, 2nd edition, 2000.
[19] Peter Spirtes, Clark Glymour, Richard Scheines, Stuart Kauffman, Valerio Aimale, and Frank Wimberly. Constructing Bayesian network models of gene expression networks from microarray data, 2000.
[20] Peter Spirtes, Christopher Meek, and Thomas Richardson. Causal inference in the presence of latent variables and selection bias. In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, pages 499-506. Morgan Kaufmann Publishers Inc., 1995.
[21] Alexander Statnikov, Ioannis Tsamardinos, Laura E. Brown, and Constantin F. Aliferis. Causal Explorer: a MATLAB library of algorithms for causal discovery and variable selection for classification. In Causation and Prediction Challenge at WCCI, 2008.
[22] Ioannis Tsamardinos, Constantin F. Aliferis, and Alexander Statnikov. Time and sample efficient discovery of Markov blankets and direct causal relations. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '03, pages 673-678, New York, NY, USA, 2003. ACM.
[23] Ioannis Tsamardinos, Laura E. Brown, and Constantin F. Aliferis. The max-min hill-climbing Bayesian network structure learning algorithm. Machine Learning, 65(1):31-78, 2006.
[24] Jiji Zhang. On the completeness of orientation rules for causal discovery in the presence of latent confounders and selection bias. Artificial Intelligence, 172(16):1873-1896, 2008.
5,496 | 5,975 | Discriminative Robust Transformation Learning
Jiaji Huang
Qiang Qiu
Guillermo Sapiro
Robert Calderbank
Department of Electrical Engineering, Duke University
Durham, NC 27708
{jiaji.huang,qiang.qiu,guillermo.sapiro,robert.calderbank}@duke.edu
Abstract
This paper proposes a framework for learning features that are robust to data variation, which is particularly important when only a limited number of training
samples are available. The framework makes it possible to trade off the discriminative value of learned features against the generalization error of the learning
algorithm. Robustness is achieved by encouraging the transform that maps data
to features to be a local isometry. This geometric property is shown to improve
(K, ε)-robustness, thereby providing theoretical justification for reductions in generalization error observed in experiments. The proposed optimization framework
is used to train standard learning algorithms such as deep neural networks. Experimental results obtained on benchmark datasets, such as Labeled Faces in the Wild,
demonstrate the value of being able to balance discrimination and robustness.
1
Introduction
Learning features that are able to discriminate is a classical problem in data analysis. The basic idea
is to reduce the variance within a class while increasing it between classes. One way to implement
this is by regularizing a certain measure of the variance, while assuming some prior knowledge
about the data. For example, Linear Discriminant Analysis (LDA) [4] measures sample covariance
and implicitly assumes that each class is Gaussian distributed. The Low Rank Transform (LRT) [10],
instead uses nuclear norm to measure the variance and assumes that each class is near a low-rank
subspace. A different approach is to regularize the pairwise distances between data points. Examples
include the seminal work on metric learning [17] and its extensions [5, 6, 16].
While great attention has been paid to designing objectives to encourage discrimination, less effort
has been made in understanding and encouraging robustness to data variation, which is especially
important when a limited number of training samples are available. One exception is [19], which
promotes robustness by regularizing the traditional metric learning objective using prior knowledge
from an auxiliary unlabeled dataset.
In this paper we develop a general framework for balancing discrimination and robustness. Robustness is achieved by encouraging the learned data-to-features transform to be locally an isometry
within each class. We theoretically justify this approach using (K, )-robustness [1, 18] and give an
example of the proposed formulation, incorporating it in deep neural networks. Experiments validate the capability to trade-off discrimination against robustness. Our main contributions are the
following: 1) prove that locally near isometry leads to robustness; 2) propose a practical framework
that allows to robustify a wide class of learned transforms, both linear and nonlinear; 3) provide
an explicit realization of the proposed framework, achieving competitive results on difficult face
verification tasks.
The paper is organized as follows. Section 2 motivates the proposed study and proposes a general
formulation for learning a Discriminative Robust Transform (DRT). Section 3 provides a theoretical
justification for the framework by making an explicit connection to robustness. Section 4 gives a
specific example of DRT, denoted as Euc-DRT. Section 5 provides experimental validation of Euc-DRT, and Section 6 presents conclusions.1
2
Problem Formulation
Consider an L-way classification problem. The training set is denoted by T = {(x_i, y_i)}, where
x_i ∈ R^n is the data and y_i ∈ {1, . . . , L} is the class label. We want to learn a feature transform
f_θ(·) such that a datum x becomes more discriminative when it is transformed to the feature f_θ(x).
The transform f_θ is parametrized by a vector θ, a framework that includes linear transforms and
neural networks, where the entries of θ are the learned network parameters.
2.1
Motivation
The transform f_θ promotes discriminability by reducing intra-class variance and enlarging inter-class variance. This aim is expressed in the design of objective functions [5, 10] or the structure
of the transform f_θ [7, 11]. However, the robustness of the learned transform is an important issue
that is often overlooked. When training samples are scarce, statistical learning theory [15] predicts
overfitting to the training data. The result of overfitting is that discrimination achieved on test data
will be significantly worse than that on training data. Our aim in this paper is the design of robust
transforms f_θ for which the training-to-testing degradation is small [18].
We formally measure robustness of the learned transform f_θ in terms of (K, ε)-robustness [1].
Given a distance metric ρ, a learning algorithm is said to be (K, ε)-robust if the input data space
can be partitioned into K disjoint sets S_k, k = 1, ..., K, such that for all training sets T, the learned
parameter θ_T determines a loss for which the value on pairs of training samples taken from different
sets S_j and S_k is very close to the value on any pair of data samples taken from S_j and S_k.
(K, ε)-robustness is illustrated in Fig. 1, where S_1 and S_2 are both of diameter γ and

    |e − e′| = |ρ(f_θ(x_1), f_θ(x_2)) − ρ(f_θ(x′_1), f_θ(x′_2))|.

If the transform f_θ preserves all distances within S_1 and S_2, then |e − e′| cannot deviate much from
|d − d′| ≤ 2γ.

Figure 1: (K, ε)-robustness: Here d = ρ(x_1, x_2), d′ = ρ(x′_1, x′_2), e = ρ(f_θ(x_1), f_θ(x_2)), and
e′ = ρ(f_θ(x′_1), f_θ(x′_2)). The difference |e − e′| cannot deviate too much from |d − d′|.
2.2
Formulation and Discussion
Motivated by the above reasoning, we now present our proposed framework. First we define a pair
label ℓ_{i,j} ≜ 1 if y_i = y_j, and −1 otherwise. Given a metric ρ, we use the following hinge loss to encourage
high inter-class distance and small intra-class distance:

    (1/|P|) Σ_{(i,j)∈P} max{0, ℓ_{i,j} [ρ(f_θ(x_i), f_θ(x_j)) − t(ℓ_{i,j})]},    (1)

Here P = {(i, j) | i ≠ j} is the set of all data pairs, t(ℓ_{i,j}) ≥ 0 is a function of ℓ_{i,j}, and t(1) < t(−1).
Similar to metric learning [17], this loss function connects pairwise distance to discrimination. However, traditional metric learning typically assumes the squared Euclidean distance, whereas here the metric ρ
can be arbitrary.
For robustness, as discussed above, we may want f_θ(·) to be distance-preserving within each small
local region. In particular, we define the set of all local neighborhoods as

    NB ≜ {(i, j) | ℓ_{i,j} = 1, ρ(x_i, x_j) ≤ γ}.

1 A note on the notation: matrices (vectors) are denoted by upper (lower) case bold letters. Scalars are
denoted by plain letters.
Therefore, we minimize the following objective function:

    (1/|NB|) Σ_{(i,j)∈NB} |ρ(f_θ(x_i), f_θ(x_j)) − ρ(x_i, x_j)|.    (2)
Note that we do not need to have the same metric in both the input and the feature space; they do not
even have, in general, the same dimension. With a slight abuse of notation we use the same symbol
to denote both metrics.

To achieve discrimination and robustness simultaneously, we formulate the objective function as a
weighted linear combination of the two extreme cases in (1) and (2):

    (λ/|P|) Σ_{(i,j)∈P} max{0, ℓ_{i,j} [ρ(f_θ(x_i), f_θ(x_j)) − t(ℓ_{i,j})]}
    + ((1−λ)/|NB|) Σ_{(i,j)∈NB} |ρ(f_θ(x_i), f_θ(x_j)) − ρ(x_i, x_j)|,    (3)

where λ ∈ [0, 1]. The formulation (3) balances discrimination and robustness. When λ = 1 it seeks
discrimination, and as λ decreases it starts to encourage robustness. We shall refer to a transform
that is learned by solving (3) as a Discriminative Robust Transform (DRT). The DRT framework
provides the opportunity to select both the distance measure and the transform family.
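For reference, below is a minimal NumPy sketch of the objective in Eq. (3) for a linear transform f_θ(x) = Θx and Euclidean ρ; the function name, the brute-force pair enumeration, and the calling convention are ours, and t(1), t(−1) and the neighbor set NB are assumed to be given:

```python
import numpy as np

def drt_objective(Theta, X, y, NB, t_pos, t_neg, lam):
    """Eq. (3) with f(x) = Theta @ x and Euclidean rho.
    X: (M, n) data, y: (M,) labels, NB: list of same-class neighbor pairs (i, j)."""
    F = X @ Theta.T                         # transformed features
    M = len(X)
    hinge, n_pairs = 0.0, 0
    for i in range(M):
        for j in range(M):
            if i == j:
                continue
            n_pairs += 1
            ell = 1.0 if y[i] == y[j] else -1.0
            t = t_pos if ell > 0 else t_neg
            d = np.linalg.norm(F[i] - F[j])
            hinge += max(0.0, ell * (d - t))      # discriminative hinge term
    iso = sum(abs(np.linalg.norm(F[i] - F[j]) - np.linalg.norm(X[i] - X[j]))
              for i, j in NB)                     # local isometry term
    return lam * hinge / n_pairs + (1.0 - lam) * iso / max(len(NB), 1)
```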
3
Theoretical Analysis
In this section, we provide a theoretical explanation for robustness. In particular, we show that if the
solution to (1) yields a transform f_θ that is locally a near isometry, then f_θ is robust.
3.1
Theoretical Framework
Let X denote the original data space, let Y = {1, ..., L} denote the set of class labels, and let Z = X × Y.
The training samples are pairs z_i = (x_i, y_i), i = 1, . . . , n, drawn from some unknown distribution
D defined on Z. The indicator is defined as ℓ_{i,j} = 1 if y_i = y_j and −1 otherwise. Let
f_θ be a transform that maps a low-level feature x to a more discriminative feature f_θ(x), and let F
denote the space of transformed features.

For simplicity we consider an arbitrary metric ρ defined on both X and F (the general case of
different metrics is a straightforward extension), and a loss function g(ρ(f_θ(x_i), f_θ(x_j)), ℓ_{i,j}) that
encourages ρ(f_θ(x_i), f_θ(x_j)) to be small (big) if ℓ_{i,j} = 1 (−1). We shall require the Lipschitz
constants of g(·, 1) and g(·, −1) to be upper bounded by A > 0. Note that the loss function in Eq. (1)
has a Lipschitz constant of 1. We abbreviate

    g(ρ(f_θ(x_i), f_θ(x_j)), ℓ_{i,j}) ≜ h_θ(z_i, z_j).

The empirical loss on the training set is a function of θ given by

    R_emp(θ) ≜ (2 / (n(n−1))) Σ_{i,j=1, i≠j}^n h_θ(z_i, z_j),    (4)

and the expected loss on the test data is given by

    R(θ) ≜ E_{z′_1, z′_2 ∼ D} [h_θ(z′_1, z′_2)].    (5)

The algorithm operates on pairs of training samples and finds parameters

    θ_T ≜ argmin_θ R_emp(θ),    (6)

that minimize the empirical loss on the training set T. The difference R − R_emp between the expected
loss on the test data and the empirical loss on the training data is the generalization error of the algorithm.
3.2
(K, ε)-robustness and Covering Number
We work with the following definition of (K, ε)-robustness [1].
Definition 1. A learning algorithm is (K, ε)-robust if Z = X × Y can be partitioned into K disjoint
sets Z_k, k = 1, . . . , K, such that for all training sets T ⊂ Z^n, the learned parameter θ_T determines
a loss function whose value on pairs of training samples taken from sets Z_p and Z_q is "very
close" to its value on any pair of data samples taken from Z_p and Z_q. Formally,
assume z_i, z_j ∈ T, with z_i ∈ Z_p and z_j ∈ Z_q; if z′_i ∈ Z_p and z′_j ∈ Z_q, then

    |h_{θ_T}(z_i, z_j) − h_{θ_T}(z′_i, z′_j)| ≤ ε.
Remark 1. (K, ε)-robustness means that the loss incurred by a testing pair (z′_i, z′_j) in Z_p × Z_q is
very close to the loss incurred by any training pair (z_i, z_j) in Z_p × Z_q. It is shown in [1] that the
generalization error of (K, ε)-robust algorithms is bounded as

    R(θ_T) − R_emp(θ_T) ≤ ε + O(√(K/n)).    (7)

Therefore the smaller ε, the smaller is the generalization error, and the more robust is the learning
algorithm.
Given a metric space, the covering number specifies how many balls of a given radius are needed to
cover the space. The more complex the metric space, the more balls are needed to cover it. The
covering number is formally defined as follows.
Definition 2 (Covering number). Given a metric space (S, ρ), we say that a subset Ŝ of S is a
γ-cover of S if for every element s ∈ S, there exists ŝ ∈ Ŝ such that ρ(s, ŝ) ≤ γ. The γ-covering
number of S is

    N_γ(S, ρ) = min{|Ŝ| : Ŝ is a γ-cover of S}.
Remark 2. The covering number is a measure of the geometric complexity of (S, ρ). A set S with
covering number N_{γ/2}(S, ρ) can be partitioned into N_{γ/2}(S, ρ) disjoint subsets, such that any two
points within the same subset are separated by no more than γ.
Lemma 1. The metric space Z = X × Y can be partitioned into L·N_{γ/2}(X, ρ) subsets, denoted
as Z_1, . . . , Z_{L·N_{γ/2}(X,ρ)}, such that any two points z_1 ≜ (x_1, y_1), z_2 ≜ (x_2, y_2) in the same subset
satisfy y_1 = y_2 and ρ(x_1, x_2) ≤ γ.
Proof. Assuming the metric space (X, ρ) is compact, we can partition X into N_{γ/2}(X, ρ) subsets,
each with diameter at most γ. Since Y is a finite set of size L, we can partition Z = X × Y into
L·N_{γ/2}(X, ρ) subsets with the property that two samples (x_1, y_1), (x_2, y_2) in the same subset satisfy
y_1 = y_2 and ρ(x_1, x_2) ≤ γ.
It follows from Lemma 1 that we may partition X into subsets X_1, . . . , X_{L·N_{γ/2}(X,ρ)}, such that pairs
of points x_1, x_2 from the same subset have the same label and satisfy ρ(x_1, x_2) ≤ γ. Before we
connect local geometry to robustness we need one more definition. We say that a learned transform
f_θ is a δ-isometry if the metric is distorted by at most δ:
Definition 3 (δ-isometry). Let A, B be metric spaces with metrics ρ_A and ρ_B. A map f : A → B is
a δ-isometry if for any a_1, a_2 ∈ A, |ρ_B(f(a_1), f(a_2)) − ρ_A(a_1, a_2)| ≤ δ.
Theorem 1. Let f_θ be a transform derived via Eq. (6) and let X_1, . . . , X_{L·N_{γ/2}(X,ρ)} be a cover of
X as described above. If f_θ is a δ-isometry, then it is (L·N_{γ/2}(X, ρ), 2A(γ + δ))-robust.
Proof sketch. Consider training samples z_i, z_j and testing samples z′_i, z′_j such that z_i, z′_i ∈ Z_p and
z_j, z′_j ∈ Z_q for some p, q ∈ {1, . . . , L·N_{γ/2}(X, ρ)}. Then by Lemma 1,

    ρ(x_i, x′_i) ≤ γ and ρ(x_j, x′_j) ≤ γ,    y_i = y′_i and y_j = y′_j,

and x_i, x′_i ∈ X_p and x_j, x′_j ∈ X_q. By the definition of δ-isometry,

    |ρ(f_{θ_T}(x_i), f_{θ_T}(x′_i)) − ρ(x_i, x′_i)| ≤ δ and |ρ(f_{θ_T}(x_j), f_{θ_T}(x′_j)) − ρ(x_j, x′_j)| ≤ δ.

Rearranging the terms gives

    ρ(f_{θ_T}(x_i), f_{θ_T}(x′_i)) ≤ ρ(x_i, x′_i) + δ ≤ γ + δ and ρ(f_{θ_T}(x_j), f_{θ_T}(x′_j)) ≤ ρ(x_j, x′_j) + δ ≤ γ + δ.
Figure 2: Proof without words.

In order to bound the generalization error, we need to bound the difference between
ρ(f_{θ_T}(x_i), f_{θ_T}(x_j)) and ρ(f_{θ_T}(x′_i), f_{θ_T}(x′_j)). The details can be found in [9]; here we appeal to the proof schematic in Fig. 2. We need to bound |e − e′|, and it cannot exceed twice the
diameter of a local region in the transformed domain.

Robustness of the learning algorithm depends on the granularity of the cover and the degree to
which the learned transform f_θ distorts distances between pairs of points in the same covering
subset. The subsets in the cover constitute regions where the local geometry makes it possible to
bound the generalization error. It now follows from [1] that the generalization error satisfies

    R(θ_T) − R_emp(θ_T) ≤ 2A(γ + δ) + O(√(K/n)).

The DRT proposed here is a particular example of a local
isometry, and Theorem 1 explains why the generalization error is smaller than that of pure metric
learning.

The transform described in [9] partitions the metric space X into exactly L subsets, one for each
class. The experiments reported in Section 5 demonstrate that the performance improvements derived from working with a finer partition can be worth the cost of learning finer-grained local regions.
4
An Illustrative Realization of DRT
Having justified robustness, we now provide a realization of the proposed general DRT where the
metric ρ is the Euclidean distance. We use Gaussian random variables to initialize θ; then, on the
randomly transformed data, we set t(1) (resp. t(−1)) to be the average intra-class (resp. inter-class) pairwise
distance. In all our experiments, the solution satisfied the condition t(1) < t(−1) required in Eq. (1).
We calculate the diameter γ of the local regions NB indirectly, using the κ-nearest neighbors of each
training sample to define a local neighborhood. We leave the question of how best to initialize the
indicator t and the diameter γ for future research.

We denote this particular example as Euc-DRT and use gradient descent to solve for θ. Denoting
the objective by J, we define y_i ≜ f_θ(x_i), δ_{i,j} ≜ f_θ(x_i) − f_θ(x_j), and δ⁰_{i,j} ≜ ‖x_i − x_j‖. Then

    ∂J/∂y_i = (λ/|P|) Σ_{(i,j)∈P : ℓ_{i,j}(‖δ_{i,j}‖ − t(ℓ_{i,j})) > 0} ℓ_{i,j} · δ_{i,j}/‖δ_{i,j}‖
              + ((1−λ)/|NB|) Σ_{(i,j)∈NB} sgn(‖δ_{i,j}‖ − δ⁰_{i,j}) · δ_{i,j}/‖δ_{i,j}‖.    (8)
In general, f_θ defines a D-layer neural network (when D = 1 it defines a linear transform). Let Θ^{(d)}
be the linear weights at the d-th layer, and let x_i^{(d)} be the output of the d-th layer, so that y_i = x_i^{(D)}.
Then the gradients are computed as

    ∂J/∂Θ^{(D)} = Σ_i (∂J/∂y_i) · (∂y_i/∂Θ^{(D)}),  and
    ∂J/∂Θ^{(d)} = Σ_i (∂J/∂x_i^{(d+1)}) · (∂x_i^{(d+1)}/∂Θ^{(d)})  for 1 ≤ d ≤ D − 1,    (9)

where ∂J/∂x_i^{(d+1)} is obtained by back-propagating ∂J/∂y_i through the layers above.
Algorithm 1 provides a summary, and we note that the extension to stochastic training using mini-batches is straightforward.
5
Experimental Results
In this section we report on experiments that confirm the robustness of Euc-DRT. Recall that the empirical
loss is given by Eq. (4), where θ is learned as θ_T from the training set T, and |T| = N. The
generalization error is R − R_emp, where the expected loss R is estimated using a large test set.
5.1
Toy Example
This illustrative example is motivated by the discussion in Section 2.1. We first generate a 2D
dataset consisting of two noisy half-moons, then use a random 100 × 2 matrix to embed the data
in a 100-dimensional space. We learn a linear transform f_θ that maps the 100-dimensional data to
2-dimensional features, and we use κ = 5 nearest neighbors to construct the set NB. We consider
λ = 1, 0.5, 0.25, representing the most discriminative, balanced, and more robust scenarios.
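A sketch of this toy construction, assuming specific noise levels, sample sizes, and a fixed seed (none of which are specified in the text):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 200                                       # points per moon (assumed)
t = rng.uniform(0.0, np.pi, m)
moon1 = np.c_[np.cos(t), np.sin(t)] + 0.1 * rng.standard_normal((m, 2))
moon2 = np.c_[1.0 - np.cos(t), 0.5 - np.sin(t)] + 0.1 * rng.standard_normal((m, 2))
X2d = np.vstack([moon1, moon2])               # two noisy half-moons in 2D
y = np.repeat([0, 1], m)                      # class labels
A = rng.standard_normal((100, 2))             # random 100 x 2 embedding matrix
X = X2d @ A.T                                 # training data in 100 dimensions
```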
When λ = 1 the transformed training samples are rather discriminative (Fig. 3a), but when the
transform is applied to testing data, the two classes are more mixed (Fig. 3d).
Algorithm 1 Gradient descent solver for Euc-DRT
Input: λ ∈ [0, 1], training pairs {(x_i, x_j, ℓ_{i,j})}, a pre-defined D-layer network (D = 1 for a linear
transform), stepsize η, neighborhood size κ.
Output: θ
 1: Randomly initialize θ; compute y_i = f_θ(x_i).
 2: On the y_i, compute the average intra- and inter-class pairwise distances; assign them to t(1), t(−1).
 3: For each training datum, find its κ nearest neighbors and define the set NB.
 4: while stable objective not achieved do
 5:   Compute y_i = f_θ(x_i) by a forward pass.
 6:   Compute the objective J.
 7:   Compute ∂J/∂y_i as in Eq. (8).
 8:   for d = D down to 1 do
 9:     Compute ∂J/∂Θ^{(d)} as in Eq. (9).
10:     Θ^{(d)} ← Θ^{(d)} − η · ∂J/∂Θ^{(d)}.
11:   end for
12: end while
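In a modern framework, the manual gradients of Eqs. (8) and (9) can be delegated to automatic differentiation; below is a minimal PyTorch sketch of one iteration of Algorithm 1 for a linear f_θ, which is our simplification rather than the authors' implementation:

```python
import torch

def euc_drt_step(Theta, X, y, NB, t_pos, t_neg, lam, lr=1e-2):
    """One gradient step on Eq. (3). Theta: (d_out, n) with requires_grad=True,
    e.g. Theta = torch.randn(2, 100, requires_grad=True); X: (M, n); y: (M,);
    NB: pair of index tensors (i, j) listing same-class nearest neighbors."""
    F = X @ Theta.T
    D = torch.cdist(F, F)                         # pairwise feature distances
    same = (y[:, None] == y[None, :]).float()
    ell = 2.0 * same - 1.0                        # pair labels: +1 same class, -1 otherwise
    t = torch.where(same.bool(), torch.tensor(t_pos), torch.tensor(t_neg))
    mask = 1.0 - torch.eye(len(y))                # exclude i == j pairs
    hinge = (torch.clamp(ell * (D - t), min=0) * mask).sum() / mask.sum()
    i, j = NB
    D0 = (X[i] - X[j]).norm(dim=1)                # distances in the input space
    iso = (D[i, j] - D0).abs().mean()
    J = lam * hinge + (1.0 - lam) * iso
    J.backward()                                  # autograd replaces Eqs. (8)-(9)
    with torch.no_grad():
        Theta -= lr * Theta.grad
        Theta.grad.zero_()
    return J.item()
```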
Figure 3: Original and transformed training/testing samples embedded in 2-dimensional space, with
different colors representing different classes. (a) λ = 1 transformed training samples (discriminative
case); (b) λ = 0.5 transformed training samples (balanced case); (c) λ = 0.25 transformed training
samples (robust case); (d) λ = 1 transformed testing samples; (e) λ = 0.5 transformed testing
samples; (f) λ = 0.25 transformed testing samples.
When λ = 0.5, the transformed training data are more dispersed within each class (Fig. 3b), hence less easily separated
than when λ = 1. However, Fig. 3e shows that it is easier to separate the two classes on the test data.
When λ = 0.25, robustness is preferred to discriminative power, as shown in Figs. 3c and 3f.
Tab. 1 quantifies the empirical loss R_emp, the generalization error, and the classification performance (by 1-nn)
for λ = 1, 0.5, and 0.25. As λ decreases, R_emp increases, indicating loss of discrimination on the
training set. However, the generalization error decreases, implying more robustness. We conclude that
by varying λ, we can balance discrimination and robustness.
5.2
MNIST Classification Using a Very Small Training Set
The transform f_θ learned in the previous section was linear, and we now apply a more sophisticated
convolutional neural network to the MNIST dataset. The network structure is similar to LeNet.
Table 1: Varying λ on a toy dataset.

λ                          1          0.5        0.25
R_emp                      1.5983     1.6025     1.9439
generalization error       10.5855    9.5071     8.8040
1-nn accuracy              92.20%     98.30%     91.55%
(original data: 93.35%)

Table 2: Classification accuracy on MNIST.

Training/class       30         50         70         100
original pixels      81.91%     86.18%     86.86%     88.49%
LeNet                87.51%     89.89%     91.24%     92.75%
DML                  92.32%     94.45%     95.67%     96.19%
Euc-DRT              94.14%     95.20%     96.05%     96.21%

Table 3: Implementation details of the neural network for MNIST classification.

name     parameters
conv1    size: 5 × 5 × 1 × 20, stride: 1, pad: 0
pool1    size: 2 × 2
conv2    size: 5 × 5 × 20 × 50, stride: 1, pad: 0
pool2    size: 2 × 2
conv3    size: 4 × 4 × 50 × 128, stride: 1, pad: 0
It is made up of alternating convolutional and pooling layers, with parameters detailed in Table 3.
We map the original 784-dimensional pixel values (28 × 28 images) to 128-dimensional features.
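A sketch of the Table 3 architecture in PyTorch; Table 3 specifies only the convolution and pooling shapes, so any nonlinearities between layers are omitted here, and the class name is ours:

```python
import torch.nn as nn

class EucDRTNet(nn.Module):
    """LeNet-style feature extractor from Table 3: a 28 x 28 image -> 128-d feature."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 20, kernel_size=5),    # conv1: 5x5x1x20, stride 1, pad 0 -> 24x24
            nn.MaxPool2d(2),                    # pool1: 2x2                      -> 12x12
            nn.Conv2d(20, 50, kernel_size=5),   # conv2: 5x5x20x50                -> 8x8
            nn.MaxPool2d(2),                    # pool2: 2x2                      -> 4x4
            nn.Conv2d(50, 128, kernel_size=4),  # conv3: 4x4x50x128               -> 1x1
        )

    def forward(self, x):                       # x: (batch, 1, 28, 28)
        return self.features(x).flatten(1)      # (batch, 128)
```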
While state-of-the-art results often use the full training set (6,000 training samples per class), here we are
interested in small training sets. We use only 30 training samples per class, and we use κ = 7 nearest
neighbors to define local regions in Euc-DRT. We vary λ and study the empirical error, the generalization
error, and the classification accuracy (1-nn). We observe in Fig. 4 that when λ decreases, the empirical
error also decreases, but the generalization error actually increases. By balancing these
two factors, a peak classification accuracy is achieved at λ = 0.25. Next, we use 30, 50, 70, and 100
Figure 4: MNIST test with only 30 training samples per class. We vary λ and assess (a) R_emp; (b)
the generalization error; and (c) the 1-nn classification accuracy. Peak accuracy is achieved at λ = 0.25.
training samples per class and compare the performance of Euc-DRT with LeNet and Deep Metric
Learning (DML) [7]. DML minimizes a hinge loss on the squared Euclidean distances. It shares the
same spirit as our Euc-DRT with λ = 1. All methods use the same network structure, Tab. 3, to
map to the features. For classification, LeNet uses a linear softmax classifier on top of the 'conv3'
layer and minimizes the standard cross-entropy loss during training. DML and Euc-DRT both use
a 1-nn classifier on the learned features. Classification accuracies are reported in Tab. 2. In Tab. 2,
we see that all the learned features improve upon the original ones. DML is very discriminative
and achieves higher accuracy than LeNet. However, when the training set is very small, robustness
becomes more important and Euc-DRT significantly outperforms DML.
5.3
Face Verification on LFW
We now present face verification on the more challenging Labeled Faces in the Wild (LFW) benchmark, where our experiments will show that there is an advantage to balancing discriminability and
robustness. Our goal is not to reproduce the success of deep learning in face verification [7, 14],
but to stress the importance of robust training and to compare the proposed Euc-DRT objective
with popular alternatives. Note also that it is difficult to compare with deep learning methods when
training sets are proprietary [12-14].
We adopt the experimental framework used in [2], and train a deep network on the WDRef dataset,
where each face is described using a high-dimensional LBP feature [3] (available online2) that is reduced to a 5000-dimensional feature using PCA. The WDRef dataset is significantly smaller than
the proprietary datasets typical of deep learning, such as the 4.4 million labeled faces from 4,030
individuals in [14], or the 202,599 labeled faces from 10,177 individuals in [12]. It contains 2,995
subjects with about 20 samples per subject.

We compare the Euc-DRT objective with DeepFace (DF) [14] and Deep Metric Learning (DML) [7],
two state-of-the-art deep learning objectives. For a fair comparison, we employ the same network
structure and train on the same input data. DeepFace feeds the output of the last network layer to an
L-way softmax to generate a probability distribution over L classes, then minimizes a cross-entropy
loss. The Euc-DRT feature f_θ is implemented as a two-layer fully connected network with tanh as
the squash function. Weight decay (conventional Frobenius-norm regularization) is employed in
both DF and DML, and results are only reported for the best weight-decay factor. After a network
is trained on WDRef, it is tested on the LFW benchmark. Verification simply consists of comparing
the cosine distance between a given pair of faces to a threshold.

Fig. 5 displays ROC curves and Table 4 reports the area under the ROC curve (AUC) and verification
accuracy. High-Dim LBP refers to verification using the initial LBP features. DeepFace (DF) optimizes a classification objective by minimizing a softmax loss, and it successfully separates
samples from different classes. However, the constraint that assigns similar representations to the
same class is weak, and this is reflected in the true positive rate displayed in Fig. 5. In Deep Metric
Learning (DML) this same constraint is strong, but robustness is a concern when the training set
is small. The proposed Euc-DRT improves upon both DF and DML by balancing discriminability
and robustness. It is less conservative than DF, for better discriminability, and more responsive to
local geometry than DML, for smaller generalization error. Face verification accuracy for Euc-DRT
was obtained by varying the regularization parameter λ between 0.4 and 1 (as shown in Fig. 6), then
reporting the peak accuracy observed at λ = 0.9.
Figure 5: Comparison of ROCs for all methods.

Figure 6: Verification accuracy of Euc-DRT as λ varies.

Table 4: Verification accuracy and AUCs on LFW

Method      Accuracy (%)    AUC (×10⁻²)
HD-LBP      74.73           82.22±1.00
deepFace    88.72           95.50±0.29
DML         90.28           96.74±0.33
Euc-DRT     92.33           97.77±0.25

6
Conclusion
We have proposed an optimization framework within which it is possible to trade off the discriminative value of learned features against the robustness of the learning algorithm. Improvements in generalization error predicted by theory are observed in experiments on benchmark datasets. Future
work will investigate how to initialize and tune the optimization, as well as how the Euc-DRT algorithm
compares with other methods that reduce generalization error.
7
Acknowledgement
The work of Huang and Calderbank was supported by AFOSR under FA 9550-13-1-0076 and by
NGA under HM017713-1-0006. The work of Qiu and Sapiro is partially supported by NSF and
DoD.
2 http://home.ustc.edu.cn/chendong/
References
[1] A. Bellet and A. Habrard. Robustness and generalization for metric learning. Neurocomputing, 151(14):259-267, 2015.
[2] D. Chen, X. Cao, L. Wang, F. Wen, and J. Sun. Bayesian face revisited: A joint formulation. In European Conference on Computer Vision (ECCV), 2012.
[3] D. Chen, X. Cao, F. Wen, and J. Sun. Blessing of dimensionality: High-dimensional feature and its efficient compression for face verification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.
[4] K. Fukunaga. Introduction to Statistical Pattern Recognition. San Diego: Academic Press, 1990.
[5] A. Globerson and S. Roweis. Metric learning by collapsing classes. In Advances in Neural Information Processing Systems (NIPS), 2005.
[6] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood components analysis. In Advances in Neural Information Processing Systems (NIPS), 2004.
[7] J. Hu, J. Lu, and Y. Tan. Discriminative deep metric learning for face verification in the wild. In Computer Vision and Pattern Recognition (CVPR), pages 1875-1882, 2014.
[8] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst, October 2007.
[9] J. Huang, Q. Qiu, R. Calderbank, and G. Sapiro. Geometry-aware deep transform. In International Conference on Computer Vision, 2015.
[10] Q. Qiu and G. Sapiro. Learning transformations for clustering and classification. Journal of Machine Learning Research (JMLR), pages 187-225, 2015.
[11] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, pages 539-546, 2005.
[12] Y. Sun, Y. Chen, X. Wang, and X. Tang. Deep learning face representation by joint identification-verification. In Advances in Neural Information Processing Systems (NIPS), pages 1988-1996, 2014.
[13] Y. Sun, X. Wang, and X. Tang. Deep learning face representation from predicting 10,000 classes. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1891-1898, 2014.
[14] Y. Taigman, M. Yang, M. A. Ranzato, and L. Wolf. DeepFace: Closing the gap to human-level performance in face verification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1701-1708, 2014.
[15] V. N. Vapnik. An overview of statistical learning theory. IEEE Transactions on Neural Networks, 10(5):988-999, 1999.
[16] K. Q. Weinberger and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research, 10:207-244, 2009.
[17] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell. Distance metric learning, with application to clustering with side-information. In Advances in Neural Information Processing Systems (NIPS), 2002.
[18] H. Xu and S. Mannor. Robustness and generalization. Machine Learning, 86(3):391-423, 2012.
[19] Z. Zha, T. Mei, M. Wang, Z. Wang, and X. Hua. Robust distance metric learning with auxiliary knowledge. In International Joint Conference on Artificial Intelligence (IJCAI), 2009.
5,497 | 5,976 | Max-Margin Majority Voting for
Learning from Crowds
Tian Tian, Jun Zhu
Department of Computer Science & Technology; Center for Bio-Inspired Computing Research
Tsinghua National Lab for Information Science & Technology
State Key Lab of Intelligent Technology & Systems; Tsinghua University, Beijing 100084, China
tiant13@mails.tsinghua.edu.cn; dcszj@tsinghua.edu.cn
Abstract
Learning-from-crowds aims to design proper aggregation strategies to infer the
unknown true labels from the noisy labels provided by ordinary web workers.
This paper presents max-margin majority voting (M3V) to improve the discriminative ability of majority voting and further presents a Bayesian generalization to
incorporate the flexibility of generative methods on modeling noisy observations
with worker confusion matrices. We formulate the joint learning as a regularized
Bayesian inference problem, where the posterior regularization is derived by maximizing the margin between the aggregated score of a potential true label and that
of any alternative label. Our Bayesian model naturally covers the Dawid-Skene
estimator and M3V. Empirical results demonstrate that our methods are competitive, often achieving better results than state-of-the-art estimators.
1
Introduction
Many learning tasks require labeling large datasets. Though reliable, it is often too expensive and
time-consuming to collect labels from domain experts or well-trained workers. Recently, online
crowdsourcing platforms have dramatically decreased the labeling cost by dividing the workload
into small parts, then distributing micro-tasks to a crowd of ordinary web workers [17, 20]. However,
the labeling accuracy of web workers could be lower than expected due to their various backgrounds
or lack of knowledge. To improve the accuracy, it is usually suggested to label every task multiple
times by different workers, then the redundant labels can provide hints on resolving the true labels.
Much progress has been made in designing effective aggregation mechanisms to infer the true labels
from noisy observations. From a modeling perspective, existing work includes both generative approaches and discriminative approaches. A generative method builds a flexible probabilistic model
for generating the noisy observations conditioned on the unknown true labels and some behavior
assumptions, with examples of the Dawid-Skene (DS) estimator [5], the minimax entropy (Entropy)
estimator1 [24, 25], and their variants. In contrast, a discriminative approach does not model the observations; it directly identifies the true labels via some aggregation rules. Examples include majority voting and the weighted majority voting that takes worker reliability into consideration [10, 11].
In this paper, we present a max-margin formulation of the most popular majority voting estimator to
improve its discriminative ability, and further present a Bayesian generalization that conjoins the advantages of both generative and discriminative approaches. The max-margin majority voting (M3V)
directly maximizes the margin between the aggregated score of a potential true label and that of any
alternative label, and the Bayesian model consists of a flexible probabilistic model to generate the
noisy observations by conditioning on the unknown true labels. We adopt the same approach as the
1 A maximum entropy estimator can be understood as a dual of the MLE of a probabilistic model [6].
classical Dawid-Skene estimator to build the probabilistic model by considering worker confusion
matrices, though many other generative models are also possible. Then, we strongly couple the
generative model and M3 V by formulating a joint learning problem under the regularized Bayesian
inference (RegBayes) [27] framework, where the posterior regularization [7] enforces a large margin between the potential true label and any alternative label. Naturally, our Bayesian model covers
both the David-Skene estimator and M3 V as special cases by setting the regularization parameter to
its extreme values (i.e., 0 or ∞). We investigate two choices for defining the max-margin posterior
regularization: (1) an averaging model with a variational inference algorithm; and (2) a Gibbs model
with a Gibbs sampler under a data augmentation formulation. The averaging version can be seen
as an extension to the MLE learner of Dawid-Skene model. Experiments on real datasets suggest
that max-margin learning can significantly improve the accuracy of majority voting, and that our
Bayesian estimators are competitive, often achieving better results than state-of-the-art estimators
on true label estimation tasks.
2
Preliminary
We consider the label aggregation problem with a dataset consisting of M items (e.g., pictures or
paragraphs). Each item i has an unknown true label y_i ∈ [D], where [D] := {1, . . . , D}. The task
t_i is to label item i. In crowdsourcing, we have N workers assigning labels to these items. Each
worker may only label a part of the dataset. Let I_i ⊆ [N] denote the workers who have done task
t_i. We use x_{ij} to denote the label of t_i provided by worker j, x_i to denote the labels provided to
task t_i, and X is the collection of these worker labels, which is an incomplete matrix. The goal of
learning-from-crowds is to estimate the true labels of the items from the noisy observations X.
2.1
Majority Voting Estimator
Majority voting (MV) is arguably the simplest method. It posits that for every task the true label is
the one most commonly given. Thus, it selects the most frequent label for each task as its true label,
by solving the problem:

    ŷ_i = argmax_{d∈[D]} Σ_{j=1}^N I(x_{ij} = d),  ∀i ∈ [M],    (1)

where I(·) is an indicator function: it equals 1 whenever the predicate is true, and otherwise it equals
0. Previous work has extended this method to weighted majority voting (WMV) by putting different
weights on workers to measure worker reliability [10, 11].
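A minimal sketch of the estimator in Eq. (1), including the weighted variant; the encoding of missing labels as -1 is our convention:

```python
import numpy as np

def majority_vote(X, D, weights=None):
    """X: (M, N) worker labels in {0,...,D-1}, with -1 where worker j skipped task i.
    weights: optional (N,) worker weights, giving weighted majority voting."""
    M, N = X.shape
    w = np.ones(N) if weights is None else np.asarray(weights, dtype=float)
    scores = np.zeros((M, D))
    for d in range(D):
        scores[:, d] = ((X == d) * w).sum(axis=1)   # aggregated score of label d
    return scores.argmax(axis=1)

X = np.array([[0, 0, 1, -1],      # task 1: three workers voted, one skipped
              [2, 2, -1, 2]])
print(majority_vote(X, D=3))      # [0 2]
```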
2.2
Dawid-Skene Estimator
The method of Dawid and Skene [5] is a generative approach that considers worker confusability.
It posits that the performance of a worker is consistent across different tasks, as measured by a
confusion matrix whose diagonal entries denote the probability of assigning correct labels, while
off-diagonal entries denote the probability of making specific mistakes, labeling items of one category as
another. Formally, let φ_j be the confusion matrix of worker j. Then, φ_{jkd} denotes the probability
that worker j assigns label d to an item whose true label is k. Under the basic assumption that
workers finish each task independently, the likelihood of the observed labels can be expressed as

    p(X | φ, y) = Π_{i=1}^M Π_{j=1}^N Π_{d,k=1}^D φ_{jkd}^{n^i_{jkd}} = Π_{j=1}^N Π_{d,k=1}^D φ_{jkd}^{n_{jkd}},    (2)

where n^i_{jkd} = I(x_{ij} = d, y_i = k), and n_{jkd} = Σ_{i=1}^M n^i_{jkd} is the number of tasks with true label k
but labeled as d by worker j.
The unknown labels and parameters can be estimated by maximum-likelihood estimation (MLE),
? = argmax
? ?}
{y,
y,? log p(X|?, y), via an expectation-maximization (EM) algorithm that iteratively updates the true labels y and the parameters ?. The learning procedure is often initialized
by majority voting to avoid bad local optima. If we assume some structure of the confusion matrix,
various variants of the DS estimator have been studied, including the homogenous DS model [15]
and the class-conditional DS model [11]. We can also put a prior over worker confusion matrices
and transform the inference into a standard inference problem in graphical models [12]. Recently,
spectral methods have also been applied to better initialize the DS model [23].
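The following is a hedged sketch of this EM procedure in its simplest "hard" variant (re-estimating the labels themselves instead of keeping soft posteriors, a simplification of the full EM described above); the (worker, label) pair encoding and the Laplace smoothing are our own assumptions:

```python
import numpy as np

def dawid_skene_hard_em(labels, num_classes, num_workers, iters=20, smoothing=1.0):
    """Hard-EM sketch of the Dawid-Skene estimator (Sec. 2.2).

    labels[i] is a list of (worker_j, label_d) pairs for item i.
    Alternates an M-step re-estimating phi[j, k, d] = P(worker j says d | true k)
    with a hard E-step re-labeling items by maximum likelihood.
    """
    # initialization by (unweighted) majority voting, as in the text
    y = np.array([np.bincount([d for _, d in obs], minlength=num_classes).argmax()
                  for obs in labels])
    for _ in range(iters):
        # M-step: Laplace-smoothed confusion-matrix counts n_{jkd}
        counts = np.full((num_workers, num_classes, num_classes), smoothing)
        for i, obs in enumerate(labels):
            for j, d in obs:
                counts[j, y[i], d] += 1
        phi = counts / counts.sum(axis=2, keepdims=True)
        # hard E-step: y_i = argmax_k sum_j log phi[j, k, x_ij]
        for i, obs in enumerate(labels):
            loglik = np.zeros(num_classes)
            for j, d in obs:
                loglik += np.log(phi[j, :, d])
            y[i] = loglik.argmax()
    return y, phi
```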
3 Max-Margin Majority Voting

Majority voting is a discriminative model that directly finds the most likely label for each item. In this section, we present max-margin majority voting (M3V), a novel extension of (weighted) majority voting with a new notion of margin (named the crowdsourcing margin).
3.1 Geometric Interpretation of Crowdsourcing Margin
Let $g(\mathbf{x}_i, d)$ be an $N$-dimensional vector, with element $j$ equal to $\mathbb{I}(j \in \mathcal{I}_i, x_{ij} = d)$. Then, the estimation of the vanilla majority voting in Eq. (1) can be formulated as finding solutions $\{y_i\}_{i \in [M]}$ that satisfy the following constraints:
$$\mathbf{1}_N^\top g(\mathbf{x}_i, y_i) - \mathbf{1}_N^\top g(\mathbf{x}_i, d) \geq 0, \quad \forall i, d, \qquad (3)$$
where $\mathbf{1}_N$ is the $N$-dimensional all-one vector and $\mathbf{1}_N^\top g(\mathbf{x}_i, k)$ is the aggregated score of the potential true label $k$ for task $t_i$. By using the all-one vector, the aggregated score has an intuitive interpretation: it denotes the number of workers who have labeled $t_i$ as class $k$.

[Figure 1: A geometric interpretation of the crowdsourcing margin.]
Apparently, the all-one vector treats all workers equally, which may be unrealistic in practice due to the various backgrounds of the workers. By simply choosing what the majority of workers agree on, the vanilla MV is prone to errors when many workers give low-quality labels. One way to tackle this problem is to take worker reliability into consideration. Let $\boldsymbol{\eta}$ denote the worker weights. When these values are known, we can get the aggregated score $\boldsymbol{\eta}^\top g(\mathbf{x}_i, k)$ of a weighted majority voting (WMV), and estimate the true labels by the rule: $\hat{y}_i = \operatorname*{argmax}_{d \in [D]} \boldsymbol{\eta}^\top g(\mathbf{x}_i, d)$. Thus, reliable workers contribute more to the decisions.
Geometrically, $g(\mathbf{x}_i, d)$ is a point in the $N$-dimensional space for each task $t_i$. The aggregated score $\mathbf{1}_N^\top g(\mathbf{x}_i, d)$ measures the distance (up to a constant scaling) from this point to the hyperplane $\mathbf{1}_N^\top \mathbf{x} = 0$. So the MV estimator actually finds a point that has the largest distance to that hyperplane for each task, and the decision boundary of majority voting is another hyperplane $\mathbf{1}_N^\top \mathbf{x} - b = 0$ which separates the point $g(\mathbf{x}_i, \hat{y}_i)$ from the other points $g(\mathbf{x}_i, k)$, $k \neq \hat{y}_i$. By introducing the worker weights $\boldsymbol{\eta}$, we relax the constraint of the all-one vector to allow for more flexible decision boundaries $\boldsymbol{\eta}^\top \mathbf{x} - b = 0$. All the possible decision boundaries with the same orientation are equivalent. Inspired by the generalized notion of margin in multi-class SVM [4], we define the crowdsourcing margin as the minimal difference between the aggregated score of the potential true label and the aggregated scores of other alternative labels. Then, one reasonable choice of the best hyperplane (i.e., $\boldsymbol{\eta}$) is the one that represents the largest margin between the potential true label and other alternatives.
Fig. 1 provides an illustration of the crowdsourcing margin for WMV with $D = 3$ and $N = 2$, where each axis represents the label of a worker. Assume that both workers provide labels 3 and 1 to item $i$. Then, the vectors $g(\mathbf{x}_i, y)$, $y \in [3]$, are three points in the 2D plane. Given the worker weights $\boldsymbol{\eta}$, the estimated label should be 1, since $g(\mathbf{x}_i, 1)$ has the largest distance to line $P_0$. Line $P_1$ and line $P_2$ are two boundaries that separate $g(\mathbf{x}_i, 1)$ from the other points. The margin is the distance between them. In this case, $g(\mathbf{x}_i, 1)$ and $g(\mathbf{x}_i, 3)$ are support vectors that determine the margin.
3.2 Max-Margin Majority Voting Estimator
Let $\ell$ be the minimum margin between the potential true label and all other alternatives. We define the max-margin majority voting (M3V) as solving the constrained optimization problem to estimate the true labels $\mathbf{y}$ and weights $\boldsymbol{\eta}$:
$$\inf_{\boldsymbol{\eta}, \mathbf{y}} \ \frac{1}{2} \|\boldsymbol{\eta}\|_2^2 \qquad (4)$$
$$\text{s.t.:} \ \boldsymbol{\eta}^\top g_i^{\Delta}(d) \geq \ell_i^{\Delta}(d), \ \forall i \in [M], d \in [D],$$
where $g_i^{\Delta}(d) := g(\mathbf{x}_i, y_i) - g(\mathbf{x}_i, d)$ and $\ell_i^{\Delta}(d) = \ell\,\mathbb{I}(y_i \neq d)$ (the offset $b$ is canceled out in the margin constraints). In practice, the worker labels are often linearly inseparable by a single hyperplane. Therefore, we relax the hard constraints
by introducing non-negative slack variables $\{\xi_i\}_{i=1}^{M}$, one for each task, and define the soft-margin max-margin majority voting as
$$\inf_{\xi_i \geq 0, \boldsymbol{\eta}, \mathbf{y}} \ \frac{1}{2} \|\boldsymbol{\eta}\|_2^2 + c \sum_i \xi_i \qquad (5)$$
$$\text{s.t.:} \ \boldsymbol{\eta}^\top g_i^{\Delta}(d) \geq \ell_i^{\Delta}(d) - \xi_i, \ \forall i \in [M], d \in [D],$$
where $c$ is a positive regularization parameter and $\ell - \xi_i$ is the soft margin for task $t_i$. The value of $\xi_i$ reflects the difficulty of task $t_i$: a small $\xi_i$ suggests a large discriminant margin, indicating that the task is easy, with a rare chance of mistakes, while a large $\xi_i$ suggests that the task is hard, with a higher chance of mistakes. Note that our max-margin majority voting is significantly different from unsupervised SVMs (or max-margin clustering) [21], which aim to assign cluster labels to the data points by maximizing a different notion of margin with balance constraints to avoid trivial solutions. Our M3V does not need such balance constraints.
Albeit not jointly convex, problem (5) can be solved by iteratively updating $\boldsymbol{\eta}$ and $\mathbf{y}$ to find a local optimum. For $\boldsymbol{\eta}$, the solution can be derived as $\boldsymbol{\eta} = \sum_{i=1}^{M} \sum_{d=1}^{D} \omega_i^d g_i^{\Delta}(d)$ by the fact that the subproblem is convex. The dual parameters $\boldsymbol{\omega}$ are obtained by solving the dual problem
$$\sup_{0 \leq \omega_i^d \leq c} \ -\frac{1}{2} \boldsymbol{\eta}^\top \boldsymbol{\eta} + \sum_i \sum_d \omega_i^d \ell_i^{\Delta}(d), \qquad (6)$$
which is exactly the QP dual problem in the standard SVM [4]. So it can be efficiently solved by well-developed SVM solvers like LIBSVM [2]. For updating $\mathbf{y}$, we define $(x)_+ := \max(0, x)$; then the update is a weighted majority voting with a margin gap constraint:
$$\hat{y}_i = \operatorname*{argmax}_{y_i \in [D]} \Big( -c \max_{d \in [D]} \big( \ell_i^{\Delta}(d) - \boldsymbol{\eta}^\top g_i^{\Delta}(d) \big)_+ \Big). \qquad (7)$$
Overall, the algorithm is a max-margin iterative weighted majority voting (MM-IWMV). Compared with iterative weighted majority voting (IWMV) [11], which tends to maximize the expected gap of the aggregated scores under the homogeneous DS model, our M3V directly maximizes the data-specified margin without further assumptions on the data model. Empirically, as we shall see, our M3V has stronger discriminative ability, with better accuracy than IWMV.
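For illustration, a minimal sketch of the y-update (7) for a single task; the $\boldsymbol{\eta}$-step is the standard SVM dual (6) and would be delegated to an off-the-shelf solver such as LIBSVM [2], so here the weighted vote scores $\boldsymbol{\eta}^\top g(\mathbf{x}_i, d)$ are assumed to be precomputed inputs:

```python
import numpy as np

def m3v_label_update(scores, ell, c):
    """Rule (7): weighted majority voting with a margin-gap penalty.

    scores: array of shape (D,) with s_d = eta^T g(x_i, d), the weighted vote
    score of each candidate label d for one task (a hypothetical per-task input).
    Returns the y_i maximizing  -c * max_d (ell*[y_i != d] - (s_{y_i} - s_d))_+ .
    """
    D = len(scores)
    best_y, best_obj = 0, -np.inf
    for y in range(D):
        # the d = y term contributes 0, so it suffices to scan d != y
        slack = max(
            max(0.0, ell - (scores[y] - scores[d])) for d in range(D) if d != y
        )
        obj = -c * slack
        if obj > best_obj:
            best_y, best_obj = y, obj
    return best_y
```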
4 Bayesian Max-Margin Estimator

With the intuitive and simple max-margin principle, we now present a more sophisticated Bayesian max-margin estimator, which conjoins the discriminative ability of M3V and the flexibility of the generative DS estimator. Though slightly more complicated in learning and inference, the Bayesian models retain the intuitive simplicity of M3V and the flexibility of DS, as explained below.
4.1 Model Definition
We adopt the same DS model to generate observations conditioned on confusion matrices, with the full likelihood in Eq. (2). We further impose a prior $p_0(\boldsymbol{\eta}, \Phi)$ for Bayesian inference. Assuming that the true labels $\mathbf{y}$ are given, we aim to get the target posterior $p(\boldsymbol{\eta}, \Phi \mid X, \mathbf{y})$, which can be obtained by solving an optimization problem:
$$\inf_{q(\boldsymbol{\eta}, \Phi)} \ \mathcal{L}\big(q(\boldsymbol{\eta}, \Phi); \mathbf{y}\big), \qquad (8)$$
where $\mathcal{L}(q; \mathbf{y}) := \mathrm{KL}\big(q \,\|\, p_0(\boldsymbol{\eta}, \Phi)\big) - \mathbb{E}_q[\log p(X \mid \Phi, \mathbf{y})]$ measures the Kullback-Leibler (KL) divergence between a desired post-data posterior $q$ and the original Bayesian posterior, and $p_0(\boldsymbol{\eta}, \Phi)$ is the prior, often factorized as $p_0(\boldsymbol{\eta}) p_0(\Phi)$. As we shall see, this Bayesian DS estimator often leads to better performance than the vanilla DS.
Then, we explore the ideas of regularized Bayesian inference (RegBayes) [27] to incorporate max-margin majority voting constraints as posterior regularization on problem (8), and define the Bayesian max-margin estimator (denoted by CrowdSVM) as solving:
$$\inf_{\xi_i \geq 0, q \in \mathcal{P}, \mathbf{y}} \ \mathcal{L}\big(q(\boldsymbol{\eta}, \Phi); \mathbf{y}\big) + c \cdot \sum_i \xi_i \qquad (9)$$
$$\text{s.t.:} \ \mathbb{E}_q\big[\boldsymbol{\eta}^\top g_i^{\Delta}(d)\big] \geq \ell_i^{\Delta}(d) - \xi_i, \ \forall i \in [M], d \in [D],$$
where $\mathcal{P}$ is the probabilistic simplex, and we take the expectation over $q$ to define the margin constraints. Such posterior constraints will influence the estimates of $\mathbf{y}$ and $\Phi$ to get better aggregation, as we shall see. We use a Dirichlet prior on worker confusion matrices, $\phi_{mk} \mid \alpha \sim \mathrm{Dir}(\alpha)$, and a spherical Gaussian prior on $\boldsymbol{\eta}$, $\boldsymbol{\eta} \sim \mathcal{N}(0, vI)$. By absorbing the slack variables, CrowdSVM solves the equivalent unconstrained problem:
$$\inf_{q \in \mathcal{P}, \mathbf{y}} \ \mathcal{L}\big(q(\boldsymbol{\eta}, \Phi); \mathbf{y}\big) + c \cdot \mathcal{R}_m\big(q(\boldsymbol{\eta}, \Phi); \mathbf{y}\big), \qquad (10)$$
where $\mathcal{R}_m(q; \mathbf{y}) = \sum_{i=1}^{M} \max_{d=1}^{D} \big( \ell_i^{\Delta}(d) - \mathbb{E}_q[\boldsymbol{\eta}^\top g_i^{\Delta}(d)] \big)_+$ is the posterior regularization.
Remark 1. From the above definition, we can see that both the Bayesian DS estimator and the max-margin majority voting are special cases of CrowdSVM. Specifically, when $c \to 0$, it is equivalent to the DS model. If we set $v = v'/c$ for some positive parameter $v'$, then when $c \to \infty$ CrowdSVM reduces to the max-margin majority voting.
4.2 Variational Inference

Since it is intractable to directly solve problem (9) or (10), we introduce the structured mean-field assumption on the post-data posterior, $q(\boldsymbol{\eta}, \Phi) = q(\boldsymbol{\eta})\, q(\Phi)$, and solve the problem by alternating minimization as outlined in Alg. 1. The algorithm iteratively performs the following steps until a local optimum is reached:

Algorithm 1: The CrowdSVM algorithm
1. Initialize $\mathbf{y}$ by majority voting.
while not converged do
  2. For each worker $j$ and category $k$: $q(\phi_{jk}) = \mathrm{Dir}(\mathbf{n}_{jk} + \alpha)$.
  3. Solve the dual problem (11).
  4. For each item $i$: $\hat{y}_i \leftarrow \operatorname*{argmax}_{y_i \in [D]} f(y_i, \mathbf{x}_i; q)$.
end
Infer $q(\Phi)$: Fixing the distribution $q(\boldsymbol{\eta})$ and the true labels $\mathbf{y}$, the problem in Eq. (9) turns into a standard Bayesian inference problem with the closed-form solution $q^*(\Phi) \propto p_0(\Phi)\, p(X \mid \Phi, \mathbf{y})$. Since the prior is a Dirichlet distribution, the inferred distribution is also Dirichlet, $q^*(\phi_{jk}) = \mathrm{Dir}(\mathbf{n}_{jk} + \alpha)$, where $\mathbf{n}_{jk}$ is a $D$-dimensional vector with element $d$ being $n_{jkd}$.
Infer $q(\boldsymbol{\eta})$ and solve for $\boldsymbol{\omega}$: Fixing the distribution $q(\Phi)$ and the true labels $\mathbf{y}$, we optimize Eq. (9) over $q(\boldsymbol{\eta})$, which is also convex. We can derive the optimal solution $q^*(\boldsymbol{\eta}) \propto p_0(\boldsymbol{\eta}) \exp\big( \boldsymbol{\eta}^\top \sum_i \sum_d \omega_i^d g_i^{\Delta}(d) \big)$, where $\boldsymbol{\omega} = \{\omega_i^d\}$ are Lagrange multipliers. With the normal prior $p_0(\boldsymbol{\eta}) = \mathcal{N}(0, vI)$, the posterior is a normal distribution $q^*(\boldsymbol{\eta}) = \mathcal{N}(\boldsymbol{\mu}, vI)$, whose mean is $\boldsymbol{\mu} = v \sum_{i=1}^{M} \sum_{d=1}^{D} \omega_i^d g_i^{\Delta}(d)$. Then the parameters $\boldsymbol{\omega}$ are obtained by solving the dual problem
$$\sup_{0 \leq \omega_i^d \leq c} \ -\frac{1}{2v} \boldsymbol{\mu}^\top \boldsymbol{\mu} + \sum_i \sum_d \omega_i^d \ell_i^{\Delta}(d), \qquad (11)$$
which is the same as problem (6) in max-margin majority voting.
Infer $\mathbf{y}$: Fixing the distributions of $\boldsymbol{\eta}$ and $\Phi$ at their optimum $q^*$, we find $\mathbf{y}$ by solving problem (10). To make the prediction more efficient, we approximate the distribution $q^*(\boldsymbol{\eta})$ by a Dirac delta mass $\delta(\boldsymbol{\eta} - \hat{\boldsymbol{\eta}})$, where $\hat{\boldsymbol{\eta}}$ is the mean of $q^*(\boldsymbol{\eta})$. Then, since all tasks are independent, we can derive the discriminant function of $y_i$ as
$$f(y_i, \mathbf{x}_i; q^*) = \log p(\mathbf{x}_i \mid \hat{\Phi}, y_i) - c \max_{d \in [D]} \big( \ell_i^{\Delta}(d) - \hat{\boldsymbol{\eta}}^\top g_i^{\Delta}(d) \big)_+, \qquad (12)$$
where $\hat{\Phi}$ is the mean of $q^*(\Phi)$. Then we can make predictions by maximizing this function.

Apparently, the discriminant function (12) represents a strong coupling between the generative model and the discriminative margin constraints. Therefore, CrowdSVM jointly considers these two factors when estimating true labels. We also note that the estimation rule used here reduces to the rule (7) of MM-IWMV by simply setting $c = \infty$.
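To make Alg. 1 concrete, here is a hedged sketch of steps 2 and 4 (step 3 is again the SVM dual (11) and is left to a standard solver); the sparse data encoding mirrors the earlier sketches and is our own assumption:

```python
import numpy as np

def crowdsvm_phi_update(labels, y, num_classes, num_workers, alpha=1.0):
    """Step 2 of Alg. 1: q(phi_jk) = Dir(n_jk + alpha). We return the posterior
    mean, which is what rule (12) needs after the delta-mass approximation."""
    counts = np.full((num_workers, num_classes, num_classes), alpha)
    for i, obs in enumerate(labels):         # obs: list of (worker_j, label_d)
        for j, d in obs:
            counts[j, y[i], d] += 1
    return counts / counts.sum(axis=2, keepdims=True)   # E_q[phi]

def crowdsvm_label_update(obs, phi_mean, eta_mean, ell, c, num_classes):
    """Step 4 of Alg. 1 via the discriminant (12) for a single task:
    generative log-likelihood minus c times the margin violation."""
    scores = np.zeros(num_classes)            # eta^T g(x_i, d): weighted votes
    for j, d in obs:
        scores[d] += eta_mean[j]
    f = np.zeros(num_classes)
    for y in range(num_classes):
        loglik = sum(np.log(phi_mean[j, y, d]) for j, d in obs)
        slack = max(max(0.0, ell - (scores[y] - scores[d]))
                    for d in range(num_classes) if d != y)
        f[y] = loglik - c * slack
    return int(f.argmax())
```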
5 Gibbs CrowdSVM Estimator
CrowdSVM adopts an averaging model to define the posterior constraints in problem (9). Here, we
further provide an alternative strategy which leads to a full Bayesian model with a Gibbs sampler.
The resulting Gibbs-CrowdSVM does not need to make the mean-field assumption.
5.1 Model Definition
Suppose the target posterior $q(\boldsymbol{\eta}, \Phi)$ is given; we perform the max-margin majority voting by drawing a random sample $\boldsymbol{\eta}$. This leads to the crowdsourcing hinge loss
$$\mathcal{R}(\boldsymbol{\eta}, \mathbf{y}) = \sum_{i=1}^{M} \max_{d \in [D]} \big( \ell_i^{\Delta}(d) - \boldsymbol{\eta}^\top g_i^{\Delta}(d) \big)_+, \qquad (13)$$
which is a function of $\boldsymbol{\eta}$. Since $\boldsymbol{\eta}$ is random, we define the overall hinge loss as the expectation over $q(\boldsymbol{\eta})$, that is, $\mathcal{R}'_m(q(\boldsymbol{\eta}, \Phi); \mathbf{y}) = \mathbb{E}_q[\mathcal{R}(\boldsymbol{\eta}, \mathbf{y})]$. Due to the convexity of the max function, the expected loss is in fact an upper bound of the average loss, i.e., $\mathcal{R}'_m(q(\boldsymbol{\eta}, \Phi); \mathbf{y}) \geq \mathcal{R}_m(q(\boldsymbol{\eta}, \Phi); \mathbf{y})$. Differing from CrowdSVM, we also treat the hidden true labels $\mathbf{y}$ as random variables with a uniform prior. Then we define Gibbs-CrowdSVM as solving the problem:
"M
#
X
inf L q(?, ?, y) + Eq
2c(?isi )+ ,
(14)
q?P
where ?id =
> ?
`?
i (d) ? ? gi (d),
i=1
si = argmaxd6=yi ?id , and the factor 2 is introduced for simplicity.
Data Augmentation. In order to build an efficient Gibbs sampler for this problem, we derive the posterior distribution with data augmentation [3, 26] for the max-margin regularization term. We let $\psi(y_i \mid \mathbf{x}_i, \boldsymbol{\eta}) = \exp(-2c\,(\zeta_i^{s_i})_+)$ represent the regularizer. According to the equality $\psi(y_i \mid \mathbf{x}_i, \boldsymbol{\eta}) = \int_0^{\infty} \psi(y_i, \lambda_i \mid \mathbf{x}_i, \boldsymbol{\eta}) \, d\lambda_i$, where $\psi(y_i, \lambda_i \mid \mathbf{x}_i, \boldsymbol{\eta}) = (2\pi\lambda_i)^{-\frac{1}{2}} \exp\big( -\frac{1}{2\lambda_i} (\lambda_i + c\,\zeta_i^{s_i})^2 \big)$ is an (unnormalized) joint distribution of $y_i$ and the augmented variable $\lambda_i$ [14], the posterior of Gibbs-CrowdSVM can be expressed as the marginal of a higher-dimensional distribution, i.e., $q(\boldsymbol{\eta}, \Phi, \mathbf{y}) = \int q(\boldsymbol{\eta}, \Phi, \mathbf{y}, \boldsymbol{\lambda}) \, d\boldsymbol{\lambda}$, where
$$q(\boldsymbol{\eta}, \Phi, \mathbf{y}, \boldsymbol{\lambda}) \propto p_0(\boldsymbol{\eta}, \Phi, \mathbf{y}) \prod_{i=1}^{M} p(\mathbf{x}_i \mid \Phi, y_i)\, \psi(y_i, \lambda_i \mid \mathbf{x}_i, \boldsymbol{\eta}). \qquad (15)$$
Putting the last two terms together, we can view $q(\boldsymbol{\eta}, \Phi, \mathbf{y}, \boldsymbol{\lambda})$ as a standard Bayesian posterior, but with the unnormalized likelihood $\tilde{p}(\mathbf{x}_i, \lambda_i \mid \boldsymbol{\eta}, \Phi, y_i) \propto p(\mathbf{x}_i \mid \Phi, y_i)\, \psi(y_i, \lambda_i \mid \mathbf{x}_i, \boldsymbol{\eta})$, which jointly considers the noisy observations and the large-margin discrimination between the potential true labels and alternatives.
5.2 Posterior Inference

With the augmented representation, we can do Gibbs sampling to infer the posterior distribution $q(\boldsymbol{\eta}, \Phi, \mathbf{y}, \boldsymbol{\lambda})$, and thus $q(\boldsymbol{\eta}, \Phi, \mathbf{y})$ by discarding $\boldsymbol{\lambda}$. The conditional distributions for $\{\boldsymbol{\eta}, \Phi, \boldsymbol{\lambda}, \mathbf{y}\}$ are derived in Appendix A. Note that when sampling $\boldsymbol{\lambda}$ from the inverse Gaussian distribution, a fast sampling algorithm [13] can be applied with $O(1)$ time complexity. As for the hidden variables $\mathbf{y}$, we initially set them to the results of majority voting. After removing burn-in samples, we use their most frequent values as the final outputs.
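For completeness, a sketch of the $O(1)$ inverse Gaussian sampler of Michael, Schucany and Haas [13] mentioned above; the actual conditional parameters for each $\lambda_i$ follow from the derivation in Appendix A, which we do not reproduce, so this shows only the generic sampler:

```python
import numpy as np

def sample_inverse_gaussian(mu, lam, rng=np.random):
    """Draw one sample from IG(mu, lam) via the transformation-with-multiple-
    roots method of Michael, Schucany & Haas [13]; constant time per draw."""
    nu = rng.standard_normal()
    z = nu * nu
    x = mu + (mu * mu * z) / (2.0 * lam) \
        - (mu / (2.0 * lam)) * np.sqrt(4.0 * mu * lam * z + (mu * z) ** 2)
    # accept the smaller root with probability mu / (mu + x), else take mu^2 / x
    if rng.random() <= mu / (mu + x):
        return x
    return mu * mu / x
```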
6 Experiments
We now present experimental results to demonstrate the strong discriminative ability of max-margin
majority voting and the promise of our Bayesian models, by comparing with various strong competitors on multiple real datasets.
6.1 Datasets and Setups
We use four real-world crowd labeling datasets, as summarized in Table 1. Web Search [24]: 177 workers are asked to rate a set of 2,665 query-URL pairs on a relevance rating scale from 1 to 5. Each task is labeled by 6 workers on average; in total 15,567 labels are collected. Age [8]: It consists of 10,020 labels of age estimations for 1,002 face images. Each image was labeled by 10 workers, and 165 workers are involved in these tasks. The final estimations are discretized into 7 bins. Bluebirds [19]: It consists of 108 bluebird pictures. There are 2 breeds among all the images, and each image is labeled by all 39 workers, giving 4,214 labels in total. Flowers [18]: It contains 2,366 binary labels for a dataset with 200 flower pictures. Each worker is asked to answer whether the flower in the picture is a peach flower; 36 workers participate in these tasks.
Table 1: Datasets Overview.

  Dataset      Labels   Items   Workers
  Web Search   15,567   2,665   177
  Age          10,020   1,002   165
  Bluebirds     4,214     108    39
  Flowers       2,366     200    36

We compare M3V, as well as its Bayesian extensions CrowdSVM and Gibbs-CrowdSVM, with various baselines, including majority voting (MV), iterative weighted majority voting (IWMV) [11], the Dawid-Skene (DS) estimator [5], and the minimax entropy (Entropy) estimator [25]. For the Entropy estimator, we use the implementation provided by the authors, and show the performance of both its multiclass version (Entropy (M)) and the ordinal version (Entropy (O)). All the estimators that require iterative updating are initialized by majority voting to avoid bad local minima. All experiments were conducted on a PC with an Intel Core i5 3.00GHz CPU and 12.00GB RAM.
6.2 Model Selection

Due to the special property of crowdsourcing, we cannot simply split the training data into multiple folds to cross-validate the hyperparameters using accuracy as the selection criterion, which may bias toward over-optimistic models. Instead, we adopt the likelihood $p(X \mid \hat{\Phi}, \hat{\mathbf{y}})$ as the criterion to select parameters, which is indirectly related to our evaluation criterion (i.e., accuracy). Specifically, we test multiple values of $c$ and $\ell$, and select the values that produce a model with the maximal likelihood on the given dataset. This method allows us to select a model without any prior knowledge of the true labels. For the special case of M3V, we fix the learned true labels $\mathbf{y}$ after training the model with certain parameters, and learn confusion matrices that optimize the full likelihood in Eq. (2).

Note that the likelihood-based cross-validation strategy [25] is not suitable for CrowdSVM, because this strategy uses the marginal likelihood $p(X \mid \Phi)$ to select the model and ignores the label information of $\mathbf{y}$, through which the effect of the constraints is passed for CrowdSVM. If we used this strategy on CrowdSVM, it would tend to optimize the generative component without considering the discriminant constraints, thus resulting in $c \to 0$, which is a trivial solution for model selection.
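A minimal sketch of this selection loop; `train_fn` and `loglik_fn` are hypothetical placeholders for the estimator being tuned and for evaluating the likelihood of Eq. (2):

```python
import itertools

def select_by_likelihood(train_fn, loglik_fn, X, cs, ells):
    """Sketch of the rule in Sec. 6.2: fit the model on each (c, ell) of the
    grid and keep the setting maximizing log p(X | phi_hat, y_hat)."""
    best = None
    for c, ell in itertools.product(cs, ells):
        y_hat, phi_hat = train_fn(X, c=c, ell=ell)
        ll = loglik_fn(X, phi_hat, y_hat)
        if best is None or ll > best[0]:
            best = (ll, c, ell, y_hat)
    return best

# the grid used in the paper: c in 2**[-8:0] and ell in {1, 3, 5}
# select_by_likelihood(train_m3v, loglik, X,
#                      [2.0 ** k for k in range(-8, 1)], [1, 3, 5])
```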
6.3 Experimental Results

We first test our estimators on the task of estimating true labels. For CrowdSVM, we set $\alpha = 1$ and $v = 1$ for all experiments, since we find that the results are insensitive to them. For M3V, CrowdSVM and Gibbs-CrowdSVM, the regularization parameters $(c, \ell)$ are selected from $c \in 2^{[-8:0]}$ and $\ell \in \{1, 3, 5\}$ by the method in Sec. 6.2. For Gibbs-CrowdSVM, we generate 50 samples in each run and discard the first 10 samples as burn-in steps, which is sufficiently large to reach convergence of the likelihood. The reported error rate is the average over 5 runs.
Table 2 presents the error rates of various estimators. We group the comparisons into three parts:
I. MV, IWMV and M3V are all purely discriminative estimators. We can see that our M3V produces consistently lower error rates on all four datasets compared with the vanilla MV and IWMV, which shows the effectiveness of the max-margin principle for crowdsourcing;
II. This part analyzes the effects of the prior and the max-margin regularization on improving the DS model. We can see that DS+Prior is better than the vanilla DS model on the two larger datasets by using a Dirichlet prior. Furthermore, CrowdSVM consistently improves the performance of DS+Prior by considering the max-margin constraints, again demonstrating the effectiveness of max-margin learning;
III. This part compares our Gibbs-CrowdSVM estimator to the state-of-the-art minimax entropy estimators. We can see that Gibbs-CrowdSVM performs better than CrowdSVM on the Web Search, Age and Flowers datasets, while worse on the small Bluebirds dataset. It is comparable to the minimax entropy estimators, sometimes better, with faster running speed, as shown in Fig. 2 and explained below. Note that we only test Entropy (O) on the two ordinal datasets, since this method is specifically designed for ordinal labels, while not always effective.
Fig. 2 summarizes the training time and error rates after each iteration for all estimators on the largest Web Search dataset. It shows that the discriminative methods (e.g., IWMV and M3V) run fast but converge to high error rates. Compared to the minimax entropy estimator, CrowdSVM is computationally more efficient and also converges to a lower error rate. Gibbs-CrowdSVM runs slower than CrowdSVM since it needs to compute the inversion of matrices. The performance of the DS estimator seems mediocre: its estimation error rate is large and slowly increases when it runs longer. Perhaps this is partly because the DS estimator cannot make good use of the initial knowledge provided by majority voting.

Table 2: Error-rates (%) of different estimators on four datasets.

        Methods       Web Search   Age          Bluebirds    Flowers
  I     MV            26.90        34.88        24.07        22.00
        IWMV          15.04        34.53        27.78        19.00
        M3V           12.74        33.33        20.37        13.50
  II    DS            16.92        39.62        10.19        13.00
        DS+Prior      13.26        34.53        10.19        13.50
        CrowdSVM       9.42        33.33        10.19        13.50
  III   Entropy (M)   11.10        31.14         8.33        13.00
        Entropy (O)   10.40        37.32          --           --
        G-CrowdSVM     7.99±0.26   32.98±0.36   10.37±0.41   12.10±1.07

[Figure 2: Error rates plotted against training time (seconds, log scale) for IWMV, M3V, Dawid-Skene, Entropy (M), Entropy (O), CrowdSVM and Gibbs-CrowdSVM on the Web Search dataset.]
We further investigate the effectiveness of the generative component and the discriminative component of CrowdSVM, again on the largest Web Search dataset. For the generative part, we compare CrowdSVM ($c = 0.125$, $\ell = 3$) with DS and M3V ($c = 0.125$, $\ell = 3$). Fig. 3(a) compares the negative log-likelihoods (NLL) of these models, computed with Eq. (2). For M3V, we fix its estimated true labels and find the confusion matrices that optimize the likelihood. The results show that CrowdSVM achieves a lower NLL than DS; this suggests that by incorporating the M3V constraints, CrowdSVM finds a better solution for the true labels as well as the confusion matrices than that found by the original EM algorithm. For the discriminative part, we use the mean of the worker weights $\hat{\boldsymbol{\eta}}$ to estimate the true labels as $\hat{y}_i = \operatorname*{argmax}_{d \in [D]} \hat{\boldsymbol{\eta}}^\top g(\mathbf{x}_i, d)$, and show the error rates in Fig. 3(b). Apparently, the weights learned by CrowdSVM are also better than those learned by the other MV estimators. Overall, these results suggest that CrowdSVM can achieve a good balance between the generative modeling and the discriminative prediction.

[Figure 3: NLLs and error rates when separately testing the generative and discriminative components.]
7 Conclusions and Future Work

We present a simple and intuitive max-margin majority voting estimator for learning-from-crowds, as well as its Bayesian extension that conjoins generative modeling and discriminative prediction. By formulating it as a regularized Bayesian inference problem, our methods naturally cover the classical Dawid-Skene estimator. Empirical results demonstrate the effectiveness of our methods. Our model is flexible enough to fit specific complicated application scenarios [22]. One salient feature of Bayesian methods is their sequential updating: we can extend our Bayesian estimators to the online setting where the crowdsourcing labels are collected in a stream and more tasks are distributed. We have some preliminary results, as shown in Appendix B. It would also be interesting to further investigate active learning, such as selecting reliable workers to reduce costs [9].
Acknowledgments
The work was supported by the National Basic Research Program (973 Program) of China (Nos.
2013CB329403, 2012CB316301), National NSF of China (Nos. 61322308, 61332007), Tsinghua
National Laboratory for Information Science and Technology Big Data Initiative, and Tsinghua
Initiative Scientific Research Program (Nos. 20121088071, 20141080934).
References
[1] A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E. R. Hruschka Jr, and T. M. Mitchell. Toward an architecture for never-ending language learning. In AAAI, 2010.
[2] C. C. Chang and C. J. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1-27:27, 2011.
[3] C. Chen, J. Zhu, and X. Zhang. Robust Bayesian max-margin clustering. In NIPS, 2014.
[4] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. JMLR, 2:265-292, 2002.
[5] A. P. Dawid and A. M. Skene. Maximum likelihood estimation of observer error-rates using the EM algorithm. Applied Statistics, pages 20-28, 1979.
[6] M. Dudík, S. J. Phillips, and R. E. Schapire. Maximum entropy density estimation with generalized regularization and an application to species distribution modeling. JMLR, 8(6), 2007.
[7] K. Ganchev, J. Graça, J. Gillenwater, and B. Taskar. Posterior regularization for structured latent variable models. JMLR, 11:2001-2049, 2010.
[8] H. Han, C. Otto, X. Liu, and A. Jain. Demographic estimation from face images: Human vs. machine performance. IEEE Trans. on PAMI, 2014.
[9] S. Jagabathula, L. Subramanian, and A. Venkataraman. Reputation-based worker filtering in crowdsourcing. In NIPS, 2014.
[10] D. R. Karger, S. Oh, and D. Shah. Iterative learning for reliable crowdsourcing systems. In NIPS, 2011.
[11] H. Li and B. Yu. Error rate bounds and iterative weighted majority voting for crowdsourcing. arXiv preprint arXiv:1411.4086, 2014.
[12] Q. Liu, J. Peng, and A. Ihler. Variational inference for crowdsourcing. In NIPS, 2012.
[13] J. R. Michael, W. R. Schucany, and R. W. Haas. Generating random variates using transformations with multiple roots. The American Statistician, 30(2):88-90, 1976.
[14] N. G. Polson and S. L. Scott. Data augmentation for support vector machines. Bayesian Analysis, 6(1):1-23, 2011.
[15] V. C. Raykar, S. Yu, L. H. Zhao, G. H. Valadez, C. Florin, L. Bogoni, and L. Moy. Learning from crowds. JMLR, 11:1297-1322, 2010.
[16] T. Shi and J. Zhu. Online Bayesian passive-aggressive learning. In ICML, 2014.
[17] R. Snow, B. O'Connor, D. Jurafsky, and A. Y. Ng. Cheap and fast - but is it good? Evaluating non-expert annotations for natural language tasks. In EMNLP, 2008.
[18] T. Tian and J. Zhu. Uncovering the latent structures of crowd labeling. In PAKDD, 2015.
[19] P. Welinder, S. Branson, P. Perona, and S. J. Belongie. The multidimensional wisdom of crowds. In NIPS, 2010.
[20] J. Whitehill, T. F. Wu, J. Bergsma, J. R. Movellan, and P. L. Ruvolo. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. In NIPS, 2009.
[21] L. Xu and D. Schuurmans. Unsupervised and semi-supervised multi-class support vector machines. In AAAI, 2005.
[22] O. F. Zaidan and C. Callison-Burch. Crowdsourcing translation: Professional quality from non-professionals. In ACL, 2011.
[23] Y. Zhang, X. Chen, D. Zhou, and M. I. Jordan. Spectral methods meet EM: A provably optimal algorithm for crowdsourcing. In NIPS, 2014.
[24] D. Zhou, S. Basu, Y. Mao, and J. C. Platt. Learning from the wisdom of crowds by minimax entropy. In NIPS, 2012.
[25] D. Zhou, Q. Liu, J. Platt, and C. Meek. Aggregating ordinal labels from crowds by minimax conditional entropy. In ICML, 2014.
[26] J. Zhu, N. Chen, H. Perkins, and B. Zhang. Gibbs max-margin topic models with data augmentation. JMLR, 15:1073-1110, 2014.
[27] J. Zhu, N. Chen, and E. P. Xing. Bayesian inference with posterior regularization and applications to infinite latent SVMs. JMLR, 15:1799-1847, 2014.
M-Best-Diverse Labelings
for Submodular Energies and Beyond
Alexander Kirillov^1    Dmitrij Schlesinger^1    Dmitry Vetrov^2    Carsten Rother^1    Bogdan Savchynskyy^1
^1 TU Dresden, Dresden, Germany    ^2 Skoltech, Moscow, Russia
alexander.kirillov@tu-dresden.de
Abstract
We consider the problem of finding M best diverse solutions of energy minimization problems for graphical models. Contrary to the sequential method of Batra et al., which greedily finds one solution after another, we infer all M solutions jointly. It was shown recently that such jointly inferred labelings not only have smaller total energy but also qualitatively outperform the sequentially obtained ones. The only obstacle to using this new technique is the complexity of the corresponding inference problem, since its algorithm is considerably slower than the method of Batra et al. In this work we show that the joint inference of M best diverse solutions can be formulated as a submodular energy minimization if the original MAP-inference problem is submodular, hence fast inference techniques can be used. In addition to the theoretical results we provide practical algorithms that outperform the current state of the art and can be used in both the submodular and non-submodular case.
1 Introduction
A variety of tasks in machine learning can be formulated in the form of an energy minimization problem, known also as maximum a posteriori (MAP) or maximum likelihood estimation (MLE) inference in undirected graphical models (related to Markov or conditional random fields). Its modeling power and importance are well recognized, which has resulted in specialized benchmarks, e.g. [18], and computational challenges [8] for its solvers. This underlines the importance of finding the most probable solution. Following [3] and [25] we argue, however, that finding M > 1 diverse configurations with low energies is also of importance in a number of scenarios, such as: (a) Expressing uncertainty of the found solution [27]; (b) Faster training of model parameters [14]; (c) Ranking of inference results [32]; (d) Empirical risk minimization [26].
We build on the new formulation for finding M-best-diverse configurations, which was recently proposed in [19]. In this formulation all M configurations are inferred jointly, contrary to the established method [3], where a sequential greedy procedure is used. As shown in [19], the new formulation not only reliably produces configurations with lower total energy, but also leads to better results in several application scenarios. In particular, for the image segmentation scenario the results of [19] significantly outperform those of [3]. This is true even when [19] uses a plain Hamming distance as a diversity measure and [3] uses more powerful diversity measures.
Our contributions.
• We show that finding M-best-diverse configurations of a binary submodular energy minimization problem can be formulated as a submodular MAP-inference problem, and hence can be solved efficiently for any node-wise diversity measure.
• We show that for certain diversity measures, such as e.g. the Hamming distance, the M-best-diverse configurations of a multilabel submodular energy minimization problem can be formulated as a submodular MAP-inference problem, which also implies applicability of efficient graph cut-based solvers.
• We give the insight that if the MAP-inference problem is submodular, then the M-best-diverse configurations can always be fully ordered with respect to the natural partial order induced in the space of all configurations.
• We show experimentally that if the MAP-inference problem is submodular, we are quantitatively at least as good as [19] and considerably better than [3]. The main advantage of our method is a major speed-up over [19], of up to two orders of magnitude. Our method has a run-time of the same order of magnitude as [3]. In the non-submodular case our results are slightly inferior to [19], but the advantage in speed still holds.
Related work. The importance of the considered problem may be seen from the fact that a procedure for computing M-best solutions to discrete optimization problems was proposed in [23] as far back as 1972. Later, more efficient specialized procedures were introduced for MAP-inference on a tree [29, Ch. 8], junction-trees [24] and general graphical models [33, 12, 2]. Such methods are however not suited for scenarios where diversity of the solutions is required (as in machine translation, search engines, or producing M-best hypotheses in cascaded algorithms), since they do not enforce it explicitly.

Structured Determinantal Point Processes [22] are a tool to model probabilistic distributions over structured models. Unfortunately an efficient sampling procedure is feasible for tree-structured graphical models only. The recently proposed algorithm [7] to find M best modes of a distribution is limited to the same narrow class of problems.

Training of M independent graphical models to produce diverse solutions was proposed in [13, 15]. In contrast, we assume a single fixed model supporting reasonable MAP-solutions.

Along with [3], the most closely related work is the recent paper [25], which proposes a subclass of new diversity penalties for which the greedy nature of the algorithm [3] can be justified due to submodularity of the used diversity measures. In contrast to [25], we do not limit ourselves to diversity measures fulfilling such properties; moreover, we define a class of problems for which our joint inference approach leads to polynomially and efficiently solvable problems in practice.

We build on top of the work [19], which is explained in detail in Section 2.
Organization of the paper. Section 2 provides background necessary for formulation of our results:
energy minimization for graphical models and existing approaches to obtain diverse solutions. In
Section 3 we introduce submodularity for graphical models and formulate the main results of our
work. Finally, Section 4 and 5 are devoted to the experimental evaluation of our technique and
conclusions. Supplementary material contains proofs of all mathematical claims and the concurrent
submission [19].
2 Preliminaries
Energy minimization. Let $2^A$ denote the powerset of a set $A$. The pair $\mathcal{G} = (\mathcal{V}, \mathcal{F})$ is called a hyper-graph and has $\mathcal{V}$ as a finite set of variable nodes and $\mathcal{F} \subseteq 2^{\mathcal{V}}$ as a set of factors. Each variable node $v \in \mathcal{V}$ is associated with a variable $y_v$ taking its values in a finite set of labels $L_v$. The set $L_A = \prod_{v \in A} L_v$ denotes the Cartesian product of the sets of labels corresponding to the subset $A \subseteq \mathcal{V}$ of variables. Functions $\theta_f \colon L_f \to \mathbb{R}$, associated with factors $f \in \mathcal{F}$, are called potentials and define local costs on values of variables and their combinations. Potentials $\theta_f$ with $|f| = 1$ are called unary, with $|f| = 2$ pairwise, and with $|f| > 2$ higher order. The set $\{\theta_f : f \in \mathcal{F}\}$ of all potentials is referred to as $\theta$. For any factor $f \in \mathcal{F}$ the corresponding set of variables $\{y_v : v \in f\}$ will be denoted by $y_f$. The energy minimization problem consists of finding a labeling $y^* = \{y_v : v \in \mathcal{V}\} \in L_{\mathcal{V}}$ which minimizes the total sum of the corresponding potentials:
$$y^* = \operatorname*{argmin}_{y \in L_{\mathcal{V}}} E(y) = \operatorname*{argmin}_{y \in L_{\mathcal{V}}} \sum_{f \in \mathcal{F}} \theta_f(y_f). \qquad (1)$$
Problem (1) is also known as MAP-inference. A labeling $y^*$ satisfying (1) will later be called a solution of the energy-minimization or MAP-inference problem, shortly a MAP-labeling or MAP-solution.
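To fix the notation, here is a toy sketch of evaluating the objective of (1) on an explicit factor list; the dictionary encoding of potentials is our own illustrative choice, and real solvers of course avoid brute-force enumeration:

```python
def energy(y, factors):
    """Objective of (1): sum of factor potentials theta_f(y_f) for a labeling y.

    y: dict node -> label; factors: list of (scope, theta) pairs with scope a
    tuple of nodes and theta a dict mapping label tuples to costs.
    """
    return sum(theta[tuple(y[v] for v in scope)] for scope, theta in factors)

# toy chain with two binary nodes: two unary factors and one pairwise factor
factors = [
    ((0,), {(0,): 0.0, (1,): 1.5}),
    ((1,), {(0,): 1.0, (1,): 0.0}),
    ((0, 1), {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 0.0}),
]
y_star = min(({0: a, 1: b} for a in (0, 1) for b in (0, 1)),
             key=lambda y: energy(y, factors))   # the MAP-labeling of the toy model
```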
[Figure 1: Examples of factor graphs for 3 diverse solutions of the original MRF (1) with different diversity measures: (a) the most general diversity measure (4), (b) the node-wise diversity measure (6), (c) the Hamming distance diversity (5). The circles represent nodes of the original model, which are copied 3 times. For clarity, the diversity factors of order higher than 2 are shown as squares; pairwise factors are depicted by edges connecting the nodes. We omit $\lambda$ for readability.]
Finally, a model is defined by the triple $(\mathcal{G}, L_{\mathcal{V}}, \theta)$, i.e. the underlying hyper-graph, the sets of labels and the potentials.

In the following, we use brackets to distinguish between an upper index and a power, i.e. $(A)^n$ means the $n$-th power of $A$, whereas $n$ is an upper index in the expression $A^n$. We keep, however, the standard notation $\mathbb{R}^n$ for the $n$-dimensional vector space.
Sequential Computation of M Best Diverse Solutions [3]. Instead of looking for a single labeling with lowest energy, one might ask for a set of labelings with low energies, yet being significantly different from each other. In order to find such M diverse labelings $y^1, \dots, y^M$, the method proposed in [3] solves a sequence of problems of the form
$$y^m = \operatorname*{argmin}_{y} \Big[ E(y) - \lambda \sum_{i=1}^{m-1} \Delta(y, y^i) \Big] \qquad (2)$$
for $m = 1, 2, \dots, M$, where $\lambda > 0$ determines a trade-off between diversity and energy, $y^1$ is the MAP-solution and the function $\Delta \colon L_{\mathcal{V}} \times L_{\mathcal{V}} \to \mathbb{R}$ defines the diversity of two labelings. In other words, $\Delta(y, y')$ takes a large value if $y$ and $y'$ are diverse, in a certain sense, and a small value otherwise. This problem can be seen as an energy minimization problem where, in addition to the initial potentials $\theta$, the potentials $-\lambda\Delta(\cdot, y^i)$, associated with an additional factor $\mathcal{V}$, are used. In the simplest and most commonly used form, $\Delta(y, y')$ is represented by a sum of node-wise diversity measures $\Delta_v \colon L_v \times L_v \to \mathbb{R}$,
$$\Delta(y, y') = \sum_{v \in \mathcal{V}} \Delta_v(y_v, y'_v), \qquad (3)$$
and the potentials split into a sum of unary potentials, i.e. those associated with additional factors $\{v\}$, $v \in \mathcal{V}$. This implies that if efficient graph-cut based inference methods (including $\alpha$-expansion [6], $\alpha$-$\beta$-swap [6] or their generalizations [1, 10]) are applicable to the initial problem (1), then they remain applicable to the augmented problem (2), which assures the efficiency of the method.
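A sketch of the sequential scheme (2) with the node-wise Hamming diversity (3); `solve_map` is a placeholder for whichever MAP solver applies to (1) (e.g. max-flow in the submodular case). Since $-\lambda [\![y_v \neq y^i_v]\!]$ equals $+\lambda [\![y_v = y^i_v]\!]$ up to a constant, each previous solution just increases one unary entry per node:

```python
def div_m_best(solve_map, unaries, pairwise, num_labels, M, lam):
    """DivMBest loop (2)-(3) with Hamming diversity.

    unaries: list over nodes of per-label cost lists (copied, then mutated);
    pairwise: opaque pairwise-potential structure passed through to the solver.
    """
    solutions = []
    aug = [list(u) for u in unaries]
    for _ in range(M):
        y = solve_map(aug, pairwise, num_labels)   # y: list of labels per node
        solutions.append(y)
        for v, label in enumerate(y):
            aug[v][label] += lam                   # discourage repeating y^m at v
    return solutions
```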
Joint computation of M-best-diverse labelings. The notation $f^M(\{y\})$ will be used as a shortcut for $f^M(y^1, \dots, y^M)$, for any function $f^M \colon (L_{\mathcal{V}})^M \to \mathbb{R}$.

Instead of the greedy sequential procedure (2), in [19] it was suggested to infer all M labelings jointly, by minimizing
$$E^M(\{y\}) = \sum_{i=1}^{M} E(y^i) - \lambda \Delta^M(\{y\}) \qquad (4)$$
over $y^1, \dots, y^M$ for some $\lambda > 0$. The function $\Delta^M$ defines the total diversity of any M labelings. It was shown in [19] that the M labelings obtained according to (4) both have lower total energy $\sum_{i=1}^{M} E(y^i)$ and are better from the applied point of view than those obtained by the sequential method (2). Hence we build on the formulation (4) in this work.
Though the expression (4) looks complicated, it can be nicely represented in the form (1) and hence constitutes an energy minimization problem. To achieve this, one creates M copies $(\mathcal{G}^i, L^i_{\mathcal{V}}, \theta^i) = (\mathcal{G}, L_{\mathcal{V}}, \theta)$ of the initial model $(\mathcal{G}, L_{\mathcal{V}}, \theta)$. The hyper-graph $\mathcal{G}^M_1 = (\mathcal{V}^M_1, \mathcal{F}^M_1)$ for the new task is defined as follows. The set of nodes in the new graph is the union of the node sets from the considered copies, $\mathcal{V}^M_1 = \bigcup_{i=1}^{M} \mathcal{V}^i$. The factors are $\mathcal{F}^M_1 = \bigcup_{i=1}^{M} \mathcal{F}^i \cup \{\mathcal{V}^M_1\}$, i.e. again the union of the initial ones, extended by a special factor corresponding to the diversity penalty that depends on all nodes of the new graph. Each node $v \in \mathcal{V}^i$ is associated with the label set $L^i_v = L_v$. The corresponding potentials $\theta^M_1$ are defined as $\{-\lambda\Delta^M, \theta^1, \dots, \theta^M\}$, see Fig. 1a for an illustration. The model $(\mathcal{G}^M_1, L_{\mathcal{V}^M_1}, \theta^M_1)$ corresponds to the energy (4). An optimal M-tuple of these labelings, corresponding to a minimum of (4), is a trade-off between low energy of the individual labelings $y^i$ and their total diversity.
Complexity of the Diversity Problem (4). Though the formulation (4) leads to better results than those of (2), minimization of $E^M$ is computationally demanding even if the original energy $E$ can be easily (approximately) optimized. This is due to the intrinsic repulsive structure of the diversity potentials $-\lambda\Delta^M$: according to the intuitive meaning of diversity, similar labels are penalized more than different ones. Consider the simplest case with the Hamming distance applied node-wise as a diversity measure:
$$\Delta^M(\{y\}) = \sum_{i=1}^{M-1} \sum_{j=i+1}^{M} \sum_{v \in \mathcal{V}} \Delta_v(y^i_v, y^j_v), \quad \text{where} \ \Delta_v(y, y') = [\![y \neq y']\!]. \qquad (5)$$
Here the expression $[\![A]\!]$ equals 1 if $A$ is true and 0 otherwise. The corresponding factor graph is sketched in Fig. 1c. Such potentials cannot be optimized with efficient graph-cut based methods and moreover, as shown in [19], the bounds delivered by LP-relaxation [31] based solvers are very loose in practice. Indeed, solutions delivered by such solvers are significantly inferior even to the results of the sequential method (2).
To cope with this issue, a clique encoding representation of (4) was proposed in [19]. In this representation, M-tuples of labels $y^1_v, \dots, y^M_v$ (in the M nodes corresponding to the single initial node $v$) are considered as the new labels. In this way the difficult diversity factors are incorporated into the unary factors of the new representation, and the pairwise factors are adjusted respectively. This allows to (approximately) solve the problem (4) with graph-cut based techniques if those techniques are applicable to the energy $E$ of a single labeling. The disadvantage of the clique encoding representation is the exponential growth of the label space, which is reflected in a significantly higher inference time for the problem (4) compared to the procedure (2). In what follows, we show an alternative transformation of the problem (4), which (i) does not have this drawback (its size is basically the same as that of (4)) and (ii) allows to exactly solve (4) in case the energy $E$ is submodular.
Node-wise Diversity. In what follows we will mainly consider node-wise diversity measures, i.e. those which can be represented in the form
$$\Delta^M(\{y\}) = \sum_{v \in \mathcal{V}} \Delta^M_v(\{y\}_v) \qquad (6)$$
for some node diversity measures $\Delta^M_v \colon (L_v)^M \to \mathbb{R}$, see Fig. 1b for an illustration.
3 M-Best-Diverse Labelings for Submodular Problems
Submodularity. In what follows we will assume that the sets $L_v$, $v \in \mathcal{V}$, of labels are completely ordered. This implies that for any $s, t \in L_v$ their maximum and minimum, denoted as $s \vee t$ and $s \wedge t$ respectively, are well-defined. Similarly let $y_1 \vee y_2$ and $y_1 \wedge y_2$ denote the node-wise maximum and minimum of any two labelings $y_1, y_2 \in L_A$, $A \subseteq \mathcal{V}$. A potential $\theta_f$ is called submodular if for any two labelings $y_1, y_2 \in L_f$ it holds that1
$$\theta_f(y_1) + \theta_f(y_2) \geq \theta_f(y_1 \vee y_2) + \theta_f(y_1 \wedge y_2). \qquad (7)$$
A potential $\theta$ will be called supermodular if $(-\theta)$ is submodular.

1 Pairwise binary potentials satisfying $\theta_f(0,1) + \theta_f(1,0) \geq \theta_f(0,0) + \theta_f(1,1)$ build an important special case of this definition.
The energy $E$ is called submodular if for any two labelings $y_1, y_2 \in L_{\mathcal{V}}$ it holds that
$$E(y_1) + E(y_2) \geq E(y_1 \vee y_2) + E(y_1 \wedge y_2). \qquad (8)$$
Submodularity of the energy trivially follows from the submodularity of all its non-unary potentials $\theta_f$, $f \in \mathcal{F}$, $|f| > 1$. In the pairwise case the inverse also holds: submodularity of the energy implies submodularity of all its (pairwise) potentials (e.g. [31, Thm. 12]). There are efficient methods for solving energy minimization problems with submodular potentials, based on transformation into a min-cut/max-flow problem [21, 28, 16] in case all potentials are either unary or pairwise, or into a submodular max-flow problem in the higher-order case [20, 10, 1].
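A small sketch checking condition (7) for a pairwise potential given as a cost table over ordered labels; the binary special case from the footnote serves as a sanity check:

```python
def is_submodular_pairwise(theta):
    """Check (7) for a pairwise potential: for all label pairs (s,t), (u,w),
    theta[s][t] + theta[u][w] >= theta[max,max] + theta[min,min]."""
    L = len(theta)
    return all(
        theta[s][t] + theta[u][w]
        >= theta[max(s, u)][max(t, w)] + theta[min(s, u)][min(t, w)]
        for s in range(L) for t in range(L)
        for u in range(L) for w in range(L)
    )

# Potts terms w * [s != t] with w >= 0 satisfy (7); in the binary case this is
# exactly theta(0,1) + theta(1,0) >= theta(0,0) + theta(1,1):
assert is_submodular_pairwise([[0.0, 1.0], [1.0, 0.0]])
```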
Ordered M Solutions. In what follows we will write $z' \leq z''$ for any two vectors $z'$ and $z''$, meaning that the inequality holds coordinate-wise.

For an arbitrary set $A$ we will call a function $f \colon (A)^n \to \mathbb{R}$ of $n$ variables permutation invariant if for any $(x_1, x_2, \dots, x_n) \in (A)^n$ and any permutation $\sigma$ it holds that $f(x_1, x_2, \dots, x_n) = f(x_{\sigma(1)}, x_{\sigma(2)}, \dots, x_{\sigma(n)})$. In what follows we consider mainly permutation invariant diversity measures.

Let us consider two arbitrary labelings $y^1, y^2 \in L_{\mathcal{V}}$ and their node-wise minimum $y^1 \wedge y^2$ and maximum $y^1 \vee y^2$. Since $(y^1_v \wedge y^2_v,\, y^1_v \vee y^2_v)$ is either equal to $(y^1_v, y^2_v)$ or to $(y^2_v, y^1_v)$, for any permutation invariant node diversity measure it holds that $\Delta^2_v(y^1_v, y^2_v) = \Delta^2_v(y^1_v \wedge y^2_v,\, y^1_v \vee y^2_v)$. This in turn implies $\Delta^2(y^1 \wedge y^2,\, y^1 \vee y^2) = \Delta^2(y^1, y^2)$ for any node-wise diversity measure of the form (6). If $E$ is submodular, then from (8) it additionally follows that
$$E^2(y^1 \wedge y^2,\, y^1 \vee y^2) \leq E^2(y^1, y^2), \qquad (9)$$
where $E^2$ is defined as in (4). Note that $(y^1 \wedge y^2) \leq (y^1 \vee y^2)$. Generalizing these considerations to M labelings one obtains
Theorem 1. Let $E$ be submodular and $\Delta^M$ be a node-wise diversity measure with each component $\Delta^M_v$ being permutation invariant. Then there exists an ordered M-tuple $(y^1, \dots, y^M)$, $y^i \leq y^j$ for $1 \leq i < j \leq M$, such that for any $(z^1, \dots, z^M) \in (L_{\mathcal{V}})^M$ it holds that
$$E^M(\{y\}) \leq E^M(\{z\}), \qquad (10)$$
where $E^M$ is defined as in (4).

Theorem 1 in particular claims that in the binary case $L_v = \{0, 1\}$, $v \in \mathcal{V}$, the optimal M labelings define nested subsets of nodes corresponding to the label 1.
Submodular formulation of the M-Best-Diverse problem. Due to Theorem 1, for submodular energies and node-wise diversity measures it is sufficient to consider only ordered M-tuples of labelings. This order can be enforced by modifying the diversity measure accordingly:
$$\tilde{\Delta}^M_v(y^1, \dots, y^M) := \begin{cases} \Delta^M_v(y^1, \dots, y^M), & y^1 \leq y^2 \leq \dots \leq y^M \\ -\infty, & \text{otherwise} \end{cases} \qquad (11)$$
and using it instead of the initial measure $\Delta^M_v$. Note that $\tilde{\Delta}^M_v$ is not permutation invariant. In practice one can use sufficiently big numbers in place of $\infty$ in (11). This implies
Lemma 1. Let $E$ be submodular and $\Delta^M$ be a node-wise diversity measure with each component $\Delta^M_v$ being permutation invariant. Then any solution of the ordering-enforcing M-best-diverse problem
$$\tilde{E}^M(\{y\}) = \sum_{i=1}^{M} E(y^i) - \lambda \sum_{v \in \mathcal{V}} \tilde{\Delta}^M_v(y^1_v, \dots, y^M_v) \qquad (12)$$
is a solution of the corresponding M-best-diverse problem (4)
$$E^M(\{y\}) = \sum_{i=1}^{M} E(y^i) - \lambda \sum_{v \in \mathcal{V}} \Delta^M_v(y^1_v, \dots, y^M_v), \qquad (13)$$
where $\tilde{\Delta}^M_v$ and $\Delta^M_v$ are related by (11).

We will say that a vector $(y^1, \dots, y^M) \in (L_v)^M$ is ordered if it holds that $y^1 \leq y^2 \leq \dots \leq y^M$.
Given submodularity of $E$, the submodularity (and hence solvability) of $E^M$ in (13) would trivially follow from supermodularity of $\Delta^M$. However, there hardly exist supermodular diversity measures. The ordering provided by Theorem 1 and the corresponding form of the ordering-enforcing diversity measure $\tilde{\Delta}^M$ significantly weaken this condition, which is precisely stated by the following lemma. In the lemma we substitute $\infty$ of (11) with sufficiently big values, such as $\tilde{C} \geq \max_{\{y\}} E^M(\{y\})$, for the sake of numerical implementation. Moreover, these values will differ from each other to keep $\tilde{\Delta}^M_v$ supermodular.

Lemma 2. Let for any two ordered vectors $y = (y^1, \dots, y^M) \in (L_v)^M$ and $z = (z^1, \dots, z^M) \in (L_v)^M$ it hold that
$$\Delta_v(y \vee z) + \Delta_v(y \wedge z) \geq \Delta_v(y) + \Delta_v(z), \qquad (14)$$
where $y \vee z$ and $y \wedge z$ are the element-wise maximum and minimum respectively. Then $\tilde{\Delta}_v$, defined as
$$\tilde{\Delta}_v(y^1, \dots, y^M) = \Delta_v(y^1, \dots, y^M) - \tilde{C} \cdot \sum_{i=1}^{M-1} \sum_{j=i+1}^{M} \Big( 3^{\max(0,\, y^i - y^j)} - 1 \Big) \qquad (15)$$
is supermodular.
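A sketch of the finite ordering-enforcing penalty (15); `delta_v` stands for any node diversity measure satisfying (14), and `big_c` plays the role of $\tilde{C}$:

```python
def ordering_enforcing_diversity(delta_v, labels_tuple, big_c):
    """Eq. (15): for an ordered tuple y^1 <= ... <= y^M the value equals
    delta_v; every violated pair (i, j) with y^i > y^j is charged
    -big_c * (3**(y^i - y^j) - 1), a finite stand-in for the -infinity of (11)
    that keeps the measure supermodular (Lemma 2)."""
    M = len(labels_tuple)
    penalty = 0.0
    for i in range(M - 1):
        for j in range(i + 1, M):
            gap = max(0, labels_tuple[i] - labels_tuple[j])
            penalty += 3 ** gap - 1          # zero whenever y^i <= y^j
    return delta_v(labels_tuple) - big_c * penalty
```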
Note that eq. (11) and (15) are the same up to the infinity values in (11). Though condition (14) resembles the supermodularity condition, it has to be fulfilled for ordered vectors only. The following corollaries of Lemma 2 give the two most important examples of diversity measures fulfilling (14).

Corollary 1. Let $|L_v| = 2$ for all $v \in \mathcal{V}$. Then the statement of Lemma 2 holds for arbitrary $\Delta_v \colon (L_v)^M \to \mathbb{R}$.

Corollary 2. Let $\Delta^M_v(y^1, \dots, y^M) = \sum_{i=1}^{M-1} \sum_{j=i+1}^{M} \Delta_{ij}(y^i, y^j)$. Then the condition of Lemma 2 is equivalent to
$$\Delta_{ij}(y^i, y^j) + \Delta_{ij}(y^i + 1, y^j + 1) \geq \Delta_{ij}(y^i + 1, y^j) + \Delta_{ij}(y^i, y^j + 1) \ \text{ for } y^i < y^j \qquad (16)$$
and $1 \leq i < j \leq M$.

In particular, condition (16) is satisfied for the Hamming distance $\Delta_{ij}(y, y') = [\![y \neq y']\!]$.
The following theorem trivially summarizes Lemmas 1 and 2:

Theorem 2. Let the energy $E$ and the diversity measure $\Delta^M$ satisfy the conditions of Lemmas 1 and 2. Then the ordering-enforcing problem (12) delivers a solution to the M-best-diverse problem (13) and is submodular. Moreover, submodularity of all non-unary potentials of the energy $E$ implies submodularity of all non-unary potentials of the ordering-enforcing energy $\tilde{E}^M$.
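Theorem 1 can be verified by brute force on a toy instance; the following sketch enumerates all 3-tuples of labelings of a 2-node binary chain with a submodular (Potts) energy and asserts that some minimizer of (4) with the Hamming diversity (5) is fully ordered (the toy numbers are our own):

```python
import itertools

def hamming(tup):
    # pairwise Hamming over an M-tuple of labels at one node (permutation invariant)
    return sum(a != b for a, b in itertools.combinations(tup, 2))

def energy(y):  # tiny 2-node binary chain with a submodular Potts pairwise term
    unary = [[0.0, 1.2], [0.8, 0.0]]
    return unary[0][y[0]] + unary[1][y[1]] + 0.5 * (y[0] != y[1])

def em(ys, lam=0.4):  # E^M of (4) with node-wise Hamming diversity (5)
    joint = sum(energy(y) for y in ys)
    div = sum(hamming(tuple(y[v] for y in ys)) for v in range(2))
    return joint - lam * div

tuples = list(itertools.product(itertools.product((0, 1), repeat=2), repeat=3))
best_val = min(em(ys) for ys in tuples)
is_ordered = lambda ys: all(all(a <= b for a, b in zip(y1, y2))
                            for y1, y2 in zip(ys, ys[1:]))
# Theorem 1: among the minimizers there is a fully ordered M-tuple
assert any(is_ordered(ys) and abs(em(ys) - best_val) < 1e-9 for ys in tuples)
```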
4 Experimental evaluation
We have tested our algorithms in two application scenarios: (a) interactive foreground/background
image segmentation, where annotation is available in the form of scribbles [3] and (b) Category level
segmentation on PASCAL VOC 2012 data [9].
As baselines we use: (i) the sequential method DivMBest (2) proposed in [3, 25] and (ii) the
clique-encoding CE method [19] for an (approximate) joint computation of M -best-diverse labelings. As mentioned in Section 2, this method addresses the energy E M defined in (4), however it
has the disadvantage that its label space grows exponentially with M .
Our method that solves the problem (12) with the Hamming diversity measure (5) by transforming it into min-cut/max-flow problem [21, 28, 16] and running the solver [5] is denoted as
Joint-DivMBest.
Diversity measures used in experiments are: the Hamming distance (5) HD, Label Cost LC, Label
Transitions LT and Hamming Ball HB. The last three measures are higher order diversity potentials
introduced in [25] and used only in connection with the DivMBest algorithm. If not stated otherwise, the Hamming distance (5) is used as a diversity measure. Both the clique encoding (CE) based
approaches and the submodularity-based methods proposed in this work use only the Hamming
distance as a diversity measure.
As [25] suggests, certain combinations of different diversity measures may lead to better results. To denote such combinations we use the sign '+'; we refer to [25] for a detailed description of the underlying notation and treat such combined methods as black boxes in our comparison.
                      M=2              M=6              M=10
                  quality  time    quality  time    quality  time
DivMBest           93.16   0.45     95.02    2.4     95.16     4.4
CE                 95.13   2.9      96.01   47.6     96.19   1247
Joint-DivMBest     95.13   0.77     96.01    5.2     96.19    20.4

Table 1: Interactive segmentation: per-pixel accuracy (quality) of the best segmentation out of $M$, and run-time. Compare to the average quality 91.57 of a single labeling. The Hamming distance is used as the diversity measure. Run-time is in milliseconds (ms). Joint-DivMBest quantitatively outperforms DivMBest and is equal to CE, while being considerably faster than CE.
4.1 Interactive segmentation
Instead of returning a single segmentation corresponding to a MAP solution, diversity methods provide the user with a small number of possible low-energy results based on the scribbles. Following [3], we model only the first iteration of such an interactive procedure, i.e., we consider the user scribbles to be given and compare the sets of segmentations returned by the compared diversity methods.
The authors of [3] kindly provided us with their 50 graphical model instances corresponding to the MAP-inference problem (1). They are based on a subset of the PASCAL VOC 2010 [9] segmentation challenge with manually added scribbles. The pairwise potentials constitute contrast-sensitive Potts terms [4], which are submodular. This implies that (i) the MAP inference is solvable by min-cut/max-flow algorithms [21] and (ii) Theorem 2 is applicable, so the $M$-best-diverse solutions can be found by reducing the ordering-enforcing problem (12) to min-cut/max-flow and applying the corresponding algorithm.
A quantitative comparison and run-times of the considered methods are provided in Table 1, where each method was used with the parameter $\lambda$ (see (2), (4)) optimally tuned via cross-validation. Following [3], as a quality measure we used the per-pixel accuracy of the best solution for each sample, averaged over all test images. The methods CE and Joint-DivMBest gave the same quality, which confirms the observation made in [19] that CE returns an exact MAP solution for each sample in this dataset. Combined methods with more sophisticated diversity measures return results that are either inferior to DivMBest or only negligibly better, hence we omit them. The reported run-time is also averaged over all samples. The max-flow algorithm was used for DivMBest and Joint-DivMBest and $\alpha$-expansion for CE.
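As a concrete illustration of this quality measure, the following short sketch (hypothetical array names, not code from the evaluation protocol of [3]) computes the best-of-$M$ per-pixel accuracy for one sample:

```python
import numpy as np

def best_of_m_accuracy(segmentations, ground_truth):
    """Per-pixel accuracy of the best segmentation among the M returned ones.

    segmentations: array of shape (M, H, W) with predicted labels
    ground_truth:  array of shape (H, W) with ground-truth labels
    """
    accuracies = [(seg == ground_truth).mean() for seg in segmentations]
    return max(accuracies)
```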
Summary. It can be seen that Joint-DivMBest quantitatively outperforms DivMBest and is equal to CE. However, it is considerably faster than the latter (the difference grows exponentially with $M$), and its run-time is of the same order of magnitude as that of DivMBest.
4.2 Category-level segmentation
The category-level segmentation task from the PASCAL VOC 2012 challenge [9] contains 1449 validation images with known ground truth, which we used for the evaluation of diversity methods. Corresponding pairwise models with contrast-sensitive Potts terms of the form $\theta_{uv}(y, y') = w_{uv}[\![y \neq y']\!]$, $uv \in \mathcal F$, were used in [25] and kindly provided to us by the authors. Contrary to interactive segmentation, the label sets contain 21 elements, hence the respective MAP-inference problem (1) is not submodular anymore. However, it can still be approximately solved by $\alpha$-expansion or $\alpha$-$\beta$-swap.
Since the MAP-inference problem (1) is not submodular in this experiment, Theorem 2 is not applicable. We used two ways to overcome this. First, we modified the diversity potentials according to (15), as if Theorem 2 were applicable; this basically means that we explicitly look for ordered $M$-best-diverse labelings. The resulting inference problem was addressed with $\alpha$-$\beta$-swap (since neither max-flow nor the $\alpha$-expansion algorithm is applicable). We refer to this method as Joint-DivMBest-ordered. The second way to overcome the non-submodularity problem is based on learning. Using the structured SVM technique, we trained pairwise potentials with additional constraints enforcing their submodularity, as done in, e.g., [11]. We kept the contrast terms $w_{uv}$ and learned only a single submodular function $\hat\theta(y, y')$, which we used in place of $[\![y \neq y']\!]$. After learning, all our potentials had the form $\theta_{uv}(y, y') = w_{uv}\hat\theta(y, y')$, $uv \in \mathcal F$. We refer to this method as Joint-DivMBest-learned.
                                 MAP inference           M=5             M=15            M=16
                                                      quality  time   quality  time   quality  time
DivMBest                         α-exp [4]             51.21   0.01    52.90   0.03    53.07   0.03
HB*                              HB-HOP-MAP [30]       51.71    -      55.32    -        -      -
DivMBest* + HB*                  HB-HOP-MAP [30]         -      -      55.89    -        -      -
HB* + LC* + LT*                  LT: coop. cuts [17]     -      -      56.97    -      57.39    -
DivMBest* + HB* + LC* + LT*      LT: coop. cuts [17]     -      -      57.76    -      58.36    -
CE                               α-exp [4]             54.22   733      -       -        -      -
CE3                              α-exp [4]             54.14   2.28    56.08   5.87    56.31   7.24
Joint-DivMBest-ordered           α-β-swap [4]          53.81   0.01    56.14   0.08    56.33   0.08
Joint-DivMBest-learned           max-flow [5]          53.85   0.38    56.08  35.47    56.31  38.67
Joint-DivMBest-learned           α-exp [4]             53.84   0.01    56.08   0.08    56.31   0.08

Table 2: PASCAL VOC 2012. Intersection-over-union quality measure / running time. The best segmentation out of $M$ is considered. Compare to the average quality 43.51 of a single labeling. Time is in seconds (s). The notation '-' corresponds to the absence of a result due to computational reasons or inapplicability of the method. (*)-marked methods were not run by us; their results are taken from [25] directly. The MAP-inference column references the slowest inference technique among those used by the method.
For the learned model we use max-flow [5] as an exact inference method and $\alpha$-expansion [4] as a fast approximate inference method.
A quantitative comparison and run-times of the considered methods are provided in Table 2, where each method was used with the parameter $\lambda$ (see (2), (4)) optimally tuned via cross-validation on the PASCAL VOC 2012 validation set. Following [3], we used the intersection-over-union quality measure, averaged over all images. Among the combined methods with higher-order diversity measures we selected only those providing the best results. The method CE3 [19] is a hybrid of DivMBest and CE delivering a reasonable trade-off between running time and accuracy of inference for the model $E^M$ (4). The quantitative results delivered by Joint-DivMBest-ordered and Joint-DivMBest-learned are very similar (though the latter is negligibly better); they significantly outperform those of DivMBest and are only slightly inferior to those of CE3. However, the run-times of Joint-DivMBest-ordered and of the $\alpha$-expansion version of Joint-DivMBest-learned are comparable to those of DivMBest and outperform all other competitors, due to the use of fast inference algorithms and a linearly growing label space, contrary to the label space of CE3, which grows as $(L_v)^3$. Though we do not know the exact run-times of the combined methods (those marked with *), we expect them to be significantly higher than those of DivMBest and Joint-DivMBest-ordered because of the intrinsically slow MAP-inference techniques used. Contrary to the latter, however, the inference in Joint-DivMBest-learned can be exact due to submodularity of the underlying energy.
5 Conclusions
We have shown that submodularity of the MAP-inference problem implies a fully ordered set of $M$ best diverse solutions given a node-wise permutation-invariant diversity measure. Enforcing such an ordering leads to a submodular formulation of the joint $M$-best-diverse problem and implies its efficient solvability. Moreover, we have shown that even in non-submodular cases, when the MAP inference is (approximately) solvable with efficient graph-cut based methods, enforcing this ordering leads to an $M$-best-diverse problem that is (approximately) solvable with graph-cut based methods as well. In our test cases (and there are likely others), this approximative technique leads to notably better results than those provided by the established sequential DivMBest technique [3], whereas its run-time remains comparable to the run-time of DivMBest and is much smaller than that of the other competitors.
References
[1] C. Arora, S. Banerjee, P. Kalra, and S. Maheshwari. Generalized flows for optimal inference in higher order MRF-MAP. TPAMI, 2015.
[2] D. Batra. An efficient message-passing algorithm for the M-best MAP problem. arXiv:1210.4841, 2012.
[3] D. Batra, P. Yadollahpour, A. Guzman-Rivera, and G. Shakhnarovich. Diverse M-best solutions in Markov random fields. In ECCV. Springer Berlin/Heidelberg, 2012.
[4] Y. Boykov and M.-P. Jolly. Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images. In ICCV, 2001.
[5] Y. Boykov and V. Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. TPAMI, 26(9):1124-1137, 2004.
[6] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. TPAMI, 23(11):1222-1239, 2001.
[7] C. Chen, V. Kolmogorov, Y. Zhu, D. N. Metaxas, and C. H. Lampert. Computing the M most probable modes of a graphical model. In AISTATS, 2013.
[8] G. Elidan and A. Globerson. The probabilistic inference challenge (PIC2011).
[9] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results.
[10] A. Fix, A. Gruber, E. Boros, and R. Zabih. A graph cut algorithm for higher-order Markov random fields. In ICCV, 2011.
[11] V. Franc and B. Savchynskyy. Discriminative learning of max-sum classifiers. JMLR, 9:67-104, 2008.
[12] M. Fromer and A. Globerson. An LP view of the M-best MAP problem. In NIPS 22, 2009.
[13] A. Guzman-Rivera, D. Batra, and P. Kohli. Multiple choice learning: Learning to produce multiple structured outputs. In NIPS 25, 2012.
[14] A. Guzman-Rivera, P. Kohli, and D. Batra. DivMCuts: Faster training of structural SVMs with diverse M-best cutting-planes. In AISTATS, 2013.
[15] A. Guzman-Rivera, P. Kohli, D. Batra, and R. A. Rutenbar. Efficiently enforcing diversity in multi-output structured prediction. In AISTATS, 2014.
[16] H. Ishikawa. Exact optimization for Markov random fields with convex priors. TPAMI, 2003.
[17] S. Jegelka and J. Bilmes. Submodularity beyond submodular energies: coupling edges in graph cuts. In CVPR, 2011.
[18] J. H. Kappes, B. Andres, F. A. Hamprecht, C. Schnörr, S. Nowozin, D. Batra, S. Kim, B. X. Kausler, T. Kröger, J. Lellmann, N. Komodakis, B. Savchynskyy, and C. Rother. A comparative study of modern inference techniques for structured discrete energy minimization problems. IJCV, pages 1-30, 2015.
[19] A. Kirillov, B. Savchynskyy, D. Schlesinger, D. Vetrov, and C. Rother. Inferring M-best diverse labelings in a single one. In ICCV, 2015.
[20] V. Kolmogorov. Minimizing a sum of submodular functions. Discrete Applied Mathematics, 2012.
[21] V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts? TPAMI, 2004.
[22] A. Kulesza and B. Taskar. Structured determinantal point processes. In NIPS 23, 2010.
[23] E. L. Lawler. A procedure for computing the K best solutions to discrete optimization problems and its application to the shortest path problem. Management Science, 18(7), 1972.
[24] D. Nilsson. An efficient algorithm for finding the M most probable configurations in probabilistic expert systems. Statistics and Computing, 8(2):159-173, 1998.
[25] A. Prasad, S. Jegelka, and D. Batra. Submodular meets structured: Finding diverse subsets in exponentially-large structured item sets. In NIPS 27, 2014.
[26] V. Premachandran, D. Tarlow, and D. Batra. Empirical minimum Bayes risk prediction: How to extract an extra few % performance from vision models with just three more parameters. In CVPR, 2014.
[27] V. Ramakrishna and D. Batra. Mode-marginals: Expressing uncertainty via diverse M-best solutions. In NIPS Workshop on Perturbations, Optimization, and Statistics, 2012.
[28] D. Schlesinger and B. Flach. Transforming an arbitrary minsum problem into a binary one. TU Dresden, Fak. Informatik, 2006.
[29] M. I. Schlesinger and V. Hlavac. Ten lectures on statistical and structural pattern recognition, volume 24. Springer Science & Business Media, 2002.
[30] D. Tarlow, I. E. Givoni, and R. S. Zemel. HOP-MAP: Efficient message passing with high order potentials. In AISTATS, 2010.
[31] T. Werner. A linear programming approach to max-sum problem: A review. TPAMI, 29(7), 2007.
[32] P. Yadollahpour, D. Batra, and G. Shakhnarovich. Discriminative re-ranking of diverse segmentations. In CVPR, 2013.
[33] C. Yanover and Y. Weiss. Finding the M most probable configurations using loopy belief propagation. In NIPS 17, 2004.
Thermostat for Large-Scale Bayesian Sampling
Xiaocheng Shang?
University of Edinburgh
x.shang@ed.ac.uk
Zhanxing Zhu?
University of Edinburgh
zhanxing.zhu@ed.ac.uk
Benedict Leimkuhler
University of Edinburgh
b.leimkuhler@ed.ac.uk
Amos J. Storkey
University of Edinburgh
a.storkey@ed.ac.uk
Abstract
Monte Carlo sampling for Bayesian posterior inference is a common approach
used in machine learning. The Markov Chain Monte Carlo procedures that are
used are often discrete-time analogues of associated stochastic differential equations (SDEs). These SDEs are guaranteed to leave invariant the required posterior
distribution. An area of current research addresses the computational benefits of
stochastic gradient methods in this setting. Existing techniques rely on estimating
the variance or covariance of the subsampling error, and typically assume constant
variance. In this article, we propose a covariance-controlled adaptive Langevin
thermostat that can effectively dissipate parameter-dependent noise while maintaining a desired target distribution. The proposed method achieves a substantial
speedup over popular alternative schemes for large-scale machine learning applications.
1 Introduction
In machine learning applications, direct sampling with the entire large-scale dataset is computationally infeasible. For instance, standard Markov Chain Monte Carlo (MCMC) methods [16], as well
as typical Hybrid Monte Carlo (HMC) methods [3, 6, 9], require the calculation of the acceptance
probability and the creation of informed proposals based on the whole dataset.
In order to improve computational efficiency, a number of stochastic gradient methods [4, 5, 20, 21]
have been proposed in the setting of Bayesian sampling based on random (and much smaller) subsets
to approximate the likelihood of the whole dataset, thus substantially reducing the computational
cost in practice. Welling and Teh proposed the so-called Stochastic Gradient Langevin Dynamics
(SGLD) [21], combining the ideas of stochastic optimization [18] and traditional Brownian dynamics, with a sequence of stepsizes decreasing to zero. A fixed stepsize is often adopted in practice, which is the choice in this article, as in Vollmer et al. [20], where a modified SGLD (mSGLD) was also introduced that was designed to reduce sampling bias.
SGLD generates samples from first order Brownian dynamics, and thus, with a fixed timestep, one
can show that it is unable to dissipate excess noise in gradient approximations while maintaining
the desired invariant distribution [4]. A Stochastic Gradient Hamiltonian Monte Carlo (SGHMC)
method was proposed by Chen et al. [4], which relies on second order Langevin dynamics and incorporates a parameter-dependent diffusion matrix that is intended to effectively offset the stochastic
perturbation of the gradient. However, it is difficult to accommodate the additional diffusion term
* The first and second authors contributed equally, and the listed author order was decided by lot.
in practice. Moreover, as pointed out in [5], poor estimation of it may have a significant adverse influence on the sampling of the target distribution; for example, the effective system temperature may be altered.
The 'thermostat' idea, which is widely used in molecular dynamics [7, 13], was recently adopted in the Stochastic Gradient Nosé-Hoover Thermostat (SGNHT) by Ding et al. [5] in order to adjust the kinetic energy during simulation in such a way that the canonical ensemble is preserved (i.e., so that a prescribed constant-temperature distribution is maintained). In fact, the SGNHT method is essentially equivalent to the Adaptive Langevin (Ad-Langevin) thermostat proposed earlier by Jones and Leimkuhler [10] in the molecular dynamics setting (see [15] for discussion).
Despite the substantial interest generated by these methods, the mathematical foundation for
stochastic gradient methods has been incomplete. The underlying dynamics of the SGNHT [5]
was taken up by Leimkuhler and Shang [15], together with the design of discretization schemes
with high effective order of accuracy. SGNHT methods are designed based on the assumption of
constant noise variance. In this article, we propose a Covariance-Controlled Adaptive Langevin
(CCAdL) thermostat, that can handle parameter-dependent noise, improving both robustness and
reliability in practice, and which can effectively speed up the convergence to the desired invariant
distribution in large-scale machine learning applications.
The rest of the article is organized as follows. In Section 2, we describe the setting of Bayesian
sampling with noisy gradients and briefly review existing techniques. In Section 3, we construct
the novel Covariance-Controlled Adaptive Langevin (CCAdL) method that can effectively dissipate
parameter-dependent noise while maintaining the correct distribution. Various numerical experiments are performed in Section 4 to verify the usefulness of CCAdL in a wide range of large-scale
machine learning applications. Finally, we summarize our findings in Section 5.
2 Bayesian Sampling with Noisy Gradients
In the typical setting of Bayesian sampling [3, 19], one is interested in drawing states from a posterior distribution defined as
$$\pi(\theta|X) \propto \pi(X|\theta)\,\pi(\theta), \qquad (1)$$
where $\theta \in \mathbb{R}^{N_d}$ is the parameter vector of interest, $X$ denotes the entire dataset, and $\pi(X|\theta)$ and $\pi(\theta)$ are the likelihood and prior distributions, respectively. We introduce a potential energy function $U(\theta)$ by defining $\pi(\theta|X) \propto \exp(-\beta U(\theta))$, where $\beta$ is a positive parameter that can be interpreted as proportional to the reciprocal temperature of an associated physical system, i.e., $\beta^{-1} = k_B T$ ($k_B$ being the Boltzmann constant and $T$ the temperature). In practice, $\beta$ is often set to unity for notational simplicity. Taking the logarithm of (1) yields
$$U(\theta) = -\log\pi(X|\theta) - \log\pi(\theta). \qquad (2)$$
Assuming the data are independent and identically distributed (i.i.d.), the logarithm of the likelihood can be calculated as
$$\log\pi(X|\theta) = \sum_{i=1}^{N}\log\pi(x_i|\theta), \qquad (3)$$
where $N$ is the size of the entire dataset.
However, as already mentioned, it is computationally infeasible to deal with the entire large-scale dataset at each timestep, as would typically be required in MCMC and HMC methods. Instead, in order to improve efficiency, a random (and much smaller, $n \ll N$) subset is preferred in stochastic gradient methods, in which the likelihood of the dataset for given parameters is approximated as
$$\log\pi(X|\theta) \approx \frac{N}{n}\sum_{i=1}^{n}\log\pi(x_{r_i}|\theta), \qquad (4)$$
where $\{x_{r_i}\}_{i=1}^{n}$ represents a random subset of $X$. Thus, the 'noisy' potential energy can be written as
$$\tilde U(\theta) = -\frac{N}{n}\sum_{i=1}^{n}\log\pi(x_{r_i}|\theta) - \log\pi(\theta), \qquad (5)$$
where the negative gradient of the potential is referred to as the 'noisy' force, i.e., $\tilde F(\theta) = -\nabla\tilde U(\theta)$.
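As an illustration, a minimal sketch of evaluating the noisy force (5) from a random subset follows, assuming user-supplied gradients of the log likelihood and log prior (all names are hypothetical):

```python
import numpy as np

def noisy_force(theta, data, n, log_lik_grad, log_prior_grad):
    """Stochastic estimate of F(theta) = -grad U(theta) from a subset of size n."""
    N = len(data)
    idx = np.random.choice(N, size=n, replace=False)  # random subset {x_{r_i}}
    # Gradient of the 'noisy' potential:
    #   -(N/n) * sum_i grad log pi(x_{r_i}|theta) - grad log pi(theta)
    grad_u = -(N / n) * sum(log_lik_grad(theta, data[i]) for i in idx) \
             - log_prior_grad(theta)
    return -grad_u
```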
Our goal is to correctly sample the Gibbs distribution $\pi(\theta) \propto \exp(-\beta U(\theta))$ (1). As in [4, 5], the gradient noise is assumed to be Gaussian with mean zero and unknown variance, in which case one may rewrite the noisy force as
$$\tilde F(\theta) = -\nabla U(\theta) + \sqrt{\Sigma(\theta)}\,M^{1/2}R, \qquad (6)$$
where $M$ typically is a diagonal matrix, $\Sigma(\theta)$ represents the covariance matrix of the noise, and $R$ is a vector of i.i.d. standard normal random variables. Note that $\sqrt{\Sigma(\theta)}\,M^{1/2}R$ here is actually equivalent to $\mathcal N(0, \Sigma(\theta)M)$.
In a typical setting of numerical integration with associated stepsize $h$, one has
$$h\tilde F(\theta) = h\left(-\nabla U(\theta) + \sqrt{\Sigma(\theta)}\,M^{1/2}R\right) = -h\nabla U(\theta) + \sqrt{h}\,\sqrt{h\Sigma(\theta)}\,M^{1/2}R, \qquad (7)$$
and therefore, assuming a constant covariance matrix (i.e., $\Sigma = \sigma^2 I$, where $I$ is the identity matrix), the SGNHT method of Ding et al. [5] has the following underlying dynamics, written as a standard Itô stochastic differential equation (SDE) system [15]:
$$d\theta = M^{-1}p\,dt,$$
$$dp = -\nabla U(\theta)\,dt + \sigma\sqrt{h}\,M^{1/2}dW - \xi p\,dt + \sqrt{2A\beta^{-1}}\,M^{1/2}dW_A, \qquad (8)$$
$$d\xi = \mu^{-1}\left(p^T M^{-1}p - N_d k_B T\right)dt,$$
where, colloquially, $dW$ and $dW_A$, respectively, represent vectors of independent Wiener increments, often informally denoted by $\mathcal N(0, dt\,I)$ [4]. The coefficient $\sqrt{2A\beta^{-1}}\,M^{1/2}$ represents the strength of the artificial noise added to the system to improve ergodicity, and $A$, which can be termed the 'effective friction', is a positive parameter proportional to the variance of the noise. The auxiliary variable $\xi \in \mathbb R$ is governed by a Nosé-Hoover device [8, 17] via a negative feedback mechanism: when the instantaneous temperature (average kinetic energy per degree of freedom), calculated as
$$k_B T = p^T M^{-1}p/N_d, \qquad (9)$$
is below the target temperature, the 'dynamical friction' $\xi$ decreases, allowing the temperature to increase, while $\xi$ increases when the temperature is above the target. $\mu$ is a coupling parameter referred to as the 'thermal mass' in the molecular dynamics setting.
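To make the discretization concrete, a sketch of one Euler-type step of (8) with $M = I$ and $\beta = 1$ follows; this is an illustrative first-order scheme, not necessarily the exact integrator used in [5]:

```python
import numpy as np

def sgnht_step(theta, p, xi, h, A, mu, noisy_force):
    """One Euler-type step of the SGNHT dynamics (8) with M = I, beta = 1."""
    Nd = len(theta)
    theta = theta + h * p
    p = (p + h * noisy_force(theta)        # the noisy force already carries the gradient noise
           - h * xi * p                    # Nose-Hoover friction
           + np.sqrt(2.0 * A * h) * np.random.randn(Nd))
    xi = xi + (h / mu) * (p @ p - Nd)      # negative feedback on the kinetic energy
    return theta, p, xi
```

With the common choice mu = Nd, the last line reduces to the familiar update xi += h * (p @ p / Nd - 1).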
Proposition 1 (see Jones and Leimkuhler [10]). The SGNHT method (8) preserves the modified Gibbs (stationary) distribution
$$\tilde\rho(\theta, p, \xi) = Z^{-1}\exp\left(-\beta H(\theta, p)\right)\exp\left(-\beta\mu(\xi - \bar\xi)^2/2\right), \qquad (10)$$
where $Z$ is the normalizing constant, $H(\theta, p) = p^T M^{-1}p/2 + U(\theta)$ is the Hamiltonian, and
$$\bar\xi = A + \beta h\sigma^2/2. \qquad (11)$$
Proposition 1 tells us that the SGNHT method can adaptively dissipate excess noise pumped into the system while maintaining the correct distribution. The variance of the gradient noise, $\sigma^2$, does not need to be known a priori: as long as $\sigma^2$ is constant, the auxiliary variable $\xi$ will automatically find its mean value $\bar\xi$ on the fly. However, with a parameter-dependent covariance matrix $\Sigma(\theta)$, the SGNHT method (8) would not produce the required target distribution (10).
Ding et al. [5] claimed that it is reasonable to assume the covariance matrix $\Sigma(\theta)$ is constant when the size of the dataset, $N$, is large, in which case the variance of the posterior of $\theta$ is small. The magnitude of the posterior variance does not actually relate to the constancy of $\Sigma$, however, and in general $\Sigma$ is not constant. Simply assuming constancy of $\Sigma$ when it does not hold can have a significant impact on the performance of the method (most notably on the stability, as measured by the largest usable stepsize). Therefore, it is essential to have an approach that can handle parameter-dependent noise. In the following section we propose a covariance-controlled thermostat that can effectively dissipate parameter-dependent noise while maintaining the target stationary distribution.
3 Covariance-Controlled Adaptive Langevin Thermostat
As mentioned in the previous section, the SGNHT method (8) can only dissipate noise with a constant covariance matrix. A parameter-dependent covariance matrix does not, in general, imply the required 'thermal equilibrium', i.e., the system cannot be expected to converge to the desired invariant distribution (10), typically resulting in poor estimation of functions of the parameters of interest. In fact, in that case it is not clear whether an invariant distribution exists at all.
In order to construct a stochastic-dynamical system that preserves the canonical distribution, we suggest adding a suitable damping (viscous) term to effectively dissipate the parameter-dependent gradient noise. To this end, we propose the following Covariance-Controlled Adaptive Langevin (CCAdL) thermostat:
$$d\theta = M^{-1}p\,dt,$$
$$dp = -\nabla U(\theta)\,dt + \sqrt{h\Sigma(\theta)}\,M^{1/2}dW - (h/2)\beta\Sigma(\theta)p\,dt - \xi p\,dt + \sqrt{2A\beta^{-1}}\,M^{1/2}dW_A, \qquad (12)$$
$$d\xi = \mu^{-1}\left(p^T M^{-1}p - N_d k_B T\right)dt.$$
Proposition 2. The CCAdL thermostat (12) preserves the modified Gibbs (stationary) distribution
$$\tilde\rho(\theta, p, \xi) = Z^{-1}\exp\left(-\beta H(\theta, p)\right)\exp\left(-\beta\mu(\xi - A)^2/2\right). \qquad (13)$$
Proof. The Fokker-Planck equation corresponding to (12) is
$$\rho_t = \mathcal L^\dagger\rho := -M^{-1}p\cdot\nabla_\theta\rho + \nabla U(\theta)\cdot\nabla_p\rho + (h/2)\nabla_p\cdot\left(\Sigma(\theta)M\nabla_p\rho\right) + (h/2)\beta\nabla_p\cdot\left(\Sigma(\theta)p\rho\right) + \xi\nabla_p\cdot(p\rho) + A\beta^{-1}\nabla_p\cdot\left(M\nabla_p\rho\right) - \mu^{-1}\left(p^T M^{-1}p - N_d k_B T\right)\partial_\xi\rho.$$
Substituting $\tilde\rho$ of (13) into the Fokker-Planck operator $\mathcal L^\dagger$ shows that it vanishes. ∎
The incorporation of the parameter-dependent covariance matrix $\Sigma(\theta)$ in (12) is intended to offset the covariance matrix coming from the gradient approximation. In practice, however, one does not know $\Sigma(\theta)$ a priori; instead, one must estimate it during the simulation, a task addressed in Section 3.1. This procedure is related to the method used in the SGHMC method proposed by Chen et al. [4], which uses dynamics of the following form:
$$d\theta = M^{-1}p\,dt,$$
$$dp = -\nabla U(\theta)\,dt + \sqrt{h\Sigma(\theta)}\,M^{1/2}dW - Ap\,dt + \sqrt{2\beta^{-1}\left(AI - h\Sigma(\theta)/2\right)}\,M^{1/2}dW_A. \qquad (14)$$
It can be shown that the SGHMC method preserves the Gibbs canonical distribution
$$\rho(\theta, p) = Z^{-1}\exp\left(-\beta H(\theta, p)\right). \qquad (15)$$
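For comparison, a sketch of one Euler-type SGHMC step (14) with a diagonal covariance estimate makes the positive-semidefiniteness requirement on $AI - h\Sigma(\theta)/2$ explicit ($M = I$, $\beta = 1$; an illustrative discretization with hypothetical names, not the authors' implementation):

```python
import numpy as np

def sghmc_step(theta, p, h, A, noisy_force, sigma_diag):
    """One Euler-type SGHMC step (14) with diagonal Sigma(theta), M = I, beta = 1."""
    var = 2.0 * (A - h * sigma_diag / 2.0)      # diagonal of 2(A*I - h*Sigma/2)
    if np.any(var < 0.0):
        raise ValueError("A*I - h*Sigma/2 must be positive semi-definite; "
                         "increase A or decrease h.")
    theta = theta + h * p
    p = (p + h * noisy_force(theta)
           - h * A * p                          # friction with the fixed coefficient A
           + np.sqrt(h * var) * np.random.randn(len(theta)))
    return theta, p
```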
Although both CCAdL (12) and SGHMC (14) preserve their respective invariant distributions, let
us note several advantages of the former over the latter in practice:
(i) CCAdL and SGHMC both require estimation of the covariance matrix ?(?) during simulation, which can be costly in high dimension. In numerical experiments, we have found
that simply using the diagonal of the covariance matrix, at significantly reduced computational cost, works quite well in CCAdL. By contrast, it is difficult to find a suitable value
of the parameter A in SGHMC since one has to make sure the matrix AI?h?(?)/2 is
positive semi-definite. One may attempt to use a large value of the ?effective friction? A
and/or a small stepsize h. However, too-large a friction would essentially reduce SGHMC to SGLD, which is not desirable, as pointed out in [4], while extremely small stepsize
would significantly impact the computational efficiency.
(ii) Estimation of the covariance matrix ?(?) unavoidably introduces additional noise in both
CCAdL and SGHMC. Nonetheless, CCAdL can still effectively control the system temperature (i.e. maintaining the correct distribution of the momenta) due to the use of the
stabilizing Nos?e-Hoover control, while in SGHMC poor estimation of the covariance matrix may lead to significant deviations of the system temperature (as well as the distribution
of the momenta), resulting in poor sampling of the parameters of interest.
3.1 Covariance Estimation of Noisy Gradients
Under the assumption that the noise of the stochastic gradient follows a normal distribution, we apply a method similar to that of [2] to estimate the covariance matrix associated with the noisy gradient. If we let $g(\theta; x) = \nabla_\theta\log\pi(x|\theta)$ and assume that the size of the subset, $n$, is large enough for the central limit theorem to hold, we have
$$\frac{1}{n}\sum_{i=1}^{n} g(\theta_t; x_{r_i}) \sim \mathcal N\!\left(\mathbb E_x[g(\theta_t; x)],\ \frac{1}{n}I_t\right), \qquad (16)$$
where $I_t = \mathrm{Cov}[g(\theta_t; x)]$ is the covariance of the gradient at $\theta_t$. Given that the noisy (stochastic) gradient based on the current subset is $\nabla\tilde U(\theta_t) = -\frac{N}{n}\sum_{i=1}^{n} g(\theta_t; x_{r_i}) - \nabla\log\pi(\theta_t)$ and the clean (full) gradient is $\nabla U(\theta_t) = -\sum_{i=1}^{N} g(\theta_t; x_i) - \nabla\log\pi(\theta_t)$, we have $\mathbb E_x[\nabla\tilde U(\theta_t)] = \mathbb E_x[\nabla U(\theta_t)]$, and thus
$$\nabla\tilde U(\theta_t) = \nabla U(\theta_t) + \mathcal N\!\left(0,\ \frac{N^2}{n}I_t\right), \qquad (17)$$
i.e., $\Sigma(\theta_t) = N^2 I_t/n$. Assuming $\theta_t$ does not change dramatically over time, we use the moving-average update to estimate $I_t$:
$$\hat I_t = (1-\alpha_t)\hat I_{t-1} + \alpha_t V(\theta_t), \qquad (18)$$
where $\alpha_t = 1/t$ and
$$V(\theta_t) = \frac{1}{n-1}\sum_{i=1}^{n}\left(g(\theta_t; x_{r_i}) - \bar g(\theta_t)\right)\left(g(\theta_t; x_{r_i}) - \bar g(\theta_t)\right)^T \qquad (19)$$
is the empirical covariance of the gradient; $\bar g(\theta_t)$ represents the mean gradient of the log likelihood computed from the subset. As proved in [2], this estimator has a convergence order of $O(1/N)$.

Algorithm 1 Covariance-Controlled Adaptive Langevin (CCAdL)
1: Input: $h$, $A$, $\{\alpha_t\}_{t=1}^{T}$.
2: Initialize $\theta_0$, $p_0$, $\hat I_0$, and $\xi_0 = A$.
3: for $t = 1, 2, \dots, T$ do
4:   $\theta_t = \theta_{t-1} + p_{t-1}h$;
5:   Estimate $\hat I_t$ using Eq. (18);
6:   $p_t = p_{t-1} - \nabla\tilde U(\theta_t)h - \frac{h}{2}\frac{N^2}{n}\hat I_t\,p_{t-1}h - \xi_{t-1}p_{t-1}h + \sqrt{2Ah}\,\mathcal N(0, 1)$;
7:   $\xi_t = \xi_{t-1} + \left(p_t^T p_t/N_d - 1\right)h$;
8: end for
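A runnable sketch of Algorithm 1, with the moving-average estimate (18) restricted to the diagonal (per-dimension variance) of the gradient noise, might look as follows (hypothetical names; grad_log_lik is assumed to return the gradient for a single data point):

```python
import numpy as np

def ccadl(theta0, data, grad_log_lik, grad_log_prior, h, A, n, T):
    """Covariance-Controlled Adaptive Langevin with a diagonal covariance estimate."""
    N, Nd = len(data), len(theta0)
    theta, p, xi = theta0.copy(), np.zeros(Nd), A
    I_hat = np.zeros(Nd)                                     # diagonal of I_t
    for t in range(1, T + 1):
        theta = theta + h * p
        idx = np.random.choice(N, size=n, replace=False)
        g = np.array([grad_log_lik(theta, data[i]) for i in idx])   # shape (n, Nd)
        # Eqs. (18)-(19): moving average of the empirical per-dimension variance
        I_hat = (1 - 1.0 / t) * I_hat + (1.0 / t) * g.var(axis=0, ddof=1)
        grad_u = -(N / n) * g.sum(axis=0) - grad_log_prior(theta)   # noisy grad of U
        p = (p - h * grad_u
               - (h ** 2 / 2.0) * (N ** 2 / n) * I_hat * p   # covariance-controlled damping
               - h * xi * p                                   # Nose-Hoover friction
               + np.sqrt(2.0 * A * h) * np.random.randn(Nd))
        xi = xi + h * (p @ p / Nd - 1.0)
    return theta, p, xi
```

With $M = I$, $\beta = 1$, and $\mu = N_d$, this matches the update of Algorithm 1 line by line.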
As already mentioned, estimating the full covariance matrix is computationally infeasible in high dimension. However, we have found that employing a diagonal approximation of the covariance matrix (i.e., estimating only the variance along each dimension of the noisy gradient) works quite well in practice, as demonstrated in Section 4.
The procedure of the CCAdL method is summarized in Algorithm 1, where we simply used $M = I$, $\beta = 1$, and $\mu = N_d$ in order to be consistent with the original implementation of SGNHT [5]. Note that this is a simple, first-order (in terms of the stepsize) algorithm. A recent article [15] has introduced schemes of higher order of accuracy, but our interest here is in the direct comparison of the underlying machinery of SGHMC, SGNHT, and CCAdL, so we avoid further modifications and enhancements related to timestepping at this stage.
In the following section, we compare the newly established CCAdL method with SGHMC and SGNHT on various machine learning tasks to demonstrate the benefits of CCAdL in Bayesian sampling with a noisy gradient.
4 Numerical Experiments
4.1 Bayesian Inference for a Gaussian Distribution
We first compare the performance of the newly established CCAdL method with SGHMC and SGNHT on a simple task using synthetic data: Bayesian inference of both the mean and variance of a one-dimensional normal distribution. We apply the same experimental setting as in [5]. We generated $N = 100$ samples from the standard normal distribution $\mathcal N(0, 1)$. We used the likelihood function $\mathcal N(x_i|\mu, \gamma^{-1})$ and assigned a Normal-Gamma distribution as the prior, i.e., $\mu, \gamma \sim \mathcal N(\mu|0, \gamma)\,\mathrm{Gam}(\gamma|1, 1)$. The corresponding posterior is then another Normal-Gamma distribution, i.e., $(\mu, \gamma)|X \sim \mathcal N\left(\mu|\mu_N, (\kappa_N\gamma)^{-1}\right)\mathrm{Gam}(\gamma|\alpha_N, \beta_N)$, with
$$\mu_N = \frac{N\bar x}{N+1}, \quad \kappa_N = 1+N, \quad \alpha_N = 1+\frac{N}{2}, \quad \beta_N = 1 + \frac{1}{2}\sum_{i=1}^{N}(x_i - \bar x)^2 + \frac{N\bar x^2}{2(1+N)},$$
where $\bar x = \sum_{i=1}^{N} x_i/N$. A random subset of size $n = 10$ was selected at each timestep to approximate the full gradient, resulting in the following stochastic gradients:
$$\nabla_\mu\tilde U = (N+1)\gamma\mu - \frac{\gamma N}{n}\sum_{i=1}^{n} x_{r_i}, \qquad \nabla_\gamma\tilde U = 1 - \frac{N+1}{2\gamma} + \frac{\mu^2}{2} + \frac{N}{2n}\sum_{i=1}^{n}\left(x_{r_i} - \mu\right)^2.$$
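These stochastic gradients are simple enough to state directly in code (a sketch; x is assumed to hold the full dataset as a NumPy array):

```python
import numpy as np

def noisy_grads(mu, gamma, x, n):
    """Stochastic gradients of U for the Normal model with a Normal-Gamma prior."""
    N = len(x)
    xs = x[np.random.choice(N, size=n, replace=False)]      # random subset of size n
    grad_mu = (N + 1) * gamma * mu - gamma * (N / n) * xs.sum()
    grad_gamma = (1.0 - (N + 1) / (2.0 * gamma) + mu ** 2 / 2.0
                  + (N / (2.0 * n)) * ((xs - mu) ** 2).sum())
    return grad_mu, grad_gamma
```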
It can be seen that the variance of the stochastic gradient noise is no longer constant; it actually depends on the size of the subset, $n$, and on the values of $\mu$ and $\gamma$ at each iteration. This directly violates the constant-noise-variance assumption of SGNHT [5], while CCAdL adjusts to the varying noise variance.
The marginal distributions of $\mu$ and $\gamma$ obtained from the various methods with different combinations of $h$ and $A$ are compared in Figure 1, with Table 1 giving the corresponding root mean square error (RMSE) of the distribution and the autocorrelation time computed from $10^6$ samples. In most of the cases, both SGNHT and CCAdL easily outperform the SGHMC method, possibly due to the presence of the Nosé-Hoover device, with SGHMC showing superiority only for small values of $h$ and large values of $A$, neither of which is desirable in practice, as discussed in Section 3. Between SGNHT and the newly proposed CCAdL method, the latter achieves better performance in each of the cases investigated, highlighting the importance of covariance control with parameter-dependent noise.
[Figure 1: Comparisons of the marginal distribution (density) of $\mu$ (top row) and $\gamma$ (bottom row) for the values of $h$ and $A$ indicated in each column: (a) $h = 0.001$, $A = 1$; (b) $h = 0.001$, $A = 10$; (c) $h = 0.01$, $A = 1$; (d) $h = 0.01$, $A = 10$. Each panel compares the true density with SGHMC, SGNHT, and CCAdL; the peak region is highlighted in the inset.]
Table 1: Comparisons of (RMSE, autocorrelation time) of $(\mu, \gamma)$ for the various methods in Bayesian inference of the Gaussian mean and variance.

Methods   h=0.001, A=1       h=0.001, A=10      h=0.01, A=1        h=0.01, A=10
SGHMC     (0.0148, 236.12)   (0.0029, 333.04)   (0.0531, 29.78)    (0.0132, 39.33)
SGNHT     (0.0037, 238.32)   (0.0035, 406.71)   (0.0044, 26.71)    (0.0043, 55.00)
CCAdL     (0.0034, 238.06)   (0.0031, 402.45)   (0.0021, 26.71)    (0.0035, 54.43)
4.2 Large-scale Bayesian Logistic Regression
We then consider a Bayesian logistic regression model trained on the benchmark MNIST dataset for binary classification of digits 7 and 9, using 12,214 training data points and a test set of size 2037. A 100-dimensional random projection of the original features was used. We used the likelihood function
$$\pi\left(\{x_i, y_i\}_{i=1}^{N}\,\middle|\,w\right) \propto \prod_{i=1}^{N}\frac{1}{1+\exp(-y_i w^T x_i)}$$
and the prior distribution $\pi(w) \propto \exp(-w^T w/2)$, respectively. A subset of size $n = 500$ was used at each timestep. Since the dimensionality of this problem is not that high, full covariance estimation was used for CCAdL.
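The corresponding noisy gradient entering the samplers can be sketched as follows (hypothetical array names; labels are assumed to be in {-1, +1}):

```python
import numpy as np

def logistic_noisy_grad(w, X, y, n):
    """Minibatch gradient of U(w) for Bayesian logistic regression.

    X: (N, d) feature matrix (random projections); y: labels in {-1, +1}.
    """
    N = len(y)
    idx = np.random.choice(N, size=n, replace=False)
    Xb, yb = X[idx], y[idx]
    s = 1.0 / (1.0 + np.exp(yb * (Xb @ w)))   # sigmoid(-y_i * w^T x_i)
    grad_log_lik = (yb * s) @ Xb              # sum_i y_i * s_i * x_i over the subset
    return -(N / n) * grad_log_lik + w        # '+ w' from the Gaussian prior
```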
We investigate the convergence speed of each method by measuring the test log likelihood of the posterior mean against the number of passes over the entire dataset; see Figure 2 (top row). CCAdL displays significant improvements over SGHMC and SGNHT for different values of $h$ and $A$: (1) CCAdL converges much faster than the other two, which also indicates faster mixing and a shorter burn-in period; (2) CCAdL is robust to the value of the 'effective friction' $A$, whereas SGHMC and SGNHT rely on a relatively large value of $A$ (especially the SGHMC method), which is intended to dominate the gradient noise.
To compare the quality of the samples obtained from each method, Figure 2 (bottom row) plots the two-dimensional marginal posterior distribution in randomly selected dimensions 2 and 5, based on $10^6$ samples from each method after the burn-in period (i.e., we start to collect samples when the
test log likelihood stabilizes). The true (reference) distribution was obtained by a sufficiently long
run of standard HMC. We implemented 10 runs of standard HMC and found no variation between these runs, which justifies its use as the true (reference) distribution. Again, CCAdL shows much better performance than SGHMC and SGNHT. Note that SGHMC does not even fit in the region of the plot; in fact, it shows significant deviation even in the estimation of the mean.
[Figure 2: Comparisons of Bayesian logistic regression for the various methods on the MNIST dataset of digits 7 and 9, with (a) $h = 0.2\times10^{-4}$, (b) $h = 0.5\times10^{-4}$, (c) $h = 1\times10^{-4}$: (top row) test log likelihood using the posterior mean against the number of passes over the entire dataset; (bottom row) two-dimensional marginal posterior distribution in (randomly selected) dimensions 2 and 5 with $A = 10$ fixed, based on $10^6$ samples from each method after the burn-in period (i.e., samples collected once the test log likelihood stabilizes). The magenta circle is the true (reference) posterior mean obtained from standard HMC, and crosses represent the sample means computed from the various methods. Ellipses represent iso-probability contours covering 95% probability mass. The contour of SGHMC lies well beyond the scale of the figure and is therefore not included.]
4.3 Discriminative Restricted Boltzmann Machine (DRBM)
The DRBM [11] is a self-contained non-linear classifier, and the gradient of its discriminative objective can be computed explicitly. Due to limited space, we refer the reader to [11] for more details. We trained a DRBM on several large-scale multi-class datasets from the LIBSVM dataset collection¹: connect-4, letter, and SensIT Vehicle acoustic. Detailed information about these datasets is presented in Table 2.
We selected the number of hidden units using cross-validation to achieve the best results. Since the dimension of the parameters, $N_d$, is relatively high, we used only diagonal covariance matrix estimation for CCAdL to significantly reduce the computational cost, i.e., we estimated only the variance along each dimension. The size of the subset was chosen as 500-1000 to obtain a reasonable variance estimate. For each dataset, we used the first 20% of the total number of passes over the entire dataset as the burn-in period and collected the remaining samples for prediction.
Table 2: Datasets used in the DRBM experiments with corresponding parameter configurations.

Dataset     training/test set   classes   features   hidden units   total number of parameters N_d
connect-4   54,046/13,511       3         126        20             2603
letter      10,500/5,000        26        16         100            4326
acoustic    78,823/19,705       3         50         20             1083
The error rates computed by the various methods on the test set, using the posterior mean against the number of passes over the entire dataset, are plotted in Figure 3. It can be observed that SGHMC and SGNHT work well only with a large value of the effective friction $A$, which corresponds to a strong random-walk effect and thus slows down convergence. By contrast, CCAdL works reliably (much better than the other two) over a wide range of $A$ and, more importantly, in the large-stepsize regime, which
¹ http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multiclass.html
speeds up the rate of convergence in relation to the computational work performed. It can easily be seen that the performance of SGHMC relies heavily on a small value of $h$ and a large value of $A$, which significantly limits its usefulness in practice.
[Figure 3: Comparisons of DRBM test error rates of the various methods, using the posterior mean against the number of passes over the entire dataset, on connect-4 (top row; $h = 0.5\times10^{-3}$, $1\times10^{-3}$, $2\times10^{-3}$, with $A \in \{10, 50\}$), letter (middle row; $h = 1\times10^{-3}$, $2\times10^{-3}$, $5\times10^{-3}$, with $A \in \{1, 10\}$), and acoustic (bottom row; $h = 0.2\times10^{-3}$, $0.5\times10^{-3}$, $1\times10^{-3}$, with $A \in \{1, 10\}$).]
5 Conclusions and Future Work
In this article, we have proposed a novel Covariance-Controlled Adaptive Langevin (CCAdL) formulation that can effectively dissipate parameter-dependent noise while maintaining a desirable invariant distribution. CCAdL combines ideas of SGHMC and SGNHT from the literature, but achieves significant improvements over each of these methods in practice. The additional error introduced by covariance estimation is expected to be small in a relative sense, i.e., substantially smaller than the error arising from the noisy gradient. Our findings have been verified in large-scale machine learning applications. In particular, we have consistently observed that SGHMC relies on a small stepsize $h$ and a large friction $A$, which significantly reduces its usefulness in practice, as discussed. The techniques presented in this article could be of use in the more general setting of large-scale Bayesian sampling and optimization, which we leave for future work.
A naive nonsymmetric splitting method has been applied for CCAdL here for fair comparison. We point out, however, that the optimal design of splitting methods for ergodic SDE systems has been explored recently in the mathematics community [1, 13, 14]. Moreover, it has been shown in [15] that a certain type of symmetric splitting method for the Ad-Langevin/SGNHT method with a clean (full) gradient inherits the superconvergence property (i.e., fourth-order convergence to the invariant distribution for configurational quantities) recently demonstrated in the setting of Langevin dynamics [12, 14]. We leave further exploration of this direction in the context of noisy gradients for future work.
References
[1] A. Abdulle, G. Vilmart, and K. C. Zygalakis. Long time accuracy of Lie-Trotter splitting methods for Langevin dynamics. SIAM Journal on Numerical Analysis, 53(1):1-16, 2015.
[2] S. Ahn, A. Korattikara, and M. Welling. Bayesian posterior sampling via stochastic gradient Fisher scoring. In Proceedings of the 29th International Conference on Machine Learning, pages 1591-1598, 2012.
[3] S. Brooks, A. Gelman, G. Jones, and X.-L. Meng. Handbook of Markov Chain Monte Carlo. CRC Press, 2011.
[4] T. Chen, E. B. Fox, and C. Guestrin. Stochastic gradient Hamiltonian Monte Carlo. In Proceedings of the 31st International Conference on Machine Learning, pages 1683-1691, 2014.
[5] N. Ding, Y. Fang, R. Babbush, C. Chen, R. D. Skeel, and H. Neven. Bayesian sampling using stochastic gradient thermostats. In Advances in Neural Information Processing Systems 27, pages 3203-3211, 2014.
[6] S. Duane, A. D. Kennedy, B. J. Pendleton, and D. Roweth. Hybrid Monte Carlo. Physics Letters B, 195(2):216-222, 1987.
[7] D. Frenkel and B. Smit. Understanding Molecular Simulation: From Algorithms to Applications, Second Edition. Academic Press, 2001.
[8] W. G. Hoover. Computational Statistical Mechanics, Studies in Modern Thermodynamics. Elsevier Science, 1991.
[9] A. M. Horowitz. A generalized guided Monte Carlo algorithm. Physics Letters B, 268(2):247-252, 1991.
[10] A. Jones and B. Leimkuhler. Adaptive stochastic methods for sampling driven molecular systems. The Journal of Chemical Physics, 135:084125, 2011.
[11] H. Larochelle and Y. Bengio. Classification using discriminative restricted Boltzmann machines. In Proceedings of the 25th International Conference on Machine Learning, pages 536-543, 2008.
[12] B. Leimkuhler and C. Matthews. Rational construction of stochastic numerical methods for molecular sampling. Applied Mathematics Research eXpress, 2013(1):34-56, 2013.
[13] B. Leimkuhler and C. Matthews. Molecular Dynamics: With Deterministic and Stochastic Numerical Methods. Springer, 2015.
[14] B. Leimkuhler, C. Matthews, and G. Stoltz. The computation of averages from equilibrium and nonequilibrium Langevin molecular dynamics. IMA Journal of Numerical Analysis, 2015.
[15] B. Leimkuhler and X. Shang. Adaptive thermostats for noisy gradient systems. arXiv preprint arXiv:1505.06889, 2015.
[16] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller. Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6):1087, 1953.
[17] S. Nosé. A unified formulation of the constant temperature molecular dynamics methods. The Journal of Chemical Physics, 81:511, 1984.
[18] H. Robbins and S. Monro. A stochastic approximation method. Annals of Mathematical Statistics, 22(2):400-407, 1951.
[19] C. Robert and G. Casella. Monte Carlo Statistical Methods, Second Edition. Springer, 2004.
[20] S. J. Vollmer, K. C. Zygalakis, and Y. W. Teh. (Non-)asymptotic properties of stochastic gradient Langevin dynamics. arXiv preprint arXiv:1501.00438, 2015.
[21] M. Welling and Y. W. Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning, pages 681-688, 2011.
quality:1 indicated:2 effect:1 verify:1 true:14 former:1 assigned:1 chemical:3 symmetric:1 deal:1 during:3 self:1 maintained:1 covering:1 generalized:1 tt:1 demonstrate:1 temperature:11 instantaneous:1 novel:2 recently:3 common:1 physical:1 discussed:2 nonsymmetric:1 m1:10 significant:6 refer:1 gibbs:4 ai:2 mathematics:2 pointed:2 reliability:1 moving:1 longer:1 ahn:2 posterior:14 brownian:2 recent:1 frenkel:1 driven:1 termed:1 claimed:1 certain:1 binary:1 yi:1 scoring:1 seen:2 guestrin:1 additional:3 converge:1 period:4 semi:1 ii:1 full:5 desirable:3 reduces:1 ptt:1 faster:2 calculation:2 cross:2 long:3 academic:1 equally:1 molecular:9 ellipsis:1 controlled:9 impact:2 prediction:1 regression:3 essentially:2 arxiv:4 iteration:1 represent:3 proposal:1 preserved:1 addressed:1 w2:3 rest:1 configurational:1 sure:1 pass:17 contrary:1 incorporates:1 presence:1 bengio:1 identically:1 enough:1 fit:1 reduce:3 idea:3 multiclass:1 whether:1 dramatically:1 clear:1 listed:1 informally:1 detailed:1 zhanxing:2 reduced:1 http:1 outperform:1 canonical:3 arising:1 correctly:1 per:1 discrete:1 express:1 sgnht:58 neither:1 clean:2 verified:1 diffusion:2 timestep:4 run:3 letter:8 fourth:1 reasonable:2 reader:1 guaranteed:1 display:1 strength:1 incorporation:1 ri:1 generates:1 speed:4 friction:7 prescribed:1 extremely:1 relatively:1 speedup:1 combination:1 poor:4 smaller:3 unity:1 tw:1 metropolis:1 modification:1 invariant:8 restricted:2 taken:1 computationally:3 equation:4 mechanism:1 cjlin:1 know:1 end:2 adopted:2 sghmc:60 apply:2 gam:2 pdt:6 stepsize:8 alternative:1 robustness:2 original:2 denotes:1 top:4 subsampling:1 include:1 remaining:1 maintaining:7 especially:1 objective:1 already:2 added:1 quantity:1 costly:1 traditional:1 diagonal:4 gradient:40 dp:3 unable:1 nx:3 collected:1 assuming:4 difficult:2 hmc:8 robert:1 relate:1 xri:6 negative:2 slows:1 design:2 implementation:1 reliably:1 boltzmann:3 unknown:1 contributed:1 teh:3 allowing:1 rosenbluth:2 markov:3 datasets:6 benchmark:1 mate:1 thermal:2 langevin:18 defining:1 perturbation:1 community:1 introduced:3 required:4 acoustic:6 established:2 brook:1 address:1 able:1 beyond:1 below:1 dynamical:2 regime:1 summarize:1 including:1 analogue:1 suitable:2 rely:1 hybrid:2 force:2 zhu:2 scheme:3 improve:4 altered:1 thermodynamics:1 imply:1 naive:1 review:1 prior:3 literature:1 understanding:1 teller:2 relative:2 asymptotic:1 proportional:2 validation:1 foundation:1 degree:1 consistent:1 article:8 row:9 infeasible:3 bias:1 wide:2 taking:1 edinburgh:4 benefit:2 distributed:1 calculated:2 feedback:1 dimension:7 skeel:1 contour:2 author:2 collection:1 adaptive:11 employing:1 welling:3 excess:2 approximate:1 preferred:1 handbook:1 assumed:1 xi:6 discriminative:3 table:4 improving:1 investigated:1 whole:2 noise:23 edition:2 fair:1 referred:2 momentum:2 lie:1 governed:1 theorem:1 magenta:1 down:1 inset:1 showing:1 offset:2 explored:1 ments:1 normalizing:1 thermostat:12 essential:1 dwa:4 exists:1 adding:1 effectively:8 importance:1 mnist:2 smit:1 magnitude:1 babbush:1 chen:4 simply:3 highlighting:1 contained:1 rnd:1 duane:1 fokker:2 corresponds:1 springer:2 relies:3 kinetic:2 goal:1 identity:1 hm1:1 fisher:1 adverse:1 change:1 typical:3 reducing:1 wt:1 shang:4 called:1 total:2 experimental:1 latter:2 mcmc:2 ex:3 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.