{
"paper_id": "N13-1038",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:39:58.869299Z"
},
"title": "Minibatch and Parallelization for Online Large Margin Structured Learning",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of New York",
"location": {}
},
"email": "kzhao@gc.cuny.edu"
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of New York",
"location": {}
},
"email": "huang@cs.qc.cuny.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Online learning algorithms such as perceptron and MIRA have become popular for many NLP tasks thanks to their simpler architecture and faster convergence over batch learning methods. However, while batch learning such as CRF is easily parallelizable, online learning is much harder to parallelize: previous efforts often witness a decrease in the converged accuracy, and the speedup is typically very small (\u223c3) even with many (10+) processors. We instead present a much simpler architecture based on \"mini-batches\", which is trivially parallelizable. We show that, unlike previous methods, minibatch learning (in serial mode) actually improves the converged accuracy for both perceptron and MIRA learning, and when combined with simple parallelization, minibatch leads to very significant speedups (up to 9x on 12 processors) on stateof-the-art parsing and tagging systems.",
"pdf_parse": {
"paper_id": "N13-1038",
"_pdf_hash": "",
"abstract": [
{
"text": "Online learning algorithms such as perceptron and MIRA have become popular for many NLP tasks thanks to their simpler architecture and faster convergence over batch learning methods. However, while batch learning such as CRF is easily parallelizable, online learning is much harder to parallelize: previous efforts often witness a decrease in the converged accuracy, and the speedup is typically very small (\u223c3) even with many (10+) processors. We instead present a much simpler architecture based on \"mini-batches\", which is trivially parallelizable. We show that, unlike previous methods, minibatch learning (in serial mode) actually improves the converged accuracy for both perceptron and MIRA learning, and when combined with simple parallelization, minibatch leads to very significant speedups (up to 9x on 12 processors) on stateof-the-art parsing and tagging systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Online structured learning algorithms such as the structured perceptron (Collins, 2002) and k-best MIRA (McDonald et al., 2005) have become more and more popular for many NLP tasks such as dependency parsing and part-of-speech tagging. This is because, compared to their batch learning counterparts, online learning methods offer faster convergence rates and better scalability to large datasets, while using much less memory and a much simpler architecture which only needs 1-best or k-best decoding. However, online learning for NLP typically involves expensive inference on each example for 10 or more passes over millions of examples, which often makes training too slow in practice; for example systems such as the popular (2nd-order) MST parser (McDonald and Pereira, 2006) usually require the order of days to train on the Treebank on a commodity machine (McDonald et al., 2010) .",
"cite_spans": [
{
"start": 72,
"end": 87,
"text": "(Collins, 2002)",
"ref_id": "BIBREF3"
},
{
"start": 104,
"end": 127,
"text": "(McDonald et al., 2005)",
"ref_id": "BIBREF13"
},
{
"start": 751,
"end": 779,
"text": "(McDonald and Pereira, 2006)",
"ref_id": "BIBREF12"
},
{
"start": 862,
"end": 885,
"text": "(McDonald et al., 2010)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are mainly two ways to address this scalability problem. On one hand, researchers have been developing modified learning algorithms that allow inexact search (Collins and Roark, 2004; Huang et al., 2012) . However, the learner still needs to loop over the whole training data (on the order of millions of sentences) many times. For example the best-performing method in Huang et al. (2012) still requires 5-6 hours to train a very fast parser.",
"cite_spans": [
{
"start": 164,
"end": 189,
"text": "(Collins and Roark, 2004;",
"ref_id": "BIBREF2"
},
{
"start": 190,
"end": 209,
"text": "Huang et al., 2012)",
"ref_id": "BIBREF9"
},
{
"start": 376,
"end": 395,
"text": "Huang et al. (2012)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On the other hand, with the increasing popularity of multicore and cluster computers, there is a growing interest in speeding up training via parallelization. While batch learning such as CRF (Lafferty et al., 2001 ) is often trivially parallelizable (Chu et al., 2007) since each update is a batch-aggregate of the update from each (independent) example, online learning is much harder to parallelize due to the dependency between examples, i.e., the update on the first example should in principle influence the decoding of all remaining examples. Thus if we decode and update the first and the 1000th examples in parallel, we lose their interactions which is one of the reasons for online learners' fast convergence. This explains why previous work such as the iterative parameter mixing (IPM) method of McDonald et al. (2010) witnesses a decrease in the accuracies of parallelly-learned models, and the speedup is typically very small (about 3 in their experiments) even with 10+ processors.",
"cite_spans": [
{
"start": 192,
"end": 214,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF10"
},
{
"start": 251,
"end": 269,
"text": "(Chu et al., 2007)",
"ref_id": "BIBREF1"
},
{
"start": 807,
"end": 829,
"text": "McDonald et al. (2010)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We instead explore the idea of \"minibatch\" for online large-margin structured learning such as perceptron and MIRA. We argue that minibatch is advantageous in both serial and parallel settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "First, for minibatch perceptron in the serial set-ting, our intuition is that, although decoding is done independently within one minibatch, updates are done by averaging update vectors in batch, providing a \"mixing effect\" similar to \"averaged parameters\" of Collins (2002) which is also found in IPM (McDonald et al., 2010) , and online EM (Liang and Klein, 2009) . Secondly, minibatch MIRA in the serial setting has an advantage that, different from previous methods such as SGD which simply sum up the updates from all examples in a minibatch, a minibatch MIRA update tries to simultaneously satisfy an aggregated set of constraints that are collected from multiple examples in the minibatch. Thus each minibatch MIRA update involves an optimization over many more constraints than in pure online MIRA, which could potentially lead to a better margin. In other words we can view MIRA as an online version or stepwise approximation of SVM, and minibatch MIRA can be seen as a better approximation as well as a middleground between pure MIRA and SVM. 1 More interestingly, the minibatch architecture is trivially parallelizable since the examples within each minibatch could be decoded in parallel on multiple processors (while the update is still done in serial). This is known as \"synchronous minibatch\" and has been explored by many researchers (Gimpel et al., 2010; Finkel et al., 2008) , but all previous works focus on probabilistic models along with SGD or EM learning methods while our work is the first effort on large-margin methods.",
"cite_spans": [
{
"start": 260,
"end": 274,
"text": "Collins (2002)",
"ref_id": "BIBREF3"
},
{
"start": 302,
"end": 325,
"text": "(McDonald et al., 2010)",
"ref_id": "BIBREF14"
},
{
"start": 342,
"end": 365,
"text": "(Liang and Klein, 2009)",
"ref_id": "BIBREF11"
},
{
"start": 1053,
"end": 1054,
"text": "1",
"ref_id": null
},
{
"start": 1350,
"end": 1371,
"text": "(Gimpel et al., 2010;",
"ref_id": "BIBREF6"
},
{
"start": 1372,
"end": 1392,
"text": "Finkel et al., 2008)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We make the following contributions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Theoretically, we present a serial minibatch framework (Section 3) for online large-margin learning and prove the convergence theorems for minibatch perceptron and minibatch MIRA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Empirically, we show that serial minibatch could speed up convergence and improve the converged accuracy for both MIRA and perceptron on state-of-the-art dependency parsing and part-of-speech tagging systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 In addition, when combined with simple (synchronous) parallelization, minibatch MIRA Algorithm 1 Generic Online Learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Input: data D = {(x (t) , y (t) )} n t=1 and feature map \u03a6 Output: weight vector w 1: repeat 2:",
"cite_spans": [
{
"start": 20,
"end": 23,
"text": "(t)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "for each example (x, y) in D do 3: C \u2190 FINDCONSTRAINTS(x, y, w) decoding 4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "if C = \u2205 then UPDATE(w, C) 5: until converged leads to very significant speedups (up to 9x on 12 processors) that are much higher than that of IPM (McDonald et al., 2010) on state-of-the-art parsing and tagging systems.",
"cite_spans": [
{
"start": 143,
"end": 170,
"text": "IPM (McDonald et al., 2010)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first present a unified framework for online large-margin learning, where perceptron and MIRA are two special cases. Shown in Algorithm 1, the online learner considers each input example (x, y) sequentially and performs two steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Learning: Perceptron and MIRA",
"sec_num": "2"
},
{
"text": "1. find the set C of violating constraints, and 2. update the weight vector w according to C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Learning: Perceptron and MIRA",
"sec_num": "2"
},
{
"text": "Here a triple x, y, z is said to be a \"violating constraint\" with respect to model w if the incorrect label z scores higher than (or equal to) the correct label y in w, i.e., w \u2022 \u2206\u03a6( x, y, z ) \u2264 0, where \u2206\u03a6( x, y, z ) is a short-hand notation for the update vector \u03a6(x, y) \u2212 \u03a6(x, z) and \u03a6 is the feature map (see Huang et al. (2012) for details). The subroutines FINDCONSTRAINTS and UPDATE are analogous to \"APIs\", to be specified by specific instances of this online learning framework. For example, the structured perceptron algorithm of Collins (2002) is implemented in Algorithm 2 where FINDCON-STRAINTS returns a singleton constraint if the 1-best decoding result z (the highest scoring label according to the current model) is different from the true label y. Note that in the UPDATE function, C is always a singleton constraint for the perceptron, but we make it more general (as a set) to handle the batch update in the minibatch version in Section 3.",
"cite_spans": [
{
"start": 313,
"end": 332,
"text": "Huang et al. (2012)",
"ref_id": "BIBREF9"
},
{
"start": 540,
"end": 554,
"text": "Collins (2002)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Online Learning: Perceptron and MIRA",
"sec_num": "2"
},
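{
"text": "To make the FINDCONSTRAINTS / UPDATE interface concrete, the following is a minimal Python sketch of Algorithm 1 instantiated with the perceptron of Algorithm 2. It is only an illustration under stated assumptions, not our released implementation: the feature map phi(x, y) (returning a sparse vector as a dict) and the 1-best decoder decode(x, w, phi) over Y(x) are hypothetical callables assumed to be supplied by the task.\n\nfrom collections import defaultdict\n\ndef find_constraints_perceptron(x, y, w, phi, decode):\n    # Algorithm 2: a singleton violating constraint if the 1-best label differs from gold.\n    z = decode(x, w, phi)  # highest-scoring label under the current model w\n    return [(x, y, z)] if z != y else []\n\ndef delta_phi(phi, x, y, z):\n    # Update vector Phi(x, y) - Phi(x, z), kept sparse as a dict.\n    v = defaultdict(float)\n    for f, c in phi(x, y).items():\n        v[f] += c\n    for f, c in phi(x, z).items():\n        v[f] -= c\n    return v\n\ndef update_perceptron(w, constraints, phi):\n    # Batch perceptron update: the average (not the sum) of the update vectors.\n    for x, y, z in constraints:\n        for f, c in delta_phi(phi, x, y, z).items():\n            w[f] += c / len(constraints)\n\ndef train_online(data, phi, decode, epochs=10):\n    # Algorithm 1: pure online learning, i.e., minibatch size 1.\n    w = defaultdict(float)\n    for _ in range(epochs):\n        for x, y in data:\n            C = find_constraints_perceptron(x, y, w, phi, decode)\n            if C:\n                update_perceptron(w, C, phi)\n    return w\n\nWith a singleton C this reduces to the usual perceptron update; the averaging only matters for the minibatch version of Section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Learning: Perceptron and MIRA",
"sec_num": "2"
},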
{
"text": "On the other hand, Algorith 3 presents the k-best MIRA Algorithm of McDonald et al. (2005) which generalizes multiclass MIRA (Crammer and Singer, 2003) for structured prediction. The decoder now Algorithm 2 Perceptron (Collins, 2002) .",
"cite_spans": [
{
"start": 68,
"end": 90,
"text": "McDonald et al. (2005)",
"ref_id": "BIBREF13"
},
{
"start": 125,
"end": 151,
"text": "(Crammer and Singer, 2003)",
"ref_id": "BIBREF4"
},
{
"start": 218,
"end": 233,
"text": "(Collins, 2002)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Online Learning: Perceptron and MIRA",
"sec_num": "2"
},
{
"text": "1: function FINDCONSTRAINTS(x, y, w) 2: z \u2190 argmax s\u2208Y(x) w \u2022 \u03a6(x, s) decoding 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Learning: Perceptron and MIRA",
"sec_num": "2"
},
{
"text": "if z = y then return { x, y, z } 4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Learning: Perceptron and MIRA",
"sec_num": "2"
},
{
"text": "else return \u2205 5: procedure UPDATE(w, C) 6: w \u2190 w + 1 |C| c\u2208C \u2206\u03a6(c) (batch) update",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Learning: Perceptron and MIRA",
"sec_num": "2"
},
{
"text": "Algorithm 3 k-best MIRA (McDonald et al., 2005) .",
"cite_spans": [
{
"start": 24,
"end": 47,
"text": "(McDonald et al., 2005)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Online Learning: Perceptron and MIRA",
"sec_num": "2"
},
{
"text": "1: function FINDCONSTRAINTS(x, y, w) 2: Z \u2190 k-best z\u2208Y(x) w \u2022 \u03a6(x, z)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Learning: Perceptron and MIRA",
"sec_num": "2"
},
{
"text": "3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Learning: Perceptron and MIRA",
"sec_num": "2"
},
{
"text": "Z \u2190 {z \u2208 Z | z = y, w \u2022 \u2206\u03a6( x, y, z ) \u2264 0} 4: return {( x, y, z , (y, z)) | z \u2208 Z} 5: procedure UPDATE(w, C) 6: w \u2190 argmin w :\u2200(c, )\u2208C, w \u2022\u2206\u03a6(c)\u2265 w \u2212 w 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Learning: Perceptron and MIRA",
"sec_num": "2"
},
{
"text": "finds the k-best solutions Z first, and returns a set of violating constraints in Z, The update in MIRA is more interesting: it searches for the new model w with minimum change from the current model w so that w corrects each violating constraint by a margin at least as large as the loss (y, z) of the incorrect label z.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Learning: Perceptron and MIRA",
"sec_num": "2"
},
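{
"text": "The MIRA UPDATE above is a small quadratic program. As a rough sketch (under our assumptions, not the exact solver in our implementation), the Hildreth (1957) style cyclic coordinate ascent mentioned in Section 7 can be written in a few lines of numpy for dense vectors: it keeps one dual variable per constraint, where each constraint is represented as a pair (v_i, loss_i) with v_i = \u2206\u03a6( x, y, z_i ).\n\nimport numpy as np\n\ndef mira_update(w, constraints, n_iters=100, eps=1e-8):\n    # Solve  min ||w' - w||^2  s.t.  w'.v_i >= loss_i  for all (v_i, loss_i),\n    # by Hildreth-style cyclic coordinate ascent on the dual variables.\n    alphas = np.zeros(len(constraints))  # one dual variable per constraint\n    w_new = w.copy()\n    for _ in range(n_iters):\n        changed = False\n        for i, (v, loss) in enumerate(constraints):\n            sq = v.dot(v)\n            if sq < eps:\n                continue\n            g = loss - w_new.dot(v)  # how much constraint i is still violated\n            new_alpha = max(0.0, alphas[i] + g / sq)\n            if abs(new_alpha - alphas[i]) > eps:\n                w_new += (new_alpha - alphas[i]) * v\n                alphas[i] = new_alpha\n                changed = True\n        if not changed:\n            break\n    return w_new\n\nWith a single constraint this reduces to the familiar closed-form step w + max(0, (loss - w.v)/||v||^2) v; for a perceptron-style setting one can simply set every loss_i to 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Learning: Perceptron and MIRA",
"sec_num": "2"
},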
{
"text": "Although not mentioned in the pseudocode, we also employ \"averaged parameters\" (Collins, 2002) for both perceptron and MIRA in all experiments.",
"cite_spans": [
{
"start": 79,
"end": 94,
"text": "(Collins, 2002)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Online Learning: Perceptron and MIRA",
"sec_num": "2"
},
{
"text": "The idea of serial minibatch learning is extremely simple: divide the data into n/m minibatches of size m, and do batch updates after decoding each minibatch (see Algorithm 4). The FIND-CONSTRAINTS and UPDATE subroutines remain unchanged for both perceptron and MIRA, although it is important to note that a perceptron batch update uses the average of update vectors, not the sum, which simplifies the proof. This architecture is often called \"synchronous minibatch\" in the literature (Gimpel et al., 2010; Liang and Klein, 2009; Finkel et al., 2008) . It could be viewed as a middleground between pure online learning and batch learning.",
"cite_spans": [
{
"start": 485,
"end": 506,
"text": "(Gimpel et al., 2010;",
"ref_id": "BIBREF6"
},
{
"start": 507,
"end": 529,
"text": "Liang and Klein, 2009;",
"ref_id": "BIBREF11"
},
{
"start": 530,
"end": 550,
"text": "Finkel et al., 2008)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Serial Minibatch",
"sec_num": "3"
},
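{
"text": "Concretely, the serial minibatch loop of Algorithm 4 changes only a few lines relative to the pure online loop: decode every example in the minibatch first, pool the resulting constraints, and make one batch update. A minimal sketch, reusing (suitably wrapped) hypothetical find_constraints and update callables as in Section 2:\n\ndef train_serial_minibatch(data, find_constraints, update, w, m=24, epochs=10):\n    # Algorithm 4: split the data into n/m minibatches and perform one\n    # batch update per minibatch on the union of its violating constraints.\n    minibatches = [data[i:i + m] for i in range(0, len(data), m)]\n    for _ in range(epochs):\n        for batch in minibatches:\n            C = []\n            for x, y in batch:  # decoding, still serial here\n                C.extend(find_constraints(x, y, w))\n            if C:  # batch update (average for perceptron, QP for MIRA)\n                update(w, C)\n    return w",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Serial Minibatch",
"sec_num": "3"
},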
{
"text": "We denote C(D) to be the set of all possible violating constraints in data D (cf. Huang et al. 2012):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "C(D) = { x, y, z | (x, y) \u2208 D, z \u2208 Y(x) \u2212 {y}}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "Algorithm 4 Serial Minibatch Online Learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "Input: data D, feature map \u03a6, and minibatch size m Output: weight vector w",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "1: Split D into n/m minibatches D 1 . . . D n/m 2: repeat 3: for i \u2190 1 . . . n/m do for each minibatch 4: C \u2190 \u222a (x,y)\u2208Di FINDCONSTRAINTS(x, y, w) 5:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "if C = \u2205 then UPDATE(w, C) batch update 6: until converged A training set D is separable by feature map \u03a6 with margin \u03b4 > 0 if there exists a unit oracle vector u with u = 1 such that u",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "\u2022 \u2206\u03a6( x, y, z ) \u2265 \u03b4, for all x, y, z \u2208 C(D). Furthermore, let radius R \u2265 \u2206\u03a6( x, y, z ) for all x, y, z \u2208 C(D).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "Theorem 1. For a separable dataset D with margin \u03b4 and radius R, the minibatch perceptron algorithm (Algorithms 4 and 2) will terminate after t minibatch updates where t \u2264 R 2 /\u03b4 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "Proof. Let w t be the weight vector before the t th update; w 0 = 0. Suppose the t th update happens on the constraint set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "C t = {c 1 , c 2 , . . . , c a } where a = |C t |, and each c i = x i , y i , z i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "We convert them to the set of update vectors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "v i = \u2206\u03a6(c i ) = \u2206\u03a6( x i , y i , z i ) for all i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "We know that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "1. u \u2022 v i \u2265 \u03b4 (margin on unit oracle vector) 2. w t \u2022 v i \u2264 0 (violation: z i dominates y i ) 3. v i 2 \u2264 R 2 (radius)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "Now the update looks like",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "w t+1 = w t + 1 |C t | c\u2208Ct \u2206\u03a6(c) = w t + 1 a i v i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "(1) We will bound w t+1 from two directions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "1. Dot product both sides of the update equation (1) with the unit oracle vector u, we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "u \u2022 w t+1 = u \u2022 w t + 1 a i u \u2022 v i \u2265 u \u2022 w t + 1 a i \u03b4 (margin) = u \u2022 w t + \u03b4 ( i = a) \u2265 t\u03b4 (by induction)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "Since for any two vectors a and b we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "a b \u2265 a\u2022b, thus u w t+1 \u2265 u\u2022w t+1 \u2265 t\u03b4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "As u is a unit vector, we have w t+1 \u2265 t\u03b4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "2. On the other hand, take the norm of both sides of Eq. 1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "w t+1 2 = w t + 1 a i v i 2 = w t 2 + i 1 a v i 2 + 2 a w t \u2022 i v i \u2264 w t 2 + i 1 a v i 2 + 0 (violation) \u2264 w t 2 + i 1 a v i 2 (Jensen's) \u2264 w t 2 + i 1 a R 2 (radius) = w t 2 + R 2 ( i = a) \u2264tR 2 (by induction)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "Combining the two bounds, we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "t 2 \u03b4 2 \u2264 w t+1 2 \u2264 tR 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "thus the number of minibatch updates t \u2264 R 2 /\u03b4 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "Note that this bound is identical to that of pure online perceptron (Collins, 2002 ",
"cite_spans": [
{
"start": 68,
"end": 82,
"text": "(Collins, 2002",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch Perceptron",
"sec_num": "3.1"
},
{
"text": "We also give a proof of convergence for MIRA with relaxation. 2 We present the optimization problem in the UPDATE function of Algorithm 3 as a quadratic program (QP) with slack variable \u03be:",
"cite_spans": [
{
"start": 62,
"end": 63,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch MIRA",
"sec_num": "3.2"
},
{
"text": "w t+1 \u2190 argmin w t+1 w t+1 \u2212 w t 2 + \u03be s.t. w t+1 \u2022 v i \u2265 i \u2212 \u03be, for all(c i , i ) \u2208 C t where v i = \u2206\u03a6(c i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch MIRA",
"sec_num": "3.2"
},
{
"text": "is the update vector for constraint c i . Consider the Lagrangian:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch MIRA",
"sec_num": "3.2"
},
{
"text": "L = w t+1 \u2212 w t 2 + \u03be + |Ct| i=1 \u03b7 i ( i \u2212 w \u2022 v i \u2212 \u03be) \u03b7 i \u2265 0, for 1 \u2264 i \u2264 |C t |.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch MIRA",
"sec_num": "3.2"
},
{
"text": "2 Actually this relaxation is not necessary for the convergence proof. We employ it here solely to make the proof shorter. It is not used in the experiments either.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch MIRA",
"sec_num": "3.2"
},
{
"text": "Set the partial derivatives to 0 with respect to w and \u03be we have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch MIRA",
"sec_num": "3.2"
},
{
"text": "w = w + i \u03b7 i v i (2) i \u03b7 i = 1 (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch MIRA",
"sec_num": "3.2"
},
{
"text": "This result suggests that the weight change can always be represnted by a linear combination of the update vectors (i.e. normal vectors of the constraint hyperplanes), with the linear coefficencies sum to 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch MIRA",
"sec_num": "3.2"
},
{
"text": "Theorem 2 (convergence of minibatch MIRA). For a separable dataset D with margin \u03b4 and radius R, the minibatch MIRA algorithm (Algorithm 4 and 3) will make t updates where t \u2264 R 2 /\u03b4 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch MIRA",
"sec_num": "3.2"
},
{
"text": "Proof. 1. Dot product both sides of Equation 2 with unit oracle vector u:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch MIRA",
"sec_num": "3.2"
},
{
"text": "u \u2022 w t+1 = u \u2022 w t + i \u03b7 i u \u2022 v i \u2265u \u2022 w t + i \u03b7 i \u03b4 (margin) =u \u2022 w t + \u03b4 (Eq. 3) =t\u03b4 (by induction)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch MIRA",
"sec_num": "3.2"
},
{
"text": "2. On the other hand",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch MIRA",
"sec_num": "3.2"
},
{
"text": "w t+1 2 = w t + i \u03b7 i v i 2 = w t 2 + i \u03b7 i v i 2 + 2 w t \u2022 i \u03b7 i v i \u2264 w t 2 + i \u03b7 i v i 2 + 0 (violation) \u2264 w t 2 + i \u03b7 i v 2 i (Jensen's) \u2264 w t 2 + i \u03b7 i R 2 (radius) = w t 2 + R 2 (Eq. 3) \u2264tR 2 (by induction)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch MIRA",
"sec_num": "3.2"
},
{
"text": "From the two bounds we have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch MIRA",
"sec_num": "3.2"
},
{
"text": "t 2 \u03b4 2 \u2264 w t+1 2 \u2264 tR 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch MIRA",
"sec_num": "3.2"
},
{
"text": "thus within at most t \u2264 R 2 /\u03b4 2 minibatch updates MIRA will converge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence of Minibatch MIRA",
"sec_num": "3.2"
},
{
"text": "The key insight into parallelization is that the calculation of constraints (i.e. decoding) for each example within a minibatch is completely independent of Algorithm 5 Parallized Minibatch Online Learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parallelized Minibatch",
"sec_num": "4"
},
{
"text": "Input: D, \u03a6, minibatch size m, and # of processors p Output:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parallelized Minibatch",
"sec_num": "4"
},
{
"text": "weight vector w Split D into n/m minibatches D 1 . . . D n/m Split each D i into m/p groups D i,1 . . . D i,m/p repeat for i \u2190 1 . . . n/m do for each minibatch for j \u2190 1 . . . m/p in parallel do C j \u2190 \u222a (x,y)\u2208Di,j FINDCONSTRAINTS(x, y, w) C \u2190 \u222a j C j in serial if C = \u2205 then UPDATE(w, C)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parallelized Minibatch",
"sec_num": "4"
},
{
"text": "in serial until converged other examples in the same batch. Thus we can easily distribute decoding for different examples in the same minibatch to different processors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parallelized Minibatch",
"sec_num": "4"
},
{
"text": "Shown in Algorithm 5, for each minibatch D i , we split D i into groups of equal size, and assign each group to a processor to decode. After all processors finish, we collect all constraints and do an update based on the union of all constraints. Figure 1 (a) ).",
"cite_spans": [],
"ref_spans": [
{
"start": 247,
"end": 259,
"text": "Figure 1 (a)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Parallelized Minibatch",
"sec_num": "4"
},
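{
"text": "A rough sketch of this synchronous step with Python's multiprocessing (the module we use in Section 5, although written here in modern Python 3 syntax and with hypothetical helper names; the real implementation differs, e.g., in how the weight vector is shared with the workers):\n\nfrom functools import partial\nfrom multiprocessing import Pool\n\ndef decode_group(group, w, find_constraints):\n    # Decode one group of examples with the current (fixed) weights w\n    # and return the violating constraints it produces.\n    C = []\n    for x, y in group:\n        C.extend(find_constraints(x, y, w))\n    return C\n\ndef train_parallel_minibatch(data, find_constraints, update, w,\n                             m=24, p=4, epochs=10):\n    # Algorithm 5: within each minibatch the groups are decoded in\n    # parallel; constraint merging and the update remain serial.\n    minibatches = [data[i:i + m] for i in range(0, len(data), m)]\n    with Pool(processes=p) as pool:\n        for _ in range(epochs):\n            for batch in minibatches:\n                groups = [batch[j::p] for j in range(p)]  # naive split into p groups\n                work = partial(decode_group, w=w, find_constraints=find_constraints)\n                C = [c for Cs in pool.map(work, groups) for c in Cs]\n                if C:\n                    update(w, C)  # serial batch update\n    return w\n\nEach call to pool.map ships a snapshot of the current w to the workers; in this sketch that is the main communication cost.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parallelized Minibatch",
"sec_num": "4"
},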
{
"text": "This synchronous parallelization framework should provide significant speedups over the serial mode. However, in each minibatch, inevitably, some processors will end up waiting for others to finish, especially when the lengths of sentences vary substantially (see the shaded area in Figure 1 (b) ).",
"cite_spans": [],
"ref_spans": [
{
"start": 283,
"end": 295,
"text": "Figure 1 (b)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Parallelized Minibatch",
"sec_num": "4"
},
{
"text": "To alleviate this problem, we propose \"perminibatch load-balancing\", which rearranges the sentences within each minibatch based on their lengths (which correlate with their decoding times) so that the total workload on each processor is balanced ( Figure 1c ). It is important to note that this shuffling does not affect learning at all thanks to the independence of each example within a minibatch. Basically, we put the shortest and longest sentences into the first thread, the second shortest and second longest into the second thread, etc. Although this is not necessary optimal scheduling, it works well in practice. As long as decoding time is linear in the length of sentence (as in incremental parsing or tagging), we expect a much smaller variance in processing time on each processor in one minibatch, which is confirmed in the experiments (see Figure 8 ). 3",
"cite_spans": [],
"ref_spans": [
{
"start": 248,
"end": 257,
"text": "Figure 1c",
"ref_id": "FIGREF1"
},
{
"start": 855,
"end": 863,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parallelized Minibatch",
"sec_num": "4"
},
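{
"text": "The per-minibatch load-balancing itself is just a deterministic re-bucketing by sentence length. A toy sketch of the pairing described above (shortest with longest, second shortest with second longest, and so on); the function name and the length accessor are ours for illustration:\n\ndef balance_groups(batch, p, length=lambda ex: len(ex[0])):\n    # Sort the minibatch by sentence length, then deal examples to the p\n    # groups from both ends (shortest together with longest) so that the\n    # total decoding time per group is roughly equal when decoding time\n    # is linear in sentence length.\n    order = sorted(batch, key=length)\n    groups = [[] for _ in range(p)]\n    left, right, g = 0, len(order) - 1, 0\n    while left <= right:\n        groups[g].append(order[left])\n        if left != right:\n            groups[g].append(order[right])\n        left, right, g = left + 1, right - 1, (g + 1) % p\n    return groups\n\nReplacing the naive batch[j::p] split in the previous sketch with balance_groups(batch, p) gives the schedule of Figure 1(c).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parallelized Minibatch",
"sec_num": "4"
},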
{
"text": "We conduct experiments on two typical structured prediction problems: incremental dependency parsing and part-of-speech tagging; both are done on state-of-the-art baseline. We also compare our parallelized minibatch algorithm with the iterative parameter mixing (IPM) method of McDonald et al. (2010) . We perform our experiments on a commodity 64-bit Dell Precision T7600 workstation with two 3.1GHz 8-core CPUs (16 processors in total) and 64GB RAM. We use Python 2.7's multiprocessing module in all experiments. 4",
"cite_spans": [
{
"start": 278,
"end": 300,
"text": "McDonald et al. (2010)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We base our experiments on our dynamic programming incremental dependency parser (Huang and Sagae, 2010). 5 Following Huang et al. 2012, we use max-violation update and beam size b = 8. We evaluate on the standard Penn Treebank (PTB) using the standard split: Sections 02-21 for training, and Section 22 as the held-out set (which is indeed the test-set in this setting, following McDonald et al. 2010and Gimpel et al. (2010) ). We then extend it to employ 1-best MIRA learning. As stated in Section 2, MIRA separates the gold label y from the incorrect label z with a margin at least as large as the loss (y, z). Here in incremental dependency parsing we define the loss function between a gold tree y and an incorrect partial tree z as the number of incorrect edges in z, plus the number of correct edges in y which are already ruled out by z. This MIRA extension results in slightly higher accuracy of 92.36, which we will use as the pure online learning baseline in the comparisons below.",
"cite_spans": [
{
"start": 405,
"end": 425,
"text": "Gimpel et al. (2010)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Parsing with MIRA",
"sec_num": "5.1"
},
{
"text": "We first run minibatch in the serial mode with varying minibatch size of 4, 16, 24, 32, and 48 (see Figure 2 ). We can make the following observations. First, except for the largest minibatch size of 48, minibatch learning generally improves the accuracy of the converged model, which is explained by our intuition that optimization with a larger constraint set could improve the margin. In particular, m = 16 achieves the highest accuracy of 92.53, which is a 0.27 improvement over the baseline.",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 108,
"text": "Figure 2",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Serial Minibatch",
"sec_num": "5.1.1"
},
{
"text": "Secondly, minibatch learning can reach high levels of accuracy faster than the baseline can. For example, minibatch of size 4 can reach 92.35 in 3.5 hours, and minibatch of size 24 in 3.7 hours, while the pure online baseline needs 6.9 hours. In other words, just minibatch alone in serial mode can already speed up learning. This is also explained by the intuition of better optimization above, and contributes significantly to the final speedup of parallelized minibatch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Serial Minibatch",
"sec_num": "5.1.1"
},
{
"text": "Lastly, larger minibatch sizes slow down the convergence, with m = 4 converging the fastest and m = 48 the slowest. This can be explained by the trade-off between the relative strengths from online learning and batch update: with larger batch sizes, we lose the dependencies between examples within the same minibatch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Serial Minibatch",
"sec_num": "5.1.1"
},
{
"text": "Although larger minibatches slow down convergence, they actually offer better potential for parallelization since the number of processors p has to be smaller than minibatch size m (in fact, p should divide m). For example, m = 24 can work with 2, 3, 4, 6, 8, or 12 processors while m = 4 can only work with 2 or 4 and the speed up of 12 processors could easily make up for the slightly slower convergence Figure 3 : Parallelized minibatch is much faster than iterative parameter mixing. Top: minibatch of size 24 using 4 and 12 processors offers significant speedups over the serial minibatch and pure online baselines. Bottom: IPM with the same processors offers very small speedups.",
"cite_spans": [],
"ref_spans": [
{
"start": 406,
"end": 414,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Serial Minibatch",
"sec_num": "5.1.1"
},
{
"text": "rate. So there seems to be a \"sweetspot\" of minibatch sizes, similar to the tipping point observed in McDonald et al. (2010) when adding more processors starts to hurt convergence.",
"cite_spans": [
{
"start": 102,
"end": 124,
"text": "McDonald et al. (2010)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Serial Minibatch",
"sec_num": "5.1.1"
},
{
"text": "In the following experiments we use minibatch size of m = 24 and run it in parallel mode on various numbers of processors (p = 2 \u223c 12). Figure 3 (top) shows that 4 and 12 processors lead to very significant speedups over the serial minibatch and pure online baselines. For example, it takes the 12 processors only 0.66 hours to reach an accuracy of 92.35, which takes the pure online MIRA 6.9 hours, amounting to an impressive speedup of 10.5.",
"cite_spans": [],
"ref_spans": [
{
"start": 136,
"end": 144,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parallelized Minibatch vs. IPM",
"sec_num": "5.1.2"
},
{
"text": "We compare our minibatch parallelization with the iterative parameter mixing (IPM) of McDonald et al. (2010) . Figure 3 (bottom) shows that IPM not only offers much smaller speedups, but also converges lower, and this drop in accuracy worsens with more processors. Figure 4 gives a detailed analysis of speedups. Here we perform both extrinsic and intrinsic comparisons. In the former, we care about the time to reach a given accuracy; in this plot we use 92.27 which is the converged accuracy of IPM on 12 processors. We choose it since it is the lowest accu- racy among all converged models; choosing a higher accuracy would reveal even larger speedups for our methods. This figure shows that our method offers superlinear speedups with small number of processors (1 to 6), and almost linear speedups with large number of processors (8 and 12). Note that even p = 1 offers a speedup of 1.5 thanks to serial minibatch's faster convergence; in other words, within the 9 fold speed-up at p = 12, parallelization contributes about 6 and minibatch about 1.5. By contrast, IPM only offers an almost constant speedup of around 3, which is consistent with the findings of McDonald et al. (2010) (both of their experiments show a speedup of around 3).",
"cite_spans": [
{
"start": 86,
"end": 108,
"text": "McDonald et al. (2010)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 111,
"end": 128,
"text": "Figure 3 (bottom)",
"ref_id": null
},
{
"start": 265,
"end": 273,
"text": "Figure 4",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Parallelized Minibatch vs. IPM",
"sec_num": "5.1.2"
},
{
"text": "We also try to understand where the speedup comes from. For that purpose we study intrinsic speedup, which is about the speed regardless of accuracy (see Figure 4 ). For our minibatch method, intrinsic speedup is the average time per iteration of a parallel run over the serial minibatch baseline. This answers the questions such as \"how CPUefficient is our parallelization\" or \"how much CPU time is wasted\". We can see that with small number of processors (2 to 4), the efficiency, defined as S p /p where S p is the intrinsic speedup for p processors, is almost 100% (ideal linear speedup), but with more processors it decreases to around 50% with p = 12, meaning about half of CPU time is wasted. This wasting is due to two sources: first, the load-balancing problem worsens with more processors, and secondly, the update procedure still runs in serial mode with p \u2212 1 processors sleeping.",
"cite_spans": [],
"ref_spans": [
{
"start": 154,
"end": 162,
"text": "Figure 4",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Parallelized Minibatch vs. IPM",
"sec_num": "5.1.2"
},
{
"text": "Part-of-speech tagging is usually considered as a simpler task compared to dependency parsing. Here we show that using minibatch can also bring better accuracies and speedups for part-of-speech tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-Speech Tagging with Perceptron",
"sec_num": "5.2"
},
{
"text": "We implement a part-of-speech tagger with averaged perceptron. Following the standard splitting of Penn Treebank (Collins, 2002) , we use Sections 00-18 for training and Sections 19-21 as held-out. Our implementation provides an accuracy of 96.98 with beam size 8.",
"cite_spans": [
{
"start": 113,
"end": 128,
"text": "(Collins, 2002)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-Speech Tagging with Perceptron",
"sec_num": "5.2"
},
{
"text": "First we run the tagger on a single processor with minibatch sizes 8, 16, 24, and 32. As in Figure 5 , we observe similar convergence acceleration and higher accuracies with minibatch. In particular, minibatch of size m = 16 provides the highest accuracy of 97.04, giving an improvement of 0.06. This improvement is smaller than what we observe in MIRA learning for dependency parsing experiments, which can be partly explained by the fast convergence of the tagger, and that perceptron does not involve optimization in the updates.",
"cite_spans": [],
"ref_spans": [
{
"start": 92,
"end": 100,
"text": "Figure 5",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Part-of-Speech Tagging with Perceptron",
"sec_num": "5.2"
},
{
"text": "Then we choose minibatch of size 24 to investigate the parallelization performance. As Figure 6 (top) shows, with 12 processors our method takes only 0.10 hours to converge to an accuracy of 97.00, compared to the baseline of 96.98 with 0.45 hours. We also compare our method with IPM as in Figure 6 : Parallelized minibatch is faster than iterative parameter mixing (on tagging with perceptron). Top: minibatch of size 24 using 4 and 12 processors offers significant speedups over the baselines. Bottom: IPM with the same 4 and 12 processors offers slightly smaller speedups. Note that IPM with 4 processors converges lower than other parallelization curves. ure 6 (bottom). Again, our method converges faster and better than IPM, but this time the differences are much smaller than those in parsing. Figure 7 uses 96.97 as a criteria to evaluate the extrinsic speedups given by our method and IPM. Again we choose this number because it is the lowest accuracy all learners can reach. As the figure suggests, although our method does not have a higher pure parallelization speedup (intrinsic speedup), it still outperforms IPM.",
"cite_spans": [],
"ref_spans": [
{
"start": 87,
"end": 95,
"text": "Figure 6",
"ref_id": null
},
{
"start": 291,
"end": 299,
"text": "Figure 6",
"ref_id": null
},
{
"start": 802,
"end": 810,
"text": "Figure 7",
"ref_id": "FIGREF9"
}
],
"eq_spans": [],
"section": "Part-of-Speech Tagging with Perceptron",
"sec_num": "5.2"
},
{
"text": "We are interested in the reason why tagging benefits less from minibatch and parallelization compared to parsing. Further investigation reveals that in tagging the working load of different processors are more unbalanced than in parsing. Figure 8 shows that, when p is small, waiting time is negligible, but when p = 12, tagging wastes about 40% of CPU cycles and parser about 30%. By contrast, there is almost no waiting time in IPM and the intrinsic speedup for IPM is almost linear. The communication overhead is not included in this figure, but by comparing it to the speedups (Figures 4 and 7) , we conclude that the communication overhead is about 10% for both parsing and tagging at p = 12. Figure 8 : Percentage of time wasted due to synchronization (waiting for other processors to finish) (minibatch m = 24), which corresponds to the gray blocks in Figure 1 (b-c) . The number of sentences assigned to each processor decreases with more processors, which worsens the unbalance. Our load-balancing strategy (Figure 1 (c)) alleviates this problem effectively. The communication overhead and update time are not included.",
"cite_spans": [],
"ref_spans": [
{
"start": 238,
"end": 246,
"text": "Figure 8",
"ref_id": null
},
{
"start": 581,
"end": 598,
"text": "(Figures 4 and 7)",
"ref_id": "FIGREF5"
},
{
"start": 698,
"end": 706,
"text": "Figure 8",
"ref_id": null
},
{
"start": 859,
"end": 873,
"text": "Figure 1 (b-c)",
"ref_id": "FIGREF1"
},
{
"start": 1016,
"end": 1025,
"text": "(Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Part-of-Speech Tagging with Perceptron",
"sec_num": "5.2"
},
{
"text": "Besides synchronous minibatch and iterative parameter mixing (IPM) discussed above, there is another method of asychronous minibatch parallelization (Zinkevich et al., 2009; Gimpel et al., 2010; Chiang, 2012) , as in Figure 1 . The key advantage of asynchronous over synchronous minibatch is that the former allows processors to remain near-constant use, while the latter wastes a significant amount of time when some processors finish earlier than others in a minibatch, as found in our experiments. Gimpel et al. (2010) show significant speedups of asychronous parallelization over synchronous minibatch on SGD and EM methods, and Chiang (2012) finds asynchronous parallelization to be much faster than IPM on MIRA for machine translation. However, asynchronous is significantly more complicated to implement, which involves locking when one processor makes an update (see Fig. 1 (d) ), and (in languages like Python) message-passing to other processors after update. Whether this added complexity is worthwhile on large-margin learning is an open question.",
"cite_spans": [
{
"start": 149,
"end": 173,
"text": "(Zinkevich et al., 2009;",
"ref_id": "BIBREF16"
},
{
"start": 174,
"end": 194,
"text": "Gimpel et al., 2010;",
"ref_id": "BIBREF6"
},
{
"start": 195,
"end": 208,
"text": "Chiang, 2012)",
"ref_id": "BIBREF0"
},
{
"start": 501,
"end": 521,
"text": "Gimpel et al. (2010)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 217,
"end": 225,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 875,
"end": 885,
"text": "Fig. 1 (d)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Related Work and Discussions",
"sec_num": "6"
},
{
"text": "We have presented a simple minibatch parallelization paradigm to speed up large-margin structured learning algorithms such as (averaged) perceptron and MIRA. Minibatch has an advantage in both serial and parallel settings, and our experiments confirmed that a minibatch size of around 16 or 24 leads to a significant speedups over the pure online baseline, and when combined with parallelization, leads to almost linear speedups for MIRA, and very significant speedups for perceptron. These speedups are significantly higher than those of iterative parameter mixing of McDonald et al. (2010) which were almost constant (3\u223c4) in both our and their own experiments regardless of the number of processors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "One of the limitations of this work is that although decoding is done in parallel, update is still done in serial and in MIRA the quadratic optimization step (Hildreth algorithm (Hildreth, 1957) ) scales superlinearly with the number of constraints. This prevents us from using very large minibatches. For future work, we would like to explore parallelized quadratic optimization and larger minibatch sizes, and eventually apply it to machine translation.",
"cite_spans": [
{
"start": 158,
"end": 194,
"text": "(Hildreth algorithm (Hildreth, 1957)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "This is similar to Pegasos(Shalev-Shwartz et al., 2007) that applies subgradient descent over a minibatch. Pegasos becomes pure online when the minibatch size is 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In IPM, however, the waiting time is negligible, since the workload on each processor is almost balanced, analogous to a huge minibatch(Fig. 1a). Furthermore, shuffling does affect learning here since each thread in IPM is a pure online learner. So our IPM implementation does not use load-balancing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We turn off garbage-collection in worker processes otherwise their running times will be highly unbalanced. We also admit that Python is not the best choice for parallelization, e.g., asychronous minibatch(Gimpel et al., 2010) requires \"shared memory\" not found in the current Python (see also Sec. 6).5 Available at http://acl.cs.qc.edu/. The version with minibatch parallelization will be available there soon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Ryan McDonald, Yoav Goldberg, and Hal Daum\u00e9, III for helpful discussions, and the anonymous reviewers for suggestions. This work was partially supported by DARPA FA8750-13-2-0041 \"Deep Exploration and Filtering of Text\" (DEFT) Program and by Queens College for equipment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Hope and fear for discriminative training of statistical translation models",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2012,
"venue": "J. Machine Learning Research",
"volume": "13",
"issue": "",
"pages": "1159--1187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2012. Hope and fear for discriminative training of statistical translation models. J. Machine Learning Research (JMLR), 13:1159-1187.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Map-reduce for machine learning on multicore",
"authors": [
{
"first": "C.-T",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "S.-K",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Y.-A",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Y.-Y",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Bradski",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Olukotun",
"suffix": ""
}
],
"year": 2007,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C.-T. Chu, S.-K. Kim, Y.-A. Lin, Y.-Y. Yu, G. Bradski, A. Ng, and K. Olukotun. 2007. Map-reduce for ma- chine learning on multicore. In Advances in Neural Information Processing Systems 19.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Incremental parsing with the perceptron algorithm",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of ACL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden markov models: Theory and experi- ments with perceptron algorithms. In Proceedings of EMNLP.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Ultraconservative online algorithms for multiclass problems",
"authors": [
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "J. Mach. Learn. Res",
"volume": "3",
"issue": "",
"pages": "951--991",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koby Crammer and Yoram Singer. 2003. Ultraconser- vative online algorithms for multiclass problems. J. Mach. Learn. Res., 3:951-991, March.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Efficient, feature-based, conditional random field parsing",
"authors": [
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Kleeman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny Rose Finkel, Alex Kleeman, and Christopher D. Manning. 2008. Efficient, feature-based, conditional random field parsing. In Proceedings of ACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Distributed asynchronous online learning for natural language processing",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Gimpel, Dipanjan Das, and Noah Smith. 2010. Distributed asynchronous online learning for natural language processing. In Proceedings of CoNLL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A quadratic programming procedure",
"authors": [
{
"first": "Clifford",
"middle": [],
"last": "Hildreth",
"suffix": ""
}
],
"year": 1957,
"venue": "Naval Research Logistics Quarterly",
"volume": "4",
"issue": "1",
"pages": "79--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clifford Hildreth. 1957. A quadratic programming pro- cedure. Naval Research Logistics Quarterly, 4(1):79- 85.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Dynamic programming for linear-time incremental parsing",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang and Kenji Sagae. 2010. Dynamic program- ming for linear-time incremental parsing. In Proceed- ings of ACL 2010.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Structured perceptron with inexact search",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Suphan",
"middle": [],
"last": "Fayong",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang, Suphan Fayong, and Yang Guo. 2012. Structured perceptron with inexact search. In Proceed- ings of NAACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic mod- els for segmenting and labeling sequence data. In Pro- ceedings of ICML.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Online em for unsupervised models",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang and Dan Klein. 2009. Online em for unsu- pervised models. In Proceedings of NAACL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Online learning of approximate dependency parsing algorithms",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald and Fernando Pereira. 2006. On- line learning of approximate dependency parsing al- gorithms. In Proceedings of EACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Online large-margin training of dependency parsers",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of the 43rd ACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Distributed training strategies for the structured perceptron",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Gideon",
"middle": [],
"last": "Mann",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Keith Hall, and Gideon Mann. 2010. Distributed training strategies for the structured per- ceptron. In Proceedings of NAACL, June.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Pegasos: Primal estimated sub-gradient solver for svm",
"authors": [
{
"first": "Shai",
"middle": [],
"last": "Shalev-Shwartz",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Srebro",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shai Shalev-Shwartz, Yoram Singer, and Nathan Srebro. 2007. Pegasos: Primal estimated sub-gradient solver for svm. In Proceedings of ICML.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Slow learners are fast",
"authors": [
{
"first": "M",
"middle": [],
"last": "Zinkevich",
"suffix": ""
},
{
"first": "A",
"middle": [
"J"
],
"last": "Smola",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Langford",
"suffix": ""
}
],
"year": 2009,
"venue": "Advances in Neural Information Processing Systems 22",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Zinkevich, A. J. Smola, and J. Langford. 2009. Slow learners are fast. In Advances in Neural Information Processing Systems 22.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": ", Theorem 1) and is irrelevant to minibatch size m. The use of Jensen's inequality is inspired by McDonald et al. (2010).",
"num": null
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "Comparison of various methods for parallelizing online learning (number of processors p = 4). (a) iterative parameter mixing(McDonald et al., 2010). (b) unbalanced minibatch parallelization (minibatch size m = 8). (c) minibatch parallelization after load-balancing (within each minibatch). (d) asynchronous minibatch parallelization(Gimpel et al., 2010) (not implemented here). Each numbered box denotes the decoding of one example, and \u2295 denotes an aggregate operation, i.e., the merging of constraints after each minibatch or the mixing of weights after each iteration in IPM. Each gray shaded box denotes time wasted due to synchronization in (a)-(c) or blocking in (d). Note that in (d) at most one update can happen concurrently, making it substantially harder to implement than (a)-(c).",
"num": null
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"text": "Figure 1(b) illustrates minibatch parallelization, with comparison to iterative parameter mixing (IPM) of McDonald et al. (2010) (see",
"num": null
},
"FIGREF4": {
"uris": null,
"type_str": "figure",
"text": "Minibatch with various minibatch sizes (m = 4, 16, 24, 32, 48) for parsing with MIRA, compared to pure MIRA (m = 1). All curves are on a single CPU.",
"num": null
},
"FIGREF5": {
"uris": null,
"type_str": "figure",
"text": "Speedups of minibatch parallelization vs. IPM on 1 to 12 processors (parsing with MIRA). Extrinsic comparisons use \"the time to reach an accuracy of 92.27\" for speed calculations, 92.27 being the converged accuracy of IPM using 12 processors. Intrinsic comparisons use average time per iteration regardless of accuracy.",
"num": null
},
"FIGREF7": {
"uris": null,
"type_str": "figure",
"text": "Minibatch learning for tagging with perceptron (m = 16, 24, 32) compared with baseline (m = 1) for tagging with perceptron. All curves are on single CPU.",
"num": null
},
"FIGREF9": {
"uris": null,
"type_str": "figure",
"text": "Speedups of minibatch parallelization and IPM on 1 to 12 processors (tagging with perceptron). Extrinsic speedup uses \"the time to reach an accuracy of 96.97\" as the criterion to measure speed. Intrinsic speedup measures the pure parallelization speedup. IPM has an almost linear intrinsic speedup but a near constant extrinsic speedup of about 3 to 4.",
"num": null
}
}
}
}