{ "paper_id": "N12-1015", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:04:36.459718Z" }, "title": "Structured Perceptron with Inexact Search", "authors": [ { "first": "Liang", "middle": [], "last": "Huang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Southern California", "location": {} }, "email": "liang.huang.sh@gmail.com" }, { "first": "Suphan", "middle": [], "last": "Fayong", "suffix": "", "affiliation": {}, "email": "suphan.ying@gmail.com" }, { "first": "Yang", "middle": [], "last": "Guo", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Most existing theory of structured prediction assumes exact inference, which is often intractable in many practical problems. This leads to the routine use of approximate inference such as beam search but there is not much theory behind it. Based on the structured perceptron, we propose a general framework of \"violation-fixing\" perceptrons for inexact search with a theoretical guarantee for convergence under new separability conditions. This framework subsumes and justifies the popular heuristic \"early-update\" for perceptron with beam search (Collins and Roark, 2004). We also propose several new update methods within this framework, among which the \"max-violation\" method dramatically reduces training time (by 3 fold as compared to earlyupdate) on state-of-the-art part-of-speech tagging and incremental parsing systems.", "pdf_parse": { "paper_id": "N12-1015", "_pdf_hash": "", "abstract": [ { "text": "Most existing theory of structured prediction assumes exact inference, which is often intractable in many practical problems. This leads to the routine use of approximate inference such as beam search but there is not much theory behind it. Based on the structured perceptron, we propose a general framework of \"violation-fixing\" perceptrons for inexact search with a theoretical guarantee for convergence under new separability conditions. This framework subsumes and justifies the popular heuristic \"early-update\" for perceptron with beam search (Collins and Roark, 2004). We also propose several new update methods within this framework, among which the \"max-violation\" method dramatically reduces training time (by 3 fold as compared to earlyupdate) on state-of-the-art part-of-speech tagging and incremental parsing systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Discriminative structured prediction algorithms such as conditional random fields (Lafferty et al., 2001) , structured perceptron (Collins, 2002) , maxmargin markov networks (Taskar et al., 2003) , and structural SVMs (Tsochantaridis et al., 2005) lead to state-of-the-art performance on many structured prediction problems such as part-of-speech tagging, sequence labeling, and parsing. But despite their success, there remains a major problem: these learning algorithms all assume exact inference (over an exponentially-large search space), which is needed to ensure their theoretical properties such as convergence. This exactness assumption, however, rarely holds in practice since exact inference is often intractable in many important problems such as machine translation (Liang et al., 2006) , incremen-tal parsing (Collins and Roark, 2004; Huang and Sagae, 2010) , and bottom-up parsing (McDonald and Pereira, 2006; Huang, 2008) . 
This leads to the routine use of approximate inference such as beam search, as evidenced in the above-cited papers, but the inexactness unfortunately voids the existing theoretical guarantees of the learning algorithms, and besides the notable exceptions discussed below and in Section 7, little is known about the theoretical properties of structured prediction under inexact search.", "cite_spans": [ { "start": 82, "end": 105, "text": "(Lafferty et al., 2001)", "ref_id": "BIBREF10" }, { "start": 130, "end": 145, "text": "(Collins, 2002)", "ref_id": "BIBREF0" }, { "start": 174, "end": 195, "text": "(Taskar et al., 2003)", "ref_id": "BIBREF17" }, { "start": 218, "end": 247, "text": "(Tsochantaridis et al., 2005)", "ref_id": "BIBREF18" }, { "start": 778, "end": 798, "text": "(Liang et al., 2006)", "ref_id": "BIBREF11" }, { "start": 822, "end": 847, "text": "(Collins and Roark, 2004;", "ref_id": "BIBREF1" }, { "start": 848, "end": 870, "text": "Huang and Sagae, 2010)", "ref_id": "BIBREF7" }, { "start": 895, "end": 923, "text": "(McDonald and Pereira, 2006;", "ref_id": "BIBREF13" }, { "start": 924, "end": 936, "text": "Huang, 2008)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Among these notable exceptions, many examine how and which approximations break theoretical guarantees of existing learning algorithms (Kulesza and Pereira, 2007; Finley and Joachims, 2008 ), but we ask a deeper and practically more useful question: can we modify existing learning algorithms to accommodate the inexactness in inference, so that the theoretical properties are still maintained?", "cite_spans": [ { "start": 135, "end": 162, "text": "(Kulesza and Pereira, 2007;", "ref_id": "BIBREF9" }, { "start": 163, "end": 188, "text": "Finley and Joachims, 2008", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For the structured perceptron, Collins and Roark (2004) provide a partial answer: they suggest a variant called \"early update\" for beam search, which updates on partial hypotheses when the correct solution falls out of the beam. This method works significantly better than the standard perceptron, and is followed by later incremental parsers, for instance in (Zhang and Clark, 2008; Huang and Sagae, 2010) . However, two problems remain: first, until now there has been no theoretical justification for early update; and secondly, it makes learning extremely slow, as witnessed by the above-cited papers, because it only learns on partial examples and often requires 15-40 iterations to converge, while the normal perceptron converges in 5-10 iterations (Collins, 2002) .", "cite_spans": [ { "start": 31, "end": 55, "text": "Collins and Roark (2004)", "ref_id": "BIBREF1" }, { "start": 355, "end": 378, "text": "(Zhang and Clark, 2008;", "ref_id": "BIBREF20" }, { "start": 379, "end": 401, "text": "Huang and Sagae, 2010)", "ref_id": "BIBREF7" }, { "start": 745, "end": 760, "text": "(Collins, 2002)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We develop a theoretical framework of \"violation-fixing\" perceptron that addresses these challenges. In particular, we make the following contributions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We show that, somewhat surprisingly, exact search is not required by perceptron convergence. 
All we need is that each update involves a \"violation\", i.e., the model scores the sequence returned by the search at least as high as the correct sequence. Such an update is considered a \"valid update\", and any perceptron variant that maintains this is bound to converge. We call these variants \"violation-fixing perceptrons\" (Section 3.1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 This theory explains why standard perceptron update may fail to work with inexact search, because violation is no longer guaranteed: the correct structure might indeed be preferred by the model, but was pruned during the search process (Sec. 3.2). Such an update is thus considered invalid, and experiments show that invalid updates lead to bad learning (Sec. 6.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We show that the early update is always valid and is thus a special case in our framework; this is the first theoretical justification for early update (Section 4). We also show that (a variant of) LaSO (Daum\u00e9 and Marcu, 2005 ) is another special case (Section 7).", "cite_spans": [ { "start": 205, "end": 227, "text": "(Daum\u00e9 and Marcu, 2005", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We then propose several other update methods within this framework (Section 5). Experiments in Section 6 confirm that among them, the max-violation method can learn equal or better models with dramatically reduced learning times (threefold as compared to early update) on state-of-the-art part-of-speech tagging (Collins, 2002) 1 and incremental parsing (Huang and Sagae, 2010) systems. We also find a strong correlation between search error and invalid updates, suggesting that the advantage of valid update methods is more pronounced with harder inference problems.", "cite_spans": [ { "start": 314, "end": 329, "text": "(Collins, 2002)", "ref_id": "BIBREF0" }, { "start": 356, "end": 379, "text": "(Huang and Sagae, 2010)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our techniques are widely applicable to other structured prediction problems that require inexact search, such as machine translation and protein folding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We review the convergence properties of the standard structured perceptron (Collins, 2002) in our own notations, which will be reused in later sections for inexact search. (Footnote 1: incidentally, we achieve the best POS tagging accuracy to date, 97.35%, on the English Treebank with early update; see Sec. 
6.1).", "cite_spans": [ { "start": 75, "end": 90, "text": "(Collins, 2002)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Structured Perceptron", "sec_num": "2" }, { "text": "Algorithm 1 Structured Perceptron (Collins, 2002) .", "cite_spans": [ { "start": 34, "end": 49, "text": "(Collins, 2002)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Structured Perceptron", "sec_num": "2" }, { "text": "Input: data D = {(x (t) , y (t) )} n t=1 and feature map \u03a6 Output: weight vector w Let:", "cite_spans": [ { "start": 20, "end": 23, "text": "(t)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Structured Perceptron", "sec_num": "2" }, { "text": "EXACT(x, w) \u2206 = argmax s\u2208Y(x) w \u2022 \u03a6(x, s) Let: \u2206\u03a6(x, y, z) \u2206 = \u03a6(x, y) \u2212 \u03a6(x, z) 1: repeat 2: for each example (x, y) in D do 3: z \u2190 EXACT(x, w) 4: if z = y then 5: w \u2190 w + \u2206\u03a6(x, y, z)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structured Perceptron", "sec_num": "2" }, { "text": "6: until converged own notations that will be reused in later sections for non-exact search. We first define a new concept:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structured Perceptron", "sec_num": "2" }, { "text": "Definition 1. The standard confusion set C s (D) for training data D = {(x (t) , y (t) )} n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structured Perceptron", "sec_num": "2" }, { "text": "t=1 is the set of triples (x, y, z) where z is a wrong label for input x:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structured Perceptron", "sec_num": "2" }, { "text": "C s (D) \u2206 = {(x, y, z) | (x, y) \u2208 D, z \u2208 Y(x) \u2212 {y}}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structured Perceptron", "sec_num": "2" }, { "text": "The rest of the theory, including separation and violation, all builds upon this concept. We call such a triple S = D, \u03a6, C a training scenario, and in the remainder of this section, we assume C = C s (D), though later we will define other confusion sets to accommodate other update methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structured Perceptron", "sec_num": "2" }, { "text": "The training scenario S = D, \u03a6, C is said to be linearly separable (i.e., dataset D is linearly separable in C by representation \u03a6) with margin \u03b4 > 0 if there exists an oracle vector u with u = 1 such that it can correctly classify all examples in D (with a gap of at least \u03b4), i.e., \u2200(x, y, z) \u2208 C, u \u2022 \u2206\u03a6(x, y, z) \u2265 \u03b4. We define the maximal margin \u03b4(S) to be the maximal such margin over all unit oracle vectors:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "\u03b4(S) \u2206 = max u =1 min (x,y,z)\u2208C u \u2022 \u2206\u03a6(x, y, z).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "Definition 3. 
A triple (x, y, z) is said to be a violation in training scenario S = D, \u03a6, C with respect to weight vector w if (x, y, z) \u2208 C and w \u2022 \u2206\u03a6(x, y, z) \u2264 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "Intuitively, this means that model w may mislabel example x (though not necessarily as z), since y is not its single highest-scoring label under w.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "Lemma 1. Each update triple (x, y, z) in Algorithm 1 (line 5) is a violation in S = D, \u03a6, C s (D) . Proof. z = EXACT(x, w), thus for all z \u2032 \u2208 Y(x), w\u2022\u03a6(x, z) \u2265 w\u2022\u03a6(x, z \u2032 ), i.e., w\u2022\u2206\u03a6(x, y, z) \u2264 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "On the other hand, z \u2208 Y(x) and z \u2260 y, so", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "(x, y, z) \u2208 C s (D).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "This lemma basically says that exact search guarantees a violation in each update, but as we will see in the convergence proof, violation itself is more fundamental than search exactness. Definition 4. The diameter R(S) for scenario S = D, \u03a6, C is max (x,y,z)\u2208C \u2225\u2206\u03a6(x, y, z)\u2225 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "Theorem 1 (convergence, Collins). For a separable training scenario S = D, \u03a6, C s (D) with \u03b4(S) > 0, the perceptron algorithm in Algorithm 1 will make a finite number of updates (before convergence):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "err (S) \u2264 R 2 (S)/\u03b4 2 (S).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "Proof. Let w (k) be the weight vector before the kth update; w (0) = 0. Suppose the kth update happens on the triple (x, y, z). We will bound \u2225w (k+1) \u2225 from two directions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "1. w (k+1) = w (k) + \u2206\u03a6(x, y, z). Since scenario", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "S is separable with max margin \u03b4(S), there exists a unit oracle vector u that achieves this margin. Taking the dot product of both sides with u, we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "u \u2022 w (k+1) = u \u2022 w (k) + u \u2022 \u2206\u03a6(x, y, z) \u2265 u \u2022 w (k) + \u03b4(S)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "by Lemma 1 that (x, y, z) \u2208 C s (D) and by the definition of margin. By induction, we have u \u2022 w (k+1) \u2265 k\u03b4(S). 
Since for any two vectors a and b we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "\u2225a\u2225 \u2225b\u2225 \u2265 a \u2022 b, thus \u2225u\u2225 \u2225w (k+1) \u2225 \u2265 u \u2022 w (k+1) \u2265 k\u03b4(S).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "As u is a unit vector, we have \u2225w (k+1) \u2225 \u2265 k\u03b4(S).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "2. On the other hand, since \u2225a + b\u2225 2 = \u2225a\u2225 2 + \u2225b\u2225 2 + 2 a \u2022 b for any vectors a and b, we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "\u2225w (k+1) \u2225 2 = \u2225w (k) + \u2206\u03a6(x, y, z)\u2225 2 = \u2225w (k) \u2225 2 + \u2225\u2206\u03a6(x, y, z)\u2225 2 + 2 w (k) \u2022 \u2206\u03a6(x, y, z) \u2264 \u2225w (k) \u2225 2 + R 2 (S) + 0 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "By Lemma 1, the update triple is a violation so that w (k) \u2022\u2206\u03a6(x, y, z) \u2264 0, and", "cite_spans": [ { "start": 55, "end": 58, "text": "(k)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "that (x, y, z) \u2208 C s (D), thus \u2225\u2206\u03a6(x, y, z)\u2225 2 \u2264 R 2 (S) by the definition of diameter. By induction, we have \u2225w (k+1) \u2225 2 \u2264 kR 2 (S).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "Algorithm 2 Local Violation-Fixing Perceptron.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "Input: training scenario S = D, \u03a6, C 1: repeat 2: for each example (x, y) in D do 3: (x, y \u2032 , z) = FINDVIOLATION(x, y, w) 4: if z \u2260 y \u2032 then \u22b2 (x, y \u2032 , z) is a violation 5: w \u2190 w + \u2206\u03a6(x, y \u2032 , z) 6: until converged", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "Combining the two bounds, we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "k 2 \u03b4 2 (S) \u2264 \u2225w (k+1) \u2225 2 \u2264 kR 2 (S), thus k \u2264 R 2 (S)/\u03b4 2 (S).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2.", "sec_num": null }, { "text": "We now draw the central observation of this work from part 2 of the above proof: exact search (argmax) is not required in the proof; instead, the proof just needs a violation, which is a much weaker condition. 2 Exact search is simply one way to ensure a violation. In other words, if we can guarantee a violation in each update (which we call a \"valid update\"), it does not matter whether or how exact the search is.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Violation is All We Need", "sec_num": "3" }, { "text": "This observation leads us to two generalized variants of the perceptron, which we call \"violation-fixing perceptrons\". The local version, Algorithm 2, still works on one example at a time, and searches for one violation (if any) in that example to update with. The global version, Algorithm 3, can update on any violation in the dataset at any time. We state the following generalized theorem:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Violation-Fixing Perceptron", "sec_num": "3.1" }, { "text": "Theorem 2. 
For a separable training scenario S, the perceptrons in Algorithms 2 and 3 both converge with the same update bound of R 2 (S)/\u03b4 2 (S) as long as the FINDVIOLATION and FINDVIOLATIONINDATA functions always return violation triples if there are any.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Violation-Fixing Perceptron", "sec_num": "3.1" }, { "text": "Proof. Same as the proof of Theorem 1, except that Lemma 1 in part 2 is replaced by the fact that the update triples are guaranteed to be violations. (Note that a violation triple is by definition in the confusion set, thus we can still use separation and diameter.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Violation-Fixing Perceptron", "sec_num": "3.1" }, { "text": "These generic violation-fixing perceptron algorithms can be seen as \"interfaces\" (or APIs), where later sections will supply different implementations of the FINDVIOLATION and FINDVIOLATIONINDATA subroutines, thus establishing alternative update methods for inexact search as special cases in this general framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Violation-Fixing Perceptron", "sec_num": "3.1" }, { "text": "Algorithm 3 Global Violation-Fixing Perceptron.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Violation-Fixing Perceptron", "sec_num": "3.1" }, { "text": "Input: training scenario S = D, \u03a6, C 1: repeat 2: (x, y, z) \u2190 FINDVIOLATIONINDATA(C, w) 3: if x = \u01eb then break \u22b2 no more violation? 4: w \u2190 w + \u2206\u03a6(x, y, z) 5: until converged", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Violation-Fixing Perceptron", "sec_num": "3.1" }, { "text": "data D = {(x, y)}: x = fruit flies fly . , y = N N V . ; search space: Y(x) = {N} \u00d7 {N, V} \u00d7 {N, V} \u00d7 {.}; feature map: \u03a6(x, y) = (# N\u2192N (y), # V\u2192. (y)). Updates (iter, label z, \u2206\u03a6(x, y, z), w\u2022\u2206\u03a6, new w), starting from w = (0, 0): 1, N N N ., (\u22121, +1), 0 (valid), (\u22121, 1); 2, N V N ., (+1, +1), 0 (valid), (0, 2); 3, N N N ., (\u22121, +1), 2 (invalid), (\u22121, 3); 4, N V N ., (+1, +1), 2 (invalid), (0, 4); ... infinite loop ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Violation-Fixing Perceptron", "sec_num": "3.1" }, { "text": "What if we cannot guarantee valid updates? Well, the convergence proof in Theorems 1 and 2 would break in part 2. This is exactly why the standard structured perceptron may not work well with inexact search: with search errors it is no longer possible to guarantee a violation in each update. 
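To make this concrete, the example table above can be replayed in a few lines of Python; this is a minimal sketch we add for illustration (not from the paper), and tie-breaking toward N on equal scores is our assumption:

def phi(tags):
    # feature map of the example: (# of N->N bigrams, # of V->. bigrams)
    bigrams = list(zip(tags, tags[1:]))
    return (bigrams.count(('N', 'N')), bigrams.count(('V', '.')))

def score(w, tags):
    f = phi(tags)
    return w[0] * f[0] + w[1] * f[1]

def greedy(w):
    # commit to the best one-step extension; Python's max keeps the
    # first candidate on ties, i.e., ties break toward N
    z = ['N']                                  # position 1: only N
    for _ in (2, 3):                           # positions 2 and 3: N or V
        z = max((z + [t] for t in ('N', 'V')), key=lambda s: score(w, s))
    return z + ['.']                           # position 4: only .

gold, w = ['N', 'N', 'V', '.'], (0, 0)
for it in range(1, 7):                         # standard perceptron updates
    z = greedy(w)
    if z == gold:
        break
    d = tuple(g - p for g, p in zip(phi(gold), phi(z)))
    valid = w[0] * d[0] + w[1] * d[1] <= 0     # is this update a violation?
    w = (w[0] + d[0], w[1] + d[1])
    print(it, ''.join(z), d, 'valid' if valid else 'INVALID', w)

Running it prints two valid updates followed by an endless alternation of invalid ones (w grows as (\u22121, 3), (0, 4), (\u22121, 5), ...), exactly as in the table above.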
For example, an inexact search method explores a (proper) subset of the search space Y \u2032 w (x) \u228a Y(x), and finds a label", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Convergence with Inexact Search", "sec_num": "3.2" }, { "text": "z = argmax s\u2208Y \u2032 w (x) w \u2022 \u03a6(x, s).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Convergence with Inexact Search", "sec_num": "3.2" }, { "text": "It is possible that the correct label y is outside of the explored subspace, and yet has a higher score:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Convergence with Inexact Search", "sec_num": "3.2" }, { "text": "w \u2022 \u2206\u03a6(x, y, z) > 0 but y \u2209 Y \u2032 w (x).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Convergence with Inexact Search", "sec_num": "3.2" }, { "text": "In this case, (x, y, z) is not a violation, which breaks the proof. We show below that this situation actually exists.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Convergence with Inexact Search", "sec_num": "3.2" }, { "text": "Algorithm 4 Greedy Search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Convergence with Inexact Search", "sec_num": "3.2" }, { "text": "Let: NEXT(x, z) \u2206 = {z \u2022 a | a \u2208 Y |z|+1 (x)} \u22b2 set of possible one-step extensions (successors) BEST(x, z, w) \u2206 = argmax z \u2032 \u2208NEXT(x,z) w \u2022 \u03a6(x, z \u2032 ) \u22b2 best one-step extension based on history 1: function GREEDYSEARCH(x, w) 2: z \u2190 \u01eb \u22b2 empty sequence 3: for i \u2208 1 . . . |x| do 4: z \u2190 BEST(x, z, w)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Convergence with Inexact Search", "sec_num": "3.2" }, { "text": "This situation happens very often in NLP: often the search space Y(x) is too big either because it does not factor locally, or because it is still too big after factorization, which requires some approximate search. In either case, updating the model w on a non-violation (i.e., \"invalid\") triple (x, y, z) does not make sense: it is not the model's problem, since w does score the correct label y higher than the incorrect z; rather, it is a problem of the search, or of its interaction with the model, which prunes away the correct (sub)sequence during the search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Convergence with Inexact Search", "sec_num": "3.2" }, { "text": "What shall we do in this case? Collins and Roark (2004) suggest that instead of the standard full update, we should only update on the prefix (x, y [1:i] , z [1:i] ) up to the point i where the correct sequence falls off the beam. This intuitively makes a lot of sense, since up to i we can still guarantee a violation, but after i we may not. The next section formalizes this intuition.", "cite_spans": [ { "start": 31, "end": 55, "text": "Collins and Roark (2004)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Non-Convergence with Inexact Search", "sec_num": "3.2" }, { "text": "We now proceed to show that early update is always valid and it is thus a special case of the violation-fixing perceptron framework. 
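As a point of reference before we specialize it, the generic local violation-fixing perceptron of Algorithm 2 amounts to the following sketch; this is our own illustrative Python, where phi(x, y) returns a sparse feature dict and find_violation is any user-supplied routine obeying the contract of Theorem 2, returning (x, y, y) when it finds no violation:

from collections import defaultdict

def violation_fixing_perceptron(data, phi, find_violation, epochs=20):
    w = defaultdict(float)                       # weight vector
    for _ in range(epochs):
        updated = False
        for x, y in data:
            x, y2, z = find_violation(x, y, w)   # the (x, y', z) of Algorithm 2
            if z != y2:                          # a violation: do the update
                updated = True
                for f, v in phi(x, y2).items():  # w += phi(x, y')
                    w[f] += v
                for f, v in phi(x, z).items():   # w -= phi(x, z)
                    w[f] -= v
        if not updated:
            break                                # converged
    return w

Each update method discussed below is then just a different find_violation.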
First, let us study the simplest special case, greedy search (beam=1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Early Update is Violation-Fixing", "sec_num": "4" }, { "text": "Greedy search is the simplest form of inexact search. Shown in Algorithm 4, at each position, we commit to the single best action (e.g., tag for the current word) given the previous actions, and continue to the next position.", "cite_spans": [], "ref_spans": [ { "start": 210, "end": 218, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Greedy Search and Early Update", "sec_num": "4.1" }, { "text": "Figure 2 : Early update at the first error in greedy search. (The figure shows a run of correct actions \u221a \u221a \u2022 \u2022 \u2022 \u221a followed by the first error \u00d7; the update covers the prefix up to the error, and the rest of the example is skipped.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Greedy Search and Early Update", "sec_num": "4.1" }, { "text": "Algorithm 5 Early update for greedy search adapted from Collins and Roark (2004) .", "cite_spans": [ { "start": 56, "end": 80, "text": "Collins and Roark (2004)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Greedy Search and Early Update", "sec_num": "4.1" }, { "text": "Input: training scenario S = D, \u03a6, C g (D) 1: repeat 2: for each example (x, y) in D do 3: z \u2190 \u01eb \u22b2 empty sequence 4: for i \u2208 1 . . . |x| do 5: z \u2190 BEST(x, z, w) 6: if z i \u2260 y i then \u22b2 first wrong action 7: w \u2190 w + \u2206\u03a6(x, y [1:i] , z) \u22b2 early update 8:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Greedy Search and Early Update", "sec_num": "4.1" }, { "text": "break \u22b2 skip the rest of this example 9: until converged", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Greedy Search and Early Update", "sec_num": "4.1" }, { "text": "The notation Y i (x) denotes the set of possible actions at position i for example x (for instance, the set of possible tags for a word).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Greedy Search and Early Update", "sec_num": "4.1" }, { "text": "The early update heuristic, originally proposed for beam search (Collins and Roark, 2004) , now simplifies into \"update at the first wrong action\", since this is exactly the place where the correct sequence falls off the singleton beam (see Algorithm 5 for pseudocode and Fig. 2) . Informally, it is easy to show (below) that this kind of update is always a valid violation, but we need to redefine the confusion set.", "cite_spans": [ { "start": 64, "end": 89, "text": "(Collins and Roark, 2004)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 272, "end": 279, "text": "Fig. 2)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Greedy Search and Early Update", "sec_num": "4.1" }, { "text": "C g (D) for training data D = {(x (t) , y (t) )} n t=1 is the set of triples (x, y [1:i] , z [1:i] ) where y [1:i]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 5. The greedy confusion set", "sec_num": null }, { "text": "is an i-prefix of the correct label y, and z [1:i] is an incorrect i-prefix that agrees with the correct prefix on all decisions except the last one:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 5. The greedy confusion set", "sec_num": null }, { "text": "C g (D) \u2206 = {(x, y [1:i] , z [1:i] ) | (x, y, z) \u2208 C s (D), 1 \u2264 i \u2264 |y|, z [1:i\u22121] = y [1:i\u22121] , z i \u2260 y i }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 5. 
The greedy confusion set", "sec_num": null }, { "text": "We can see intuitively that this new definition is specially tailored to the early updates in greedy search. The concepts of separation/margin, violation, and diameter all change accordingly with this new confusion set. In particular, we say that a dataset D is greedily separable in representation \u03a6 if and only if D, \u03a6, C g (D) is linearly separable, and we say (x, y \u2032 , z \u2032 ) is a greedy violation if (x, y \u2032 , z \u2032 ) \u2208 C g (D) and w \u2022 \u2206\u03a6(x, y \u2032 , z \u2032 ) \u2264 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 5. The greedy confusion set", "sec_num": null }, { "text": "We now express early update for greedy search (Algorithm 5) in terms of violation-fixing perceptron. Algorithm 6 implements the FINDVIOLATION function for the generic Local Violation-Fixing Perceptron in Algorithm 2. Thus Algorithm 5 is equivalent to Algorithm 6 plugged into Algorithm 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 5. The greedy confusion set", "sec_num": null }, { "text": "Algorithm 6 (final line): return (x, y, y) \u22b2 success (z = y), no violation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 5. The greedy confusion set", "sec_num": null }, { "text": "Lemma 2. Each update triple (x, y \u2032 , z) produced by Algorithm 6 is a greedy violation. Proof. At the returned position i, y \u2032 i \u2260 z i . But y \u2032 j = z j for all j < i, otherwise it would have returned before position i, so (x, y \u2032 , z) \u2208 C g (D). Also z = BEST(x, z, w), so w \u2022 \u03a6(x, z) \u2265 w \u2022 \u03a6(x, y \u2032 ), thus w \u2022 \u2206\u03a6(x, y \u2032 , z) \u2264 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 5. The greedy confusion set", "sec_num": null }, { "text": "Theorem 4 (convergence of greedy search with early update). For a separable training scenario S = D, \u03a6, C g (D) , the early update perceptron by plugging Algorithm 6 into Algorithm 2 will make a finite number of updates (before convergence):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 5. The greedy confusion set", "sec_num": null }, { "text": "err (S) < R 2 (S)/\u03b4 2 (S).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 5. The greedy confusion set", "sec_num": null }, { "text": "Proof. By Lemma 2 and Theorem 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 5. The greedy confusion set", "sec_num": null }, { "text": "To formalize beam search, we need some notations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Beam Search and Early Update", "sec_num": "4.2" }, { "text": "Definition 6 (k-best). We denote argtop k z\u2208Z f (z) to return (a sorted list of) the top k unique z in terms of f (z), i.e., it returns a list B = [z (1) , z (2) , . . . , z (k) ] where z (i) \u2208 Z and f (z (1)", "cite_spans": [ { "start": 174, "end": 177, "text": "(k)", "ref_id": null }, { "start": 205, "end": 208, "text": "(1)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Beam Search and Early Update", "sec_num": "4.2" }, { "text": ") \u2265 f (z (2) ) \u2265 . . . \u2265 f (z (k) ) \u2265 f (z \u2032 ) for all z \u2032 \u2208 Z \u2212 B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Beam Search and Early Update", "sec_num": "4.2" }, { "text": "By unique we mean that no two elements are equivalent with respect to some equivalence relation, i.e., z (i) \u2261 z (j) \u21d2 i = j. 
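A minimal sketch of such a unique argtop-k, assuming a user-supplied function sig(z) that names z's equivalence class (a hypothetical helper, not notation from the paper):

import heapq

def argtopk(candidates, f, sig, k):
    best = {}                      # one best representative per equivalence class
    for z in candidates:
        s = sig(z)
        if s not in best or f(z) > f(best[s]):
            best[s] = z
    return heapq.nlargest(k, best.values(), key=f)

In trigram tagging, for instance, sig(z) would return the last two tags of z. 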
This equivalence relation is useful for dynamic programming (when used with beam search). For example, in trigram tagging, two tag sequences are equivalent if they end with the same two tags, i.e., z \u2261 z \u2032 iff z |z|\u22121:|z| = z \u2032 |z|\u22121:|z| .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Beam Search and Early Update", "sec_num": "4.2" }, { "text": "Algorithm 7 Beam-Search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Beam Search and Early Update", "sec_num": "4.2" }, { "text": "Let: BEST k (x, B, w) \u2206 = argtop k z \u2032 \u2208\u222az\u2208B NEXT(z) w\u2022\u03a6(x, z \u2032 ) \u22b2 top k (unique) one-step extensions 2: B 0 \u2190 [\u01eb] \u22b2 initial beam 3: for i \u2208 1 . . . |x| do 4: B i \u2190 BEST k (x, B i\u22121 , w) 5: return B |x| [0] \u22b2 best label in the final beam", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Beam Search and Early Update", "sec_num": "4.2" }, { "text": "Algorithm 8 Early update for beam search. 2: B 0 \u2190 [\u01eb] 3: for i \u2208 1 . . . |x| do 4: B i \u2190 BEST k (x, B i\u22121 , w) 5: if y [1:i] \u2209 B i then \u22b2 correct prefix falls off the beam", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Beam Search and Early Update", "sec_num": "4.2" }, { "text": "In incremental parsing this equivalence relation could be the relevant bits of information on the last few trees on the stack (depending on feature templates), as suggested in (Huang and Sagae, 2010 ). 3 Algorithm 7 shows the pseudocode for beam search. It is trivial to verify that greedy search is a special case of beam search with k = 1. However, the definition of the confusion set changes considerably:", "cite_spans": [ { "start": 172, "end": 194, "text": "(Huang and Sagae, 2010", "ref_id": "BIBREF7" }, { "start": 198, "end": 199, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Beam Search and Early Update", "sec_num": "4.2" }, { "text": "Definition 7. The beam confusion set C b (D) for training data D = {(x (t) , y (t) )} n t=1 is the set of triples (x, y [1:i] , z [1:i] ) where y [1:i] is an i-prefix of the correct label y, and z [1:i] is an incorrect i-prefix that differs from the correct prefix (in at least one place):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Beam Search and Early Update", "sec_num": "4.2" }, { "text": "Figure 3 : Illustration of various update methods: early, max-violation, latest, and standard (full) update, in the case when standard update is invalid (shown in red). 
The rectangle denotes the beam and the blue line segments denote the trajectory of the correct sequence.", "cite_spans": [], "ref_spans": [ { "start": 137, "end": 145, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Beam Search and Early Update", "sec_num": "4.2" }, { "text": "C b (D) \u2206 = {(x, y [1:i] , z [1:i] ) | (x, y, z) \u2208 C s (D), 1 \u2264 i \u2264 |y|, z [1:i] \u2260 y [1:i] }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Beam Search and Early Update", "sec_num": "4.2" }, { "text": "Analogously, we say that a dataset D is beam separable in representation \u03a6 if and only if D, \u03a6, C b (D) is linearly separable, and we say (x, y \u2032 , z \u2032 ) is a beam violation if (x, y \u2032 , z \u2032 ) \u2208 C b (D) and w \u2022 \u2206\u03a6(x, y \u2032 , z \u2032 ) \u2264 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Beam Search and Early Update", "sec_num": "4.2" }, { "text": "It is easy to verify that the beam confusion set is a superset of both the greedy and standard confusion sets: for any dataset D, C g (D) \u228a C b (D), and C s (D) \u228a C b (D). This means that beam separability is the strongest condition among the three separabilities:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Beam Search and Early Update", "sec_num": "4.2" }, { "text": "Theorem 5. If a dataset D is beam separable, then it is also greedily and (standard) linearly separable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Beam Search and Early Update", "sec_num": "4.2" }, { "text": "We now present early update for beam search as a Local Violation-Fixing Perceptron in Algorithm 8. See Figure 3 for an illustration.", "cite_spans": [], "ref_spans": [ { "start": 103, "end": 111, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Beam Search and Early Update", "sec_num": "4.2" }, { "text": "Lemma 3. Each update triple (x, y \u2032 , z \u2032 ) produced by Algorithm 8 is a beam violation. Proof. At the update position we have z \u2032 \u2260 y \u2032 and |z \u2032 | = |y \u2032 |, thus (x, y \u2032 , z \u2032 ) \u2208 C b (D). Also we have w \u2022 \u03a6(x, z \u2032 ) \u2265 w \u2022 \u03a6(x, y \u2032 ) by definition of argtop, so w \u2022 \u2206\u03a6(x, y \u2032 , z \u2032 ) \u2264 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Beam Search and Early Update", "sec_num": "4.2" }, { "text": "Theorem 6 (convergence of beam search with early update). For a separable training scenario S = D, \u03a6, C b (D) , the early update perceptron by plugging Algorithm 8 into Algorithm 2 will make a finite number of updates (before convergence):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Beam Search and Early Update", "sec_num": "4.2" }, { "text": "err (S) < R 2 (S)/\u03b4 2 (S).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Beam Search and Early Update", "sec_num": "4.2" }, { "text": "Proof. By Lemma 3 and Theorem 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Beam Search and Early Update", "sec_num": "4.2" }, { "text": "We now propose three novel update methods for inexact search within the framework of violation-fixing perceptron. These methods are inspired by early update but address its major limitation: slow learning. 
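The three methods defined below differ only in which violation they pick along the beam-search trajectory. As a minimal sketch, under our own assumption (not the paper's bookkeeping) that beam search records, for every step i, the model score gold[i] of the correct prefix y[1:i] and the score best[i] of the best prefix in beam B i, and that at least one violation exists:

def update_positions(gold, best, fell_off):
    # positions whose triple is a violation: correct prefix scores no higher
    viol = [i for i in range(len(gold)) if gold[i] <= best[i]]
    early = fell_off                                   # where y[1:i] left the beam
    maxv = max(viol, key=lambda i: best[i] - gold[i])  # max-violation position
    latest = viol[-1]                                  # latest violating position
    last = len(gold) - 1
    hybrid = last if last in viol else early           # full update when valid
    return early, maxv, latest, hybrid

Each returned position then determines the prefix pair used in the perceptron update. 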
See Figure 3 for an illustration.", "cite_spans": [], "ref_spans": [ { "start": 212, "end": 220, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "New Update Methods for Inexact Search", "sec_num": "5" }, { "text": "1. \"hybrid\" update. When the standard update is valid (i.e., a violation), we perform it, otherwise we perform the early update.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Update Methods for Inexact Search", "sec_num": "5" }, { "text": "2. \"max-violation\" update. When there is more than one possible violation on an example x, we choose the triple that is most violated:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Update Methods for Inexact Search", "sec_num": "5" }, { "text": "(x, y * , z * ) = argmin (x,y \u2032 ,z \u2032 )\u2208C,z \u2032 \u2208\u222a i {B i [0]} w \u2022\u2206\u03a6(x, y \u2032 , z \u2032 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Update Methods for Inexact Search", "sec_num": "5" }, { "text": "3. \"latest\" update. Contrary to early update, we can also choose the latest point where the update is still a violation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Update Methods for Inexact Search", "sec_num": "5" }, { "text": "(x, y * , z * ) = argmax (x,y \u2032 ,z \u2032 )\u2208C,z \u2032 \u2208\u222a i {B i [0]},w\u2022\u2206\u03a6(x,y \u2032 ,z \u2032 )\u22640 |z \u2032 |.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Update Methods for Inexact Search", "sec_num": "5" }, { "text": "All three methods go beyond early update but can be represented in the Local Violation-Fixing Perceptron (Algorithm 2), and are thus all guaranteed to converge. As we will see in the experiments, these new methods are motivated to address the major limitation of early update, namely that it learns too slowly, since it only updates on prefixes and neglects the rest of each example. Hybrid update tries to make as many standard (\"full\") updates as possible, and latest update further addresses the case when the standard update is invalid: instead of backing off to early update, it uses the longest possible update.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Update Methods for Inexact Search", "sec_num": "5" }, { "text": "We conduct experiments on two typical structured learning tasks: part-of-speech tagging with a trigram model where exact search is possible, and incremental dependency parsing with arbitrary non-local features where exact search is intractable. We run both experiments on state-of-the-art implementations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "(Figure: best tagging accuracy on held-out vs. beam size 1-10, for max-violation, early, and standard updates.) Table 1 : Convergence rate of part-of-speech tagging. In general, max-violation converges faster and better than early and standard updates, esp. with the smallest beams.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Part-of-Speech Tagging", "sec_num": "6.1" }, { "text": "Following the standard split for part-of-speech tagging introduced by Collins (2002), we use sections 00-18 of the Penn Treebank (Marcus et al., 1993) for training, sections 19-21 as a held-out development set, and sections 22-24 for testing. Our baseline system is a faithful implementation of the perceptron tagger in Collins (2002) , i.e., a trigram model with spelling features from Ratnaparkhi (1996) , except that we replace one-count words with a special unknown-word token. 
With standard perceptron and exact search, our baseline system performs slightly better than Collins (2002) with a beam of 20 (M. Collins, p.c.).", "cite_spans": [ { "start": 129, "end": 144, "text": "(Marcus et al., 1993)", "ref_id": "BIBREF12" }, { "start": 602, "end": 616, "text": "Collins (2002)", "ref_id": "BIBREF0" }, { "start": 669, "end": 687, "text": "Ratnaparkhi (1996)", "ref_id": "BIBREF15" }, { "start": 832, "end": 846, "text": "Collins (2002)", "ref_id": "BIBREF0" }, { "start": 869, "end": 883, "text": "Collins, p.c.)", "ref_id": null } ], "ref_spans": [ { "start": 262, "end": 269, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Part-of-Speech Tagging", "sec_num": "6.1" }, { "text": "Figure 5 : Percentages of invalid updates for standard update. In tagging it quickly drops to 0%, while in parsing it converges to \u223c 50%. This means that, search-wise, parsing is much harder than tagging, which explains why standard update does OK in tagging but terribly in parsing. The harder the search, the more valid updates are needed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Part-of-Speech Tagging", "sec_num": "6.1" }, { "text": "Figure 6 : Training progress curves for incremental parsing (b = 8). Max-violation learns faster and better: it takes 4.6 hours (10 iterations) to reach 92.25 on held-out, compared with early update's 15.4 hours (38 iterations), even though the latter is faster in each iteration due to early stopping (esp. at the first few iterations).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Part-of-Speech Tagging", "sec_num": "6.1" }, { "text": "We then implemented beam search on top of dynamic programming and experimented with standard, early, hybrid, and max-violation update methods with various beam settings (b = 1, 2, 4, 7, 10). Figure 4 (a) summarizes these experiments. We observe that, first of all, the standard update performs poorly with the smallest beams, esp. at b = 1 (greedy search), when search error is the most severe, causing lots of invalid updates (see Figure 5) . Secondly, max-violation is almost consistently the best-performing method (except for b = 4). Table 1 shows convergence rates, where max-violation update also converges faster than early and standard methods. In particular, at b = 1, it achieves a 19% error reduction over standard update, while converging twice as fast as early update. 4 This agrees with our intuition that by choosing the \"most-violated\" triple for update, the perceptron should learn faster. Table 2 presents the final tagging results on the test set. For each of the five update methods, we choose the beam size at which it achieves the highest accuracy on the held-out set. For standard update, its best held-out accuracy, 97.17, is indeed achieved by exact search (i.e., b = +\u221e), since it does not work well with beam search, but it costs 2.7 hours (162 minutes) to train. Our best accuracy, 97.35 (achieved by early update), surpasses that of Shen et al. (2007) , the best tagging accuracy reported on the Penn Treebank to date. 5, 6 To conclude, with valid update methods, we can learn a better tagging model with 5 times faster training than exact search.", "cite_spans": [ { "start": 1120, "end": 1121, "text": "4", "ref_id": null }, { "start": 1960, "end": 1973, "text": "Shen et al. 
(2007)", "ref_id": null }, { "start": 2041, "end": 2043, "text": "5,", "ref_id": null }, { "start": 2044, "end": 2045, "text": "6", "ref_id": null } ], "ref_spans": [ { "start": 191, "end": 199, "text": "Figure 4", "ref_id": "FIGREF4" }, { "start": 431, "end": 440, "text": "Figure 5)", "ref_id": null }, { "start": 536, "end": 543, "text": "Table 1", "ref_id": null }, { "start": 747, "end": 755, "text": "Figure 5", "ref_id": null }, { "start": 1245, "end": 1252, "text": "Table 2", "ref_id": "TABREF7" }, { "start": 1622, "end": 1630, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Part-of-Speech Tagging", "sec_num": "6.1" }, { "text": "While part-of-speech tagging is mainly a proof of concept, incremental parsing is much harder since non-local features rules out exact inference. We use the standard split for parsing: secs 02-21 for training, 22 as held-out, and 23 for testing. Our baseline system is a faithful reimplementation of the beam-search dynamic programming parser of Huang and Sagae (2010) . Like most incremental parsers, it used early update as search error is severe.", "cite_spans": [ { "start": 346, "end": 368, "text": "Huang and Sagae (2010)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Incremental Parsing", "sec_num": "6.2" }, { "text": "We first confirm that, as reported by Huang and Sagae, early update learns very slowly, reaching 92.24 on held-out with 38 iterations (15.4 hours).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental Parsing", "sec_num": "6.2" }, { "text": "We then experimented with the other update methods: standard, hybrid, latest, and maxviolation, with beam size b = 1, 2, 4, 8. We found that, first of all, the standard update performs horribly on this task: at b = 1 it only achieves 60.04% on held-out, while at b = 8 it improves to 78.99% but is still vastly below all other methods. This is because search error is much more severe in incremental parsing (than in part-of-speech tagging), thus standard update produces an enormous amount of invalid updates even at b = 8 (see Figure 5 ). This suggests that the advantage of valid update methods is more pronounced with tougher search problems. Secondly, max-violation learns much faster (and better) than early update: it takes only 10 iterations (4.6 hours) to reach 92.25, compared with early update's 15.4 hours (see Fig. 6 ). At its peak, max-violation achieves 92.18 on test which is better than (Huang and Sagae, 2010) . To conclude, we can train a parser with only 1/3 of training time with max-violation update, and the harder the search is, the more needed the valid update methods are.", "cite_spans": [ { "start": 904, "end": 927, "text": "(Huang and Sagae, 2010)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 529, "end": 537, "text": "Figure 5", "ref_id": null }, { "start": 823, "end": 829, "text": "Fig. 6", "ref_id": null } ], "eq_spans": [], "section": "Incremental Parsing", "sec_num": "6.2" }, { "text": "Besides the early update method of Collins and Roark (2004) which inspired us, this work is also related to the LaSO method of Daum\u00e9 and Marcu (2005) . LaSO is similar to early update, except that after each update, instead of skipping the rest of the example, LaSO continues on the same example with the correct hypothesis. 
For example, in the greedy case LaSO simply replaces the break statement in Algorithm 5 by 8\u2032: z i \u2190 y i , and in beam search it replaces it with 8\u2032:", "cite_spans": [ { "start": 35, "end": 59, "text": "Collins and Roark (2004)", "ref_id": "BIBREF1" }, { "start": 127, "end": 149, "text": "Daum\u00e9 and Marcu (2005)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work and Discussions", "sec_num": "7" }, { "text": "B i \u2190 [y [1:i] ].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work and Discussions", "sec_num": "7" }, { "text": "This is beyond our Local Violation-Fixing Perceptron, since it makes more than one update on one example, but it can easily be represented as a Global Violation-Fixing Perceptron (Algorithm 3), since we can prove that any further update on this example is a violation (under the new weights). We thus establish LaSO as a special case within our framework. 7 More interestingly, it is easy to verify that the greedy case of the LaSO update is equivalent to training a local unstructured perceptron which independently classifies at each position based on history, which is related to SEARN (Daum\u00e9 et al., 2009) . Kulesza and Pereira (2007) study perceptron learning with approximate inference that overgenerates, rather than undergenerates as in our work, but the underlying idea is similar: by learning in a harder setting (the LP-relaxed version in their case and the prefix-augmented version in our case) we can learn the simpler original setting. Our \"beam separability\" can be viewed as an instance of their \"algorithmic separability\". Finley and Joachims (2008) study similar approximate inference for structural SVMs.", "cite_spans": [ { "start": 349, "end": 350, "text": "7", "ref_id": null }, { "start": 578, "end": 598, "text": "(Daum\u00e9 et al., 2009)", "ref_id": "BIBREF3" }, { "start": 601, "end": 627, "text": "Kulesza and Pereira (2007)", "ref_id": "BIBREF9" }, { "start": 1019, "end": 1045, "text": "Finley and Joachims (2008)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work and Discussions", "sec_num": "7" }, { "text": "Our max-violation update is also related to other training methods for large-margin structured prediction, in particular the cutting-plane (Joachims et al., 2009) and subgradient (Ratliff et al., 2007) methods, but detailed exploration is left to future work.", "cite_spans": [ { "start": 139, "end": 162, "text": "(Joachims et al., 2009)", "ref_id": "BIBREF8" }, { "start": 179, "end": 201, "text": "(Ratliff et al., 2007)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work and Discussions", "sec_num": "7" }, { "text": "We have presented a unifying framework of \"violation-fixing\" perceptrons which guarantees convergence with inexact search. This theory satisfyingly explains why the standard perceptron might not work well with inexact search, and why the early update works. We also proposed some new variants within this framework, among which the max-violation method performs best on state-of-the-art tagging and parsing systems, leading to better models with greatly reduced training times. 
Lastly, the advantage of valid update methods is more pronounced when search error is severe.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "Crammer and Singer (2003) further demonstrate that a convex combination of violations can also be used for an update.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that when checking whether the correct sequence falls off the beam (line 5), we could either store the whole (sub)sequence for each candidate in the beam (which is what we do for non-DP anyway), or check if the equivalence class of the correct sequence is in the beam, i.e., [y [1:i] ] \u2261 \u2208 B i , and if its backpointer points to [y [1:i\u22121] ] \u2261 . For example, in trigram tagging, we just check if \u27e8y i\u22121 , y i \u27e9 \u2208 B i and if its backpointer points to \u27e8y i\u22122 , y i\u22121 \u27e9 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For tagging (but not parsing) the difference in per-iteration speed between early update and max-violation update is small.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "According to ACL Wiki: http://aclweb.org/aclwiki/. 6 Note that Shen et al. (2007) employ contextual features up to 5-grams, which go beyond our local trigram window. We suspect that adding genuinely non-local features would demonstrate even more clearly the advantages of valid update methods with beam search, since exact inference would no longer be tractable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "It turns out that the original theorem in the LaSO paper (Daum\u00e9 and Marcu, 2005) contains a bug; see (Xu et al., 2009) for corrections. Thanks to a reviewer for pointing it out.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We are grateful to the four anonymous reviewers, especially the one who wrote the comprehensive review. We also thank David Chiang, Kevin Knight, Ben Taskar, Alex Kulesza, Joseph Keshet, David McAllester, Mike Collins, Sasha Rush, and Fei Sha for discussions. This work is supported in part by a Google Faculty Research Award to the first author.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2002, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins, Michael. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of EMNLP.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Incremental parsing with the perceptron algorithm", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2004, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins, Michael and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. 
In Proceedings of ACL.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Ultraconservative online algorithms for multiclass problems", "authors": [ { "first": "Koby", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "951--991", "other_ids": {}, "num": null, "urls": [], "raw_text": "Crammer, Koby and Yoram Singer. 2003. Ultra- conservative online algorithms for multiclass prob- lems. Journal of Machine Learning Research (JMLR), 3:951-991.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Search-based structured prediction", "authors": [ { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "John", "middle": [], "last": "Langford", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daum\u00e9, Hal, John Langford, and Daniel Marcu. 2009. Search-based structured prediction.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning as search optimization: Approximate large margin methods for structured prediction", "authors": [ { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daum\u00e9, Hal and Daniel Marcu. 2005. Learning as search optimization: Approximate large margin methods for structured prediction. In Proceedings of ICML.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Training structural SVMs when exact inference is intractable", "authors": [ { "first": "Thomas", "middle": [], "last": "Finley", "suffix": "" }, { "first": "Thorsten", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Finley, Thomas and Thorsten Joachims. 2008. Training structural SVMs when exact inference is intractable. In Proceedings of ICML.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Forest reranking: Discriminative parsing with non-local features", "authors": [ { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the ACL: HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, Liang. 2008. Forest reranking: Discriminative parsing with non-local features. In Proceedings of the ACL: HLT, Columbus, OH, June.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Dynamic programming for linear-time incremental parsing", "authors": [ { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Sagae", "suffix": "" } ], "year": 2010, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, Liang and Kenji Sagae. 2010. Dynamic pro- gramming for linear-time incremental parsing. 
In Proceedings of ACL 2010.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Cutting-plane training of structural SVMs", "authors": [ { "first": "T", "middle": [], "last": "Joachims", "suffix": "" }, { "first": "T", "middle": [], "last": "Finley", "suffix": "" }, { "first": "Chun-Nam", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2009, "venue": "Machine Learning", "volume": "77", "issue": "", "pages": "27--59", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joachims, T., T. Finley, and Chun-Nam Yu. 2009. Cutting-plane training of structural SVMs. Machine Learning, 77(1):27-59.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Structured learning with approximate inference", "authors": [ { "first": "Alex", "middle": [], "last": "Kulesza", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2007, "venue": "NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kulesza, Alex and Fernando Pereira. 2007. Structured learning with approximate inference. In NIPS.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proceedings of ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lafferty, John, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "An end-to-end discriminative approach to machine translation", "authors": [ { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Bouchard-C\u00f4t\u00e9", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" } ], "year": 2006, "venue": "Proceedings of COLING-ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang, Percy, Alexandre Bouchard-C\u00f4t\u00e9, Dan Klein, and Ben Taskar. 2006. An end-to-end discriminative approach to machine translation. In Proceedings of COLING-ACL, Sydney, Australia, July.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Building a large annotated corpus of English: the Penn Treebank", "authors": [ { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcus, Mitchell P., Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: the Penn Treebank.
Computational Linguistics, 19:313-330.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Online learning of approximate dependency parsing algorithms", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2006, "venue": "Proceedings of EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "McDonald, Ryan and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of EACL.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "(Online) subgradient methods for structured prediction", "authors": [ { "first": "Nathan", "middle": [], "last": "Ratliff", "suffix": "" }, { "first": "J", "middle": [ "Andrew" ], "last": "Bagnell", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Zinkevich", "suffix": "" } ], "year": 2007, "venue": "Proceedings of AIStats", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ratliff, Nathan, J. Andrew Bagnell, and Martin Zinkevich. 2007. (Online) subgradient methods for structured prediction. In Proceedings of AIStats.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A maximum entropy model for part-of-speech tagging", "authors": [ { "first": "Adwait", "middle": [], "last": "Ratnaparkhi", "suffix": "" } ], "year": 1996, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ratnaparkhi, Adwait. 1996. A maximum entropy model for part-of-speech tagging. In Proceedings of EMNLP.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Guided learning for bidirectional sequence classification", "authors": [ { "first": "Libin", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Giorgio", "middle": [], "last": "Satta", "suffix": "" }, { "first": "Aravind", "middle": [], "last": "Joshi", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shen, Libin, Giorgio Satta, and Aravind Joshi. 2007. Guided learning for bidirectional sequence classification. In Proceedings of ACL.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Max-margin Markov networks", "authors": [ { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Guestrin", "suffix": "" }, { "first": "Daphne", "middle": [], "last": "Koller", "suffix": "" } ], "year": 2003, "venue": "Proceedings of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taskar, Ben, Carlos Guestrin, and Daphne Koller. 2003. Max-margin Markov networks. In Proceedings of NIPS. MIT Press.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Large margin methods for structured and interdependent output variables", "authors": [ { "first": "I", "middle": [], "last": "Tsochantaridis", "suffix": "" }, { "first": "T", "middle": [], "last": "Joachims", "suffix": "" }, { "first": "T", "middle": [], "last": "Hofmann", "suffix": "" }, { "first": "Y", "middle": [], "last": "Altun", "suffix": "" } ], "year": 2005, "venue": "Journal of Machine Learning Research", "volume": "6", "issue": "", "pages": "1453--1484", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsochantaridis, I., T. Joachims, T. Hofmann, and Y. Altun. 2005.
Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6:1453-1484.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Learning linear ranking functions for beam search with application to planning", "authors": [ { "first": "Yuehua", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Fern", "suffix": "" }, { "first": "Sungwook", "middle": [], "last": "Yoon", "suffix": "" } ], "year": 2009, "venue": "Journal of Machine Learning Research (JMLR)", "volume": "10", "issue": "", "pages": "1349--1388", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu, Yuehua, Alan Fern, and Sungwook Yoon. 2009. Learning linear ranking functions for beam search with application to planning. Journal of Machine Learning Research (JMLR), 10:1349-1388.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A tale of two parsers: investigating and combining graph-based and transition-based dependency parsing using beam-search", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2008, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, Yue and Stephen Clark. 2008. A tale of two parsers: investigating and combining graph-based and transition-based dependency parsing using beam-search. In Proceedings of EMNLP.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "Example that standard perceptron does not converge with greedy search on a separable scenario (e.g. u = (1, 2) can separate D with exact search).", "type_str": "figure" }, "FIGREF1": { "num": null, "uris": null, "text": "For a separable training scenario S = \u27e8D, \u03a6, Cs(D)\u27e9, if the argmax in Algorithm 1 is not exact, the perceptron might not converge. Proof. See the example in Figure 1.", "type_str": "figure" }, "FIGREF2": { "num": null, "uris": null, "text": "Each triple (x, y[1:i], z) returned at line 6 in Algorithm 6 is a greedy violation. Proof. Let y\u2032 = y[1:i]. Clearly at line 6, |y\u2032| = i = |z| and y\u2032", "type_str": "figure" }, "FIGREF3": { "num": null, "uris": null, "text": "Each update (lines 6 or 7 in Algorithm 8) involves a beam violation. Proof. Case 1: early update (line 6): Let z\u2032 = Bi[0] and y\u2032 = y[1:i]. Case 2: full update (line 8): Let z\u2032 = B|x|[0] and y\u2032 = y. In both cases we have z\u2032 \u2260 y\u2032 and |z\u2032", "type_str": "figure" }, "FIGREF4": { "num": null, "uris": null, "text": "POS tagging using beam search with various update methods (hybrid/latest similar to early; omitted).", "type_str": "figure" }, "TABREF0": { "type_str": "table", "num": null, "content": "
2:  z \u2190 \u01eb                           \u22b2 empty sequence
3:  for i \u2208 1 . . . |x| do
4:      z \u2190 BEST(x, z, w)
5:      if zi \u2260 yi then                  \u22b2 first wrong action
6:          return (x, y[1:i], z)            \u22b2 return for early update
7:
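To make the pseudocode above concrete, here is a minimal Python sketch of the same violation finder (an illustrative rendering, not the authors' code); best_action(x, z, w) is a hypothetical user-supplied routine that returns the highest-scoring one-step extension of the partial output z under weights w:

# Sketch of FINDVIOLATION: greedy search with early update, viewed as a
# local violation-fixing perceptron. `best_action` is an assumed helper.
def find_violation(x, y, w, best_action):
    z = []                            # line 2: z <- empty sequence
    for i in range(len(x)):           # line 3: for i in 1..|x|
        z = best_action(x, z, w)      # line 4: extend z by the 1-best action
        if z[i] != y[i]:              # line 5: first wrong action
            return (x, y[:i + 1], z)  # line 6: triple for early update
    return None                       # greedy output matches y: no update

When this function returns a triple, the outer perceptron loop updates on the gold prefix y[1:i] against the greedy output z; when it returns None, the example is already correct and no update is made.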
", "html": null, "text": "Alternative presentation of Alg. 5 as a Local Violation-Fixing Perceptron (Alg. 2). : functionFINDVIOLATION(x, y, w)" }, "TABREF1": { "type_str": "table", "num": null, "content": "", "html": null, "text": "unique) extensions from the beam 1: function BEAMSEARCH(x, w, k) \u22b2 k is beam width" }, "TABREF4": { "type_str": "table", "num": null, "content": "
(diagram: the correct sequence is traced against the best and worst items in the beam; it falls off the beam at the early-update point; the max-violation update fires where the violation is biggest; the latest update is the last valid one; the full (standard) update is invalid)
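To make the picture concrete, the following Python sketch picks the max-violation update point, i.e. the step at which the best item in the beam beats the gold prefix by the largest margin. The names here are assumptions, not the paper's code: beams[i-1] is taken to be the sorted beam after step i, and score(w, x, z) a linear-model scoring helper.

# Sketch of selecting the max-violation update point from a recorded
# beam-search trajectory; `beams` and `score` are assumed names.
def max_violation(x, y, beams, w, score):
    best_step, best_gap = None, 0.0
    for i, beam in enumerate(beams, start=1):
        z = beam[0]                                # best in the beam at step i
        gap = score(w, x, z) - score(w, x, y[:i])  # size of the violation
        if gap >= best_gap:                        # non-negative => valid update
            best_step, best_gap = i, gap
    return best_step, best_gap                     # (None, 0.0) => no violation

Early update would instead stop at the first step where the gold prefix leaves the beam; both are valid updates in the sense of the diagram, whereas the standard full update may not be.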
", "html": null, "text": "Similarly, we say that a dataset D is beam separable in representation \u03a6 if and only if" }, "TABREF5": { "type_str": "table", "num": null, "content": "
            b = 1         b = 2         b = 7
method      it   dev      it   dev      it   dev
standard    12   96.27    6    97.07    4    97.17
early       13   96.97    6    97.15    7    97.19
max-viol.   7
", "html": null, "text": "" }, "TABREF7": { "type_str": "table", "num": null, "content": "", "html": null, "text": "Final test results on POS tagging." }, "TABREF9": { "type_str": "table", "num": null, "content": "
(plot: parsing accuracy on held-out data, 91 to 92.25, against training time in hours, 0 to 18, for max-violation vs. early update)
", "html": null, "text": "Final results on incremental parsing. *: baseline." } } } }