{ "paper_id": "P11-1048", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:45:56.553207Z" }, "title": "A Comparison of Loopy Belief Propagation and Dual Decomposition for Integrated CCG Supertagging and Parsing", "authors": [ { "first": "Michael", "middle": [], "last": "Auli", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": {} }, "email": "m.auli@sms.ed.ac.uk" }, { "first": "Adam", "middle": [], "last": "Lopez", "suffix": "", "affiliation": { "laboratory": "", "institution": "HLTCOE Johns Hopkins University", "location": {} }, "email": "alopez@cs.jhu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Via an oracle experiment, we show that the upper bound on accuracy of a CCG parser is significantly lowered when its search space is pruned using a supertagger, though the supertagger also prunes many bad parses. Inspired by this analysis, we design a single model with both supertagging and parsing features, rather than separating them into distinct models chained together in a pipeline. To overcome the resulting increase in complexity, we experiment with both belief propagation and dual decomposition approaches to inference, the first empirical comparison of these algorithms that we are aware of on a structured natural language processing problem. On CCGbank we achieve a labelled dependency F-measure of 88.8% on gold POS tags, and 86.7% on automatic part-of-speeoch tags, the best reported results for this task.", "pdf_parse": { "paper_id": "P11-1048", "_pdf_hash": "", "abstract": [ { "text": "Via an oracle experiment, we show that the upper bound on accuracy of a CCG parser is significantly lowered when its search space is pruned using a supertagger, though the supertagger also prunes many bad parses. Inspired by this analysis, we design a single model with both supertagging and parsing features, rather than separating them into distinct models chained together in a pipeline. To overcome the resulting increase in complexity, we experiment with both belief propagation and dual decomposition approaches to inference, the first empirical comparison of these algorithms that we are aware of on a structured natural language processing problem. On CCGbank we achieve a labelled dependency F-measure of 88.8% on gold POS tags, and 86.7% on automatic part-of-speeoch tags, the best reported results for this task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Accurate and efficient parsing of Combinatorial Categorial Grammar (CCG; Steedman, 2000) is a longstanding problem in computational linguistics, due to the complexities associated its mild context sensitivity. Even for practical CCG that are strongly context-free (Fowler and Penn, 2010) , parsing is much harder than with Penn Treebank-style contextfree grammars, with vast numbers of nonterminal categories leading to increased grammar constants. Where a typical Penn Treebank grammar may have fewer than 100 nonterminals (Hockenmaier and Steedman, 2002) , we found that a CCG grammar derived from CCGbank contained over 1500. 
The same grammar assigns an average of 22 lexical categories per word (Clark and Curran, 2004a) , resulting in an enormous space of possible derivations.", "cite_spans": [ { "start": 73, "end": 88, "text": "Steedman, 2000)", "ref_id": "BIBREF34" }, { "start": 264, "end": 287, "text": "(Fowler and Penn, 2010)", "ref_id": "BIBREF15" }, { "start": 524, "end": 556, "text": "(Hockenmaier and Steedman, 2002)", "ref_id": "BIBREF16" }, { "start": 699, "end": 724, "text": "(Clark and Curran, 2004a)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The most successful approach to CCG parsing is based on a pipeline strategy ( \u00a72). First, we tag (or multitag) each word of the sentence with a lexical category using a supertagger, a sequence model over these categories (Bangalore and Joshi, 1999; Clark, 2002) . Second, we parse the sentence under the requirement that the lexical categories are fixed to those preferred by the supertagger. Variations on this approach drive the widely-used, broad coverage C&C parser (Clark and Curran, 2004a; Clark and Curran, 2007; Kummerfeld et al., 2010) . However, it fails when the supertagger makes errors. We show experimentally that this pipeline significantly lowers the upper bound on parsing accuracy ( \u00a73).", "cite_spans": [ { "start": 221, "end": 248, "text": "(Bangalore and Joshi, 1999;", "ref_id": "BIBREF0" }, { "start": 249, "end": 261, "text": "Clark, 2002)", "ref_id": "BIBREF6" }, { "start": 470, "end": 495, "text": "(Clark and Curran, 2004a;", "ref_id": "BIBREF3" }, { "start": 496, "end": 519, "text": "Clark and Curran, 2007;", "ref_id": "BIBREF5" }, { "start": 520, "end": 544, "text": "Kummerfeld et al., 2010)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The same experiment shows that the supertagger prunes many bad parses. So, while we want to avoid the error propagation inherent to a pipeline, ideally we still want to benefit from the key insight of supertagging: that a sequence model over lexical categories can be quite accurate. Our solution is to combine the features of both the supertagger and the parser into a single, less aggressively pruned model. The challenge with this model is its prohibitive complexity, which we address with approximate methods: dual decomposition and belief propagation ( \u00a74). We present the first side-by-side comparison of these algorithms on an NLP task of this complexity, measuring accuracy, convergence behavior, and runtime. In both cases our model significantly outperforms the pipeline approach, leading to the best published results in CCG parsing ( \u00a75).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "CCG is a lexicalized grammar formalism encoding for each word lexical categories that are either basic (eg. NN, JJ) or complex. Complex lexical categories specify the number and directionality of arguments. For example, one lexical category for the verb like is (S\\N P )/N P , specifying the first argument as an NP to the right and the second as an NP to the left; there are over 100 lexical categories for like in our lexicon. In parsing, adjacent spans are combined using a small number of binary combinatory rules like forward application or composition (Steedman, 2000; Fowler and Penn, 2010) . 
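To make these rules concrete before the derivations below, here is a toy sketch (ours, not the C&C parser's actual data structures) of complex categories and the two application combinators:

```python
# Toy illustration of CCG categories and the two application combinators.
# This is expository only, not the C&C parser's implementation.

class Cat:
    """A CCG category: atomic (e.g. NP, S) or complex (result, slash, argument)."""
    def __init__(self, atom=None, result=None, slash=None, arg=None):
        self.atom, self.result, self.slash, self.arg = atom, result, slash, arg

    def __repr__(self):
        return self.atom if self.atom else f"({self.result}{self.slash}{self.arg})"

    def __eq__(self, other):
        return isinstance(other, Cat) and repr(self) == repr(other)

NP, S = Cat(atom="NP"), Cat(atom="S")

def forward_apply(left, right):
    """X/Y  Y  =>  X   (the > rule)"""
    if left.slash == "/" and left.arg == right:
        return left.result
    return None

def backward_apply(left, right):
    """Y  X\\Y  =>  X   (the < rule)"""
    if right.slash == "\\" and right.arg == left:
        return right.result
    return None

# 'like' := (S\NP)/NP: it first consumes an NP to its right, then an NP to its left.
like = Cat(result=Cat(result=S, slash="\\", arg=NP), slash="/", arg=NP)
vp = forward_apply(like, NP)       # S\NP
print(vp, backward_apply(NP, vp))  # (S\NP) S
```
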
In the first derivation below, (S\\N P )/N P and N P combine to form the spanning category S\\N P , which only requires an NP to its left to form a complete sentence-spanning S. The second derivation uses type-raising to change the category type of I.", "cite_spans": [ { "start": 558, "end": 574, "text": "(Steedman, 2000;", "ref_id": "BIBREF34" }, { "start": 575, "end": 597, "text": "Fowler and Penn, 2010)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "CCG and Supertagging", "sec_num": "2" }, { "text": "I like tea NP (S \\NP)/NP NP > S \\NP < S I like tea NP (S \\NP)/NP NP >T S /(S \\NP) >B S /NP > S", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG and Supertagging", "sec_num": "2" }, { "text": "As can be inferred from even this small example, a key difficulty in parsing CCG is that the number of categories quickly becomes extremely large, and there are typically many ways to analyze every span of a sentence. Supertagging (Bangalore and Joshi, 1999; Clark, 2002) treats the assignment of lexical categories (or supertags) as a sequence tagging problem. Because they do this with high accuracy, they are often exploited to prune the parser's search space: the parser only considers lexical categories with high posterior probability (or other figure of merit) under the supertagging model (Clark and Curran, 2004a) . The posterior probabilities are then discarded; it is the extensive pruning of lexical categories that leads to substantially faster parsing times.", "cite_spans": [ { "start": 231, "end": 258, "text": "(Bangalore and Joshi, 1999;", "ref_id": "BIBREF0" }, { "start": 259, "end": 271, "text": "Clark, 2002)", "ref_id": "BIBREF6" }, { "start": 597, "end": 622, "text": "(Clark and Curran, 2004a)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "CCG and Supertagging", "sec_num": "2" }, { "text": "Pruning the categories in advance this way has a specific failure mode: sometimes it is not possible to produce a sentence-spanning derivation from the tag sequences preferred by the supertagger, since it does not enforce grammaticality. A workaround for this problem is the adaptive supertagging (AST) approach of Clark and Curran (2004a) . It is based on a step function over supertagger beam widths, relaxing the pruning threshold for lexical categories only if the parser fails to find an analysis. The process either succeeds and returns a parse after some iteration or gives up after a predefined number of iterations. As Clark and Curran (2004a) show, most sentences can be parsed with a very small number of supertags per word. However, the technique is inherently approximate: it will return a lower probability parse under the parsing model if a higher probability parse can only be constructed from a supertag sequence returned by a subsequent iteration. In this way it prioritizes speed over exactness, although the tradeoff can be modified by adjusting the beam step function. Regardless, the effect of the approximation is unbounded.", "cite_spans": [ { "start": 315, "end": 339, "text": "Clark and Curran (2004a)", "ref_id": "BIBREF3" }, { "start": 628, "end": 652, "text": "Clark and Curran (2004a)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "CCG and Supertagging", "sec_num": "2" }, { "text": "We will also explore reverse adaptive supertagging, a much less aggressive pruning method that seeks only to make sentences parseable when they otherwise would not be due to an impractically large search space. 
Reverse AST starts with a wide beam, narrowing it at each iteration only if a maximum chart size is exceeded. In this way it prioritizes exactness over speed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG and Supertagging", "sec_num": "2" }, { "text": "What is the effect of these approximations? To answer this question we computed oracle best and worst values for labelled dependency F-score using the algorithm of Huang (2008) on the hybrid model of Clark and Curran (2007) , the best model of their C&C parser. We computed the oracle on our development data, Section 00 of CCGbank (Hockenmaier and Steedman, 2007) , using both the AST and Reverse AST beam settings shown in Table 1 .", "cite_spans": [ { "start": 164, "end": 176, "text": "Huang (2008)", "ref_id": "BIBREF18" }, { "start": 200, "end": 223, "text": "Clark and Curran (2007)", "ref_id": "BIBREF5" }, { "start": 332, "end": 364, "text": "(Hockenmaier and Steedman, 2007)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 422, "end": 429, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Oracle Parsing", "sec_num": "3" }, { "text": "The results (Table 2) show that the oracle best accuracy for reverse AST is more than 3% higher than with the aggressive AST pruning. 1 In fact, it is almost as high as the upper bound oracle accuracy of 97.73% obtained using perfect supertags; in other words, the search space for reverse AST is theoretically near-optimal. 2 We also observe that the oracle worst accuracy is much lower in the reverse setting. It is clear that the supertagger pipeline has two effects: while it beneficially prunes many bad parses, it harmfully prunes some very good parses. We can also see from the scores of the Viterbi parses that while the reverse condition has access to much better parses, the model doesn't actually find them. This mirrors the result of Clark and Curran (2007) that they use to justify AST. Digging deeper, we compared parser model score against Viterbi F-score and oracle F-score at a variety of fixed beam settings (Figure 1 ), considering only the subset of our development set which could be parsed with all beam settings. The inverse relationship between model score and F-score shows that the supertagger restricts the parser to mostly good parses (under F-measure) that the model would otherwise disprefer. Exactly this effect is exploited in the pipeline model. 1 The numbers reported here and in later sections differ slightly from those in a previously circulated draft of this paper, for two reasons: we evaluate only on sentences for which a parse was returned instead of all parses, to enable direct comparison with Clark and Curran (2007) ; and we use their hybrid model instead of their normal-form model, except where noted. Despite these changes our main findings remained unchanged. 2 ... (2004b). The reason that using the gold-standard supertags doesn't result in 100% oracle parsing accuracy is that some of the development set parses cannot be constructed by the learned grammar. 
However, when the supertagger makes a mistake, the parser cannot recover.", "cite_spans": [ { "start": 319, "end": 320, "text": "2", "ref_id": null }, { "start": 612, "end": 635, "text": "Clark and Curran (2007)", "ref_id": "BIBREF5" }, { "start": 1171, "end": 1194, "text": "Clark and Curran (2007)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 12, "end": 21, "text": "(Table 2)", "ref_id": "TABREF2" }, { "start": 1550, "end": 1559, "text": "(Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Oracle Parsing", "sec_num": "3" }, { "text": "The supertagger obviously has good but not perfect predictive features. An obvious way to exploit this without being bound by its decisions is to incorporate these features directly into the parsing model. In our case both the parser and the supertagger are feature-based models, so from the perspective of a single parse tree, the change is simple: the tree is simply scored by the weights corresponding to all of its active features. However, since the features of the supertagger are all Markov features on adjacent supertags, the change has serious implications for search. If we think of the supertagger as defining a weighted regular language consisting of all supertag sequences, and the parser as defining a weighted mildly context-sensitive language consisting of only a subset of these sequences, then the search problem is equivalent to finding the optimal derivation in the weighted intersection of a regular and mildly context-sensitive language. Even allowing for the observation of Fowler and Penn (2010) that our practical CCG is context-free, this problem still reduces to the construction of Bar-Hillel et al. (1964) , making search very expensive. Therefore we need approximations.", "cite_spans": [ { "start": 1110, "end": 1134, "text": "Bar-Hillel et al. (1964)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Integrated Supertagging and Parsing", "sec_num": "4" }, { "text": "Fortunately, recent literature has introduced two relevant approximations to the NLP community: loopy belief propagation (Pearl, 1988) , applied to dependency parsing by Smith and Eisner (2008) ; and dual decomposition (Dantzig and Wolfe, 1960; Komodakis et al., 2007; Sontag et al., 2010, inter alia) , applied to dependency parsing by Koo et al. (2010) and lexicalized CFG parsing by . We apply both techniques to our integrated supertagging and parsing model.", "cite_spans": [ { "start": 121, "end": 134, "text": "(Pearl, 1988)", "ref_id": "BIBREF27" }, { "start": 170, "end": 193, "text": "Smith and Eisner (2008)", "ref_id": "BIBREF31" }, { "start": 219, "end": 244, "text": "(Dantzig and Wolfe, 1960;", "ref_id": "BIBREF7" }, { "start": 245, "end": 268, "text": "Komodakis et al., 2007;", "ref_id": "BIBREF20" }, { "start": 269, "end": 301, "text": "Sontag et al., 2010, inter alia)", "ref_id": null }, { "start": 337, "end": 354, "text": "Koo et al. (2010)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Integrated Supertagging and Parsing", "sec_num": "4" }, { "text": "Belief propagation (BP) is an algorithm for computing marginals (i.e. expectations) on structured models. These marginals can be used for decoding (parsing) in a minimum-risk framework (Smith and Eisner, 2008) ; or for training using a variety of algorithms (Sutton and McCallum, 2010) . We experiment with both uses in \u00a75. 
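As a rough illustration of the decoding use, assuming per-item posterior marginals have already been computed by inference of the kind described below, a minimum-risk parse can be read off with a Viterbi-style search over marginal scores; the names and data structures here are ours, not those of Smith and Eisner (2008):

```python
# Sketch of minimum-risk decoding from chart-item marginals: pick the derivation
# whose items have the greatest total posterior probability.  `marginal[(i, j, cat)]`
# and the binary `rules` (parent, left, right) are assumed to come from inference.

def min_risk_parse(n, categories, rules, marginal):
    best, back = {}, {}
    for i in range(n):                                   # width-1 spans (supertags)
        for c in categories:
            best[i, i + 1, c] = marginal.get((i, i + 1, c), 0.0)
    for width in range(2, n + 1):                        # CKY over marginal scores
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):
                for parent, left, right in rules:
                    if (i, k, left) in best and (k, j, right) in best:
                        score = (marginal.get((i, j, parent), 0.0)
                                 + best[i, k, left] + best[k, j, right])
                        if score > best.get((i, j, parent), float("-inf")):
                            best[i, j, parent] = score
                            back[i, j, parent] = (k, left, right)
    return best, back       # the tree is recovered from `back`, starting at (0, n, S)
```
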
Many researchers in NLP are familiar with two special cases of belief propagation: the forward-backward and inside-outside algorithms, used for computing expectations in sequence models and context-free grammars, respectively. 3 Our use of belief propagation builds directly on these two familiar algorithms.", "cite_spans": [ { "start": 551, "end": 552, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Loopy Belief Propagation", "sec_num": "4.1" }, { "text": "Figure 2: Supertagging factor graph with messages. Circles are variables and filled squares are factors.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Loopy Belief Propagation", "sec_num": "4.1" }, { "text": "BP is usually understood as an algorithm on bipartite factor graphs, which structure a global function into local functions over subsets of variables (Kschischang et al., 1998) . Variables maintain a belief (expectation) over a distribution of values and BP passes messages about these beliefs between variables and factors. The idea is to iteratively update each variable's beliefs based on the beliefs of neighboring variables (through a shared factor), using the sum-product rule.", "cite_spans": [ { "start": 150, "end": 176, "text": "(Kschischang et al., 1998)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Loopy Belief Propagation", "sec_num": "4.1" }, { "text": "This results in the following equation for a message m x\u2192f (x) from a variable x to a factor f:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loopy Belief Propagation", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "m_{x \\to f}(x) = \\prod_{h \\in n(x) \\setminus f} m_{h \\to x}(x)", "eq_num": "(1)" } ], "section": "Loopy Belief Propagation", "sec_num": "4.1" }, { "text": "where n(x) is the set of all neighbours of x. The message m f \u2192x from a factor to a variable is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loopy Belief Propagation", "sec_num": "4.1" }, { "text": "m_{f \\to x}(x) = \\sum_{\\sim \\{x\\}} f(X) \\prod_{y \\in n(f) \\setminus x} m_{y \\to f}(y) \\quad (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loopy Belief Propagation", "sec_num": "4.1" }, { "text": "where \u223c {x} represents all variables other than x, and X = n(f ) is the set of arguments of the factor function f . Making this concrete, our supertagger defines a distribution over tags T 0 ...T I , based on emission factors e 0 ...e I and transition factors t 1 ...t I (Figure 2) . 
The message f i that a variable T i receives from its neighbor to the left corresponds to the forward probability, while the message it receives from the right corresponds to the backward probability b i .", "cite_spans": [], "ref_spans": [ { "start": 277, "end": 287, "text": "(Figure 2)", "ref_id": null } ], "eq_spans": [], "section": "Loopy Belief Propagation", "sec_num": "4.1" }, { "text": "f_i(T_i) = \\sum_{T_{i-1}} f_{i-1}(T_{i-1})\\, e_{i-1}(T_{i-1})\\, t_i(T_{i-1}, T_i) \\quad (3) \\qquad b_i(T_i) = \\sum_{T_{i+1}} b_{i+1}(T_{i+1})\\, e_{i+1}(T_{i+1})\\, t_{i+1}(T_i, T_{i+1}) \\quad (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loopy Belief Propagation", "sec_num": "4.1" }, { "text": "Figure 3: Factor graph for the combined parsing and supertagging model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loopy Belief Propagation", "sec_num": "4.1" }, { "text": "The current belief B x (x) for variable x can be computed by taking the normalized product of all its incoming messages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loopy Belief Propagation", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "B_x(x) = \\frac{1}{Z} \\prod_{h \\in n(x)} m_{h \\to x}(x)", "eq_num": "(5)" } ], "section": "Loopy Belief Propagation", "sec_num": "4.1" }, { "text": "In the supertagger model, this is just:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loopy Belief Propagation", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(T_i) = \\frac{1}{Z} f_i(T_i)\\, b_i(T_i)\\, e_i(T_i)", "eq_num": "(6)" } ], "section": "Loopy Belief Propagation", "sec_num": "4.1" }, { "text": "Our parsing model is also a distribution over variables T i , along with an additional quadratic number of span(i, j) variables. Though difficult to represent pictorially, a distribution over parses is captured by an extension to graphical models called case-factor diagrams (McAllester et al., 2008) . We add this complex distribution to our model as a single factor ( Figure 3 ). This is a natural extension of the use of complex factors described by Smith and Eisner (2008) and Dreyer and Eisner (2009) .", "cite_spans": [ { "start": 275, "end": 300, "text": "(McAllester et al., 2008)", "ref_id": "BIBREF25" }, { "start": 453, "end": 476, "text": "Smith and Eisner (2008)", "ref_id": "BIBREF31" }, { "start": 481, "end": 505, "text": "Dreyer and Eisner (2009)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 370, "end": 378, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Loopy Belief Propagation", "sec_num": "4.1" }, { "text": "When a factor graph is a tree as in Figure 2 , BP converges in a single iteration to the exact marginals. However, when the model contains cycles, as in Figure 3, we can iterate message passing. 
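Anticipating the message updates spelled out below, the overall loopy schedule can be sketched as follows; forward_backward and inside_outside are assumed black boxes that exchange per-word messages over supertags, and all names are illustrative:

```python
# Illustrative schedule for loopy BP on the combined model: the sequence model
# (forward-backward) and the TREE factor (inside-outside) repeatedly exchange
# per-word messages over supertags until the beliefs stop changing.

def normalise(d):
    z = sum(d.values()) or 1.0
    return {tag: v / z for tag, v in d.items()}

def loopy_bp(sentence, forward_backward, inside_outside, max_iters=1000, tol=1e-6):
    n = len(sentence)
    tree_to_tag = [{} for _ in range(n)]   # outside messages o_i, initially uninformative
    prev = None
    for _ in range(max_iters):
        # sequence model pass, with the tree messages as extra per-tag potentials
        seq_to_tag = forward_backward(sentence, tree_to_tag)   # carries f_i, b_i, e_i
        # parsing pass, with the sequence messages replacing the usual base case
        tree_to_tag = inside_outside(sentence, seq_to_tag)     # returns o_i
        # beliefs: normalised product of all incoming messages at each tag variable
        beliefs = [normalise({t: seq_to_tag[i][t] * tree_to_tag[i].get(t, 1.0)
                              for t in seq_to_tag[i]})
                   for i in range(n)]
        if prev is not None and all(abs(beliefs[i][t] - prev[i].get(t, 0.0)) < tol
                                    for i in range(n) for t in beliefs[i]):
            return beliefs                 # marginals have (approximately) converged
        prev = beliefs
    return prev                            # stopped early; marginals are approximate
```
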
Under certain assumptions this loopy BP will converge to approximate marginals that are bounded under an interpretation from statistical physics (Yedidia et al., 2001; Sutton and McCallum, 2010) .", "cite_spans": [ { "start": 343, "end": 365, "text": "(Yedidia et al., 2001;", "ref_id": "BIBREF37" }, { "start": 366, "end": 392, "text": "Sutton and McCallum, 2010)", "ref_id": "BIBREF36" } ], "ref_spans": [ { "start": 36, "end": 44, "text": "Figure 2", "ref_id": null }, { "start": 153, "end": 159, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Loopy Belief Propagation", "sec_num": "4.1" }, { "text": "The TREE factor exchanges inside n i and outside o i messages with the tag and span variables, taking into account beliefs from the sequence model. We will omit the unchanged outside recursion for brevity, but inside messages n(C i,j ) for category C i,j in span(i, j) are computed using rule probabilities r as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loopy Belief Propagation", "sec_num": "4.1" }, { "text": "n(C_{i,j}) = \\begin{cases} f_i(C_{i,j})\\, b_i(C_{i,j})\\, e_i(C_{i,j}) & \\text{if } j = i+1 \\\\ \\sum_{k,X,Y} n(X_{i,k})\\, n(Y_{k,j})\\, r(C_{i,j}, X_{i,k}, Y_{k,j}) & \\text{otherwise} \\end{cases} \\quad (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loopy Belief Propagation", "sec_num": "4.1" }, { "text": "Note that the only difference from the classic inside algorithm is that the recursive base case of a category spanning a single word has been replaced by a message from the supertag that contains both forward and backward factors, along with a unary emission factor, which doubles as a unary rule factor and thus contains the only shared features of the original models. This difference is also mirrored in the forward and backward messages, which are identical to Equations 3 and 4, except that they also incorporate outside messages from the tree factor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loopy Belief Propagation", "sec_num": "4.1" }, { "text": "Once all forward-backward and inside-outside probabilities have been calculated, the belief of supertag T i can be computed as the product of all incoming messages. The only difference from Equation 6 is the addition of the outside message.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loopy Belief Propagation", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(T_i) = \\frac{1}{Z} f_i(T_i)\\, b_i(T_i)\\, e_i(T_i)\\, o_i(T_i)", "eq_num": "(8)" } ], "section": "Loopy Belief Propagation", "sec_num": "4.1" }, { "text": "The algorithm repeatedly runs forward-backward and inside-outside, passing their messages back and forth, until these quantities converge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loopy Belief Propagation", "sec_num": "4.1" }, { "text": "Dual decomposition (Koo et al., 2010) is a decoding (i.e. search) algorithm for problems that can be decomposed into exactly solvable subproblems: in our case, supertagging and parsing. 
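Before the formal statement, the procedure can be sketched as follows, assuming each submodel exposes a Viterbi decoder that accepts per-(word, supertag) score adjustments; the decoders and the step-size schedule are illustrative assumptions, not our actual implementation:

```python
# Illustrative dual decomposition loop for combined supertagging and parsing.
# `parse_best(sentence, u)` returns (items, score): the (word, tag) assignments of
# the best parse under f(y) minus the penalties u, together with its model score.
# `supertag_best(sentence, u)` returns the assignments of the best tag sequence
# under g(z) plus the penalties u.

from collections import defaultdict

def dual_decompose(sentence, parse_best, supertag_best, max_iters=1000):
    u = defaultdict(float)                    # Lagrange multipliers u(i, t)
    best_items, best_score = None, float("-inf")
    for k in range(1, max_iters + 1):
        y_items, y_score = parse_best(sentence, u)
        z_items = supertag_best(sentence, u)
        if y_score > best_score:              # remember the best parser output so far
            best_items, best_score = y_items, y_score
        if y_items == z_items:                # submodels agree on every supertag
            return y_items                    # exact solution of the combined problem
        step = 1.0 / k                        # a simple diminishing step size
        for i, t in y_items - z_items:        # subgradient update: push the models
            u[i, t] += step                   #   towards agreement on each (i, t)
        for i, t in z_items - y_items:
            u[i, t] -= step
    return best_items                         # no agreement: return an approximation
```
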
Formally, given Y as the set of valid parses, Z as the set of valid supertag sequences, and T as the set of supertags, we want to solve the following optimization for parser f (y) and supertagger g(z).", "cite_spans": [ { "start": 19, "end": 36, "text": "Koo et al., 2010)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Dual Decomposition", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\arg\\max_{y \\in Y,\\, z \\in Z} \\; f(y) + g(z)", "eq_num": "(9)" } ], "section": "arg max", "sec_num": null }, { "text": "such that y(i, t) = z(i, t) for all (i, t) \u2208 I (10)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "arg max", "sec_num": null }, { "text": "Here y(i, t) is a binary function indicating whether word i is assigned supertag t by the parser, for the set I = {(i, t) : i \u2208 1 . . . n, t \u2208 T } denoting the set of permitted supertags for each word; similarly z(i, t) for the supertagger. To enforce the constraint that the parser and supertagger agree on a tag sequence we introduce Lagrangian multipliers u = {u(i, t) : (i, t) \u2208 I} and construct a dual objective over variables u(i, t).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "arg max", "sec_num": null }, { "text": "L(u) = \\max_{y \\in Y} \\Big( f(y) - \\sum_{i,t} u(i, t)\\, y(i, t) \\Big) + \\max_{z \\in Z} \\Big( g(z) + \\sum_{i,t} u(i, t)\\, z(i, t) \\Big) \\quad (11)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "arg max", "sec_num": null }, { "text": "This objective is an upper bound that we want to make as tight as possible by solving for min u L(u).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "arg max", "sec_num": null }, { "text": "We optimize the values of the u(i, t) variables using the same algorithm used in prior work for combined tagging and parsing (essentially a perceptron update). 4 An advantage of DD is that, on convergence, it recovers exact solutions to the combined problem. However, if it does not converge or we stop early, an approximation must be returned: following prior work, we used the highest scoring output of the parsing submodel over all iterations.", "cite_spans": [ { "start": 149, "end": 150, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "arg max", "sec_num": null }, { "text": "Parser. We use the C&C parser (Clark and Curran, 2007) and its supertagger (Clark, 2002) . Our baseline is the hybrid model of Clark and Curran (2007) ; our integrated model simply adds the supertagger features to this model. The parser relies solely on the supertagger for pruning, using CKY for search over the pruned space. Training requires repeated calculation of feature expectations over packed charts of derivations. For training, we limited the number of items in this chart to 0.3 million, and for testing, 1 million. We also used a more permissive training supertagger beam (Table 3 ) than in previous work (Clark and Curran, 2007) . Models were trained with the parser's L-BFGS trainer. Evaluation. We evaluated on CCGbank (Hockenmaier and Steedman, 2007) , a right-most normal-form CCG version of the Penn Treebank. We use sections 02-21 (39603 sentences) for training, section 00 (1913 sentences) for development and section 23 (2407 sentences) for testing. We supply gold-standard part-of-speech tags to the parsers. 
Evaluation is based on labelled and unlabelled predicate argument structure recovery and supertag accuracy. We only evaluate on sentences for which an analysis was returned; the coverage for all parsers is 99.22% on section 00, and 99.63% on section 23. Model combination. We combine the parser and the supertagger over the search space defined by the set of supertags within the supertagger beam (see Table 1); this avoids having to perform inference over the prohibitively large set of parses spanned by all supertags. Hence at each beam setting, the model operates over the same search space as the baseline; the difference is that we search with our integrated model.", "cite_spans": [ { "start": 30, "end": 54, "text": "(Clark and Curran, 2007)", "ref_id": "BIBREF5" }, { "start": 75, "end": 88, "text": "(Clark, 2002)", "ref_id": "BIBREF6" }, { "start": 127, "end": 150, "text": "Clark and Curran (2007)", "ref_id": "BIBREF5" }, { "start": 618, "end": 642, "text": "(Clark and Curran, 2007)", "ref_id": "BIBREF5" }, { "start": 752, "end": 767, "text": "Steedman, 2007)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 585, "end": 593, "text": "(Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "We first experiment with the separately trained supertagger and parser, which are then combined using belief propagation (BP) and dual decomposition (DD). We run the algorithms for many iterations, and irrespective of convergence, for BP we compute the minimum risk parse from the current marginals, and for DD we choose the highest-scoring parse seen over all iterations. We measured the evolving accuracy of the models on the development set (Figure 4) . In line with our oracle experiment, these results demonstrate that we can coax more accurate parses from the larger search space provided by the reverse setting; the influence of the supertagger features allows us to exploit this advantage. One behavior we observe in the graph is that the DD results tend to incrementally improve in accuracy while the BP results quickly stabilize, mirroring the result of Smith and Eisner (2008) . This occurs because DD continues to find higher scoring parses at each iteration, and hence the results change. However for BP, even if the marginals have not converged, the minimum risk solution turns out to be fairly stable across successive iterations.", "cite_spans": [ { "start": 863, "end": 886, "text": "Smith and Eisner (2008)", "ref_id": "BIBREF31" } ], "ref_spans": [ { "start": 444, "end": 454, "text": "(Figure 4)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Parsing Accuracy", "sec_num": "5.1" }, { "text": "We next compare the algorithms against the baseline on our test set (Table 4) . Table 4 : Results for individually-trained submodels combined using dual decomposition (DD) or belief propagation (BP) for k iterations, evaluated by labelled and unlabelled F-score (LF/UF) and supertag accuracy (ST). We compare against the previous best result of Clark and Curran (2007) ; our baseline is their model with wider training beams (cf. Table 3 ). We find that the early stability of BP's performance generalises to the test set, as does DD's improvement over several iterations. More importantly, we find that applying our combined model using either algorithm consistently outperforms the baseline after only a few iterations. Overall, we improve the labelled F-measure by almost 1.1% and unlabelled F-measure by 0.6% over the baseline. 
To the best of our knowledge, the results obtained with BP and DD are the best reported results on this task using gold POS tags.", "cite_spans": [ { "start": 520, "end": 543, "text": "Clark and Curran (2007)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 68, "end": 77, "text": "(Table 4)", "ref_id": null }, { "start": 255, "end": 262, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Parsing Accuracy", "sec_num": "5.1" }, { "text": "Next, we evaluate performance when using automatic part-of-speech tags as input to our parser and supertagger (Table 5 ). This enables us to compare against the results of Fowler and Penn (2010) , who trained the Petrov parser (Petrov et al., 2006) on CCGbank. We outperform them on all criteria. Hence our combined model represents the best CCG parsing results under any setting.", "cite_spans": [ { "start": 172, "end": 194, "text": "Fowler and Penn (2010)", "ref_id": "BIBREF15" }, { "start": 227, "end": 248, "text": "(Petrov et al., 2006)", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 110, "end": 118, "text": "(Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Parsing Accuracy", "sec_num": "5.1" }, { "text": "Finally, we revisit the oracle experiment of \u00a73 using our combined models ( Figure 5 ). Both show an improved relationship between model score and Fmeasure. Figure 4 shows that parse accuracy converges after a few iterations. Do the algorithms converge? BP converges when the marginals do not change between iterations, and DD converges when both submodels agree on all supertags. We measured the convergence of each algorithm under these criteria over 1000 iterations ( Figure 6 ). DD converges much faster, while BP in the reverse condition converges quite slowly. This is interesting when contrasted with its behavior on parse accuracy-its rate of convergence after one iteration is 1.5%, but its accuracy is already the highest at this point. Over the entire 1000 iterations, most sentences converge: all but 3 for BP (both in AST and reverse) and all but 41 (2.6%) for DD in reverse (6 in AST).", "cite_spans": [], "ref_spans": [ { "start": 76, "end": 84, "text": "Figure 5", "ref_id": null }, { "start": 157, "end": 165, "text": "Figure 4", "ref_id": "FIGREF2" }, { "start": 471, "end": 479, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Parsing Accuracy", "sec_num": "5.1" }, { "text": "Because the C&C parser with AST is very fast, we wondered about the effect on speed for our model. We measured the runtime of the algorithms under the condition that we stopped at a particular iteration (Table 6 ). Although our models improve substantially over C&C, there is a significant cost in speed for the best result.", "cite_spans": [], "ref_spans": [ { "start": 203, "end": 211, "text": "(Table 6", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Parsing Speed", "sec_num": "5.3" }, { "text": "In the experiments reported so far, the parsing and supertagging models were trained separately, and only combined at test time. Although the outcome of these experiments was successful, we wondered if we could obtain further improvements by training the model parameters together. Since the gradients produced by (loopy) BP are approximate, for these experiments we used a stochastic gradient descent (SGD) trainer (Bottou, 2003) . We found that the SGD parameters described by Finkel et al. (2008) worked equally well for our models, and, on the baseline, produced similar results to L-BFGS. 
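Concretely, the training loop we have in mind is a standard log-linear SGD update in which the feature expectations come from loopy BP rather than exact inference; the helper functions and learning-rate schedule below are illustrative assumptions, not the settings of Finkel et al. (2008):

```python
# Sketch of SGD on approximate gradients: the log-linear gradient is observed
# minus expected feature counts, with the expectations taken from loopy BP
# marginals over the combined factor graph.  `features`, `bp_marginals` and
# `expected_features` are assumed helpers.

import random

def sgd_train(corpus, features, bp_marginals, expected_features,
              n_epochs=5, eta0=0.1, decay=1e-4):
    weights, t = {}, 0
    for _ in range(n_epochs):
        random.shuffle(corpus)
        for sentence, gold_derivation in corpus:
            eta = eta0 / (1.0 + decay * t)                   # decaying learning rate
            observed = features(sentence, gold_derivation)   # gold feature counts
            marginals = bp_marginals(sentence, weights)      # approximate inference
            expected = expected_features(sentence, marginals)
            for f in set(observed) | set(expected):          # w += eta * (obs - exp)
                weights[f] = weights.get(f, 0.0) + eta * (
                    observed.get(f, 0.0) - expected.get(f, 0.0))
            t += 1
    return weights
```
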
Curiously, however, we found that the combined model does not perform as well when the parameters are trained together (Table 7) . A possible reason for this is that we used a stricter supertagger beam setting during training (Clark and Curran, 2007) to make training on a single machine practical. This leads to lower performance, particularly in the Reverse condition. Training a model using DD would require a different optimization algorithm based on Viterbi results (e.g. the perceptron), which we will pursue in future work. Table 7: Results of training with SGD on approximate gradients from LBP on section 00. We test LBP in both inference and training (train) as well as in inference only (inf); a maximum of 10 iterations is used.", "cite_spans": [ { "start": 416, "end": 430, "text": "(Bottou, 2003)", "ref_id": "BIBREF2" }, { "start": 479, "end": 499, "text": "Finkel et al. (2008)", "ref_id": "BIBREF14" }, { "start": 1038, "end": 1062, "text": "(Clark and Curran, 2007)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 677, "end": 684, "text": "Table 7", "ref_id": null }, { "start": 931, "end": 940, "text": "(Table 7)", "ref_id": null } ], "eq_spans": [], "section": "Training the Integrated Model", "sec_num": "5.4" }, { "text": "Our approach of combining models to avoid the pipeline problem (Felzenszwalb and McAllester, 2007) is very much in line with much recent work in NLP. Such diverse topics as machine translation (Dyer et al., 2008; Dyer and Resnik, 2010; Mi et al., 2008) , part-of-speech tagging (Jiang et al., 2008) , named entity recognition (Finkel and Manning, 2009) , semantic role labelling (Sutton and McCallum, 2005; Finkel et al., 2006) , and others have also been improved by combined models. Our empirical comparison of BP and DD also complements the theoretically-oriented comparison of marginal- and margin-based variational approximations for parsing described by Martins et al. (2010) .", "cite_spans": [ { "start": 63, "end": 98, "text": "(Felzenszwalb and McAllester, 2007)", "ref_id": "BIBREF11" }, { "start": 193, "end": 212, "text": "(Dyer et al., 2008;", "ref_id": "BIBREF10" }, { "start": 213, "end": 235, "text": "Dyer and Resnik, 2010;", "ref_id": "BIBREF9" }, { "start": 236, "end": 252, "text": "Mi et al., 2008)", "ref_id": "BIBREF26" }, { "start": 278, "end": 298, "text": "(Jiang et al., 2008)", "ref_id": "BIBREF19" }, { "start": 326, "end": 352, "text": "(Finkel and Manning, 2009)", "ref_id": "BIBREF12" }, { "start": 377, "end": 404, "text": "(Sutton and McCallum, 2005;", "ref_id": "BIBREF35" }, { "start": 405, "end": 425, "text": "Finkel et al., 2006)", "ref_id": "BIBREF13" }, { "start": 658, "end": 679, "text": "Martins et al. (2010)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "We have shown that the aggressive pruning used in adaptive supertagging significantly harms the oracle performance of the parser, though it mostly prunes bad parses. Based on these findings, we combined parser and supertagger features into a single model. Using belief propagation and dual decomposition, we obtained more principled, and more accurate, approximations than a pipeline. Models combined using belief propagation achieve very good performance immediately, despite an initial convergence rate just over 1%, while dual decomposition produces comparable results after several iterations, and algorithmically converges more quickly. 
Our best result of 88.8% represents the state-of-the art in CCG parsing accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "In future work we plan to integrate the POS tagger, which is crucial to parsing accuracy (Clark and Curran, 2004b) . We also plan to revisit the idea of combined training. Though we have focused on CCG in this work we expect these methods to be equally useful for other linguistically motivated but computationally complex formalisms such as lexicalized tree adjoining grammar.", "cite_spans": [ { "start": 89, "end": 114, "text": "(Clark and Curran, 2004b)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "Forward-backward and inside-outside are formally shown to be special cases of belief propagation bySmyth et al. (1997) andSato (2007), respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The u terms can be interpreted as the messages from factors to variables and the resulting message passing algorithms are similar to the max-product algorithm, a sister algorithm to BP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Phil Blunsom, Prachya Boonkwan, Christos Christodoulopoulos, Stephen Clark, Michael Collins, Chris Dyer, Timothy Fowler, Mark Granroth-Wilding, Philipp Koehn, Terry Koo, Tom Kwiatkowski, Andr\u00e9 Martins, Matt Post, David Smith, David Sontag, Mark Steedman, and Charles Sutton for helpful discussion related to this work and comments on previous drafts, and the anonymous reviewers for helpful comments. We also acknowledge funding from EPSRC grant EP/P504171/1 (Auli); the EuroMatrixPlus project funded by the European Commission, 7th Framework Programme (Lopez); and the resources provided by the Edinburgh Compute and Data Facility (http://www.ecdf.ed.ac.uk). The ECDF is partially supported by the eDIKT initiative (http://www.edikt.org.uk).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Supertagging: An Approach to Almost Parsing", "authors": [ { "first": "S", "middle": [], "last": "Bangalore", "suffix": "" }, { "first": "A", "middle": [ "K" ], "last": "Joshi", "suffix": "" } ], "year": 1999, "venue": "Computational Linguistics", "volume": "25", "issue": "2", "pages": "238--265", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Bangalore and A. K. Joshi. 1999. Supertagging: An Approach to Almost Parsing. Computational Linguis- tics, 25(2):238-265, June.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "On formal properties of simple phrase structure grammars", "authors": [ { "first": "Y", "middle": [], "last": "Bar-Hillel", "suffix": "" }, { "first": "M", "middle": [], "last": "Perles", "suffix": "" }, { "first": "E", "middle": [], "last": "Shamir", "suffix": "" } ], "year": 1964, "venue": "Language and Information: Selected Essays on their Theory and Application", "volume": "", "issue": "", "pages": "116--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Bar-Hillel, M. Perles, and E. Shamir. 1964. On formal properties of simple phrase structure grammars. 
In Language and Information: Selected Essays on their Theory and Application, pages 116-150.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Stochastic learning", "authors": [ { "first": "L", "middle": [], "last": "Bottou", "suffix": "" } ], "year": 2003, "venue": "Advanced Lectures in Machine Learning", "volume": "", "issue": "", "pages": "146--168", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Bottou. 2003. Stochastic learning. In Advanced Lec- tures in Machine Learning, pages 146-168.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The importance of supertagging for wide-coverage CCG parsing", "authors": [ { "first": "S", "middle": [], "last": "Clark", "suffix": "" }, { "first": "J", "middle": [ "R" ], "last": "Curran", "suffix": "" } ], "year": 2004, "venue": "COL-ING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Clark and J. R. Curran. 2004a. The importance of su- pertagging for wide-coverage CCG parsing. In COL- ING, Morristown, NJ, USA.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Parsing the WSJ using CCG and log-linear models", "authors": [ { "first": "S", "middle": [], "last": "Clark", "suffix": "" }, { "first": "J", "middle": [ "R" ], "last": "Curran", "suffix": "" } ], "year": 2004, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "104--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Clark and J. R. Curran. 2004b. Parsing the WSJ using CCG and log-linear models. In Proc. of ACL, pages 104-111, Barcelona, Spain.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Wide-Coverage Efficient Statistical Parsing with CCG and Log-Linear Models", "authors": [ { "first": "S", "middle": [], "last": "Clark", "suffix": "" }, { "first": "J", "middle": [ "R" ], "last": "Curran", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "33", "issue": "4", "pages": "493--552", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Clark and J. R. Curran. 2007. Wide-Coverage Ef- ficient Statistical Parsing with CCG and Log-Linear Models. Computational Linguistics, 33(4):493-552.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Supertagging for Combinatory Categorial Grammar", "authors": [ { "first": "S", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Clark. 2002. Supertagging for Combinatory Catego- rial Grammar. In TAG+6.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Decomposition principle for linear programs", "authors": [ { "first": "G", "middle": [ "B" ], "last": "Dantzig", "suffix": "" }, { "first": "P", "middle": [], "last": "Wolfe", "suffix": "" } ], "year": 1960, "venue": "Operations Research", "volume": "8", "issue": "1", "pages": "101--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. B. Dantzig and P. Wolfe. 1960. Decomposition principle for linear programs. Operations Research, 8(1):101-111.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Graphical models over multiple strings", "authors": [ { "first": "M", "middle": [], "last": "Dreyer", "suffix": "" }, { "first": "J", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2009, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Dreyer and J. Eisner. 2009. Graphical models over multiple strings. In Proc. 
of EMNLP.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Context-free reordering, finite-state translation", "authors": [ { "first": "C", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "P", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2010, "venue": "Proc. of HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Dyer and P. Resnik. 2010. Context-free reordering, finite-state translation. In Proc. of HLT-NAACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Generalizing word lattice translation", "authors": [ { "first": "C", "middle": [ "J" ], "last": "Dyer", "suffix": "" }, { "first": "S", "middle": [], "last": "Muresan", "suffix": "" }, { "first": "P", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2008, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. J. Dyer, S. Muresan, and P. Resnik. 2008. Generaliz- ing word lattice translation. In Proc. of ACL.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The Generalized A* Architecture", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Felzenszwalb", "suffix": "" }, { "first": "D", "middle": [], "last": "Mcallester", "suffix": "" } ], "year": 2007, "venue": "In Journal of Artificial Intelligence Research", "volume": "29", "issue": "", "pages": "153--190", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. F. Felzenszwalb and D. McAllester. 2007. The Gener- alized A* Architecture. In Journal of Artificial Intelli- gence Research, volume 29, pages 153-190.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Joint parsing and named entity recognition", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Finkel", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2009, "venue": "Proc. of NAACL. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. R. Finkel and C. D. Manning. 2009. Joint parsing and named entity recognition. In Proc. of NAACL. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Solving the problem of cascading errors: Approximate Bayesian inference for linguistic annotation pipelines", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Finkel", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "A", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2006, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. R. Finkel, C. D. Manning, and A. Y. Ng. 2006. Solv- ing the problem of cascading errors: Approximate Bayesian inference for linguistic annotation pipelines. In Proc. of EMNLP.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Feature-based, conditional random field parsing", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Finkel", "suffix": "" }, { "first": "A", "middle": [], "last": "Kleeman", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. R. Finkel, A. Kleeman, and C. D. Manning. 2008. Feature-based, conditional random field parsing. 
In Proceedings of ACL-HLT.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Accurate contextfree parsing with combinatory categorial grammar", "authors": [ { "first": "T", "middle": [ "A D" ], "last": "Fowler", "suffix": "" }, { "first": "G", "middle": [], "last": "Penn", "suffix": "" } ], "year": 2010, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. A. D. Fowler and G. Penn. 2010. Accurate context- free parsing with combinatory categorial grammar. In Proc. of ACL.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Generative models for statistical parsing with Combinatory Categorial Grammar", "authors": [ { "first": "J", "middle": [], "last": "Hockenmaier", "suffix": "" }, { "first": "M", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2002, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Hockenmaier and M. Steedman. 2002. Generative models for statistical parsing with Combinatory Cat- egorial Grammar. In Proc. of ACL.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "CCGbank: A corpus of CCG derivations and dependency structures extracted from the Penn Treebank", "authors": [ { "first": "J", "middle": [], "last": "Hockenmaier", "suffix": "" }, { "first": "M", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "33", "issue": "3", "pages": "355--396", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Hockenmaier and M. Steedman. 2007. CCGbank: A corpus of CCG derivations and dependency struc- tures extracted from the Penn Treebank. Computa- tional Linguistics, 33(3):355-396.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Forest Reranking: Discriminative parsing with Non-Local Features", "authors": [ { "first": "L", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Huang. 2008. Forest Reranking: Discriminative pars- ing with Non-Local Features. In Proceedings of ACL- 08: HLT.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A cascaded linear model for joint Chinese word segmentation and part-of-speech tagging", "authors": [ { "first": "W", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "L", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Q", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Y", "middle": [], "last": "L\u00fc", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. Jiang, L. Huang, Q. Liu, and Y. L\u00fc. 2008. A cas- caded linear model for joint Chinese word segmen- tation and part-of-speech tagging. In Proceedings of ACL-08: HLT.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "MRF optimization via dual decomposition: Messagepassing revisited", "authors": [ { "first": "N", "middle": [], "last": "Komodakis", "suffix": "" }, { "first": "N", "middle": [], "last": "Paragios", "suffix": "" }, { "first": "G", "middle": [], "last": "Tziritas", "suffix": "" } ], "year": 2007, "venue": "Proc. of Int. Conf. on Computer Vision (ICCV)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Komodakis, N. Paragios, and G. Tziritas. 2007. 
MRF optimization via dual decomposition: Message- passing revisited. In Proc. of Int. Conf. on Computer Vision (ICCV).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Dual Decomposition for Parsing with Non-Projective Head Automata", "authors": [ { "first": "T", "middle": [], "last": "Koo", "suffix": "" }, { "first": "A", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "M", "middle": [], "last": "Collins", "suffix": "" }, { "first": "T", "middle": [], "last": "Jaakkola", "suffix": "" }, { "first": "D", "middle": [], "last": "Sontag", "suffix": "" } ], "year": 2010, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Koo, A. M. Rush, M. Collins, T. Jaakkola, and D. Son- tag. 2010. Dual Decomposition for Parsing with Non- Projective Head Automata. In In Proc. EMNLP.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Factor graphs and the sum-product algorithm", "authors": [ { "first": "F", "middle": [ "R" ], "last": "Kschischang", "suffix": "" }, { "first": "B", "middle": [ "J" ], "last": "Frey", "suffix": "" }, { "first": "H.-A", "middle": [], "last": "Loeliger", "suffix": "" } ], "year": 1998, "venue": "IEEE Transactions on Information Theory", "volume": "47", "issue": "", "pages": "498--519", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. R. Kschischang, B. J. Frey, and H.-A. Loeliger. 1998. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47:498-519.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Faster parsing by supertagger adaptation", "authors": [ { "first": "J", "middle": [ "K" ], "last": "Kummerfeld", "suffix": "" }, { "first": "J", "middle": [], "last": "Rosener", "suffix": "" }, { "first": "T", "middle": [], "last": "Dawborn", "suffix": "" }, { "first": "J", "middle": [], "last": "Haggerty", "suffix": "" }, { "first": "J", "middle": [ "R" ], "last": "Curran", "suffix": "" }, { "first": "S", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2010, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. K. Kummerfeld, J. Rosener, T. Dawborn, J. Haggerty, J. R. Curran, and S. Clark. 2010. Faster parsing by supertagger adaptation. In Proc. of ACL.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Turbo parsers: Dependency parsing by approximate variational inference", "authors": [ { "first": "A", "middle": [ "F T" ], "last": "Martins", "suffix": "" }, { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "E", "middle": [ "P" ], "last": "Xing", "suffix": "" }, { "first": "P", "middle": [ "M Q" ], "last": "Aguiar", "suffix": "" }, { "first": "M", "middle": [ "A T" ], "last": "Figueiredo", "suffix": "" } ], "year": 2010, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. F. T. Martins, N. A. Smith, E. P. Xing, P. M. Q. Aguiar, and M. A. T. Figueiredo. 2010. Turbo parsers: Depen- dency parsing by approximate variational inference. In Proc. 
of EMNLP.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Case-factor diagrams for structured probabilistic modeling", "authors": [ { "first": "D", "middle": [], "last": "Mcallester", "suffix": "" }, { "first": "M", "middle": [], "last": "Collins", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2008, "venue": "Journal of Computer and System Sciences", "volume": "74", "issue": "1", "pages": "84--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. McAllester, M. Collins, and F. Pereira. 2008. Case-factor diagrams for structured probabilistic modeling. Journal of Computer and System Sciences, 74(1):84-96.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Forest-based translation", "authors": [ { "first": "H", "middle": [], "last": "Mi", "suffix": "" }, { "first": "L", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Q", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2008, "venue": "Proc. of ACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Mi, L. Huang, and Q. Liu. 2008. Forest-based translation. In Proc. of ACL-HLT.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference", "authors": [ { "first": "J", "middle": [], "last": "Pearl", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Pearl. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Learning accurate, compact, and interpretable tree annotation", "authors": [ { "first": "S", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "L", "middle": [], "last": "Barrett", "suffix": "" }, { "first": "R", "middle": [], "last": "Thibaux", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2006, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Petrov, L. Barrett, R. Thibaux, and D. Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proc. of ACL.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "On dual decomposition and linear programming relaxations for natural language processing", "authors": [ { "first": "A", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "D", "middle": [], "last": "Sontag", "suffix": "" }, { "first": "M", "middle": [], "last": "Collins", "suffix": "" }, { "first": "T", "middle": [], "last": "Jaakkola", "suffix": "" } ], "year": 2010, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. M. Rush, D. Sontag, M. Collins, and T. Jaakkola. 2010. On dual decomposition and linear programming relaxations for natural language processing. In Proc. EMNLP.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Inside-outside probability computation for belief propagation", "authors": [ { "first": "T", "middle": [], "last": "Sato", "suffix": "" } ], "year": 2007, "venue": "Proc. of IJCAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Sato. 2007. Inside-outside probability computation for belief propagation. In Proc.
of IJCAI.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Dependency parsing by belief propagation", "authors": [ { "first": "D", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "J", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2008, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. A. Smith and J. Eisner. 2008. Dependency parsing by belief propagation. In Proc. of EMNLP.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Probabilistic independence networks for hidden Markov probability models", "authors": [ { "first": "P", "middle": [], "last": "Smyth", "suffix": "" }, { "first": "D", "middle": [], "last": "Heckerman", "suffix": "" }, { "first": "M", "middle": [], "last": "Jordan", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "2", "pages": "227--269", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Smyth, D. Heckerman, and M. Jordan. 1997. Probabilistic independence networks for hidden Markov probability models. Neural computation, 9(2):227-269.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Introduction to dual decomposition", "authors": [ { "first": "D", "middle": [], "last": "Sontag", "suffix": "" }, { "first": "A", "middle": [], "last": "Globerson", "suffix": "" }, { "first": "T", "middle": [], "last": "Jaakkola", "suffix": "" } ], "year": 2010, "venue": "Optimization for Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Sontag, A. Globerson, and T. Jaakkola. 2010. Introduction to dual decomposition. In S. Sra, S. Nowozin, and S. J. Wright, editors, Optimization for Machine Learning. MIT Press.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "The syntactic process", "authors": [ { "first": "M", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Steedman. 2000. The syntactic process. MIT Press, Cambridge, MA.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Joint parsing and semantic role labelling", "authors": [ { "first": "C", "middle": [], "last": "Sutton", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2005, "venue": "Proc. of CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Sutton and A. McCallum. 2005. Joint parsing and semantic role labelling. In Proc. of CoNLL.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "An introduction to conditional random fields", "authors": [ { "first": "C", "middle": [], "last": "Sutton", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Sutton and A. McCallum. 2010. An introduction to conditional random fields. arXiv:stat.ML/1011.4088.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Generalized belief propagation", "authors": [ { "first": "J", "middle": [], "last": "Yedidia", "suffix": "" }, { "first": "W", "middle": [], "last": "Freeman", "suffix": "" }, { "first": "Y", "middle": [], "last": "Weiss", "suffix": "" } ], "year": 2001, "venue": "Proc. of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Yedidia, W.
Freeman, and Y. Weiss. 2001. Generalized belief propagation. In Proc. of NIPS.", "links": null } }, "ref_entries": { "FIGREF1": { "text": "Comparison between model score and Viterbi F-score (left); and between model score and oracle F-score (right) for different supertagger beams on a subset of CCGbank Section 00.", "type_str": "figure", "num": null, "uris": null }, "FIGREF2": { "text": "Labelled F-score of baseline (BL), belief propagation (BP), and dual decomposition (DD) on section 00.", "type_str": "figure", "num": null, "uris": null }, "FIGREF4": { "text": "Comparison between model score and Viterbi F-score for the integrated model using belief propagation (left) and dual decomposition (right); the results are based on the same data as Figure 1. Rate of convergence for belief propagation (BP) and dual decomposition (DD) with maximum k = 1000.", "type_str": "figure", "num": null, "uris": null }, "TABREF1": { "html": null, "content": "
: Beam step function used for standard (AST) and less aggressive (Reverse) AST throughout our experiments. Parameter \u03b2 is a beam threshold while k bounds the use of a part-of-speech tag dictionary, which is used for words seen less than k times.
             Viterbi F-score         Oracle Max F-score      Oracle Min F-score
             LF     LP     LR        LF     LP     LR        LF     LP     LR       cat/word
AST          87.38  87.83  86.93     94.35  95.24  93.49     54.31  54.81  53.83    1.3-3.6
Reverse      87.36  87.55  87.17     97.65  98.21  97.09     18.09  17.75  18.43    3.6-1.3
", "type_str": "table", "num": null, "text": "" }, "TABREF2": { "html": null, "content": "", "type_str": "table", "num": null, "text": "" }, "TABREF4": { "html": null, "content": "
             section 00 (dev)                                section 23 (test)
             AST                   Reverse                   AST                   Reverse
             LF     UF     ST      LF     UF     ST          LF     UF     ST      LF     UF     ST
Baseline     87.38  93.08  94.21   87.36  93.13  93.99       87.73  93.09  94.33   87.65  93.06  94.01
C&C '07      87.24  93.00  94.16   -      -      -           87.64  93.00  94.32   -      -      -
BP k=1       87.70  93.28  94.44   88.35  93.69  94.73       88.20  93.28  94.60   88.78  93.66  94.81
BP k=25      87.70  93.31  94.44   88.33  93.72  94.71       88.19  93.27  94.59   88.80  93.68  94.81
DD k=1       87.40  93.09  94.23   87.38  93.15  94.03       87.74  93.10  94.33   87.67  93.07  94.02
DD k=25      87.71  93.32  94.44   88.29  93.71  94.67       88.14  93.24  94.59   88.80  93.68  94.82
", "type_str": "table", "num": null, "text": "Beam step function used for training (cf.Table 1)." }, "TABREF5": { "html": null, "content": "
[Line plot: labelled F-score (y-axis, 87.2 to 88.4) versus iterations (x-axis, 1 to 46) for BL AST, BL Rev, BP AST, BP Rev, DD AST, and DD Rev.]
", "type_str": "table", "num": null, "text": "." }, "TABREF6": { "html": null, "content": "
             section 00 (dev)                                          section 23 (test)
             LF     LP     LR     UF     UP     UR                     LF     LP     LR     UF     UP     UR
Baseline     85.53  85.73  85.33  91.99  92.20  91.77                  85.74  85.90  85.58  91.92  92.09  91.75
Petrov I-5   85.79  86.09  85.50  92.44  92.76  92.13                  86.01  86.29  85.74  92.34  92.64  92.04
BP k=1
DD k=25      86.35  86.65  86.05  92.52  92.85  92.20                  86.68  86.90  86.46  92.44  92.67  92.21
", "type_str": "table", "num": null, "text": "85.73 85.33 91.99 92.20 91.77 85.74 85.90 85.58 91.92 92.09 91.75 Petrov I-5 85.79 86.09 85.50 92.44 92.76 92.13 86.01 86.29 85.74 92.34 92.64 92.04 BP k=1 DD k=25 86.35 86.65 86.05 92.52 92.85 92.20 86.68 86.90 86.46 92.44 92.67 92.21" }, "TABREF7": { "html": null, "content": "", "type_str": "table", "num": null, "text": "Results on automatically assigned POS tags. Petrov I-5 is based on the parser output of Fowler and Penn (2010); we evaluate on sentences for which all parsers returned an analysis (2323 sentences for section 23 and 1834 sentences for section 00)." }, "TABREF9": { "html": null, "content": "
: Parsing time in seconds per sentence (vs. F-measure) on section 00.
             AST                   Reverse
             LF     UF     ST      LF     UF     ST
Baseline     86.7   92.7   94.0    86.7   92.7   93.9
BP inf       86.8   92.8   94.1    87.2   93.1   94.2
BP train     86.3   92.5   93.8    85.6   92.1   93.2
", "type_str": "table", "num": null, "text": "" } } } }