{ "paper_id": "P04-1015", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:43:45.341792Z" }, "title": "Incremental Parsing with the Perceptron Algorithm", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "", "affiliation": {}, "email": "mcollins@csail.mit.edu" }, { "first": "Brian", "middle": [], "last": "Roark", "suffix": "", "affiliation": {}, "email": "roark@research.att.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes an incremental parsing approach where parameters are estimated using a variant of the perceptron algorithm. A beam-search algorithm is used during both training and decoding phases of the method. The perceptron approach was implemented with the same feature set as that of an existing generative model (Roark, 2001a), and experimental results show that it gives competitive performance to the generative model on parsing the Penn treebank. We demonstrate that training a perceptron model to combine with the generative model during search provides a 2.1 percent F-measure improvement over the generative model alone, to 88.8 percent.", "pdf_parse": { "paper_id": "P04-1015", "_pdf_hash": "", "abstract": [ { "text": "This paper describes an incremental parsing approach where parameters are estimated using a variant of the perceptron algorithm. A beam-search algorithm is used during both training and decoding phases of the method. The perceptron approach was implemented with the same feature set as that of an existing generative model (Roark, 2001a), and experimental results show that it gives competitive performance to the generative model on parsing the Penn treebank. We demonstrate that training a perceptron model to combine with the generative model during search provides a 2.1 percent F-measure improvement over the generative model alone, to 88.8 percent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In statistical approaches to NLP problems such as tagging or parsing, it seems clear that the representation used as input to a learning algorithm is central to the accuracy of an approach. In an ideal world, the designer of a parser or tagger would be free to choose any features which might be useful in discriminating good from bad structures, without concerns about how the features interact with the problems of training (parameter estimation) or decoding (search for the most plausible candidate under the model). To this end, a number of recently proposed methods allow a model to incorporate \"arbitrary\" global features of candidate analyses or parses. 
Examples of such techniques are Markov Random Fields (Ratnaparkhi et al., 1994; Abney, 1997; Della Pietra et al., 1997; Johnson et al., 1999) , and boosting or perceptron approaches to reranking (Freund et al., 1998; Collins, 2000; Collins and Duffy, 2002) .", "cite_spans": [ { "start": 714, "end": 740, "text": "(Ratnaparkhi et al., 1994;", "ref_id": "BIBREF14" }, { "start": 741, "end": 753, "text": "Abney, 1997;", "ref_id": "BIBREF0" }, { "start": 754, "end": 780, "text": "Della Pietra et al., 1997;", "ref_id": "BIBREF6" }, { "start": 781, "end": 802, "text": "Johnson et al., 1999)", "ref_id": "BIBREF11" }, { "start": 856, "end": 877, "text": "(Freund et al., 1998;", "ref_id": "BIBREF8" }, { "start": 878, "end": 892, "text": "Collins, 2000;", "ref_id": "BIBREF2" }, { "start": 893, "end": 917, "text": "Collins and Duffy, 2002)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A drawback of these approaches is that in the general case, they can require exhaustive enumeration of the set of candidates for each input sentence in both the training and decoding phases 1 . For example, Johnson et al. (1999) and Riezler et al. (2002) use all parses generated by an LFG parser as input to an MRF approach -given the level of ambiguity in natural language, this set can presumably become extremely large. Collins (2000) and Collins and Duffy (2002) rerank the top N parses from an existing generative parser, but this kind of approach presupposes that there is an existing baseline model with reasonable performance. Many of these baseline models are themselves used with heuristic search techniques, so that the potential gain through the use of discriminative re-ranking techniques is further dependent on effective search.", "cite_spans": [ { "start": 207, "end": 228, "text": "Johnson et al. (1999)", "ref_id": "BIBREF11" }, { "start": 233, "end": 254, "text": "Riezler et al. (2002)", "ref_id": "BIBREF16" }, { "start": 424, "end": 438, "text": "Collins (2000)", "ref_id": "BIBREF2" }, { "start": 443, "end": 467, "text": "Collins and Duffy (2002)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper explores an alternative approach to parsing, based on the perceptron training algorithm introduced in Collins (2002) . In this approach the training and decoding problems are very closely related -the training method decodes training examples in sequence, and makes simple corrective updates to the parameters when errors are made. Thus the main complexity of the method is isolated to the decoding problem. We describe an approach that uses an incremental, left-to-right parser, with beam search, to find the highest scoring analysis under the model. The same search method is used in both training and decoding. We implemented the perceptron approach with the same feature set as that of an existing generative model (Roark, 2001a) , and show that the perceptron model gives performance competitive to that of the generative model on parsing the Penn treebank, thus demonstrating that an unnormalized discriminative parsing model can be applied with heuristic search. 
We also describe several refinements to the training algorithm, and demonstrate their impact on convergence properties of the method.", "cite_spans": [ { "start": 113, "end": 127, "text": "Collins (2002)", "ref_id": "BIBREF3" }, { "start": 730, "end": 744, "text": "(Roark, 2001a)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Finally, we describe training the perceptron model with the negative log probability given by the generative model as another feature. This provides the perceptron algorithm with a better starting point, leading to large improvements over using either the generative model or the perceptron algorithm in isolation (the hybrid model achieves 88.8% f-measure on the WSJ treebank, compared to figures of 86.7% and 86.6% for the separate generative and perceptron models). The approach is an extremely simple method for integrating new features into the generative model: essentially all that is needed is a definition of feature-vector representations of entire parse trees, and then the existing parsing algorithms can be used for both training and decoding with the models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section we describe a general framework -linear models for NLP -that could be applied to a diverse range of tasks, including parsing and tagging. We then describe a particular method for parameter estimation, which is a generalization of the perceptron algorithm. Finally, we give an abstract description of an incremental parser, and describe how it can be used with the perceptron algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The General Framework", "sec_num": "2" }, { "text": "We follow the framework outlined in Collins (2002; . The task is to learn a mapping from inputs x \u2208 X to outputs y \u2208 Y. For example, X might be a set of sentences, with Y being a set of possible parse trees. We assume:", "cite_spans": [ { "start": 36, "end": 50, "text": "Collins (2002;", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Linear Models for NLP", "sec_num": "2.1" }, { "text": ". Training examples (x i , y i ) for i = 1 . . . n. . A function GEN which enumerates a set of candi- dates GEN(x) for an input x. . A representation \u03a6 mapping each (x, y) \u2208 X \u00d7 Y to a feature vector \u03a6(x, y) \u2208 R d . . A parameter vector\u1fb1 \u2208 R d .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Models for NLP", "sec_num": "2.1" }, { "text": "The components GEN, \u03a6 and\u1fb1 define a mapping from an input x to an output F (x) through", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Models for NLP", "sec_num": "2.1" }, { "text": "F (x) = arg max y\u2208GEN(x) \u03a6(x, y) \u2022\u1fb1 (1) where \u03a6(x, y) \u2022\u1fb1 is the inner product s \u03b1 s \u03a6 s (x, y).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Models for NLP", "sec_num": "2.1" }, { "text": "The learning task is to set the parameter values\u1fb1 using the training examples as evidence. The decoding algorithm is a method for searching for the arg max in Eq. 1. This framework is general enough to encompass several tasks in NLP. 
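To make the notation concrete, here is a minimal sketch of the score and decode of Eq. 1. It assumes, purely for illustration, that GEN(x) can be enumerated as an explicit list and that Φ(x, y) is returned as a sparse map from feature names to counts; the framework itself leaves GEN and Φ abstract.

```python
def score(phi, alpha):
    # Inner product Phi(x, y) . alpha over a sparse feature map phi,
    # i.e. the sum over s of alpha_s * Phi_s(x, y).
    return sum(count * alpha.get(feat, 0.0) for feat, count in phi.items())

def decode(x, gen, phi_fn, alpha):
    # F(x) = argmax over y in GEN(x) of Phi(x, y) . alpha   (Eq. 1)
    return max(gen(x), key=lambda y: score(phi_fn(x, y), alpha))
```

For real parsing problems GEN(x) is far too large to enumerate explicitly; this is what motivates the incremental beam-search instantiation described below.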
In this paper we are interested in parsing, where (x i , y i ), GEN, and \u03a6 can be defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Models for NLP", "sec_num": "2.1" }, { "text": "\u2022 Each training example (x i , y i ) is a pair where x i is a sentence, and y i is the gold-standard parse for that sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Models for NLP", "sec_num": "2.1" }, { "text": "\u2022 Given an input sentence x, GEN(x) is a set of possible parses for that sentence. For example, GEN(x) could be defined as the set of possible parses for x under some context-free grammar, perhaps a context-free grammar induced from the training examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Models for NLP", "sec_num": "2.1" }, { "text": "\u2022 The representation \u03a6(x, y) could track arbitrary features of parse trees. As one example, suppose that there are m rules in a context-free grammar (CFG) that defines GEN(x). Then we could define the i'th component of the representation, \u03a6 i (x, y), to be the number of times the i'th context-free rule appears in the parse tree (x, y). This is implicitly the representation used in probabilistic or weighted CFGs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Models for NLP", "sec_num": "2.1" }, { "text": "Note that the difficulty of finding the arg max in Eq. 1 is dependent on the interaction of GEN and \u03a6. In many cases GEN(x) could grow exponentially with the size of x, making brute force enumeration of the members of GEN(x) intractable. For example, a context-free grammar could easily produce an exponentially growing number of analyses with sentence length. For some representations, such as the \"rule-based\" representation described above, the arg max in the set enumerated by the CFG can be found efficiently, using dynamic programming algorithms, without having to explicitly enumerate all members of GEN(x). However in many cases we may be interested in representations which do not allow efficient dynamic programming solutions. One way around this problem is to adopt a two-pass approach, where GEN(x) is the top N analyses under some initial model, as in the reranking approach of Collins (2000) . In the current paper we explore alternatives to reranking approaches, namely heuristic methods for finding the arg max, specifically incremental beam-search strategies related to the parsers of Roark (2001a) and Ratnaparkhi (1999) .", "cite_spans": [ { "start": 891, "end": 905, "text": "Collins (2000)", "ref_id": "BIBREF2" }, { "start": 1102, "end": 1115, "text": "Roark (2001a)", "ref_id": "BIBREF18" }, { "start": 1120, "end": 1138, "text": "Ratnaparkhi (1999)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Linear Models for NLP", "sec_num": "2.1" }, { "text": "Estimation We now consider the problem of setting the parameters, \u03b1, given training examples (x i , y i ). We will briefly review the perceptron algorithm, and its convergence properties -see Collins (2002) for a full description. The algorithm and theorems are based on the approach to classification problems described in Freund and Schapire (1999) . Figure 1 shows the algorithm. Note that the most complex step of the method is finding z i = arg max z\u2208GEN(xi) \u03a6(x i , z)\u2022\u1fb1 -and this is precisely the decoding problem. 
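A minimal sketch of the training loop in Figure 1 follows, reusing the hypothetical score and decode helpers above and again treating GEN(x_i) as an explicit list, which is only an illustrative assumption. It also accumulates the averaged parameters, the average of ᾱ over every (t, i) step, which are what the experiments later use when decoding test examples, as described at the end of this section.

```python
from collections import defaultdict

def perceptron_train(examples, gen, phi_fn, T):
    # examples: list of (x_i, y_i) pairs; gen and phi_fn as in the sketch above.
    alpha = defaultdict(float)
    alpha_sum = defaultdict(float)   # running sum for the averaged parameters
    n = len(examples)
    for t in range(T):
        for x, y in examples:
            z = decode(x, gen, phi_fn, alpha)
            if z != y:
                # additive update: alpha = alpha + Phi(x, y) - Phi(x, z)
                for feat, count in phi_fn(x, y).items():
                    alpha[feat] += count
                for feat, count in phi_fn(x, z).items():
                    alpha[feat] -= count
            # accumulate alpha after each example (a direct transcription of
            # the definition; real implementations average lazily)
            for feat, weight in alpha.items():
                alpha_sum[feat] += weight
    alpha_avg = {feat: weight / (n * T) for feat, weight in alpha_sum.items()}
    return alpha, alpha_avg
```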
Thus the training algorithm is in principle a simple part of the parser: any system will need a decoding method, and once the decoding algorithm is implemented the training algorithm is relatively straightforward.", "cite_spans": [ { "start": 192, "end": 206, "text": "Collins (2002)", "ref_id": "BIBREF3" }, { "start": 324, "end": 350, "text": "Freund and Schapire (1999)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 353, "end": 361, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "The Perceptron Algorithm for Parameter", "sec_num": "2.2" }, { "text": "We will now give a first theorem regarding the convergence of this algorithm. First, we need the following definition:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Perceptron Algorithm for Parameter", "sec_num": "2.2" }, { "text": "Definition 1 Let GEN(x i ) = GEN(x i ) \u2212 {y i }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Perceptron Algorithm for Parameter", "sec_num": "2.2" }, { "text": "In other words GEN(x i ) is the set of incorrect candidates for an example x i . We will say that a training sequence (x i , y i ) for i = 1 . . . n is separable with margin \u03b4 > 0 if there exists some vector U with ||U|| = 1 such that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Perceptron Algorithm for Parameter", "sec_num": "2.2" }, { "text": "\u2200i, \u2200z \u2208 GEN(x i ), U \u2022 \u03a6(x i , y i ) \u2212 U \u2022 \u03a6(x i , z) \u2265 \u03b4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Perceptron Algorithm for Parameter", "sec_num": "2.2" }, { "text": "(2) (||U|| is the 2-norm of U, i.e., ||U|| = s U 2 s .) Next, define N e to be the number of times an error is made by the algorithm in figure 1 -that is, the number of times that z i = y i for some (t, i) pair. We can then state the following theorem (see (Collins, 2002) for a proof):", "cite_spans": [ { "start": 257, "end": 272, "text": "(Collins, 2002)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "The Perceptron Algorithm for Parameter", "sec_num": "2.2" }, { "text": "Theorem 1 For any training sequence (x i , y i ) that is separable with margin \u03b4, for any value of T , then for the perceptron algorithm in figure 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Perceptron Algorithm for Parameter", "sec_num": "2.2" }, { "text": "N e \u2264 R 2 \u03b4 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Perceptron Algorithm for Parameter", "sec_num": "2.2" }, { "text": "where R is a constant such that \u2200i, \u2200z", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Perceptron Algorithm for Parameter", "sec_num": "2.2" }, { "text": "\u2208 GEN(x i ) ||\u03a6(x i , y i ) \u2212 \u03a6(x i , z)|| \u2264 R.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Perceptron Algorithm for Parameter", "sec_num": "2.2" }, { "text": "This theorem implies that if there is a parameter vector U which makes zero errors on the training set, then after a finite number of iterations the training algorithm will converge to parameter values with zero training error. 
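Written out cleanly (with the bar over GEN(x_i) denoting the set of incorrect candidates from Definition 1), the margin condition of Eq. 2 and the mistake bound of Theorem 1 read:

```latex
\forall i,\ \forall z \in \overline{\mathrm{GEN}}(x_i):\quad
   \mathbf{U} \cdot \Phi(x_i, y_i) - \mathbf{U} \cdot \Phi(x_i, z) \ge \delta,
   \qquad \|\mathbf{U}\| = 1,
\qquad\Longrightarrow\qquad
   N_e \le \frac{R^2}{\delta^2},
\quad \text{with } \|\Phi(x_i, y_i) - \Phi(x_i, z)\| \le R
   \text{ for all } i,\ z \in \overline{\mathrm{GEN}}(x_i).
```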
A crucial point is that the number of mistakes is independent of the number of candidates for each example", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Perceptron Algorithm for Parameter", "sec_num": "2.2" }, { "text": "Inputs: Training examples (x i , y i ) Algorithm: Initialization: Set\u1fb1 = 0 For t = 1 . . . T , i = 1 . . . n Output: Parameters\u1fb1 Calculate z i = arg max z\u2208GEN(xi) \u03a6(x i , z) \u2022\u1fb1 If(z i = y i ) then\u1fb1 =\u1fb1 + \u03a6(x i , y i ) \u2212 \u03a6(x i , z i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Perceptron Algorithm for Parameter", "sec_num": "2.2" }, { "text": "Figure 1: A variant of the perceptron algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Perceptron Algorithm for Parameter", "sec_num": "2.2" }, { "text": "(i.e. the size of GEN(x i ) for each i), depending only on the separation of the training data, where separation is defined above. This is important because in many NLP problems GEN(x) can be exponential in the size of the inputs. All of the convergence and generalization results in Collins (2002) depend on notions of separability rather than the size of GEN.", "cite_spans": [ { "start": 284, "end": 298, "text": "Collins (2002)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "The Perceptron Algorithm for Parameter", "sec_num": "2.2" }, { "text": "Two questions come to mind. First, are there guarantees for the algorithm if the training data is not separable? Second, performance on a training sample is all very well, but what does this guarantee about how well the algorithm generalizes to newly drawn test examples? Freund and Schapire (1999) discuss how the theory for classification problems can be extended to deal with both of these questions; Collins (2002) describes how these results apply to NLP problems.", "cite_spans": [ { "start": 272, "end": 298, "text": "Freund and Schapire (1999)", "ref_id": "BIBREF7" }, { "start": 404, "end": 418, "text": "Collins (2002)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "The Perceptron Algorithm for Parameter", "sec_num": "2.2" }, { "text": "As a final note, following Collins 2002, we used the averaged parameters from the training algorithm in decoding test examples in our experiments. Say\u1fb1 t i is the parameter vector after the i'th example is processed on the t'th pass through the data in the algorithm in figure 1. Then the averaged parameters\u1fb1 AV G are defined Freund and Schapire (1999) originally proposed the averaged parameter method; it was shown to give substantial improvements in accuracy for tagging tasks in Collins (2002) .", "cite_spans": [ { "start": 327, "end": 353, "text": "Freund and Schapire (1999)", "ref_id": "BIBREF7" }, { "start": 484, "end": 498, "text": "Collins (2002)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "The Perceptron Algorithm for Parameter", "sec_num": "2.2" }, { "text": "as\u1fb1 AV G = i,t\u1fb1 t i /N T .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Perceptron Algorithm for Parameter", "sec_num": "2.2" }, { "text": "Parsing This section gives a description of the basic incremental parsing approach. The input to the parser is a sentence x with length n. A hypothesis is a triple x, t, i such that x is the sentence being parsed, t is a partial or full analysis of that sentence, and i is an integer specifying the number of words of the sentence which have been processed. 
Each full parse for a sentence will have the form x, t, n . The initial state is x, \u2205, 0 where \u2205 is a \"null\" or empty analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Abstract Description of Incremental", "sec_num": "2.3" }, { "text": "We assume an \"advance\" function ADV which takes a hypothesis triple as input, and returns a set of new hypotheses as output. The advance function will absorb another word in the sentence: this means that if the input to ADV is x, t, i , then each member of ADV( x, t, i ) will have the form x, t ,i+1 . Each new analysis t will be formed by somehow incorporating the i+1'th word into the previous analysis t.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Abstract Description of Incremental", "sec_num": "2.3" }, { "text": "With these definitions in place, we can iteratively define the full set of partial analyses H i for the first i words of the sentence as H 0 (x) = { x, \u2205, 0 }, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Abstract Description of Incremental", "sec_num": "2.3" }, { "text": "H i (x) = \u222a h \u2208Hi\u22121(x) ADV(h ) for i = 1 . . . n.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Abstract Description of Incremental", "sec_num": "2.3" }, { "text": "The full set of parses for a sentence x is then GEN(x) = H n (x) where n is the length of x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Abstract Description of Incremental", "sec_num": "2.3" }, { "text": "Under this definition GEN(x) can include a huge number of parses, and searching for the highest scoring parse, arg max h\u2208Hn(x) \u03a6(h) \u2022\u1fb1, will be intractable. For this reason we introduce one additional function, FILTER(H), which takes a set of hypotheses H, and returns a much smaller set of \"filtered\" hypotheses. Typically, FILTER will calculate the score \u03a6(h) \u2022\u1fb1 for each h \u2208 H, and then eliminate partial analyses which have low scores under this criterion. For example, a simple version of FILTER would take the top N highest scoring members of H for some constant N . We can then redefine the set of partial analyses as follows (we use F i (x) to denote the set of filtered partial analyses for the first i words of the sentence):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Abstract Description of Incremental", "sec_num": "2.3" }, { "text": "F 0 (x) = { x, \u2205, 0 } F i (x) = FILTER \u222a h \u2208Fi\u22121(x) ADV(h ) for i=1 . . . n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Abstract Description of Incremental", "sec_num": "2.3" }, { "text": "The parsing algorithm returns arg max h\u2208Fn \u03a6(h) \u2022\u1fb1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Abstract Description of Incremental", "sec_num": "2.3" }, { "text": "Note that this is a heuristic, in that there is no guarantee that this procedure will find the highest scoring parse, arg max h\u2208Hn \u03a6(h) \u2022\u1fb1. Search errors, where arg max h\u2208Fn \u03a6(h) \u2022\u1fb1 = arg max h\u2208Hn \u03a6(h) \u2022\u1fb1, will create errors in decoding test sentences, and also errors in implementing the perceptron training algorithm in Figure 1. In this paper we give empirical results that suggest that FILTER can be chosen in such a way as to give efficient parsing performance together with high parsing accuracy. 
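As a point of reference, here is a minimal sketch of this abstract decoder, using the simple top-N version of FILTER mentioned above and reusing the hypothetical score helper sketched earlier; here phi_fn maps a hypothesis h to its sparse feature map, matching the Φ(h) notation, and advance plays the role of ADV. The parser of Section 3 replaces the fixed top-N cutoff with a score threshold relative to the best hypothesis.

```python
def beam_decode(x, n_words, advance, phi_fn, alpha, beam_size):
    # F_0(x) = { <x, null analysis, 0> }
    hyps = [(x, None, 0)]
    for i in range(1, n_words + 1):
        # union of ADV(h) over all h in F_{i-1}(x)
        expanded = [h_new for h in hyps for h_new in advance(h)]
        # FILTER: rank by model score and keep the top beam_size hypotheses
        expanded.sort(key=lambda h: score(phi_fn(h), alpha), reverse=True)
        hyps = expanded[:beam_size]    # F_i(x)
    # return the argmax over F_n(x) of Phi(h) . alpha
    return max(hyps, key=lambda h: score(phi_fn(h), alpha))
```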
The exact implementation of the parser will depend on the definition of partial analyses, of ADV and FILTER, and of the representation \u03a6. The next section describes our instantiation of these choices.", "cite_spans": [], "ref_spans": [ { "start": 322, "end": 328, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "An Abstract Description of Incremental", "sec_num": "2.3" }, { "text": "The parser is an incremental beam-search parser very similar to the sort described in Roark (2001a; , with some changes in the search strategy to accommodate the perceptron feature weights. We first describe the parsing algorithm, and then move on to the baseline feature set for the perceptron model.", "cite_spans": [ { "start": 86, "end": 99, "text": "Roark (2001a;", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "A full description of the parsing approach", "sec_num": "3" }, { "text": "The input to the parser is a string w n 0 , a grammar G, a mapping \u03a6 from derivations to feature vectors, and a parameter vector\u1fb1. The grammar G = (V, T, S \u2020 ,S, C, B) consists of a set of non-terminal symbols V , a set of terminal symbols T , a start symbol S \u2020 \u2208 V , an end-ofconstituent symbolS \u2208 V , a set of \"allowable chains\" C, and a set of \"allowable triples\" B.S is a special empty non-terminal that marks the end of a constituent. Each chain is a sequence of non-terminals followed by a terminal symbol, for example S \u2020 \u2192 S \u2192 NP \u2192 NN \u2192 Trash . Each \"allowable triple\" is a tuple X, Y, Z where X, Y, Z \u2208 V . The triples specify which nonterminals Z are allowed to follow a non-terminal Y under a parent X. For example, the triple S,NP,VP specifies that a VP can follow an NP under an S. The triple NP,NN,S would specify that theS symbol can follow an NN under an NP -i.e., that the symbol NN is allowed to be the final child of a rule with parent NP The initial state of the parser is the input string alone, w n 0 . In absorbing the first word, we add all chains of the form S \u2020 . . . \u2192 w 0 . For example, in figure 2 the chain S \u2020 \u2192 S \u2192 NP \u2192 NN \u2192 Trash is used to construct an analysis for the first word alone. Other chains which start with S \u2020 and end with Trash would give competing analyses for the first word of the string. Figure 2 shows an example of how the next word in a sentence can be incorporated into a partial analysis for the previous words. For any partial analysis there will be a set of potential attachment sites: in the example, the attachment sites are under the NP or the S. There will also be a set of possible chains terminating in the next word -there are three in the example. Each chain could potentially be attached at each attachment site, giving 6 ways of incorporating the next word in the example. For illustration, assume that the set B is { S,NP,VP , NP,NN,NN , NP,NN,S , S,NP,VP }. Then some of the 6 possible attachments may be disallowed because they create triples that are not in the set B. For example, in figure 2 attaching either of the VP chains under the NP is disallowed because the triple NP,NN,VP is not in B. Similarly, attaching the NN chain under the S will be disallowed if the triple S,NP,NN is not in B. In contrast, adjoining NN \u2192 can under the NP creates a single triple, NP,NN,NN , which is allowed. 
Adjoining either of the VP chains under the S creates two triples, S,NP,VP and NP,NN,S , which are both in the set B.", "cite_spans": [], "ref_spans": [ { "start": 1340, "end": 1348, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Parser control", "sec_num": "3.1" }, { "text": "Note that the \"allowable chains\" in our grammar are what Costa et al. (2001) call \"connection paths\" from the partial parse to the next word. It can be shown that the method is equivalent to parsing with a transformed context-free grammar (a first-order \"Markov\" grammar) -for brevity we omit the details here.", "cite_spans": [ { "start": 57, "end": 76, "text": "Costa et al. (2001)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Parser control", "sec_num": "3.1" }, { "text": "In this way, given a set of candidates F i (x) for the first i words of the string, we can generate a set of candidates for the first i + 1 words, \u222a h \u2208Fi(x) ADV(h ), where the ADV function uses the grammar as described above. We then calculate \u03a6(h) \u2022\u1fb1 for all of these partial hypotheses, and rank the set from best to worst. A FILTER function is then applied to this ranked set to give F i+1 . Let h k be the kth ranked hypothesis in H i+1 (x). Then h k \u2208 F i+1 if and only if \u03a6(h k ) \u2022\u1fb1 \u2265 \u03b8 k . In our case, we parameterize the calculation of \u03b8 k with \u03b3 as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parser control", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b8 k = \u03a6(h 0 ) \u2022\u1fb1 \u2212 \u03b3 k 3 .", "eq_num": "(3)" } ], "section": "Parser control", "sec_num": "3.1" }, { "text": "The problem with using left-child chains is limiting them in number. With a left-recursive grammar, of course, the set of all possible left-child chains is infinite. We use two techniques to reduce the number of left-child chains: first, we remove some (but not all) of the recursion from the grammar through a tree transform; next, we limit the left-child chains consisting of more than two non-terminal categories to those actually observed in the training data more than once. Left-child chains of length less than or equal to two are all those observed in training data. As a practical matter, the set of leftchild chains for a terminal x is taken to be the union of the sets of left-child chains for all pre-terminal part-ofspeech (POS) tags T for x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parser control", "sec_num": "3.1" }, { "text": "Before inducing the left-child chains and allowable triples from the treebank, the trees are transformed with a selective left-corner transformation (Johnson and Roark, 2000) that has been flattened as presented in Roark (2001b) . This transform is only applied to left-recursive productions, i.e. productions of the form A \u2192 A\u03b3. The transformed trees look as in figure 3. The transform has the benefit of dramatically reducing the number of left-child chains, without unduly disrupting the immediate dominance relationships that provide features for the model. The parse trees that are returned by the parser are then de-transformed to the original form of the grammar for evaluation 2 . Table 1 presents the number of left-child chains of length greater than 2 in sections 2-21 and 24 of the Penn Wall St. 
Journal Treebank, both with and without the flattened selective left-corner transformation (FSLC), for gold-standard part-of-speech (POS) tags and automatically tagged POS tags. When the FSLC has been applied and the set is restricted to those occurring more than once in the training corpus, we can reduce the total number of left-child chains of length greater than 2 by half, while leaving the number of words in the held-out corpus with an unobserved left-child chain (out-of-vocabulary rate -OOV) to just one in every thousand words.", "cite_spans": [ { "start": 149, "end": 174, "text": "(Johnson and Roark, 2000)", "ref_id": "BIBREF10" }, { "start": 215, "end": 228, "text": "Roark (2001b)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 689, "end": 696, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Parser control", "sec_num": "3.1" }, { "text": "F 0 = {L 00 , L 10 } F 4 = F 3 \u222a {L 03 } F 8 = F 7 \u222a {L 21 } F 12 = F 11 \u222a {L 11 } F 1 = F 0 \u222a {LKP } F 5 = F 4 \u222a {L 20 } F 9 = F 8 \u222a {CL} F 13 = F 12 \u222a {L 30 } F 2 = F 1 \u222a {L 01 } F 6 = F 5 \u222a {L 11 } F 10 = F 9 \u222a {LK} F 14 = F 13 \u222a {CCP } F 3 = F 2 \u222a {L 02 } F 7 = F 6 \u222a {L 30 } F 11 = F 0 \u222a {L 20 } F 15 = F 14 \u222a {CC}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parser control", "sec_num": "3.1" }, { "text": "For this paper, we wanted to compare the results of a perceptron model with a generative model for a comparable feature set. Unlike in Roark (2001a; , there is no look-ahead statistic, so we modified the feature set from those papers to explicitly include the lexical item and POS tag of the next word. Otherwise the features are basically the same as in those papers. We then built a generative model with this feature set and the same tree transform, for use with the beam-search parser from Roark (2004) to compare against our baseline perceptron model. To concisely present the baseline feature set, let us establish a notation. Features will fire whenever a new node is built in the tree. The features are labels from the left-context, i.e. the already built part of the tree. All of the labels that we will include in our feature sets are i levels above the current node in the tree, and j nodes to the left, which we will denote L ij . Hence, L 00 is the node label itself; L 10 is the label of parent of the current node; L 01 is the label of the sibling of the node, immediately to its left; L 11 is the label of the sibling of the parent node, etc. We also include: the lexical head of the current constituent (CL); the c-commanding lexical head (CC) and its POS (CCP); and the look-ahead word (LK) and its POS (LKP). All of these features are discussed at more length in the citations above. Table 2 presents the baseline feature set.", "cite_spans": [ { "start": 135, "end": 148, "text": "Roark (2001a;", "ref_id": "BIBREF18" }, { "start": 494, "end": 506, "text": "Roark (2004)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 1403, "end": 1410, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Features", "sec_num": "3.2" }, { "text": "In addition to the baseline feature set, we will also present results using features that would be more difficult to embed in a generative model. 
We included some punctuation-oriented features, which included (i) a Boolean feature indicating whether the final punctuation is a question mark or not; (ii) the POS label of the word after the current look-ahead, if the current lookahead is punctuation or a coordinating conjunction; and (iii) a Boolean feature indicating whether the look-ahead is punctuation or not, that fires when the category immediately to the left of the current position is immediately preceded by punctuation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "3.2" }, { "text": "This section describes two modifications to the \"basic\" training algorithm in figure 1. Figure 4 shows a modified algorithm for parameter estimation. The input to the function is a gold standard parse, together with a set of candidates F generated by the incremental parser. There are two steps. First, the model is updated as usual with the current example, which is then added to a cache of examples. Second, the method repeatedly iterates over the cache, updating the model at each cached example if the gold standard parse is not the best scoring parse from among the stored candidates for that example. In our experiments, the cache was restricted to contain the parses from up to N previously processed sentences, where N was set to be the size of the training set. The motivation for these changes is primarily efficiency. One way to think about the algorithms in this paper is as methods for finding parameter values that satisfy a set of linear constraints -one constraint for each incorrect parse in training data. The incremental parser is Input: A gold-standard parse = g for sentence k of N . A set of candidate parses F. Current parameters \u03b1. A Cache of triples g j , F j , c j for j = 1 . . . N where each g j is a previously generated gold standard parse, F j is a previously generated set of candidate parses, and c j is a counter of the number of times that\u1fb1 has been updated due to this particular triple. Parameters T 1 and T 2 controlling the number of iterations below. In our experiments, T 1 = 5 and T 2 = 50. Initialize the Cache to include, for j = 1 . . . N , g j , \u2205, T 2 .", "cite_spans": [], "ref_spans": [ { "start": 88, "end": 96, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Refinements to the Training Algorithm", "sec_num": "4" }, { "text": "Step 1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Making Repeated Use of Hypotheses", "sec_num": "4.1" }, { "text": "Step 2:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Making Repeated Use of Hypotheses", "sec_num": "4.1" }, { "text": "Calculate z = arg max t\u2208F \u03a6(t) \u2022\u1fb1 For t = 1 . . . T 1 , j = 1 . . . N If (z = g) then\u1fb1 =\u1fb1 + \u03a6(g) \u2212 \u03a6(z)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Making Repeated Use of Hypotheses", "sec_num": "4.1" }, { "text": "If c j < T 2 then Set the kth triple in the Cache to g, F, 0 Figure 4 : The refined parameter update method makes repeated use of hypotheses a method for dynamically generating constraints (i.e. incorrect parses) which are violated, or close to being violated, under the current parameter settings. The basic algorithm in Figure 1 is extremely wasteful with the generated constraints, in that it only looks at one constraint on each sentence (the arg max), and it ignores constraints implied by previously parsed sentences. 
This is inefficient because the generation of constraints (i.e., parsing an input sentence), is computationally quite demanding. More formally, it can be shown that the algorithm in figure 4 also has the upper bound in theorem 1 on the number of parameter updates performed. If the cost of steps 1 and 2 of the method are negligible compared to the cost of parsing a sentence, then the refined algorithm will certainly converge no more slowly than the basic algorithm, and may well converge more quickly.", "cite_spans": [], "ref_spans": [ { "start": 61, "end": 69, "text": "Figure 4", "ref_id": null }, { "start": 322, "end": 330, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Making Repeated Use of Hypotheses", "sec_num": "4.1" }, { "text": "Calculate z = arg max t\u2208Fj \u03a6(t) \u2022\u1fb1 If (z = g j ) then \u03b1 =\u1fb1 + \u03a6(g j ) \u2212 \u03a6(z) c j = c j + 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Making Repeated Use of Hypotheses", "sec_num": "4.1" }, { "text": "As a final note, we used the parameters T 1 and T 2 to limit the number of passes over examples, the aim being to prevent repeated updates based on outlier examples which are not separable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Making Repeated Use of Hypotheses", "sec_num": "4.1" }, { "text": "As before, define y i to be the gold standard parse for the i'th sentence, and also define y j i to be the partial analysis under the gold-standard parse for the first j words of the i'th sentence. Then if y j i / \u2208 F j (x i ) a search error has been made, and there is no possibility of the gold standard parse y i being in the final set of parses, F n (x i ). We call the following modification to the parsing algorithm during training \"early update\": if y j i / \u2208 F j (x i ), exit the parsing process, pass y j i , F j (x i ) to the parameter estimation method, and move on to the next string in the training set. Intuitively, the motivation behind this is clear. It makes sense to make a correction to the parameter values at the point that a search error has been made, rather than allowing the parser to continue to the end of the sentence. This is likely to lead to less noisy input to the parameter estimation algorithm; and early update will also improve efficiency, as at the early stages of training the parser will frequently give up after a small proportion of each sentence is processed. It is more difficult to justify from a formal point of view, we leave this to future work. Figure 5 shows the convergence of the training algorithm with neither of the two refinements presented; with just early update; and with both. Early update makes an enormous difference in the quality of the resulting model; repeated use of examples gives a small improvement, mainly in recall.", "cite_spans": [], "ref_spans": [ { "start": 1193, "end": 1201, "text": "Figure 5", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Early Update During Training", "sec_num": "4.2" }, { "text": "The parsing models were trained and tested on treebanks from the Penn Wall St. Journal Treebank: sections 2-21 were kept training data; section 24 was held-out development data; and section 23 was for evaluation. After each pass over the training data, the averaged perceptron model was scored on the development data, and the best performing model was used for test evaluation. 
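To make the role of early update concrete before presenting results, here is a hedged sketch of how it fits into one decoding run over a training sentence. It reuses the hypothetical beam decoder and score helper sketched earlier; y_partials[i], standing for the gold-standard partial analysis for the first i words (y_i^j in the notation of Section 4.2), is an assumed representation that the paper leaves abstract.

```python
def parse_training_sentence(x, y_partials, n_words, advance, phi_fn, alpha, beam_size):
    # Runs the incremental parser on one training sentence and returns the
    # (gold, candidates) pair that is handed to the parameter update.
    hyps = [(x, None, 0)]
    for i in range(1, n_words + 1):
        expanded = [h_new for h in hyps for h_new in advance(h)]
        expanded.sort(key=lambda h: score(phi_fn(h), alpha), reverse=True)
        hyps = expanded[:beam_size]    # F_i(x)
        if y_partials[i] not in hyps:
            # early update: the gold partial parse has fallen out of the beam,
            # so stop parsing here and correct the parameters immediately
            return y_partials[i], hyps
    return y_partials[n_words], hyps
```

In the experiments the update step that consumes this pair is the cached variant of Figure 4, which also revisits the stored candidate sets of up to N previously processed sentences.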
For this paper, we used POS tags that were provided either by the Treebank itself (gold standard tags) or by the perceptron POS tagger 3 presented in Collins (2002) . The former gives us an upper bound on the improvement that we might expect if we integrated the POS tagging with the parsing. Table 3 : Parsing results, section 23, all sentences, including labeled precision (LP), labeled recall (LR), and F-measure Table 3 shows results on section 23, when either goldstandard or POS-tagger tags are provided to the parser 4 . With the base features, the generative model outperforms the perceptron parser by between a half and one point, but with the additional punctuation features, the perceptron model matches the generative model performance.", "cite_spans": [ { "start": 529, "end": 543, "text": "Collins (2002)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 672, "end": 679, "text": "Table 3", "ref_id": null }, { "start": 795, "end": 802, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Empirical results", "sec_num": "5" }, { "text": "Of course, using the generative model and using the perceptron algorithm are not necessarily mutually exclusive. Another training scenario would be to include the generative model score as another feature, with some weight in the linear model learned by the perceptron algorithm. This sort of scenario was used in for training an n-gram language model using the perceptron algorithm. We follow that paper in fixing the weight of the generative model, rather than learning the weight along the the weights of the other perceptron features. The value of the weight was empirically optimized on the held-out set by performing trials with several values. Our optimal value was 10.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical results", "sec_num": "5" }, { "text": "In order to train this model, we had to provide generative model scores for strings in the training set. Of course, to be similar to the testing conditions, we cannot use the standard generative model trained on every sentence, since then the generative score would be from a model that had already seen that string in the training data. To control for this, we built ten generative models, each trained on 90 percent of the training data, and used each of the ten to score the remaining 10 percent that was not seen in that training set. For the held-out and testing conditions, we used the generative model trained on all of sections 2-21.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical results", "sec_num": "5" }, { "text": "In table 4 we present the results of including the generative model score along with the other perceptron features, just for the run with POS-tagger tags. The generative model score (negative log probability) effectively provides a much better initial starting point for the perceptron algorithm. The resulting F-measure on section 23 is 2.1 percent higher than either the generative model or perceptron-trained model used in isolation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical results", "sec_num": "5" }, { "text": "In this paper we have presented a discriminative training approach, based on the perceptron algorithm with a couple of effective refinements, that provides a model capable of effective heuristic search over a very difficult search space. 
In such an approach, the unnormalized discriminative parsing model can be applied without either an external model to present it with candidates, or potentially expensive dynamic programming. When the training algorithm is provided the generative model scores as an additional feature, the resulting parser is quite competitive on this task. The improvement that was derived from the additional punctuation features demonstrates the flexibility of the approach in incorporating novel features in the model. Future research will look in two directions. First, we will look to include more useful features that are difficult for a generative model to include. This paper was intended to compare search with the generative model and the perceptron model with roughly similar feature sets. Much improvement could potentially be had by looking for other features that could improve the models. Secondly, combining with the generative model can be done in several ways. Some of the constraints on the search technique that were required in the absence of the generative model can be relaxed if the generative model score is included as another feature. In the current paper, the generative score was simply added as another feature. Another approach might be to use the generative model to produce candidates at a word, then assign perceptron features for those candidates. Such variants deserve investigation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "Overall, these results show much promise in the use of discriminative learning techniques such as the perceptron algorithm to help perform heuristic search in difficult domains such as statistical parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "Dynamic programming methods(Geman and Johnson, 2002;Lafferty et al., 2001) can sometimes be used for both training and decoding, but this requires fairly strong restrictions on the features in the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "See Johnson (1998) for a presentation of the transform/detransform paradigm in parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For trials when the generative or perceptron parser was given POS tagger output, the models were trained on POS tagged sections 2-21, which in both cases helped performance slightly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "When POS tagging is integrated directly into the generative parsing process, the baseline performance is 87.0. For comparison with the perceptron model, results are shown with pre-tagged input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The work by Michael Collins was supported by the National Science Foundation under Grant No. 0347631.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Stochastic attribute-value grammars", "authors": [ { "first": "Steven", "middle": [], "last": "Abney", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "4", "pages": "597--617", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Abney. 1997. Stochastic attribute-value gram- mars. 
Computational Linguistics, 23(4):597-617.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "New ranking algorithms for parsing and tagging: Kernels over discrete structures and the voted perceptron", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Nigel", "middle": [], "last": "Duffy", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "263--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins and Nigel Duffy. 2002. New ranking algorithms for parsing and tagging: Kernels over dis- crete structures and the voted perceptron. In Proceed- ings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 263-270.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Discriminative reranking for natural language parsing", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2000, "venue": "The Proceedings of the 17th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins. 2000. Discriminative reranking for natural language parsing. In The Proceedings of the 17th International Conference on Machine Learning.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden markov models: Theory and experi- ments with perceptron algorithms. In Proceedings of the Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1-8.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Parameter estimation for statistical parsing models: Theory and practice of distribution-free methods", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2004, "venue": "New Developments in Parsing Technology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins. 2004. Parameter estimation for sta- tistical parsing models: Theory and practice of distribution-free methods. In Harry Bunt, John Car- roll, and Giorgio Satta, editors, New Developments in Parsing Technology. Kluwer.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Wide coverage incremental parsing by learning attachment preferences", "authors": [ { "first": "Fabrizio", "middle": [], "last": "Costa", "suffix": "" }, { "first": "Vincenzo", "middle": [], "last": "Lombardo", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Frasconi", "suffix": "" }, { "first": "Giovanni", "middle": [], "last": "Soda", "suffix": "" } ], "year": 2001, "venue": "Conference of the Italian Association for Artificial Intelligence (AIIA)", "volume": "", "issue": "", "pages": "297--307", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabrizio Costa, Vincenzo Lombardo, Paolo Frasconi, and Giovanni Soda. 2001. Wide coverage incremental parsing by learning attachment preferences. 
In Con- ference of the Italian Association for Artificial Intelli- gence (AIIA), pages 297-307.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Inducing features of random fields", "authors": [ { "first": "Vincent", "middle": [ "Della" ], "last": "Stephen Della Pietra", "suffix": "" }, { "first": "John", "middle": [], "last": "Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 1997, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "19", "issue": "", "pages": "380--393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Della Pietra, Vincent Della Pietra, and John Laf- ferty. 1997. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelli- gence, 19:380-393.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Large margin classification using the perceptron algorithm. Machine Learning", "authors": [ { "first": "Yoav", "middle": [], "last": "Freund", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Schapire", "suffix": "" } ], "year": 1999, "venue": "", "volume": "3", "issue": "", "pages": "277--296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Freund and Robert Schapire. 1999. Large mar- gin classification using the perceptron algorithm. Ma- chine Learning, 3(37):277-296.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "An efficient boosting algorithm for combining preferences", "authors": [ { "first": "Yoav", "middle": [], "last": "Freund", "suffix": "" }, { "first": "Raj", "middle": [], "last": "Iyer", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Schapire", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 1998, "venue": "Proc. of the 15th Intl. Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Freund, Raj Iyer, Robert Schapire, and Yoram Singer. 1998. An efficient boosting algorithm for combining preferences. In Proc. of the 15th Intl. Con- ference on Machine Learning.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Dynamic programming for parsing and estimation of stochastic unification-based grammars", "authors": [ { "first": "Stuart", "middle": [], "last": "Geman", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "279--286", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stuart Geman and Mark Johnson. 2002. Dynamic pro- gramming for parsing and estimation of stochastic unification-based grammars. In Proceedings of the 40th Annual Meeting of the Association for Compu- tational Linguistics, pages 279-286.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Compact nonleft-recursive grammars using the selective left-corner transform and factoring", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 18th International Conference on Computational Linguistics (COLING)", "volume": "", "issue": "", "pages": "355--361", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson and Brian Roark. 2000. Compact non- left-recursive grammars using the selective left-corner transform and factoring. 
In Proceedings of the 18th International Conference on Computational Linguis- tics (COLING), pages 355-361.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Estimators for stochastic \"unification-based\" grammars", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Stuart", "middle": [], "last": "Geman", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Canon", "suffix": "" }, { "first": "Zhiyi", "middle": [], "last": "Chi", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "535--541", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson, Stuart Geman, Steven Canon, Zhiyi Chi, and Stefan Riezler. 1999. Estimators for stochastic \"unification-based\" grammars. In Proceedings of the 37th Annual Meeting of the Association for Computa- tional Linguistics, pages 535-541.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "PCFG models of linguistic tree representations", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "4", "pages": "617--636", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson. 1998. PCFG models of linguis- tic tree representations. Computational Linguistics, 24(4):617-636.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 18th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "282--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic mod- els for segmenting and labeling sequence data. In Pro- ceedings of the 18th International Conference on Ma- chine Learning, pages 282-289.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A maximum entropy model for parsing", "authors": [ { "first": "Adwait", "middle": [], "last": "Ratnaparkhi", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "R. Todd", "middle": [], "last": "Ward", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the International Conference on Spoken Language Processing (ICSLP)", "volume": "", "issue": "", "pages": "803--806", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adwait Ratnaparkhi, Salim Roukos, and R. Todd Ward. 1994. A maximum entropy model for parsing. In Pro- ceedings of the International Conference on Spoken Language Processing (ICSLP), pages 803-806.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Learning to parse natural language with maximum entropy models", "authors": [ { "first": "Adwait", "middle": [], "last": "Ratnaparkhi", "suffix": "" } ], "year": 1999, "venue": "Machine Learning", "volume": "34", "issue": "", "pages": "151--175", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adwait Ratnaparkhi. 1999. Learning to parse natural language with maximum entropy models. 
Machine Learning, 34:151-175.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Parsing the wall street journal using a lexicalfunctional grammar and discriminative estimation techniques", "authors": [ { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" }, { "first": "Tracy", "middle": [], "last": "King", "suffix": "" }, { "first": "Ronald", "middle": [ "M" ], "last": "Kaplan", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Crouch", "suffix": "" }, { "first": "John", "middle": [ "T" ], "last": "Maxwell", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "271--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefan Riezler, Tracy King, Ronald M. Kaplan, Richard Crouch, John T. Maxwell III, and Mark Johnson. 2002. Parsing the wall street journal using a lexical- functional grammar and discriminative estimation techniques. In Proceedings of the 40th Annual Meet- ing of the Association for Computational Linguistics, pages 271-278.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Corrective language modeling for large vocabulary ASR with the perceptron algorithm", "authors": [ { "first": "Brian", "middle": [], "last": "Roark", "suffix": "" }, { "first": "Murat", "middle": [], "last": "Saraclar", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "749--752", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brian Roark, Murat Saraclar, and Michael Collins. 2004. Corrective language modeling for large vocabulary ASR with the perceptron algorithm. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 749-752.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Probabilistic top-down parsing and language modeling", "authors": [ { "first": "Brian", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2001, "venue": "Computational Linguistics", "volume": "27", "issue": "2", "pages": "249--276", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brian Roark. 2001a. Probabilistic top-down parsing and language modeling. Computational Linguistics, 27(2):249-276.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Robust Probabilistic Predictive Syntactic Processing", "authors": [ { "first": "Brian", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brian Roark. 2001b. Robust Probabilistic Predictive Syntactic Processing. Ph.D. thesis, Brown University. http://arXiv.org/abs/cs/0105019.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Robust garden path parsing", "authors": [ { "first": "Brian", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2004, "venue": "Natural Language Engineering", "volume": "10", "issue": "1", "pages": "1--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brian Roark. 2004. Robust garden path parsing. 
Natural Language Engineering, 10(1):1-24.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Left child chains and connection paths. Dotted lines represent potential attachments", "num": null, "uris": null }, "FIGREF1": { "type_str": "figure", "text": "Three representations of NP modifications: (a) the original treebank representation; (b) Selective left-corner representation; and (c) a flat structure that is unambiguously equivalent to (b)", "num": null, "uris": null }, "FIGREF2": { "type_str": "figure", "text": "over training data F\u2212measure parsing accuracy No early update, no repeated use of examples Early update, no repeated use of examples Early update, repeated use of examples", "num": null, "uris": null }, "FIGREF3": { "type_str": "figure", "text": "Performance on development data (section f24) after each pass over the training data, with and without repeated use of examples and early update.", "num": null, "uris": null }, "FIGREF4": { "type_str": "figure", "text": "", "num": null, "uris": null }, "TABREF1": { "type_str": "table", "text": "Left-child chain type counts (of length > 2) for sections of the Wall St. Journal Treebank, and out-ofvocabulary (OOV) rate on the held-out corpus.", "html": null, "content": "", "num": null }, "TABREF2": { "type_str": "table", "text": "", "html": null, "content": "
", "num": null }, "TABREF3": { "type_str": "table", "text": "Perceptron (w/ punctuation features) 88.1 87.6 87.8 87.0 86.3 86.6", "html": null, "content": "
                        Gold-standard tags        POS-tagger tags
                        LP      LR      F         LP      LR      F
Generative              88.1    87.6    87.8      86.8    86.5    86.7
Perceptron (baseline)   87.5    86.9    87.2      86.2    85.5    85.8
", "num": null }, "TABREF5": { "type_str": "table", "text": "", "html": null, "content": "
Parsing results, section 23, all sentences, including labeled precision (LP), labeled recall (LR), and F-measure
", "num": null } } } }