{ "paper_id": "U10-1014", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:09:21.404505Z" }, "title": "Reranking a wide-coverage CCG parser", "authors": [ { "first": "Dominick", "middle": [], "last": "Ng", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Sydney NSW 2006", "location": { "country": "Australia" } }, "email": "" }, { "first": "Matthew", "middle": [], "last": "Honnibal", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Sydney NSW 2006", "location": { "country": "Australia" } }, "email": "" }, { "first": "James", "middle": [ "R" ], "last": "Curran", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Sydney NSW 2006", "location": { "country": "Australia" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "n-best parse reranking is an important technique for improving the accuracy of statistical parsers. Reranking is not constrained by the dynamic programming required for tractable parsing, so arbitrary features of each parse may be considered. We adapt the reranking features and methodology used by Charniak and Johnson (2005) for the C&C Combinatory Categorial Grammar parser, and develop new features based on the richer formalism. The reranker achieves a labeled dependency F-score of 87.59%, which is a significant improvement over prior results.", "pdf_parse": { "paper_id": "U10-1014", "_pdf_hash": "", "abstract": [ { "text": "n-best parse reranking is an important technique for improving the accuracy of statistical parsers. Reranking is not constrained by the dynamic programming required for tractable parsing, so arbitrary features of each parse may be considered. We adapt the reranking features and methodology used by Charniak and Johnson (2005) for the C&C Combinatory Categorial Grammar parser, and develop new features based on the richer formalism. 
The reranker achieves a labeled dependency F-score of 87.59%, which is a significant improvement over prior results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Accurate syntactic parsing has proven to be critical for many tasks in natural language processing (NLP), including semantic role labeling (Gildea and Jurafsky, 2002) , question answering (Echihabi and Marcu, 2003) , and machine translation (De-Neefe and Knight, 2009) . Improved parser accuracy benefits many downstream tasks in the field.", "cite_spans": [ { "start": 139, "end": 166, "text": "(Gildea and Jurafsky, 2002)", "ref_id": "BIBREF10" }, { "start": 188, "end": 214, "text": "(Echihabi and Marcu, 2003)", "ref_id": "BIBREF9" }, { "start": 241, "end": 268, "text": "(De-Neefe and Knight, 2009)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One method of improving parsing accuracy is reranking - the process of reordering the top n analyses as determined by a base parser (Collins, 2000) . The statistical models used in phrase-structure and dependency parsers rely on dynamic programming algorithms that restrict possible features to a local context. This is necessary for efficient decoding of the potential parse forest, ensuring tractability at the cost of excluding any non-local features from consideration. Reranking operates over complete trees that are the most probable derivations under the dynamic programming model, allowing arbitrarily complex features of the parse to be incorporated without sacrificing efficiency. Poor local decisions made by parsers are easier to model and capture in the reranking phase. Collins (2000) reports a 1.55% accuracy improvement with reranking for the Collins parser, and Charniak and Johnson (2005) report a 1.3% improvement for a reranked Charniak parser. 
An open question is how well reranking applies to parsers whose design differs from the Charniak and Collins parsers. An attempt to port the Charniak and Johnson reranker to the Berkeley parser (Petrov et al., 2006) produced only minimal accuracy improvements (Johnson and Ural, 2010) , suggesting that careful feature engineering is necessary for good performance.", "cite_spans": [ { "start": 131, "end": 146, "text": "(Collins, 2000)", "ref_id": "BIBREF7" }, { "start": 779, "end": 793, "text": "Collins (2000)", "ref_id": "BIBREF7" }, { "start": 854, "end": 901, "text": "Collins parser, and Charniak and Johnson (2005)", "ref_id": null }, { "start": 944, "end": 952, "text": "Charniak", "ref_id": null }, { "start": 1153, "end": 1174, "text": "(Petrov et al., 2006)", "ref_id": "BIBREF19" }, { "start": 1219, "end": 1243, "text": "(Johnson and Ural, 2010)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we describe the implementation of a discriminative maximum entropy reranker for the C&C parser (Clark and Curran, 2007) , a state-of-the-art system based on Combinatory Categorial Grammar (CCG). We reimplement the features described in Charniak and Johnson (2005) to suit the CCG parser and replicate the Charniak reranker setup. Our experiments show that the PCFG-style features are less effective at reranking CCG trees than Penn Treebank-style trees. We hypothesise that the binary-branching structure of CCG is the cause, as CCG trees are deeper and create different constituent structures compared to Penn Treebank trees. To address this, we develop a number of new features to take advantage of the more detailed formalism and the evaluation over recovered dependencies. 
We also experiment with regression and classification approaches, variations in feature pruning, and differing numbers of n-best parses for the reranker to consider.", "cite_spans": [ { "start": 109, "end": 133, "text": "(Clark and Curran, 2007)", "ref_id": "BIBREF5" }, { "start": 249, "end": 276, "text": "Charniak and Johnson (2005)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The reranker achieves a best labeled dependency F-score of 87.13% on Section 00 of CCGbank and 87.59% on Section 23. The performance gains are statistically significant, but small in real terms, indicating that crafting reranking features is not a trivial process. However, the continued improvements in parsing accuracy will benefit downstream applications utilising the parser through more accurate syntactic analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Reranking has been successfully applied to dependency parsing (Sangati et al., 2009) , machine translation (Shen et al., 2004) , and natural language generation with CCG (White and Rajkumar, 2009) . Collins (2000) describes reranking for the Collins (Model 2) parser (Collins, 1999) . 36,000 sentences from Sections 02-21 of the Penn Treebank WSJ data are parsed with a modified version of the base parser, producing an average of 27 parses per sentence. Features are extracted from the parses to create reranker training data, including lexical heads and the distances between them, context-free rules in the tree, n-grams and their ancestors, and parent-grandparent relationships. 
Collins reports a final PARSEVAL F-score of 89.75% using a boosting-based reranker, a 1.55% improvement compared to the baseline parser.", "cite_spans": [ { "start": 62, "end": 84, "text": "(Sangati et al., 2009)", "ref_id": "BIBREF21" }, { "start": 107, "end": 126, "text": "(Shen et al., 2004)", "ref_id": "BIBREF22" }, { "start": 170, "end": 196, "text": "(White and Rajkumar, 2009)", "ref_id": "BIBREF24" }, { "start": 199, "end": 213, "text": "Collins (2000)", "ref_id": "BIBREF7" }, { "start": 267, "end": 282, "text": "(Collins, 1999)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Parser Reranking", "sec_num": "2" }, { "text": "The potential benefits from reranking are dependent on the quality of the candidate n-best parses. Huang and Chiang (2005) describe efficient and accurate algorithms for this task based on a directed hypergraph analysis framework (Klein and Manning, 2001) . By improving the quality of the candidate parses, Huang and Chiang demonstrate how oracle reranking scores (using a perfect reranker that always chooses the best parse from an n-best list) can be dramatically improved compared to the parses used in Collins (2000) . Charniak and Johnson (2005) describe discriminative reranking for the Charniak parser. A coarse-to-fine parsing approach allows high-quality n-best parses to be tractably computed while retaining dynamic programming in the parser. When run in 50-best mode the Charniak n-best parser has an oracle F-score of 96.8% in the standard PARSEVAL metric - much higher than the 89.7% parser baseline. The reranker produces a final F-score of 91.0% in 50-best mode. This is an improvement of 1.3% over the baseline model. Self-training over the reranked parses further improves performance to 92.1% F-score, which remains the state-of-the-art (McClosky et al., 2006) . 
Self-training provides the additional benefit of improving the Charniak parser's performance on out-of-domain data - a known weakness of supervised parsing.", "cite_spans": [ { "start": 99, "end": 122, "text": "Huang and Chiang (2005)", "ref_id": "BIBREF13" }, { "start": 230, "end": 255, "text": "(Klein and Manning, 2001)", "ref_id": "BIBREF16" }, { "start": 506, "end": 520, "text": "Collins (2000)", "ref_id": "BIBREF7" }, { "start": 523, "end": 550, "text": "Charniak and Johnson (2005)", "ref_id": "BIBREF3" }, { "start": 1153, "end": 1176, "text": "(McClosky et al., 2006)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Parser Reranking", "sec_num": "2" }, { "text": "More recently, the Charniak reranking system has been adapted for the Berkeley parser (Petrov et al., 2006) . Unlike the Collins and Charniak parsers, which are broadly similar and heavily based on lexicalised models, the Berkeley parser", "cite_spans": [ { "start": 86, "end": 107, "text": "(Petrov et al., 2006)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Parser Reranking", "sec_num": "2" }, { "text": "Figure 1: A simple CCG derivation of \"Jack baked a cake with raisins\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parser Reranking", "sec_num": "2" }, { "text": "uses a split-merge technique to acquire a much smaller, unlexicalised grammar from its training data. Johnson and Ural (2010) report that reranking leads to negligible performance improvements for the Berkeley parser, and acknowledge that the reranker's feature set, adapted from Charniak and Johnson (2005) , may be implicitly tailored to the Charniak parser over the Berkeley parser. 
In particular, the feature pruning process for reranking was conducted over output from the Charniak parser, which may have prevented useful features for the Berkeley parser from being chosen.", "cite_spans": [ { "start": 102, "end": 125, "text": "Johnson and Ural (2010)", "ref_id": "BIBREF15" }, { "start": 280, "end": 307, "text": "Charniak and Johnson (2005)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Parser Reranking", "sec_num": "2" }, { "text": "Combinatory Categorial Grammar (CCG, Steedman (2000)) is a lexicalised grammar formalism based on combinatory logic. The grammar is directly encoded in the lexicon in the form of categories that govern the syntactic behaviour of each word. A small number of generic rules combine categories together to form a spanning analysis. Categories may be atomic or complex. Atomic categories represent words and constituents that are syntactically complete, such as nouns (N ), noun phrases (NP ), prepositional phrases (PP ), and sentences (S ). Complex categories are binary structures of the form X /Y or X \\Y , and represent structures which combine with an argument of category Y to produce a result of category X . The forward and backward slashes indicate that Y is expected to the right and left respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "3" }, { "text": "Complex categories can be thought of as functors that require particular arguments to produce a grammatical construction. Subcategorization information is encoded using nested categories. 
For example, transitive verbs have the category (S \\NP )/NP , which indicates that one object NP is expected to the right to form a verb phrase S \\NP , which in turn expects one subject NP to the left to form a sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "3" }, { "text": "In addition to forward and backward application, CCG has a number of other binary combinators based on function composition. There are also unary type-changing combinators that take a single category and transform it into another category. Figure 1 gives a simple CCG derivation, showing how categories are successively combined together to yield an analysis.", "cite_spans": [], "ref_spans": [ { "start": 240, "end": 248, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "3" }, { "text": "The C&C parser (Clark and Curran, 2007) is a fast, highly accurate parser based on the CCG formalism. The parser is used in question answering systems (Bos et al., 2007) , computational semantics tools (Bos et al., 2004) , and has been shown to perform well in recovering unbounded dependencies (Rimell et al., 2009) .", "cite_spans": [ { "start": 15, "end": 39, "text": "(Clark and Curran, 2007)", "ref_id": "BIBREF5" }, { "start": 151, "end": 169, "text": "(Bos et al., 2007)", "ref_id": "BIBREF1" }, { "start": 202, "end": 220, "text": "(Bos et al., 2004)", "ref_id": "BIBREF0" }, { "start": 295, "end": 316, "text": "(Rimell et al., 2009)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "The C&C parser", "sec_num": "4" }, { "text": "The parser divides the parsing process into two main phases: supertagging and parsing. First, the supertagger assigns a small set of initial categories to each word in the sentence. Then, the parser attempts to find a spanning analysis from the proposed categories using the modified CKY algorithm described in Steedman (2000) . 
If the parser cannot find an analysis (i.e. there is no sequence of combinators that can combine the proposed categories) the supertagger is run again at a higher ambiguity level, giving each word a larger set of possible categories, and the process is repeated. The supertagging phase dramatically reduces the number of derivations for the parser to consider, making the system highly efficient.", "cite_spans": [ { "start": 312, "end": 327, "text": "Steedman (2000)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "The C&C parser", "sec_num": "4" }, { "text": "An n-best version of the C&C parser has recently been developed (Brennan, 2008) , incorporating the algorithms described in Huang and Chiang (2005) . The n-best parser is almost as efficient as the baseline 1-best version, and we use it as the basis for all experiments presented in this paper.", "cite_spans": [ { "start": 64, "end": 79, "text": "(Brennan, 2008)", "ref_id": null }, { "start": 124, "end": 147, "text": "Huang and Chiang (2005)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "The C&C parser", "sec_num": "4" }, { "text": "CCGbank is the standard corpus for English parsing with CCG. It is a transformation of the Penn Treebank WSJ data into CCG derivations and dependencies (Hockenmaier and Steedman, 2007) . Sections 02-21 are the standard training data for the C&C parser, with Section 00 used for development and Section 23 for evaluation. The supertagger requires part-of-speech information for each word as part of its feature set, so a POS tagger is also included with the C&C parser. 
Both the supertagger and the POS tagger are trained over tags extracted from Sections 02-21 of CCGbank.", "cite_spans": [ { "start": 152, "end": 184, "text": "(Hockenmaier and Steedman, 2007)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "The C&C parser", "sec_num": "4" }, { "text": "We frame the reranking task for CCG parsing as follows: given an n-best list of parses, ranked by the parser, choose the parse that is as close as possible to the gold standard. We use the standard CCG labeled dependency metric as described in Hockenmaier (2003) to define closeness to the gold standard, allowing us to explore both classification and regression as frameworks for the task. In classification, the parse(s) closest to the gold standard with respect to F-score are labeled as positive, while all other parses are labeled as negative. If there are multiple parses with the highest F-score, they are all labeled as positive. In regression, the F-score of each parse is used as the target value. Both classification and regression approaches were implemented using MEGAM.", "cite_spans": [ { "start": 244, "end": 262, "text": "Hockenmaier (2003)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "5" }, { "text": "n-best lists of parses were generated with the n-best C&C parser using Algorithm 3 of Huang and Chiang (2005) . We used the normal-form model for the C&C parser as described in Clark and Curran (2007) for all experiments in this paper. Reranker training data was created using n-best parses of each sentence in Sections 02-21 of CCGbank. As this is also the parser's training data, care must be taken to avoid generating training data where the parser's confidence level is different from that at run-time (caused by parsing the training data). 
We constructed ten folds of Sections 02-21, training the POS tagger, supertagger, and parser on nine of the folds and producing n-best parses over the remaining fold.", "cite_spans": [ { "start": 87, "end": 110, "text": "Huang and Chiang (2005)", "ref_id": "BIBREF13" }, { "start": 178, "end": 201, "text": "Clark and Curran (2007)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "5" }, { "text": "Features were generated over the n-best parses of the folded training data and the appropriate label assigned based on the F-score. This data was used to train the reranker. Similarly, Section 24 of CCGbank was parsed using a model trained over Sections 02-21 for use as tuning data. At run-time, features were generated over the n-best parses of the test data, and the most probable parse (classification) or the parse with the highest predicted F-score (regression) was returned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "5" }, { "text": "We experimented with values of 10 and 50 for n to balance between the potential accuracy improvement and the efficiency of the reranker. n was kept constant between the training data and the final test data (i.e. a reranker trained on 50-best parses was then tested over 50-best parses).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "5" }, { "text": "Following Charniak and Johnson (2005) we implemented feature pruning for the reranker training data as follows. For each sentence, define a feature as being pseudo-constant if it does not differ in value over all the parses for that sentence. We keep all features that are not pseudo-constant in at least t sentences in the training data. 
We experimented with values of 0, 2, and 5 for t to investigate the effect of feature pruning.", "cite_spans": [ { "start": 10, "end": 37, "text": "Charniak and Johnson (2005)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "5" }, { "text": "The features described in this section are calculated over CCG derivation trees produced by the C&C parser. We began by implementing the features described by Charniak and Johnson (2005) , before developing features specifically for CCG derivations. We also implemented the CCG parsing features described by Clark and Curran (2007) , so that our reranking model would have access to the information used by the parser. These features include various combinations of word-category, word-POS, CCG rule, distance, and dependency information. Finally, the log score and rank assigned to each derivation by the parser were encoded as core features for the reranker.", "cite_spans": [ { "start": 159, "end": 186, "text": "Charniak and Johnson (2005)", "ref_id": "BIBREF3" }, { "start": 308, "end": 331, "text": "Clark and Curran (2007)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Reranking Features", "sec_num": "6" }, { "text": "CCG derivation trees have some important structural differences from the trees that the Charniak and Johnson features were designed for. The most important difference is that CCG trees are at most binary branching. As the longest non-terminal in the Penn Treebank has 51 children, features designed to generalise long production rules are useful in the Charniak and Johnson reranker but are less relevant to CCG trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reranking Features", "sec_num": "6" }, { "text": "Another important difference is that CCG production rules are constrained by the combinatory rules, whereas Penn Treebank productions combine unrelated atomic symbols. 
For instance, a Penn Treebank production NP \u2192 NP PP would be translated into CCG as NP \u2192 NP NP \\NP . Much of the information in the production is already present in the structure of the NP \\NP category. We speculate that this will make the features that capture the vertical context of a production rule less useful for CCG.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reranking Features", "sec_num": "6" }, { "text": "Finally, each CCG tree corresponds to exactly one dependency analysis, and this is produced as output by the C&C parser. This gives the reranker access to the full dependency analysis of each sentence, making the dependency-approximation heuristics used by Charniak and Johnson (2005) unnecessary for our purposes.", "cite_spans": [ { "start": 257, "end": 284, "text": "Charniak and Johnson (2005)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Reranking Features", "sec_num": "6" }, { "text": "The features adapted from Charniak and Johnson (2005) are described in Sections 6.1 and 6.2 below. The novel CCG features we develop are described in Section 6.3. Most features were implemented as simple boolean indicator functions. Maximum entropy modelling exponentiates feature values, so real-valued features are more influential than boolean features. We mitigated this effect by taking the log of real-valued features.", "cite_spans": [ { "start": 26, "end": 53, "text": "Charniak and Johnson (2005)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Reranking Features", "sec_num": "6" }, { "text": "These features attempt to describe the overall shape of the parse tree, to capture the fact that English generally favours right-branching parse trees, with phonologically heavy constituents generally occurring in sentence-final position. Tree topology can also be useful in capturing the balance found in coordination attachment. 
These guidelines distinguish the correct parse tree in Figure 2a from the incorrect parse tree in Figure 2b: the incorrect tree is more left-branching than the correct tree, with a shallower depth of balance in the coordination.", "cite_spans": [], "ref_spans": [ { "start": 386, "end": 392, "text": "Figure", "ref_id": null }, { "start": 429, "end": 438, "text": "Figure 2b", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Tree Topology Features", "sec_num": "6.1" }, { "text": "CoPar: records coordination parallelism at various depths. Indicates whether both sides of a coordination are identical in structure and category labels at depths of 1 to 4 from the coordinator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Topology Features", "sec_num": "6.1" }, { "text": "CoLenPar: indicates the difference in size between two halves of a conjunction (where size is the number of nodes in the yield) as well as whether the latter half is the final element.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Topology Features", "sec_num": "6.1" }, { "text": "Heavy: encodes the category and size of the subtree rooted at each non-terminal, whether the non-terminal is at the end of the sentence, and whether it is followed by punctuation. This crudely captures the tendency for larger constituents to lie further to the right in a tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Topology Features", "sec_num": "6.1" }, { "text": "RightBranch: encodes the number of non-terminals on the longest path from the root of the tree to the right-most non-punctuation node in the tree, and the number of non-terminals in the tree that are not on this path.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Topology Features", "sec_num": "6.1" }, { "text": "SubjVerbAgr: captures the conjoined POS tags of the subject noun and verb in a sentence to distinguish cases where the pluralisation does not agree. 
The subject is assumed to be the final NP before the verb phrase (S \\NP ) in a sentence. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Topology Features", "sec_num": "6.1" }, { "text": "These features, adapted from Charniak and Johnson (2005) , attempt to represent various fragments of the tree as well as incorporate layers of vertical and horizontal context that are difficult to encode in the parser model.", "cite_spans": [ { "start": 29, "end": 56, "text": "Charniak and Johnson (2005)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Local Context Features", "sec_num": "6.2" }, { "text": "Edge: captures the words and POS tags immediately preceding and following the subtree rooted at each non-terminal in the tree. This crudely captures poor attachment decisions in local trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Local Context Features", "sec_num": "6.2" }, { "text": "Heads: represents pairs of constituent heads as indicated by the parser at various levels in the tree. Heads are encoded as lexical items and POS tags.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Local Context Features", "sec_num": "6.2" }, { "text": "HeadTree: records the entire tree fragment (in a bracketed string format) projected upwards from the head word of the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Local Context Features", "sec_num": "6.2" }, { "text": "Neighbours: encodes the category of each nonterminal, its binned size, and the POS tags of the \u2113 1 preceding words and the \u2113 2 following words, where \u2113 1 = 1 or 2 and \u2113 2 = 1. 
Binned size is the number of words in the yield of the non-terminal, bucketed into 0, 1, 2, 4, or 5+.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Local Context Features", "sec_num": "6.2" }, { "text": "NGramTree: records tree fragments rooted at the lowest common ancestor node of \u2113 = 2 or 3 contiguous terminals in the tree. This represents the subtree encompassing each sequence of \u2113 words in the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Local Context Features", "sec_num": "6.2" }, { "text": "Rule: captures the equivalent CCG rule application represented at each non-terminal node; equivalent to a context-free production rule.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Local Context Features", "sec_num": "6.2" }, { "text": "SynSemHeads: yields pairs of semantic heads (e.g. the rightmost noun in a noun phrase) and functional heads (e.g. the determiner in a noun phrase) at each non-terminal in the tree. Heads are encoded as lexical items and POS tags.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Local Context Features", "sec_num": "6.2" }, { "text": "Word: yields each word in a sentence along with the categories of \u2113 = 2 or 3 of its immediate ancestor nodes in the tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Local Context Features", "sec_num": "6.2" }, { "text": "WProj: for each terminal in the tree, encodes the word combined with the category of its maximal projection parent, which is the first node found by climbing the tree until the child node is no longer the head of its parent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Local Context Features", "sec_num": "6.2" }, { "text": "We devised a number of new features for CCG aimed at uncovering various combinator sequences or combinations that may indicate an overly complicated or undesirable derivation. 
Additionally, these features attempt to encode more information about the dependencies licensed by the derivation, since it is these dependencies that are evaluated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG Features", "sec_num": "6.3" }, { "text": "Balance: encodes the overall balance of the tree in terms of the ratio of leaves and the ratio of nodes in the left and right subtrees from the root. This feature reflects the decision to make all nominal compounds in CCGbank right branching (Hockenmaier and Steedman, 2007) .", "cite_spans": [ { "start": 242, "end": 275, "text": "(Hockenmaier and Steedman, 2007)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "CCG Features", "sec_num": "6.3" }, { "text": "CoHeads: records the heads of both halves of a coordination as indicated by the parser, along with the depth at which the head is found. This attempts to encode the conjunction dependencies in the tree, as incorrect conjunction dependencies propagate through to other dependencies in the tree. Heads are encoded as lexical items and POS tags.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG Features", "sec_num": "6.3" }, { "text": "LexDep: CCG dependencies can be partially captured via the children of non-terminals in the tree. This feature is active for non-terminals with two children and encodes the heads of the children in terms of lexical items, POS tags, categories, and depth from the non-terminal. Dependencies involving punctuation are ignored as they are not assessed in the evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG Features", "sec_num": "6.3" }, { "text": "NumDeps: distinguishes between parses based on the log number of dependencies that they yield, ignoring punctuation. 
Dependencies are located using the same heuristic as the LexDep feature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG Features", "sec_num": "6.3" }, { "text": "TypeRaising: indicates the presence of unary type-raising in the tree. While type-raising is necessary to analyse some constructions in CCG, it is tightly restricted in the parser due to its power, and is expected to appear only rarely.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG Features", "sec_num": "6.3" }, { "text": "UnaryRule, BiUnaryRule: indicates the unary rules present in the tree and the bigram combinations of these rules. The unary rules do not include type-raising and are non-standard in CCG; they were added by Hockenmaier and Steedman (2007) to CCGbank for constructions such as clausal adjuncts, which are poorly handled by the formalism.", "cite_spans": [ { "start": 206, "end": 237, "text": "Hockenmaier and Steedman (2007)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "CCG Features", "sec_num": "6.3" }, { "text": "C&C Features: Finally, we also incorporate the dependency and normal-form features used by the C&C parser as described in Clark and Curran (2007) . These features encode various combinations of word-category, word-POS, root-word, CCG rule, distance, and dependency information.", "cite_spans": [ { "start": 122, "end": 145, "text": "Clark and Curran (2007)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "CCG Features", "sec_num": "6.3" }, { "text": "We follow the CCG dependency evaluation methodology established by Hockenmaier (2003) , using the EVALUATE scorer distributed with the C&C parser. It evaluates a CCG parse as a set of labeled dependencies consisting of the head, its lexical category, the child, and the argument slot that it fills. A dependency is considered correct only if all four elements match the gold standard. 
Table 1 : Baseline and oracle n-best parser performance over Section 00 of CCGbank.", "cite_spans": [ { "start": 67, "end": 85, "text": "Hockenmaier (2003)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 385, "end": 392, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Measures", "sec_num": "7" }, { "text": "Statistical significance was calculated using the test described in Chinchor (1992) , which measures the probability that the two sets of responses are drawn from the same distribution. A score below 0.05 is considered significant.", "cite_spans": [ { "start": 68, "end": 83, "text": "Chinchor (1992)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Measures", "sec_num": "7" }, { "text": "We report labeled precision (LP), labeled recall (LR), and labeled F-score (LF) results over gold standard POS tags and labeled F-score over automatically assigned POS tags (AF).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Measures", "sec_num": "7" }, { "text": "Reranking is dependent on high-quality parses from the n-best parser. As seen in Table 1 , the oracle labeled dependency F-score of the n-best C&C parser is 92.48% given a perfect reranker over 50best parses. This is a significant improvement over the baseline result of 86.75% and provides a solid basis for a reranker.", "cite_spans": [], "ref_spans": [ { "start": 81, "end": 88, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Oracle Performance", "sec_num": "8.1" }, { "text": "Our oracle score falls notably short of the 50best oracle of 96.8% reported by Charniak and Johnson (2005) , over a baseline of 89.7%. 
However, these numbers refer to the PARSEVAL score for constituency parses, so they are not directly comparable to our dependency recovery metric.", "cite_spans": [ { "start": 79, "end": 106, "text": "Charniak and Johnson (2005)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Oracle Performance", "sec_num": "8.1" }, { "text": "We present results in Tables 2 and 3 comparing the 1-best C&C parser using the normalform model (Clark and Curran, 2007) , randomized baselines (choosing a parse at random from the n-best list), and the reranking C&C parser in labeled dependency recovery over Section 00 of CCGbank. Our best result for 10-best reranking is an F-score of 87.13% with gold POS tags and 85.22% with automatically assigned POS tags. This is achieved using the regression setup and all features without pruning. The best result for 50best reranking is F-scores of 87.08% and 85.23% respectively, using the classification setup with all features and a pruning value of 2. These two results are both statistically significant improvements over the baseline parser.", "cite_spans": [ { "start": 96, "end": 120, "text": "(Clark and Curran, 2007)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 22, "end": 36, "text": "Tables 2 and 3", "ref_id": null } ], "eq_spans": [], "section": "Oracle Performance", "sec_num": "8.1" }, { "text": "Randomly choosing a parse from the n-best list Table 4 : Subtractive analysis on the top performing 10-best model on Section 00. Bold indicates a statistically significant change from the baseline.", "cite_spans": [], "ref_spans": [ { "start": 47, "end": 54, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Oracle Performance", "sec_num": "8.1" }, { "text": "results in much poorer performance than the 1-best baseline. All our experiments produced results that were significantly higher than the randomized result, indicating that our approaches were learning useful features from the training data. 
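The oracle upper bound and the randomized baseline over an n-best list can both be sketched in a few lines. Representing each parse as a set of labeled dependency tuples is an assumption made for illustration, not the parser's internal format:

```python
import random

# Sketch of the oracle and randomized selections over n-best lists.
# Each parse is assumed (for illustration) to be a set of labeled
# dependency tuples, scored against gold by exact tuple match.

def fscore(correct, proposed, expected):
    p = correct / proposed if proposed else 0.0
    r = correct / expected if expected else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def oracle_fscore(nbest_lists, gold_parses):
    correct = proposed = expected = 0
    for nbest, gold in zip(nbest_lists, gold_parses):
        # a perfect reranker picks the candidate with the most correct
        # dependencies, breaking ties toward fewer proposed dependencies
        best = max(nbest, key=lambda deps: (len(deps & gold), -len(deps)))
        correct += len(best & gold)
        proposed += len(best)
        expected += len(gold)
    return fscore(correct, proposed, expected)

def random_fscore(nbest_lists, gold_parses, seed=0):
    rng = random.Random(seed)
    correct = proposed = expected = 0
    for nbest, gold in zip(nbest_lists, gold_parses):
        choice = rng.choice(nbest)  # randomized baseline: any parse at random
        correct += len(choice & gold)
        proposed += len(choice)
        expected += len(gold)
    return fscore(correct, proposed, expected)
```

The oracle maximises per-sentence correct dependencies, so its corpus F-score can only grow as n increases, while the random choice degrades as lower-quality parses enter the list.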
Even though the oracle scores increase with n (as shown in Table 1 ), the overall parse quality deteriorates. Regression was generally more successful for 10-best reranking, while classification was better for 50-best reranking. However, there were very few cases where a statistically significant difference in performance was observed between regression and classification approaches.", "cite_spans": [], "ref_spans": [ { "start": 301, "end": 308, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Oracle Performance", "sec_num": "8.1" }, { "text": "We investigated the performance of three sets of features: those adapted from Charniak and Johnson (2005) (CJ), our new features (CCG), and the union of the two sets (ALL). The log score and rank of each parse was included as core features in every experiment. In general, more features improved performance. The best results were produced using all of the possible features in the reranker model. In terms of the top F-score for each set of features, the CCG-specific features were better than the Charniak and Johnson (2005) features by a statistically significant margin. This held in all experiments except one (10-best classification), indicating that features tailored to CCG trees and dependency evaluation were more discriminative between good and bad CCG parses. This also implies that for reranking to improve the accuracy of a parser, the features must target that parser and the nature of its evaluation. Features producing state-of-the-art performance for the Charniak reranker had no positive impact on CCG parsing in isolation.", "cite_spans": [ { "start": 499, "end": 526, "text": "Charniak and Johnson (2005)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "8.2" }, { "text": "We conducted subtractive feature analysis on our best performing model (10-best regression with all features and no pruning) to investigate the contribution of individual features. 
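The general shape of such an analysis is a leave-one-out loop over feature templates. In this sketch, train() and evaluate() are placeholders for the reranker's actual training and dev-set scoring routines, and the template list is a representative subset rather than the full inventory:

```python
# Sketch of a subtractive (leave-one-out) feature analysis. train() and
# evaluate() are placeholders standing in for the reranker's training and
# scoring; the template names are a representative, non-exhaustive subset.

TEMPLATES = ["Balance", "CoHeads", "LexDep", "NumDeps",
             "TypeRaising", "UnaryRule", "SubjVerbAgr", "Edges"]

def subtractive_analysis(train_data, dev_data, train, evaluate):
    baseline = evaluate(train(train_data, TEMPLATES), dev_data)
    deltas = {}
    for template in TEMPLATES:
        reduced = [t for t in TEMPLATES if t != template]
        model = train(train_data, reduced)
        deltas[template] = evaluate(model, dev_data) - baseline
    return deltas
```

A large negative delta for a template indicates that removing it hurts, i.e. the template carries information not captured by the remaining features.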
Features were individually removed and the reranker was retrained and retested on Section 00. The removal of the SubjVerbAgr and Edges features are statistically significant, while the removal of any other single feature results in a non-significant decrease in F-score. We then performed an isolation experiment, training and testing the reranker using just the SubjVerbAgr and Edge features with the log score and rank from the parser. Table 5 shows that these features do significantly worse than the baseline in isolation, indicating that it is the combination of features together which produces the improved performance.", "cite_spans": [], "ref_spans": [ { "start": 619, "end": 626, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Features", "sec_num": "8.2" }, { "text": "We found that increased feature pruning had a negative impact on parsing accuracy. None of our experiments showed a significant improvement with higher pruning values, as opposed to Charniak and Johnson (2005) who found the count-based pruning to be useful. The best performing systems overall used pruning values of 0 or 2, implying that the pruning strategy is ineffective with respect to performance over such a varied set of features. One area where pruning does help is in the training times for the reranker: some experiments are nearly twice as fast with a pruning value t = 5 compared to t = 0. However, as this cost must only be paid once, the benefit of pruning with respect to actual parsing time is negligible. Table 6 summarises the performance of our best reranker model against the baseline normal-form model on Section 23 of CCGbank. We achieve statistical significant improvement in F-score over the baseline. 
However, in real terms the change in F-score is small, indicating that reranking may not guarantee performance improvements even if it is carefully targeted to the parser.", "cite_spans": [ { "start": 182, "end": 209, "text": "Charniak and Johnson (2005)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 723, "end": 730, "text": "Table 6", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Pruning", "sec_num": "8.3" }, { "text": "We have implemented a maximum entropy reranker for the C&C CCG parser, building on the methodology and features of Charniak and Johnson (2005) and extending the approach with new features. We have found that performance improvements from reranking stem from targeting the reranker features at the parser and its evaluation: features tailored to CCG perform better than PCFG-style features in isolation. Our best system achieves an of 87.59%, which is a statistically significant improvement over the baseline parser.", "cite_spans": [ { "start": 115, "end": 142, "text": "Charniak and Johnson (2005)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "9" }, { "text": "The reranker scales with the efficiency of calculating features on parse trees. The features described in this paper require time linear in the number of nodes in the tree. However, the reranker is currently implemented as an external postprocessing step. This leads to an order of magnitude speed decrease; future work will include integrating the reranker into the parser itself to mitigate this speed impact.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "9" }, { "text": "The improvement in accuracy that we achieve is small in absolute terms, showing that reranking is a considerably difficult task. 
However, continued improvements such as this one in parsing accuracy will benefit the variety of downstream applications that utilise parsing for practical NLP tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "9" }, { "text": "http://www.umiacs.umd.edu/\u02dchal/megam", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Steedman (2000) describes a ternary conjunction rule, but this is broken into two binary productions in CCGbank, using the marker[conj ].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by Australian Research Council Discovery grants DP0665973 and DP1097291, the Capital Markets CRC and a University of Sydney Merit Scholarship.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Wide-Coverage Semantic Representations from a CCG Parser", "authors": [ { "first": "Johan", "middle": [], "last": "Bos", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" }, { "first": "James", "middle": [ "R" ], "last": "Curran", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hockenmaier", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 20th International Conference on Computational Linguistics (COLING-04)", "volume": "", "issue": "", "pages": "1240--1246", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johan Bos, Stephen Clark, Mark Steedman, James R. Curran, and Julia Hockenmaier. 2004. Wide-Coverage Semantic Representations from a CCG Parser. 
In Proceedings of the 20th International Conference on Computational Linguis- tics (COLING-04), pages 1240-1246, Geneva, Switzer- land, August.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The Pronto QA system at TREC-2007: Harvesting Hyponyms, Using Nominalisation Patterns, and Computing Answer Cardinality", "authors": [ { "first": "Johan", "middle": [], "last": "Bos", "suffix": "" }, { "first": "James", "middle": [ "R" ], "last": "Curran", "suffix": "" }, { "first": "Edoardo", "middle": [], "last": "Guzzetti", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Sixteenth Text REtrieval Conference (TREC 2007)", "volume": "", "issue": "", "pages": "726--732", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johan Bos, James R. Curran, and Edoardo Guzzetti. 2007. The Pronto QA system at TREC-2007: Harvesting Hy- ponyms, Using Nominalisation Patterns, and Comput- ing Answer Cardinality. In Proceedings of the Sixteenth Text REtrieval Conference (TREC 2007), pages 726-732, Gaitersburg, Maryland, USA.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "2008. k-best Parsing Algorithms for a Natural Language Parser", "authors": [ { "first": "Forrest", "middle": [], "last": "Brennan", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Forrest Brennan. 2008. k-best Parsing Algorithms for a Nat- ural Language Parser. 
Master's thesis, University of Ox- ford.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Coarse-to-Fine n-Best Parsing and MaxEnt Discriminative Reranking", "authors": [ { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL-05)", "volume": "", "issue": "", "pages": "173--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Charniak and Mark Johnson. 2005. Coarse-to-Fine n-Best Parsing and MaxEnt Discriminative Reranking. In Proceedings of the 43rd Annual Meeting of the Associa- tion for Computational Linguistics (ACL-05), pages 173- 180, Ann Arbor, Michigan, USA, June.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The statistical significance of the MUC-4 results", "authors": [ { "first": "Nancy", "middle": [], "last": "Chinchor", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the Fourth Message Understanding Conference (MUC-4)", "volume": "", "issue": "", "pages": "30--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nancy Chinchor. 1992. The statistical significance of the MUC-4 results. In Proceedings of the Fourth Mes- sage Understanding Conference (MUC-4), pages 30-50, McLean, Virginia, June.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Wide-Coverage Efficient Statistical Parsing with CCG and Log-Linear Models", "authors": [ { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "James", "middle": [ "R" ], "last": "Curran", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "33", "issue": "4", "pages": "493--552", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Clark and James R. Curran. 2007. Wide-Coverage Efficient Statistical Parsing with CCG and Log-Linear Models. 
Computational Linguistics, 33(4):493-552.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Head-Driven Statistical Models for Natural Language Parsing", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania, Philadelphia, Pennsylvania, USA.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Discriminative Reranking for Natural Language Parsing", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 17th International Conference on Machine Learning (ICML-00)", "volume": "", "issue": "", "pages": "175--182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins. 2000. Discriminative Reranking for Natu- ral Language Parsing. In Proceedings of the 17th Interna- tional Conference on Machine Learning (ICML-00), pages 175-182, Palo Alto, California, USA, June.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Synchronous Tree Adjoining Machine Translation", "authors": [ { "first": "Steve", "middle": [], "last": "Deneefe", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP-09)", "volume": "", "issue": "", "pages": "727--736", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steve DeNeefe and Kevin Knight. 2009. Synchronous Tree Adjoining Machine Translation. 
In Proceedings of the 2009 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP-09), pages 727-736, Singa- pore, August.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A Noisy-Channel Approach to Question Answering", "authors": [ { "first": "Abdessamad", "middle": [], "last": "Echihabi", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL-03)", "volume": "", "issue": "", "pages": "16--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abdessamad Echihabi and Daniel Marcu. 2003. A Noisy- Channel Approach to Question Answering. In Proceed- ings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL-03), pages 16-23, Sap- poro, Japan, July.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Automatic Labeling of Semantic Roles", "authors": [ { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2002, "venue": "Computational Linguistics", "volume": "28", "issue": "3", "pages": "245--288", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Gildea and Daniel Jurafsky. 2002. Automatic La- beling of Semantic Roles. Computational Linguistics, 28(3):245-288.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "CCGbank: A Corpus of CCG Derivations and Dependency Structures Extracted from the Penn Treebank", "authors": [ { "first": "Julia", "middle": [], "last": "Hockenmaier", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "33", "issue": "3", "pages": "355--396", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julia Hockenmaier and Mark Steedman. 2007. 
CCGbank: A Corpus of CCG Derivations and Dependency Structures Extracted from the Penn Treebank. Computational Lin- guistics, 33(3):355-396.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Parsing with Generative Models of Predicate-Argument Structure", "authors": [ { "first": "Julia", "middle": [], "last": "Hockenmaier", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL-03)", "volume": "", "issue": "", "pages": "359--366", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julia Hockenmaier. 2003. Parsing with Generative Mod- els of Predicate-Argument Structure. In Proceedings of the 41st Annual Meeting of the Association for Compu- tational Linguistics (ACL-03), pages 359-366, Sapporo, Japan, July.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Better k-best Parsing", "authors": [ { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Ninth International Workshop on Parsing Technology (IWPT-05)", "volume": "", "issue": "", "pages": "53--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Huang and David Chiang. 2005. Better k-best Pars- ing. In Proceedings of the Ninth International Workshop on Parsing Technology (IWPT-05), pages 53-64, Vancou- ver, British Columbia, Canada, October.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Forest Reranking: Discriminative Parsing with Non-Local Features", "authors": [ { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Human Language Technology Conference at the 45th Annual Meeting of the Association for Computational Linguistics (HLT/ACL-08)", "volume": "", "issue": "", "pages": "586--594", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Huang. 2008. 
Forest Reranking: Discriminative Pars- ing with Non-Local Features. In Proceedings of the Hu- man Language Technology Conference at the 45th Annual Meeting of the Association for Computational Linguistics (HLT/ACL-08), pages 586-594, Columbus, Ohio, June.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Reranking the Berkeley and Brown Parsers", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Ahmet", "middle": [ "Engin" ], "last": "Ural", "suffix": "" } ], "year": 2010, "venue": "Proceedings of Human Language Technologies: the 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (HLT/NAACL-10)", "volume": "", "issue": "", "pages": "665--668", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson and Ahmet Engin Ural. 2010. Reranking the Berkeley and Brown Parsers. In Proceedings of Human Language Technologies: the 2010 Annual Conference of the North American Chapter of the Association for Com- putational Linguistics (HLT/NAACL-10), pages 665-668, Los Angeles, California, USA, June.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Parsing and Hypergraphs", "authors": [ { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 7th International Workshop on Parsing Technologies (IWPT-01)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Klein and Christopher D. Manning. 2001. Parsing and Hypergraphs. 
In Proceedings of the 7th International Workshop on Parsing Technologies (IWPT-01), Beijing, China, October.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Building a Large Annotated Corpus of English: The Penn Treebank", "authors": [ { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a Large Annotated Cor- pus of English: The Penn Treebank. Computational Lin- guistics, 19(2):313-330.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Effective Self-Training for Parsing", "authors": [ { "first": "David", "middle": [], "last": "Mcclosky", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 2006 Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics (HLT/NAACL-06)", "volume": "", "issue": "", "pages": "152--159", "other_ids": {}, "num": null, "urls": [], "raw_text": "David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective Self-Training for Parsing. 
In Proceedings of the 2006 Human Language Technology Conference of the North American Chapter of the Association of Com- putational Linguistics (HLT/NAACL-06), pages 152-159, New York City, New York, USA, June.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Learning Accurate, Compact, and Interpretable Tree Annotation", "authors": [ { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Leon", "middle": [], "last": "Barrett", "suffix": "" }, { "first": "Romain", "middle": [], "last": "Thibaux", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING/ACL-06)", "volume": "", "issue": "", "pages": "433--440", "other_ids": {}, "num": null, "urls": [], "raw_text": "Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning Accurate, Compact, and Interpretable Tree Annotation. In Proceedings of the 21st Interna- tional Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING/ACL-06), pages 433-440, Sydney, Australia, July.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Unbounded Dependency Recovery for Parser Evaluation", "authors": [ { "first": "Laura", "middle": [], "last": "Rimell", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP-09)", "volume": "", "issue": "", "pages": "813--821", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laura Rimell, Stephen Clark, and Mark Steedman. 2009. Unbounded Dependency Recovery for Parser Evaluation. 
In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP-09), pages 813-821, Singapore, August.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A generative re-ranking model for dependency parsing", "authors": [ { "first": "Federico", "middle": [], "last": "Sangati", "suffix": "" }, { "first": "Willem", "middle": [], "last": "Zuidema", "suffix": "" }, { "first": "Rens", "middle": [], "last": "Bod", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 11th International Conference on Parsing Technologies (IWPT-09)", "volume": "", "issue": "", "pages": "238--241", "other_ids": {}, "num": null, "urls": [], "raw_text": "Federico Sangati, Willem Zuidema, and Rens Bod. 2009. A generative re-ranking model for dependency parsing. In Proceedings of the 11th International Conference on Parsing Technologies (IWPT-09), pages 238-241, Paris, France, October.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Discriminative Reranking for Machine Translation", "authors": [ { "first": "Libin", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Anoop", "middle": [], "last": "Sarkar", "suffix": "" }, { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 2004 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT/NAACL-04)", "volume": "", "issue": "", "pages": "177--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Libin Shen, Anoop Sarkar, and Franz Josef Och. 2004. Dis- criminative Reranking for Machine Translation. 
In Pro- ceedings of the 2004 Human Language Technology Con- ference of the North American Chapter of the Association for Computational Linguistics (HLT/NAACL-04), pages 177-184, Boston, Massachusetts, USA, May.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "The Syntactic Process", "authors": [ { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Steedman. 2000. The Syntactic Process. MIT Press, Cambridge, Massachusetts, USA.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Perceptron Reranking for CCG Realization", "authors": [ { "first": "Michael", "middle": [], "last": "White", "suffix": "" }, { "first": "Rajakrishnan", "middle": [], "last": "Rajkumar", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP-09)", "volume": "", "issue": "", "pages": "410--419", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael White and Rajakrishnan Rajkumar. 2009. Percep- tron Reranking for CCG Realization. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP-09), pages 410-419, Sin- gapore, August.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Parse featuring a conjunction error.", "type_str": "figure", "uris": null, "num": null }, "FIGREF1": { "text": "Two CCG derivations for the sentence, It rose 2% this week and 9% this year.", "type_str": "figure", "uris": null, "num": null }, "TABREF0": { "html": null, "content": "
LP LR LF AF
Baseline 87.19
", "text": "86.32 86.75 84.80 Oracle 10 91.98 90.89 91.43 89.47 Oracle 50 93.43 92.26 92.84 90.96", "type_str": "table", "num": null }, "TABREF1": { "html": null, "content": "
LP LR LF AF
Baseline 87.19
Table 5: 10-best isolation experiments for the SubjVerbAgr and Edges features on Section 00 using regression and no pruning.
LP LR LF AF
Baseline 87.75 86.98 87.36 85.07
Reranker 87.98 87.21 87.59 85.36
", "text": "86.32 86.75 84.80 SubjVerbAgr 86.76 85.87 86.31 84.33 Edges 86.29 85.45 85.87 83.95 Both 86.30 85.49 85.89 83.95", "type_str": "table", "num": null }, "TABREF2": { "html": null, "content": "
: Baseline and final reranker performance over Section 23 of CCGbank with the normal-form model.
", "text": "", "type_str": "table", "num": null } } } }