{ "paper_id": "L14-1324", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T11:57:09.659517Z" }, "title": "All Fragments Count in Parser Evaluation", "authors": [ { "first": "Jasmijn", "middle": [], "last": "Bastings", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Amsterdam", "location": {} }, "email": "bastings@uva.nl" }, { "first": "Sima", "middle": [ "'" ], "last": "Khalil", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Amsterdam", "location": {} }, "email": "" }, { "first": "", "middle": [], "last": "An", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Amsterdam", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "PARSEVAL, the default paradigm for evaluating constituency parsers, calculates parsing success (Precision/Recall) as a function of the number of matching labeled brackets across the test set. Nodes in constituency trees, however, are connected together to reflect important linguistic relations such as predicate-argument and direct-dominance relations between categories. In this paper, we present FREVAL, a generalization of PARSEVAL, where the precision and recall are calculated not only for individual brackets, but also for co-occurring, connected brackets (i.e. fragments). FREVAL fragments precision (FLP) and recall (FLR) interpolate the match across the whole spectrum of fragment sizes ranging from those consisting of individual nodes (labeled brackets) to those consisting of full parse trees. We provide evidence that FREVAL is informative for inspecting relative parser performance by comparing a range of existing parsers.", "pdf_parse": { "paper_id": "L14-1324", "_pdf_hash": "", "abstract": [ { "text": "PARSEVAL, the default paradigm for evaluating constituency parsers, calculates parsing success (Precision/Recall) as a function of the number of matching labeled brackets across the test set. Nodes in constituency trees, however, are connected together to reflect important linguistic relations such as predicate-argument and direct-dominance relations between categories. In this paper, we present FREVAL, a generalization of PARSEVAL, where the precision and recall are calculated not only for individual brackets, but also for co-occurring, connected brackets (i.e. fragments). FREVAL fragments precision (FLP) and recall (FLR) interpolate the match across the whole spectrum of fragment sizes ranging from those consisting of individual nodes (labeled brackets) to those consisting of full parse trees. We provide evidence that FREVAL is informative for inspecting relative parser performance by comparing a range of existing parsers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Current approaches to parsing usually employ a training treebank to learn a statistical parser. The goal of learning is to obtain a parser that can reproduce the test set treebank parses as accurately as possible. The rationale behind this is that the treebank parses themselves are products of trained human annotators and, hence, should serve as the gold standard. 
If indeed the ultimate goal of learning statistical parsers from treebanks is to obtain parsers that imitate human parsing capability as represented by a sample in a treebank, then parser evaluation should aim at measuring the amount of match/mismatch between a parse produced by a parser and a parse produced by human annotation. It is crucial at this point to highlight the difference between this view of parser evaluation and a linguistically-oriented point of view: a linguistically-motivated parser evaluation focuses on linguistically-relevant aspects of parse trees that are crucial for subsequent linguistic processing, e.g., dependency relations might be very important for semantic or other linguistic processing (cf. alternative linguistically relevant proposals (Sampson and Babarczy, 2003; Carroll et al., 1998)). The contrast between linguistic relevance and the statistical view of treebank parsing as a learning problem (with some cognitive relevance) is crucial, because parser output often has other practical uses besides serving as mere input for subsequent linguistic processing, e.g., parsers may serve as target language models in machine translation systems. Consequently, to evaluate a statistical parser learned from a treebank we need a measure of similarity between its output parse and the human-annotated parse in the test set. Such a measure of similarity between two trees could capture different shared aspects of those trees. The PARSEVAL measures (Black et al., 1991) are currently the de facto standard for evaluating (English) parser output. To calculate the Precision and Recall, the output trees of a parser are compared to a gold standard, i.e. human-annotated trees in a treebank. A well-known treebank in this respect is the Penn Wall Street Journal treebank (Marcus et al., 1993). To facilitate comparison among different parsers, it is common practice to test a parser on section 23 of that corpus and report PARSEVAL F-scores. PARSEVAL counts how many individual brackets match between a test-tree and a gold-tree, and also whether the test-tree was a complete match or not. However, what PARSEVAL does not count, for example, is whether the matching brackets together constitute a connected unit (e.g. a subtree or paths of direct-dominance relations). Consider, for example, two trees of the form (S (NP ...) (VP (VBZ ...) (NP ...))) and (S (NP ...) (XP (VBZ ...) (NP ...))): the trees differ in a single node labeled VP vs. XP. This label change ruins the relation of VP with its VBZ verb and object NP, with its parent S, and, finally, the subject-verb structure (S (NP VP)). Consequently, different parsers may report very close F-scores coming from completely different parse trees, some of which might be more useful than others. In this paper we exploit a more elaborate measure of similarity between two trees as the basis for a new parser evaluation measure called FREVAL. FREVAL is a generalization of PARSEVAL from individual nodes to arbitrary-size fragments, i.e., subtrees defined as connected non-empty subgraphs of a tree. FREVAL computes its final precision (and recall) as a mixture of the individually computed precisions (and recalls) for each of the fragment granularity levels. By employing a mixture of evaluation measures over a range of fragment sizes, FREVAL allows discriminating between parsers performing closely under PARSEVAL but otherwise having completely different kinds of output.
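To make the VP-vs-XP example concrete, the following minimal Python sketch counts matching labeled brackets and matching parent-child (size-2) fragments for the two trees above; the 3-word sentence and the span indices are our own illustrative assumptions, not taken from the paper:

```python
# Reconstructed VP-vs-XP example over a hypothetical 3-word sentence
# (w0 = subject, w1 = verb, w2 = object); a bracket is an (i, label, j) span.
gold = {(0, "S", 3), (0, "NP", 1), (1, "VP", 3), (1, "VBZ", 2), (2, "NP", 3)}
test = {(0, "S", 3), (0, "NP", 1), (1, "XP", 3), (1, "VBZ", 2), (2, "NP", 3)}
print(len(gold & test) / len(test))      # bracket precision: 4/5 = 0.80

# Parent-child fragments (size 2), written as (parent, child) bracket pairs.
gold2 = {((0, "S", 3), (0, "NP", 1)), ((0, "S", 3), (1, "VP", 3)),
         ((1, "VP", 3), (1, "VBZ", 2)), ((1, "VP", 3), (2, "NP", 3))}
test2 = {((0, "S", 3), (0, "NP", 1)), ((0, "S", 3), (1, "XP", 3)),
         ((1, "XP", 3), (1, "VBZ", 2)), ((1, "XP", 3), (2, "NP", 3))}
print(len(gold2 & test2) / len(test2))   # size-2 fragment precision: 1/4 = 0.25
```

A single mislabeled node thus costs only one of the five brackets, but three of the four parent-child fragments: exactly the connectivity information that a brackets-only score ignores.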
In addition to subtrees, FREVAL also considers paths and parent-child relations as fragments, thereby accommodating certain aspects of the leaf-ancestor (Sampson and Babarczy, 2003) and dependency (Carroll et al., 1998) proposals. Interestingly, fragment mixtures have been exploited in statistical parsing, e.g., (Bod et al., 2003; Sima'an, 2000; Bansal and Klein, 2010), but, to the best of our knowledge, never before for parser evaluation.", "cite_spans": [ { "start": 1140, "end": 1168, "text": "(Sampson and Babarczy, 2003;", "ref_id": "BIBREF13" }, { "start": 1169, "end": 1190, "text": "Carroll et al., 1998)", "ref_id": "BIBREF5" }, { "start": 1850, "end": 1870, "text": "(Black et al., 1991)", "ref_id": "BIBREF3" }, { "start": 2169, "end": 2190, "text": "(Marcus et al., 1993)", "ref_id": "BIBREF10" }, { "start": 3901, "end": 3929, "text": "(Sampson and Babarczy, 2003)", "ref_id": "BIBREF13" }, { "start": 3945, "end": 3967, "text": "(Carroll et al., 1998)", "ref_id": "BIBREF5" }, { "start": 4061, "end": 4079, "text": "(Bod et al., 2003;", "ref_id": "BIBREF4" }, { "start": 4080, "end": 4094, "text": "Sima'an, 2000;", "ref_id": "BIBREF16" }, { "start": 4095, "end": 4118, "text": "Bansal and Klein, 2010)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "1." }, { "text": "We start with an overview of PARSEVAL. We assume a test set consisting of sentences $\{U_1, U_2, \ldots, U_n\}$ and their corresponding gold-standard trees $T_C = \{\tau^1_C, \tau^2_C, \ldots, \tau^n_C\}$. Now, let the parser output be a set of 'guessed' trees $T_g = \{\tau^1_g, \tau^2_g, \ldots, \tau^n_g\}$. More precisely, $\tau^i_C$ and $\tau^i_g$ denote the correct and the 'guessed' tree for sentence $U_i$, respectively. PARSEVAL can be seen to represent a tree $\tau$ as a set of labeled constituents:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2." }, { "text": "$$\mathrm{Tree}(\tau) = \{ \langle i, X, j \rangle \mid \langle i, X, j \rangle \in \tau \}$$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2." }, { "text": "where $\langle i, X, j \rangle$ stands for a constituent in $\tau$ that covers the span from $i$ to $j$ with label $X$, and $|\mathrm{Tree}(\tau)|$ is the cardinality of the set, in this case the number of brackets/constituents. The PARSEVAL measures (Labeled Recall, Labeled Precision, and Exact Match) are defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2." }, { "text": "$$LR(T_C, T_g) \overset{\mathrm{def}}{=} \frac{\sum_i |\mathrm{Tree}(\tau^i_C) \cap \mathrm{Tree}(\tau^i_g)|}{\sum_i |\mathrm{Tree}(\tau^i_C)|} \qquad LP(T_C, T_g) \overset{\mathrm{def}}{=} \frac{\sum_i |\mathrm{Tree}(\tau^i_C) \cap \mathrm{Tree}(\tau^i_g)|}{\sum_i |\mathrm{Tree}(\tau^i_g)|} \qquad EM(T_C, T_g) \overset{\mathrm{def}}{=} \frac{\sum_i \delta(\tau^i_C, \tau^i_g)}{n}$$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2." }, { "text": "where $\delta$ is the Kronecker delta function, returning 1 if the specified trees are equal and 0 otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2." },
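As a concrete companion to the definitions above, here is a minimal Python sketch of the PARSEVAL quantities; the nested-tuple tree encoding (label, child, ..., child) and all helper names are our own assumptions, and real Evalb additionally normalizes its input trees (e.g., it ignores certain brackets), which this sketch does not:

```python
def brackets(tree, start=0):
    """Return (spans, end): the set of (i, label, j) constituents of a
    nested-tuple tree (label, child, ..., child); string children are words."""
    if isinstance(tree, str):
        return set(), start + 1            # a word occupies one position
    spans, pos = set(), start
    for child in tree[1:]:
        child_spans, pos = brackets(child, pos)
        spans |= child_spans
    spans.add((start, tree[0], pos))
    return spans, pos

def parseval(gold_trees, test_trees):
    """Corpus-level LR, LP and EM over parallel lists of gold/guessed trees."""
    matched = gold_n = test_n = exact = 0
    for gold, test in zip(gold_trees, test_trees):
        g, _ = brackets(gold)
        t, _ = brackets(test)
        matched += len(g & t)
        gold_n += len(g)
        test_n += len(t)
        exact += int(gold == test)         # the Kronecker delta on whole trees
    return matched / gold_n, matched / test_n, exact / len(gold_trees)

# The VP-vs-XP pair from the motivation: one mislabeled node out of five.
gold = ("S", ("NP", "w0"), ("VP", ("VBZ", "w1"), ("NP", "w2")))
test = ("S", ("NP", "w0"), ("XP", ("VBZ", "w1"), ("NP", "w2")))
print(parseval([gold], [test]))            # (0.8, 0.8, 0.0)
```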
{ "text": "We introduce a new representation of trees in terms of their fragments. Let $max = |\tau|$ denote the number of nodes in a tree $\tau$. A tree $\tau$ is represented by a sequence of sets of situated fragments", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FREVAL: Beyond Sets of Constituents", "sec_num": "3." }, { "text": "$$\mathrm{Tree}_1(\tau), \mathrm{Tree}_2(\tau), \ldots, \mathrm{Tree}_{max}(\tau)$$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FREVAL: Beyond Sets of Constituents", "sec_num": "3." }, { "text": "where for every $1 \le s \le max$, we define $\mathrm{Tree}_s(\tau)$ as the set of all situated fragments $\varphi$ in $\tau$ of size $|\varphi| = s$. A situated fragment $\langle i, \varphi, j \rangle$ is a fragment $\varphi$ together with the span $\mathrm{span}(\varphi) = \langle i, j \rangle$ that $\varphi$ covers. More formally,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FREVAL: Beyond Sets of Constituents", "sec_num": "3." }, { "text": "$$\mathrm{Tree}_s(\tau) \overset{\mathrm{def}}{=} \{ \langle i, \varphi, j \rangle \mid \mathrm{fragment}(\varphi, \tau) \wedge |\varphi| = s \wedge \mathrm{span}(\varphi) = \langle i, j \rangle \}$$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FREVAL: Beyond Sets of Constituents", "sec_num": "3." }, { "text": "where $\mathrm{fragment}(\varphi, \tau)$ is true iff $\varphi$ is a fragment of $\tau$, i.e., a non-empty, connected subgraph of $\tau$. Note that we maintain a separate set $\mathrm{Tree}_s(\tau)$ of situated fragments for every fragment size $s$, i.e., we do not put fragments of different sizes together. This is crucial in what follows, because we will calculate a separate precision/recall over the whole test set for each fragment size. Had we not done so, the counts of larger fragments would dominate the final precision and recall figures, because the number of fragments of a certain size in a tree can be exponential in the number of nodes in the tree. With this new representation of trees in place, we now define for every fragment size $s$ a separate Labeled Precision ($LP_s$) and Labeled Recall ($LR_s$):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FREVAL: Beyond Sets of Constituents", "sec_num": "3." }, { "text": "$$LP_s(T_C, T_g) = \frac{\sum_i |\mathrm{Tree}_s(\tau^i_C) \cap \mathrm{Tree}_s(\tau^i_g)|}{\sum_i |\mathrm{Tree}_s(\tau^i_g)|} \qquad LR_s(T_C, T_g) = \frac{\sum_i |\mathrm{Tree}_s(\tau^i_C) \cap \mathrm{Tree}_s(\tau^i_g)|}{\sum_i |\mathrm{Tree}_s(\tau^i_C)|}$$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FREVAL: Beyond Sets of Constituents", "sec_num": "3." }, { "text": "The Fragment Labeled Recall (FLR) and Fragment Labeled Precision (FLP) are defined as a linear interpolation over the sequence of different fragment sizes: 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FREVAL: Beyond Sets of Constituents", "sec_num": "3." }, { "text": "$$\mathrm{FLR} = \sum_s \alpha_s \times LR_s \qquad \mathrm{FLP} = \sum_s \alpha_s \times LP_s$$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FREVAL: Beyond Sets of Constituents", "sec_num": "3." }, { "text": "where the $\alpha_s$ fulfill $\sum_s \alpha_s = 1.0$. If we set $\alpha_1 = 1$ we obtain the standard PARSEVAL LP and LR, and when $\alpha_{max} = 1.0$ we obtain the Exact Match for the largest trees in the treebank. Hence, FREVAL is a mean of all the measures between these two extremes. In the absence of a preference for certain fragment sizes over others, we choose to set $\alpha_s$ uniformly over all fragment sizes. Another reasonable setting for $\alpha_s$ could be one that takes the sparsity of the space of fragments of size $s$ into account, smoothing the FREVAL outcomes for larger fragment sizes using results from smaller fragment sizes. The FREVAL F1 is defined as $F1 = \frac{2 \times \mathrm{FLR} \times \mathrm{FLP}}{\mathrm{FLR} + \mathrm{FLP}}$, but we also define $F1_s$ values for every $s$ using the corresponding $LR_s$ and $LP_s$ values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FREVAL: Beyond Sets of Constituents", "sec_num": "3." },
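To make the fragment-based definitions concrete, here is a hedged Python sketch that enumerates situated fragments by size and computes FLR/FLP with uniform $\alpha_s$, reusing the nested-tuple trees from the sketch above; the encoding of a situated fragment as nested, span-annotated tuples, and the decision to skip word leaves as fragment nodes, are our own assumptions rather than details taken from the FREVAL implementation:

```python
from collections import defaultdict
from itertools import product

def annotate(tree, start=0):
    """Attach spans: a node becomes ((i, label, j), children); words only
    advance the position (counting them as fragment nodes is a variant)."""
    if isinstance(tree, str):
        return None, start + 1
    children, pos = [], start
    for child in tree[1:]:
        node, pos = annotate(child, pos)
        if node is not None:
            children.append(node)
    return ((start, tree[0], pos), tuple(children)), pos

def rooted_fragments(node):
    """All connected subgraphs (fragments) whose top node is `node`, encoded
    as nested ((i, label, j), sub_fragments) tuples, hence 'situated'."""
    span, children = node
    # Each child is either left out or contributes one of its own fragments.
    options = [[None] + rooted_fragments(c) for c in children]
    return [(span, tuple(f for f in pick if f is not None))
            for pick in product(*options)]

def size(frag):
    return 1 + sum(size(sub) for sub in frag[1])

def fragments_by_size(tree):
    """Tree_s(tau) for every s: situated fragments grouped by node count."""
    by_size, stack = defaultdict(set), [annotate(tree)[0]]
    while stack:
        node = stack.pop()
        stack.extend(node[1])
        for frag in rooted_fragments(node):
            by_size[size(frag)].add(frag)
    return by_size

def freval(gold_trees, test_trees, max_size):
    """FLR and FLP with uniform alpha_s over sizes 1..max_size."""
    num, den_g, den_t = defaultdict(int), defaultdict(int), defaultdict(int)
    for gold, test in zip(gold_trees, test_trees):
        G, T = fragments_by_size(gold), fragments_by_size(test)
        for s in range(1, max_size + 1):
            num[s] += len(G[s] & T[s])
            den_g[s] += len(G[s])
            den_t[s] += len(T[s])
    alpha = 1.0 / max_size
    flr = sum(alpha * num[s] / den_g[s] for s in num if den_g[s])
    flp = sum(alpha * num[s] / den_t[s] for s in num if den_t[s])
    return flr, flp

gold = ("S", ("NP", "w0"), ("VP", ("VBZ", "w1"), ("NP", "w2")))
test = ("S", ("NP", "w0"), ("XP", ("VBZ", "w1"), ("NP", "w2")))
print(freval([gold], [test], max_size=5))  # (0.21, 0.21): per-size scores
                                           # 4/5, 1/4, 0, 0, 0, averaged
```

Note that the enumeration is exponential in the worst case, which is exactly why the definitions keep a separate set per size $s$ rather than pooling all fragments; a production implementation would share structure instead of materializing every fragment.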
}, { "text": "Equipped with our new evaluation metric, we ran various popular and new parsers of English on section 23 of the Penn Wall Street Journal tree-bank (Marcus et al., 1993) . We cleaned up section 23 by (1) pruning traces subtrees (-NONE-), (2) removing numbers in labels (e.g., NP-2 or NP=2), (3) removing semantic tags (e.g., NP-SBJ), and finally by removing redundant rules (e.g., NP \u2192 NP). The tested parsers are Bansal and Klein (2010) with basic refinement (B&K (basic)), Bansal and Klein (2011) allfragments, shortest-derivation with richer annotations and state-splits (B&K (SDP)); the Berkeley Parser 2 (Petrov et al., 2006) ; the Charniak parser (Charniak and Johnson, 2005) 3 with and without Johnson reranking; the Collins parser (Collins, 1999) as implemented in (Bikel, 2004) 4 ; Double-DOP 5 (Sangati and Zuidema, 2011) ; and the Stanford Parser 6 (Klein and Manning, 2003) with PCFG and with Factored models. We ran the parsers using the models trained on WSJ sections 02-21. 7 Most parsers provided such a model out-of-the-box, except for the Bikel-Collins parser, which we trained ourselves on the same sections. We evaluated the output of the parsers with PARSEVAL (using the Evalb implementation of Sekine and Collins (1997)) and FREVAL. Table 1 shows the FREVAL evaluation results for the tested parsers. We can choose to only evaluate up to a certain fragment size, which is reflected in the various columns. In the first column, the maximum fragment size is 1 -single nodes. Therefore, here FREVAL's results are identical to the results of PARSEVAL. 8 In the second column, FLR and FLP were calculated on fragments of size 1, 2, . . . , 15, and in the third column on fragments of size 1, 2, . . . , 25. Finally, in the fourth column all fragments are taken into account (in this case, 1, 2, . . . , 55). Interestingly, as we take bigger fragments into account the ranking of the parsers changes. For example, when evaluating with just single nodes the Berkeley parser outperforms B&K (basic), but when we also take larger fragments into account (all other columns) B&K (basic) has the upper hand. The same is true for Double-DOP, which is outperformed by Berkeley under PARSEVAL, but finally is on par with it when evaluating using all fragments. shows how these performance changes depend on the individual F 1 scores (of LR s and LP s ) for each fragment size. Intuitively, with uniform \u03b1, the parser with the largest \"area\" under the curve (sum) performs best. For fragments up to size 15, Charniak's reranking parser clearly scores highest. The Berkeley parser, though, scores high on fragment size 1, but starts losing to other parsers as we take larger fragments into account. Figure 2 magnifies the differences between the parsers: it plots for each parser the difference between its F 1 score and the Berkeley parser's corresponding score as a function of fragment size. Some parsers seem to have worse performance for smaller fragment sizes but improve considerably for larger fragment sizes (e.g., Bikel-Collins, Double-DOP, Stanford factored). 
Table 1 shows the FREVAL evaluation results for the tested parsers. We can choose to evaluate only up to a certain fragment size, which is reflected in the various columns. In the first column, the maximum fragment size is 1 (single nodes); therefore, FREVAL's results there are identical to the results of PARSEVAL. 8 In the second column, FLR and FLP were calculated on fragments of sizes 1, 2, ..., 15, and in the third column on fragments of sizes 1, 2, ..., 25. Finally, in the fourth column all fragments are taken into account (in this case, sizes 1, 2, ..., 55). Interestingly, as we take bigger fragments into account the ranking of the parsers changes. For example, when evaluating with just single nodes the Berkeley parser outperforms B&K (basic), but when we also take larger fragments into account (all other columns) B&K (basic) has the upper hand. The same is true for Double-DOP, which is outperformed by Berkeley under PARSEVAL, but finally is on par with it when evaluating using all fragments. Figure 1 shows how these performance changes depend on the individual F1 scores (of $LR_s$ and $LP_s$) for each fragment size. Intuitively, with uniform $\alpha$, the parser with the largest \"area\" under the curve (sum) performs best. For fragments up to size 15, Charniak's reranking parser clearly scores highest. The Berkeley parser, though, scores high on fragment size 1, but starts losing to other parsers as we take larger fragments into account. Figure 2 magnifies the differences between the parsers: it plots for each parser the difference between its F1 score and the Berkeley parser's corresponding score as a function of fragment size. Some parsers seem to have worse performance for smaller fragment sizes but improve considerably for larger fragment sizes (e.g., Bikel-Collins, Double-DOP, Stanford factored). Both versions of Charniak's parser as well as B&K (SDP) perform well across the whole range of fragment sizes, with the plot of the latter looking almost like a horizontal shift of that of the former.", "cite_spans": [ { "start": 147, "end": 168, "text": "(Marcus et al., 1993)", "ref_id": "BIBREF10" }, { "start": 413, "end": 436, "text": "Bansal and Klein (2010)", "ref_id": "BIBREF0" }, { "start": 474, "end": 497, "text": "Bansal and Klein (2011)", "ref_id": "BIBREF1" }, { "start": 608, "end": 629, "text": "(Petrov et al., 2006)", "ref_id": "BIBREF12" }, { "start": 652, "end": 680, "text": "(Charniak and Johnson, 2005)", "ref_id": "BIBREF6" }, { "start": 738, "end": 753, "text": "(Collins, 1999)", "ref_id": "BIBREF7" }, { "start": 772, "end": 785, "text": "(Bikel, 2004)", "ref_id": "BIBREF2" }, { "start": 803, "end": 830, "text": "(Sangati and Zuidema, 2011)", "ref_id": "BIBREF14" }, { "start": 859, "end": 884, "text": "(Klein and Manning, 2003)", "ref_id": "BIBREF8" }, { "start": 988, "end": 989, "text": "7", "ref_id": null }, { "start": 1215, "end": 1241, "text": "Sekine and Collins (1997))", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 1254, "end": 1261, "text": "Table 1", "ref_id": null }, { "start": 2703, "end": 2711, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Empirical explorations", "sec_num": "4." }, { "text": "The FLR and FLP measures provide an interesting new perspective on parser performance. The parser ranking, according to the highest FREVAL F1 score, may change along the fragment-size axis. It is easy to see that a single node's mismatch causes the mismatch of a large number of bigger fragments; for this very reason, there are hardly any matches for fragments of size 25 and bigger. Moreover, FREVAL, like PARSEVAL, can punish a parser severely for certain mistakes, e.g. attachment errors (see e.g. K\u00fcbler and Telljohann (2002)).", "cite_spans": [ { "start": 497, "end": 525, "text": "K\u00fcbler and Telljohann (2002)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5." }, { "text": "The tested parsers differ from one another in various ways. We concentrate on two particular axes of difference: (A) the grammar units: context-free productions or larger fragments; and (B) enriched categories, via manual refinements, automatic state-splits, or head-lexicalization. A single parser employs discriminative reranking (Charniak (reranking)), and most parsers employ horizontal Markovization of treebank productions, leading to CFG and fragment models with horizontal Markovization. Comparing the two B&K versions (basic and SDP), the increase in FREVAL F1 scores as larger fragment sizes are included confirms the importance of category refinement; the same holds for the head-lexicalization of categories, where some of the best-performing parsers are found (Charniak's, Bikel-Collins); and finally, we see that parsers using fragment models perform increasingly well along the size line: most notably, B&K (basic) already outperforms the Berkeley parser for fragment sizes 1-15 and 1-25 and for all sizes, whereas it is far less accurate than Berkeley according to PARSEVAL. And surprisingly, for all fragment sizes, we find that Double-DOP (a selected-fragments parser) performs as well as Berkeley.
The mix of head-lexicalization/category refinement with all-fragment modeling (B&K (SDP)) yields a parser that outperforms Charniak's (without reranking) for the FREVAL values (1-15), (1-25) and all fragments, despite performing slightly less accurately according to PARSEVAL. Adding a fragment-based discriminative reranker on top of Charniak's parser yields the overall best results.", "cite_spans": [ { "start": 327, "end": 348, "text": "(Charniak (reranking)", "ref_id": null }, { "start": 768, "end": 795, "text": "(Charniak's, Bikel-Collins)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5." }, { "text": "Where the original PARSEVAL measure only looks at individual nodes when matching two trees, we present FREVAL, which looks at all the situated fragments in those trees. This yields a radically more fine-grained analysis of the performance of existing parsers. By looking at increasingly larger situated fragments, FREVAL indeed shows what is inside the 'evaluation gap' between the original Precision and Recall scores on the one hand and the Complete Match score on the other. Furthermore, FREVAL helps explore the impact of different kinds of techniques (CFG rules vs. all-fragments, and refined categories) across a variety of linguistic units in the treebank.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "1 An alternative is to interpolate log-linearly (e.g., geometric mean), following BLEU in machine translation (Papineni et al., 2002). It is not yet clear whether this has added value over simple linear interpolation. 2 http://code.google.com/p/berkeleyparser/ 3 ftp://ftp.cs.brown.edu/pub/nlparser/ 4 http://www.cis.upenn.edu/~dbikel/software.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "5 http://staff.science.uva.nl/~fsangati/ 6 http://nlp.stanford.edu/software/lex-parser.shtml 7 For Double-DOP and the B&K parsers we received the output directly from the respective authors. 8 In line with the behavior of Evalb, our FREVAL implementation deletes certain nodes (e.g. 'TOP') from its input trees and tries to re-insert pre-terminals in case it deleted one holding a quote in the one tree but not in the other. On top of the default configuration, we also delete nodes with label 'ROOT'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Simple, accurate parsing with an all-fragments grammar", "authors": [ { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1098--1107", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bansal, Mohit and Klein, Dan. (2010). Simple, accurate parsing with an all-fragments grammar.
In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1098-1107, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The surprising variance in shortest-derivation parsing", "authors": [ { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2011, "venue": "ACL (Short Papers)", "volume": "", "issue": "", "pages": "720--725", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bansal, Mohit and Klein, Dan. (2011). The surprising variance in shortest-derivation parsing. In ACL (Short Papers), pages 720-725.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Intricacies of Collins' parsing model", "authors": [ { "first": "Daniel", "middle": [ "M" ], "last": "Bikel", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics", "volume": "30", "issue": "4", "pages": "479--511", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bikel, Daniel M. (2004). Intricacies of Collins' parsing model. Computational Linguistics, 30(4):479-511.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A procedure for quantitatively comparing the syntactic coverage of English grammars", "authors": [ { "first": "Ezra", "middle": [], "last": "Black", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the February 1991 DARPA Speech and Natural Language Workshop", "volume": "", "issue": "", "pages": "306--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Black, Ezra, et al. (1991). A procedure for quantitatively comparing the syntactic coverage of English grammars. In Proceedings of the February 1991 DARPA Speech and Natural Language Workshop, pages 306-311. Morgan Kaufmann.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Data Oriented Parsing", "authors": [ { "first": "R", "middle": [], "last": "Bod", "suffix": "" }, { "first": "R", "middle": [], "last": "Scha", "suffix": "" }, { "first": "K", "middle": [], "last": "Sima'an", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bod, R., Scha, R., and Sima'an, K., editors. (2003). Data Oriented Parsing. CSLI Publications, Stanford University, Stanford, California, USA.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Parser evaluation: a survey and a new proposal", "authors": [ { "first": "John", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Sanfilippo", "suffix": "" } ], "year": 1998, "venue": "Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carroll, John, Briscoe, Ted, and Sanfilippo, Antonio. (1998). Parser evaluation: a survey and a new proposal.
In Language Resources and Evaluation.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Coarse-to-fine n-best parsing and maxent discriminative reranking", "authors": [ { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, ACL '05", "volume": "", "issue": "", "pages": "173--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charniak, Eugene and Johnson, Mark. (2005). Coarse-to-fine n-best parsing and maxent discriminative reranking. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, ACL '05, pages 173-180, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Head-Driven Statistical Models for Natural Language Parsing", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins, Michael. (1999). Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Fast Exact Inference with a Factored Model for Natural Language Parsing", "authors": [ { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2003, "venue": "Advances in Neural Information Processing Systems", "volume": "15", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Klein, Dan and Manning, Christopher D. (2003). Fast Exact Inference with a Factored Model for Natural Language Parsing. In Advances in Neural Information Processing Systems, volume 15. MIT Press.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Towards a dependency-oriented evaluation for partial parsing", "authors": [ { "first": "Sandra", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "Heike", "middle": [], "last": "Telljohann", "suffix": "" } ], "year": 2002, "venue": "LREC 2002 Workshop Proceedings", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K\u00fcbler, Sandra and Telljohann, Heike. (2002). Towards a dependency-oriented evaluation for partial parsing. In LREC 2002 Workshop Proceedings.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Building a large annotated corpus of English: the Penn Treebank", "authors": [ { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcinkiewicz", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" } ], "year": 1993, "venue": "Comput. Linguist", "volume": "19", "issue": "", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcus, Mitchell P., Marcinkiewicz, Mary Ann, and Santorini, Beatrice. (1993). Building a large annotated corpus of English: the Penn Treebank. Comput.
Linguist., 19:313-330, June.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "T", "middle": [], "last": "Ward", "suffix": "" }, { "first": "W.-J", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "ACL", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. (2002). Bleu: a method for automatic evaluation of machine translation. In ACL, pages 311-318.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Learning accurate, compact, and interpretable tree annotation", "authors": [ { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Leon", "middle": [], "last": "Barrett", "suffix": "" }, { "first": "Romain", "middle": [], "last": "Thibaux", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "433--440", "other_ids": {}, "num": null, "urls": [], "raw_text": "Petrov, Slav, Barrett, Leon, Thibaux, Romain, and Klein, Dan. (2006). Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 433-440, Sydney, Australia, July. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A test of the leaf-ancestor metric for parse accuracy", "authors": [ { "first": "Geoffrey", "middle": [], "last": "Sampson", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Babarczy", "suffix": "" } ], "year": 2003, "venue": "Natural Language Engineering", "volume": "9", "issue": "", "pages": "365--380", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sampson, Geoffrey and Babarczy, Anna. (2003). A test of the leaf-ancestor metric for parse accuracy. Natural Language Engineering, 9:365-380, December.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Accurate Parsing with Compact Tree-Substitution Grammars: Double-DOP", "authors": [ { "first": "Federico", "middle": [], "last": "Sangati", "suffix": "" }, { "first": "Willem", "middle": [], "last": "Zuidema", "suffix": "" } ], "year": 2011, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "1--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sangati, Federico and Zuidema, Willem. (2011). Accurate Parsing with Compact Tree-Substitution Grammars: Double-DOP. In Proceedings of EMNLP, pages 1-12, June.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Evalb bracket scoring program", "authors": [ { "first": "S", "middle": [], "last": "Sekine", "suffix": "" }, { "first": "M", "middle": [ "J" ], "last": "Collins", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sekine, S. and Collins, M. J. (1997).
Evalb bracket scoring program.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Tree-gram Parsing: Lexical Dependencies and Structural Relations", "authors": [ { "first": "K", "middle": [], "last": "Sima'an", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics (ACL'00)", "volume": "", "issue": "", "pages": "53--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sima'an, K. (2000). Tree-gram Parsing: Lexical Dependencies and Structural Relations. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics (ACL'00), pages 53-60, Hong Kong, China.", "links": null } }, "ref_entries": { "FIGREF1": { "uris": null, "num": null, "text": "Absolute F1 measure as a function of fragment size up to 15.", "type_str": "figure" }, "FIGREF2": { "uris": null, "num": null, "text": "With Berkeley's parser as baseline, a plot of the F1 difference (Parser \u2212 Berkeley) for every other parser as a function of fragment size.", "type_str": "figure" }, "FIGREF4": { "uris": null, "num": null, "text": "F1 measure (of $LR_s$ and $LP_s$) for fragment sizes 6-13.", "type_str": "figure" }, "TABREF0": { "text": "FREVAL results of the tested parsers on WSJ section 23 (all sentences), using various maximum fragment sizes. B&K stands for Bansal & Klein. FLR and FLP were computed with uniform \u03b1 weights. When the fragment size is exactly 1 (single nodes), FLR and FLP become identical to PARSEVAL's LR and LP scores.", "num": null, "content": "
Evaluated Fragment Sizes

Fragment sizes       | 1 (PARSEVAL)    | 1-15            | 1-25            | All Fragments
Parser               | FLR  FLP  F1    | FLR  FLP  F1    | FLR  FLP  F1    | FLR  FLP  F1
---------------------+-----------------+-----------------+-----------------+----------------
B&K (basic) 2010     | 87.7 87.6 87.6  | 49.0 49.8 49.4  | 35.2 35.2 35.2  | 16.6 16.7 16.7
B&K (richer) 2011    | 89.5 89.4 89.5  | 52.8 58.0 55.3  | 35.9 44.7 39.8  | 16.5 20.7 18.4
Berkeley             | 90.0 90.3 90.2  | 47.4 51.1 49.2  | 31.1 35.0 33.0  | 14.2 16.4 15.3
Bikel-Collins        | 88.3 88.2 88.2  | 49.8 55.4 52.5  | 33.7 41.1 37.0  | 15.4 18.9 17.0
Charniak             | 89.7 89.9 89.8  | 52.8 57.0 54.8  | 35.7 40.0 37.7  | 16.3 18.4 17.3
Charniak (Rerank.)   | 91.2 91.8 91.5  | 56.8 60.0 58.4  | 38.7 42.1 40.4  | 17.8 19.4 18.6
Double-DOP           | 86.3 87.7 87.0  | 47.1 48.1 47.6  | 32.4 33.8 33.1  | 14.9 15.6 15.3
Stanford (Factored)  | 86.7 86.4 86.5  | 46.1 48.7 47.4  | 31.2 34.3 32.7  | 14.3 15.8 15.0
Stanford (PCFG)      | 85.0 86.1 85.5  | 43.1 44.6 43.8  | 29.1 29.6 29.3  | 13.4 13.5 13.4
", "type_str": "table", "html": null } } } }