{ "paper_id": "D11-1036", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:33:20.060303Z" }, "title": "Evaluating Dependency Parsing: Robust and Heuristics-Free Cross-Annotation Evaluation", "authors": [ { "first": "Reut", "middle": [], "last": "Tsarfaty", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Evelina", "middle": [], "last": "Andersson", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Methods for evaluating dependency parsing using attachment scores are highly sensitive to representational variation between dependency treebanks, making cross-experimental evaluation opaque. This paper develops a robust procedure for cross-experimental evaluation, based on deterministic unificationbased operations for harmonizing different representations and a refined notion of tree edit distance for evaluating parse hypotheses relative to multiple gold standards. We demonstrate that, for different conversions of the Penn Treebank into dependencies, performance trends that are observed for parsing results in isolation change or dissolve completely when parse hypotheses are normalized and brought into the same common ground.", "pdf_parse": { "paper_id": "D11-1036", "_pdf_hash": "", "abstract": [ { "text": "Methods for evaluating dependency parsing using attachment scores are highly sensitive to representational variation between dependency treebanks, making cross-experimental evaluation opaque. This paper develops a robust procedure for cross-experimental evaluation, based on deterministic unificationbased operations for harmonizing different representations and a refined notion of tree edit distance for evaluating parse hypotheses relative to multiple gold standards. We demonstrate that, for different conversions of the Penn Treebank into dependencies, performance trends that are observed for parsing results in isolation change or dissolve completely when parse hypotheses are normalized and brought into the same common ground.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Data-driven dependency parsing has seen a considerable surge of interest in recent years. Dependency parsers have been tested on parsing sentences in English (Yamada and Matsumoto, 2003; Nivre and Scholz, 2004; McDonald et al., 2005) as well as many other languages (Nivre et al., 2007a) . The evaluation metric traditionally associated with dependency parsing is based on scoring labeled or unlabeled attachment decisions, whereby each correctly identified pair of head-dependent words is counted towards the success of the parser (Buchholz and Marsi, 2006) . 
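To make the metric concrete, the following is a minimal sketch of unlabeled and labeled attachment scoring for a single sentence; the list-of-pairs representation and the helper name are illustrative only and not tied to any particular evaluation tool.

```python
def attachment_scores(gold_arcs, pred_arcs):
    '''Unlabeled and labeled attachment scores (UAS, LAS) for one sentence.

    gold_arcs, pred_arcs: lists of (head_index, relation_label) pairs,
    one entry per word, aligned by word position.
    '''
    assert len(gold_arcs) == len(pred_arcs)
    uas = sum(g[0] == p[0] for g, p in zip(gold_arcs, pred_arcs))   # head correct
    las = sum(g == p for g, p in zip(gold_arcs, pred_arcs))         # head and label correct
    n = len(gold_arcs)
    return uas / n, las / n
```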
As it turns out, however, such evaluation procedures are sensitive to the annotation choices in the data on which the parser was trained.", "cite_spans": [ { "start": 158, "end": 186, "text": "(Yamada and Matsumoto, 2003;", "ref_id": "BIBREF32" }, { "start": 187, "end": 210, "text": "Nivre and Scholz, 2004;", "ref_id": "BIBREF22" }, { "start": 211, "end": 233, "text": "McDonald et al., 2005)", "ref_id": "BIBREF17" }, { "start": 266, "end": 287, "text": "(Nivre et al., 2007a)", "ref_id": "BIBREF23" }, { "start": 532, "end": 558, "text": "(Buchholz and Marsi, 2006)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Different annotation schemes often make different assumptions with respect to how linguistic content is represented in a treebank (Rambow, 2010) . The consequence of such annotation discrepancies is that when we compare parsing results across different experiments, even ones that use the same parser and the same set of sentences, the gap between results in different experiments may not reflect a true gap in performance, but rather a difference in the annotation decisions made in the respective treebanks.", "cite_spans": [ { "start": 130, "end": 144, "text": "(Rambow, 2010)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Different methods have been proposed for making dependency parsing results comparable across experiments. These methods include picking a single gold standard for all experiments to which the parser output should be converted (Carroll et al., 1998; Cer et al., 2010) , evaluating parsers by comparing their performance in an embedding task (Miyao et al., 2008; Buyko and Hahn, 2010) , or neutralizing the arc direction in the native representation of dependency trees (Schwartz et al., 2011) .", "cite_spans": [ { "start": 226, "end": 248, "text": "(Carroll et al., 1998;", "ref_id": "BIBREF5" }, { "start": 249, "end": 266, "text": "Cer et al., 2010)", "ref_id": "BIBREF6" }, { "start": 340, "end": 360, "text": "(Miyao et al., 2008;", "ref_id": "BIBREF19" }, { "start": 361, "end": 382, "text": "Buyko and Hahn, 2010)", "ref_id": "BIBREF4" }, { "start": 468, "end": 491, "text": "(Schwartz et al., 2011)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Each of these methods has its own drawbacks. Picking a single gold standard skews the results in favor of parsers which were trained on it. Transforming dependency trees to a set of pre-defined labeled dependencies, or into task-based features, requires the use of heuristic rules that run the risk of distorting correct information and introducing noise of their own. Neutralizing the direction of arcs is limited to unlabeled evaluation and local context, and thus may not cover all possible discrepancies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper proposes a new three-step protocol for cross-experiment parser evaluation, and in particular for comparing parsing results across data sets that adhere to different annotation schemes. In the first step all structures are brought into a single formal space of events that neutralizes representation peculiarities (for instance, arc directionality). 
The second step formally computes, for each sentence in the data, the common denominator of the different gold standards, containing all and only linguistic content that is shared between the different schemes. The last step computes the normalized distance from this common denominator to parse hypotheses, minus the cost of distances that reflect mere annotation idiosyncrasies. The procedure that implements this protocol is fully deterministic and heuristics-free.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We use the proposed procedure to compare dependency parsing results trained on Penn Treebank trees converted into dependency trees according to five different sets of linguistic assumptions. We show that when starting off with the same set of sentences and the same parser, training on different conversion schemes yields apparently significant performance gaps. When results across schemes are normalized and compared against the shared linguistic content, these performance gaps decrease or dissolve completely. This effect is robust across parsing algorithms. We conclude that it is imperative that cross-experiment parse evaluation be a well thoughtthrough endeavor, and suggest ways to extend the protocol to additional evaluation scenarios.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Dependency treebanks contain information about the grammatically meaningful elements in the utterance and the grammatical relations between them. Even if the formal representation in a dependency treebank is well-defined according to current standards (K\u00fcbler et al., 2009) , there are different ways in which the trees can be used to express syntactic content (Rambow, 2010) . Consider, for instance, algorithms for converting the phrase-structure trees in the Penn Treebank (Marcus et al., 1993) into dependency structures. Different conversion algorithms implicitly make different assumptions about how to represent linguistic content in the data. When multiple conversion algorithms are applied to the same data, we end up with different dependency trees for the same sentences (Johansson and Nugues, 2007; Choi and Palmer, 2010; de Marneffe et al., 2006) . Some common cases of discrepancies are as follows.", "cite_spans": [ { "start": 252, "end": 273, "text": "(K\u00fcbler et al., 2009)", "ref_id": "BIBREF14" }, { "start": 361, "end": 375, "text": "(Rambow, 2010)", "ref_id": "BIBREF26" }, { "start": 476, "end": 497, "text": "(Marcus et al., 1993)", "ref_id": "BIBREF15" }, { "start": 782, "end": 810, "text": "(Johansson and Nugues, 2007;", "ref_id": "BIBREF12" }, { "start": 811, "end": 833, "text": "Choi and Palmer, 2010;", "ref_id": "BIBREF7" }, { "start": 834, "end": 859, "text": "de Marneffe et al., 2006)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "The Challenge: Treebank Theories", "sec_num": "2" }, { "text": "Lexical vs. Functional Head Choice. In linguistics, there is a distinction between lexical heads and functional heads. A lexical head carries the semantic gist of a phrase while a functional one marks its relation to other parts of the sentence. The two kinds of heads may or may not coincide in a single word form (Zwicky, 1993) . Common examples refer to prepositional phrases, such as the phrase \"on Sunday\". This phrase has two possible analyses, one selects a lexical head (1a) and the other selects a functional one (1b), as depicted below. 
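The tree diagrams for (1a) and (1b) are not reproduced in this rendering; schematically, for the fragment arrive on Sunday, the two analyses amount to the following head assignments (the relation labels here are illustrative placeholders, not taken from any particular scheme):

```python
# Hypothetical rendering of the two analyses of 'arrive on Sunday'.
# Each word maps to (head position, relation); 0 is the artificial root;
# word positions are 1 = arrive, 2 = on, 3 = Sunday.
lexical_head = {              # (1a): the noun 'Sunday' heads the phrase
    'arrive': (0, 'ROOT'),
    'on':     (3, 'CASE'),    # the preposition depends on the noun
    'Sunday': (1, 'ADV'),     # the noun attaches to the verb
}
functional_head = {           # (1b): the preposition 'on' heads the phrase
    'arrive': (0, 'ROOT'),
    'on':     (1, 'ADV'),     # the preposition attaches to the verb
    'Sunday': (2, 'PMOD'),    # the noun depends on the preposition
}
```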
", "cite_spans": [ { "start": 315, "end": 329, "text": "(Zwicky, 1993)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "The Challenge: Treebank Theories", "sec_num": "2" }, { "text": "Similar choices are found in phrases which contain functional elements such as determiners, coordination markers, subordinating elements, and so on.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sunday", "sec_num": null }, { "text": "Multi-Headed Constructions. Some phrases are considered to have multiple lexical heads, for instance, coordinated structures. Since dependencybased formalisms require us to represent all content as binary relations, there are different ways we could represent such constructions. Let us consider the coordination of nominals below. We can choose between a functional head (1a) and a lexical head (2b, 2c). We can further choose between a flat representation in which the first conjunct is a single head (2b), or a nested structure where each conjunct/marker is the head of the following element (2c). All three alternatives empirically exist. Example (2a) reflects the structures in the CoNLL 2007 shared task data (Nivre et al., 2007a) . Johansson and Nugues (2007) use structures like (2b). Example (2c) reflects the analysis of Mel'\u010duk (1988) . Periphrastic Marking. When a phrase includes periphrastic marking -such as the tense and modal marking in the phrase \"would have worked\" below -there are different ways to consider its division into phrases. One way to analyze this phrase would be to choose auxiliaries as heads, as in (3a). Another alternative would be to choose the final verb as the In standard settings, an experiment that uses a data set which adheres to a certain annotation scheme reports results that are compared against the annotation standard that the parser was trained on. But if parsers were trained on different annotation standards, the empirical results are not comparable across experiments. Consider, for instance, the example in Figure 1 . If parse1 and parse2 are compared against gold2 using labeled attachment scores (LAS), then parse1 results are lower than the results of parse2, even though both parsers produced linguistically correct and perfectly useful output.", "cite_spans": [ { "start": 715, "end": 736, "text": "(Nivre et al., 2007a)", "ref_id": "BIBREF23" }, { "start": 739, "end": 766, "text": "Johansson and Nugues (2007)", "ref_id": "BIBREF12" }, { "start": 831, "end": 845, "text": "Mel'\u010duk (1988)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 1564, "end": 1572, "text": "Figure 1", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Sunday", "sec_num": null }, { "text": "Existing methods for making parsing results comparable across experiments include heuristics for converting outputs into dependency trees of a predefined standard (Briscoe et al., 2002; Cer et al., 2010) or evaluating the performance of a parser within an embedding task (Miyao et al., 2008; Buyko and Hahn, 2010) . However, heuristic rules for crossannotation conversion are typically hand written and error prone, and may not cover all possible discrepancies. Task-based evaluation may be sensitive to the particular implementation of the embedding task and the procedures that extract specific task-related features from the different parses. Beyond that, conversion heuristics and task-based procedures are currently developed almost exclusively for English. 
Other languages typically lack such resources.", "cite_spans": [ { "start": 163, "end": 185, "text": "(Briscoe et al., 2002;", "ref_id": "BIBREF2" }, { "start": 186, "end": 203, "text": "Cer et al., 2010)", "ref_id": "BIBREF6" }, { "start": 271, "end": 291, "text": "(Miyao et al., 2008;", "ref_id": "BIBREF19" }, { "start": 292, "end": 313, "text": "Buyko and Hahn, 2010)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Sunday", "sec_num": null }, { "text": "A recent study by Schwartz et al. (2011) takes a different approach towards cross-annotation evaluation.", "cite_spans": [ { "start": 18, "end": 40, "text": "Schwartz et al. (2011)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Sunday", "sec_num": null }, { "text": "They consider different directions of head-dependent relations (such as on\u2192Sunday and Sunday\u2192on) and different parent-child and grandparent-child relations in a chain (such as arrive\u2192on and arrive\u2192sunday in \"arrive on sunday\") as equivalent. They then score arcs that fall within corresponding equivalence sets. Using these new scores Schwartz et al. (2011) neutralize certain annotation discrepancies that distort parse comparison. However, their treatment is limited to local context and does not treat structures larger than two sequential arcs. Additionally, since arcs in different directions are typically labeled differently, this method only applies for unlabeled dependencies.", "cite_spans": [ { "start": 335, "end": 357, "text": "Schwartz et al. (2011)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Sunday", "sec_num": null }, { "text": "What we need is a fully deterministic and formally precise procedure for comparing any set of labeled or unlabeled dependency trees, by consolidating the shared linguistic content of the complete dependency trees in different annotation schemes, and comparing parse hypotheses through sound metrics that can take into account multiple gold standards.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sunday", "sec_num": null }, { "text": "We propose a new protocol for cross-experiment parse evaluation, consisting of three fundamental components: (i) abstracting away from annotation peculiarities, (ii) generalizing theory-specific structures into a single linguistically coherent gold standard that contains all and only consistent information from all sources, and (iii) defining a sound metric that takes into account the different gold standards that are being considered in the experiments. In this section we first define functional trees as the common space of formal objects and define a deterministic conversion procedure from dependency trees to functional trees. Next we define a set of formal operations on functional trees that compute, for every pair of corresponding trees of the same yield, a single gold tree that resolves inconsistencies among gold standard alternatives and combines the information that they share. Finally, we define scores based on tree edit distance, refined to consider the distance from parses to the overall gold tree as well as the different annotation alternatives. Preliminaries. Let T be a finite set of terminal symbols and let L be a set of grammatical relation labels. 
A dependency graph d is a directed graph which consists of nodes V d and arcs", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "A d \u2286 V d \u00d7 V d .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "We assume that all nodes in V d are labeled by terminal symbols via a function", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "label V : V d \u2192 T . A well-formed dependency graph d = (V d , A d ) for a sentence S = t 1 , t 2 , .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": ".., t n is any dependency graph that is a directed tree originating out of a node v 0 labeled t 0 = ROOT , and spans all terminals in the sentence, that is, for every t i \u2208 S there exists", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "v j \u2208 V d labeled label V (v j ) = t i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "For simplicity we assume that every node v j is indexed according to the position of the terminal label, i.e., that for each t i labeling v j , i always equals j. In a labeled dependency tree, arcs in A d are labeled by elements of L via a function label A : A d \u2192 L that encodes the grammatical relation between the terminals labeling the connected nodes. We define two auxiliary functions on nodes in dependency trees. The function subtree : ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "V d \u2192 P(V d ) assigns to every node v \u2208 V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "V d \u2192 P(T ) assigns to every node v \u2208 V d a set of terminals such that span(v) = {t \u2208 T |t = label V (u) and u \u2208 subtree(v)}. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "Step 1: Functional Representation Our first goal is to define a representation format that keeps all functional relationships that are represented in the dependency trees intact, but remains neutral with respect to the directionality of the head-dependent relations. To do so we define functional trees -linearly-ordered labeled trees which, instead of head-to-head binary relations, represent the complete functional structure of a sentence. Assuming the same sets of terminal symbols T and grammatical relation labels L, and assuming extended sets of nodes V and arcs A \u2286 V \u00d7 V , a functional tree \u03c0 = (V, A) is a directed tree originating from a single root v 0 \u2208 V where all non-terminal nodes in \u03c0 are labeled with grammatical relation labels that signify the grammatical function of the chunk they dominate inside the tree via label NT : V \u2192 L. 
All terminal nodes in \u03c0 are labeled with terminal symbols via a label T : V \u2192 T function. The function span : V \u2192 P(V ) now picks out the set of terminal labels of the terminal nodes accessible by a node v \u2208 V via A. We obtain functional trees from dependency trees using the following procedure:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "\u2022 Initialize the set of nodes and arcs in the tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "V := V d A := A d \u2022 Label each node v \u2208 V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "with the label of its incoming arc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "label NT (v) = label A (u, v)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "\u2022 In case |span(v)| > 1 add a new node u as a daughter designating the lexical head, labeled with the wildcard symbol *:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "V := V \u222a {u} A := A \u222a {(v, u)} label NT (u) = * \u2022 For each node v such that |span(v)| = 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": ", add a new node u as a daughter, labeled with its own terminal:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "V := V \u222a {u} A := A \u222a {(v, u)} if (label NT (v) = * ) label T (u) := label V (v) else label T (u) := label V (parent(v))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "That is to say, we label all nodes with spans greater than 1 with the grammatical function of their head, and for each node we add a new daughter u designating the head word, labeled with its grammatical function. Wildcard labels are compatible with any, more specific, grammatical function of the word inside the phrase. This gives us a constituencylike representation of dependency trees labeled with functional information, which retains the linguistic assumptions reflected in the dependency trees. When applying this procedure, examples (1)-(3) get transformed into (4)-(6) respectively. Considering the functional trees resulting from our procedure, it is easy to see that for tree pairs (4a)-(4b) and (5a)-(5b) the respective functional trees are identical modulo wildcards, while tree pairs (5b)-(5c) and (6a)-(6b) end up with different tree structures that realize different assumptions concerning the internal structure of the tree. 
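The following is a minimal sketch of this conversion, assuming projective trees and children ordered by surface position; the data-structure and function names are illustrative and do not come from the implementation used later in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class FNode:
    '''Functional-tree node: a grammatical-function label (or a word form,
    for terminal nodes) and an ordered list of children.'''
    label: str
    children: list = field(default_factory=list)
    terminal: bool = False

def to_functional_tree(words, heads, rels):
    '''Convert a projective dependency tree into a functional tree (a sketch).

    words: word forms indexed 1..n (index 0 is the artificial ROOT).
    heads: heads[i] is the head position of word i (0 for the root word).
    rels:  rels[i] is the label of the arc entering word i.
    '''
    deps = {i: [] for i in range(len(words))}
    for i in range(1, len(words)):
        deps[heads[i]].append(i)

    def build(i):
        node = FNode(rels[i])                        # node label = incoming arc label
        word_leaf = FNode(words[i], terminal=True)
        if deps[i]:                                  # span > 1: head word sits under a '*' slot
            head_slot = FNode('*', [word_leaf])
            kids = [(j, build(j)) for j in deps[i]] + [(i, head_slot)]
            node.children = [k for _, k in sorted(kids, key=lambda x: x[0])]
        else:                                        # span = 1: dominate the word directly
            node.children = [word_leaf]
        return node

    roots = [build(j) for j in sorted(deps[0])]
    return roots[0] if len(roots) == 1 else FNode('ROOT', roots)
```

Applied to the two analyses of on Sunday, the resulting trees share the same bracketing and differ only in which daughter carries the '*' slot, which is what makes them identical modulo wildcards, as noted above.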
In order to compare, combine or detect inconsistencies in the information inherent in different functional trees, we define a set of formal operations that are inspired by familiar notions from unification-based formalisms (Shieber (1986) and references therein).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "Step 2: Formal Operations on Trees The intuition behind the formal operations we define is simple. A completely flat tree over a span is the most general structural description that can be given to it. The more nodes dominate a span, the more linguistic assumptions are made with respect to its structure. If an arc structure in one tree merely elaborates an existing flat span in another tree, the theories underlying the schemes are compatible, and their information can be combined. Otherwise, there exists a conflict in the linguistic assumptions, and we need to relax some of the assumptions, i.e., remove functional nodes, in order to obtain a coherent structure that contains the information on which they agree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "Let \u03c0 1 , \u03c0 2 be functional trees over the same yield t 1 , .., t n . Let the function span(v) pick out the terminals labeling terminal nodes that are accessible via a node v \u2208 V in the functional tree through the relation A. We define first the tree subsumption relation for comparing the amount of information inherent in the arc-structure of two trees. 2 T-Subsumption, denoted t , is a relation between trees which indicates that a tree \u03c0 1 is consistent with and more general than tree \u03c0 2 . Formally: \u03c0 1 t \u03c0 2 iff for every node n \u2208 \u03c0 1 there exists a node m \u2208 \u03c0 2 such that span(n) = span(m) and label(n) = label(m).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "Looking at the functional trees of (4a)-(4b) we see that their unlabeled skeletons mutually subsume each other. In their labeled versions, however, each tree contains labeling information that is lacking in the other. In the functional trees (5b)-(5c) a flat structure over a span in (5b) is more elaborated in (5c). In order to combine information in trees with compatible arc structures, we define tree unification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "T-Unification, denoted t , is the operation that returns the most general tree structure \u03c0 3 that is subsumed by both \u03c0 1 , \u03c0 2 if such exists, and fails otherwise. Formally: \u03c0 1 t \u03c0 2 = \u03c0 3 iff \u03c0 1 t \u03c0 3 and \u03c0 2 t \u03c0 3 , and for all \u03c0 4 such that \u03c0 1 t \u03c0 4 and \u03c0 2 t \u03c0 4 it holds that \u03c0 3 t \u03c0 4 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "Tree unification collects the information from two trees into a single result if they are consistent, and detects an inconsistency otherwise. 
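As a concrete illustration, a functional tree can be flattened into a map from spans (sets of terminal positions) to labels, and the two relations then become simple set computations. The sketch below takes this simplified view and glosses over some corner cases of wildcard labels; it is an illustration, not the algorithm used by the evaluation software described later.

```python
WILDCARD = '*'

def compatible(a, b):
    # the wildcard label counts as equal to any other symbol
    return a == b or a == WILDCARD or b == WILDCARD

def subsumes(t1, t2):
    '''t1 is consistent with and more general than t2: every bracket of t1
    also appears in t2, with a compatible label.  Trees are dicts mapping
    frozensets of terminal positions (spans) to labels.'''
    return all(span in t2 and compatible(lab, t2[span]) for span, lab in t1.items())

def unify(t1, t2):
    '''Most general tree carrying the information of both, or None on failure.'''
    merged = dict(t1)
    for span, lab in t2.items():
        if span in merged:
            if not compatible(merged[span], lab):
                return None                   # same bracket, clashing labels
            if merged[span] == WILDCARD:
                merged[span] = lab            # keep the more specific label
        elif any(span & other and not (span <= other or other <= span)
                 for other in merged):
            return None                       # crossing brackets: incompatible structures
        else:
            merged[span] = lab
    return merged
```

The generalization operation introduced next is then the dual construction: keep only those brackets that occur, with compatible labels, in both trees.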
In case of an inconsistency, as is the case in the functional trees (6a) and (6b), we cannot unify the structures due to a conflict concerning the internal division of an expression into phrases. However, we still want to generalize these two trees into one tree that contains all and only the information that they share. For that we define the tree generalization operation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "T-Generalization, denoted t , is the operation that returns the most specific tree that is more general than both trees. Formally, \u03c0 1 t \u03c0 2 = \u03c0 3 iff \u03c0 3 t \u03c0 1 and \u03c0 3 t \u03c0 2 , and for every \u03c0 4 such that \u03c0 4 t \u03c0 1 and \u03c0 4 t \u03c0 2 it holds that \u03c0 4 t \u03c0 3 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "Unlike unification, generalization can never fail. For every pair of trees there exists a tree that is more general than both: in the extreme case, pick the completely flat structure over the yield, which is more general than any other structure. For (6a)-(6b), for instance, we get that (6a) t (6b) is a flat tree over pre-terminals where \"would\" and \"have\" are labeled with 'vg' and \"worked\" is the head, labeled with '*'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "The generalization of two functional trees provides us with one structure that reflects the common and consistent content of the two trees. These structures thus provide us with a formally well-defined gold standard for cross-treebank evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "Step 3: Measuring Distances. Our functional trees superficially look like constituency-based trees, so a simple proposal would be to use Parseval measures (Black et al., 1991) for comparing the parsed trees against the new generalized gold trees. Parseval scores, however, have two significant drawbacks. First, they are known to be too restrictive with respect to some errors and too permissive with respect to others (Carroll et al., 1998; K\u00fcbler and Telljohann, 2002; Roark, 2002; Rehbein and van Genabith, 2007) . Secondly, F 1 scores would still penalize structures that are correct with respect to the original gold, but are not there in the generalized structure. Here we propose to adopt measures that are based on tree edit distance (TED) instead. TEDbased measures are, in fact, an extension of attachment scores for dependency trees. Consider, for instance, the following operations on dependency arcs. 
reattach-arc: remove arc (u, v) \u2208 A d and add an arc", "cite_spans": [ { "start": 155, "end": 175, "text": "(Black et al., 1991)", "ref_id": "BIBREF1" }, { "start": 419, "end": 441, "text": "(Carroll et al., 1998;", "ref_id": "BIBREF5" }, { "start": 442, "end": 470, "text": "K\u00fcbler and Telljohann, 2002;", "ref_id": "BIBREF13" }, { "start": 471, "end": 483, "text": "Roark, 2002;", "ref_id": "BIBREF28" }, { "start": 484, "end": 515, "text": "Rehbein and van Genabith, 2007)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "(w, v), so that A d becomes A d \u222a {(w, v)} with (u, v) removed. relabel-arc: relabel arc l 1 (u, v) as l 2 (u, v).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "Assuming that each operation is assigned a cost, the attachment score of comparing two dependency trees is simply the cost of all edit operations that are required to turn a parse tree into its gold standard, normalized with respect to the overall size of the dependency tree and subtracted from a unity. 3 Here we apply the idea of defining scores by TED costs normalized relative to the size of the tree and subtracted from a unity, and extend it from fixed-size dependency trees to ordered trees of arbitrary size.", "cite_spans": [ { "start": 305, "end": 306, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "Our formalization follows closely the formulation of the T-Dice measure of Emms (2008) , building on his thorough investigation of the formal and empirical differences between TED-based measures and Parseval. We first define for any ordered and labeled tree \u03c0 the following operations. An edit script ES(\u03c0 1 , \u03c0 2 ) = {e 0 , e 1 , ..., e k } between \u03c0 1 and \u03c0 2 is a set of edit operations required for turning \u03c0 1 into \u03c0 2 . Now, assume that we are given a cost function defined for each edit operation. The cost of ES(\u03c0 1 , \u03c0 2 ) is the sum of the costs of the operations in the script. 
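For instance, taking node insertion, node deletion and node relabeling as the available operations, a script and its cost can be written down as follows (a sketch only; encoding operations as hashable tuples, so that scripts are sets that can later be intersected, is a simplification made here for exposition):

```python
# An edit operation is a plain, hashable tuple:
#   ('insert',  span, label)        add a node covering `span`
#   ('delete',  span, label)        remove such a node
#   ('relabel', span, old, new)     change a node label in place
def op_cost(op, relabel_cost=1.0):
    '''Unit costs; setting relabel_cost to 0 gives an unlabeled variant.'''
    return relabel_cost if op[0] == 'relabel' else 1.0

def script_cost(script):
    return sum(op_cost(op) for op in script)
```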
An optimal edit script is an edit script between \u03c0 1 and \u03c0 2 of minimum cost.", "cite_spans": [ { "start": 75, "end": 86, "text": "Emms (2008)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "ES * (\u03c0 1 , \u03c0 2 ) = argmin ES(\u03c0 1 ,\u03c0 2 ) e\u2208ES(\u03c0 1 ,\u03c0 2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "cost(e) The tree edit distance problem is defined to be the problem of finding the optimal edit script and computing the corresponding distance (Bille, 2005) .", "cite_spans": [ { "start": 144, "end": 157, "text": "(Bille, 2005)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "A simple way to calculate the error \u03b4 of a parse would be to define it as the edit distance between the parse hypothesis \u03c0 1 and the gold standard \u03c0 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "\u03b4(\u03c0 1 , \u03c0 2 ) = cost(ES * (\u03c0 1 , \u03c0 2 ))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "However, in such cases the parser may still get penalized for recovering nodes that are lacking in the generalization. To solve this, we refine the distance between a parse tree and the generalized gold tree to discard edit operations on nodes that are there in the native gold tree but are eliminated through generalization. We compute the intersection of the edit script turning the parse tree into the generalize gold with the edit script turning the native gold tree into the generalized gold, and discard its cost. That is, if parse1 and parse2 are compared against gold1 and gold2 respectively, and if we set gold3 to be the result of gold1 t gold2, then \u03b4 new is defined as: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "\u03b4 new (parse1, gold1,gold3) = \u03b4(parse1,gold3) \u2212cost(ES * (parse1,gold3)\u2229ES * (gold1,gold3))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "Now, if gold1 and gold3 are identical, then ES * (gold1,gold3)=\u2205 and we fall back on the simple tree edit distance score \u03b4 new (parse1,gold1,gold3)=\u03b4(parse1, gold3). 
When parse1 and gold1 are identical, i.e., the parser produced perfect output with respect to its own scheme, then gold1,gold3) )=0, and the parser does not get penalized for recovering a correct structure in gold1 that is lacking in gold3.", "cite_spans": [], "ref_spans": [ { "start": 281, "end": 293, "text": "gold1,gold3)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "\u03b4 new (parse1,gold1,gold3)=\u03b4 new (gold1,gold1,gold3) =\u03b4(gold1,gold3) \u2212 cost(ES * (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "In order to turn distances into accuracy measures we have to normalize distances relative to the maximal number of operations that is conceivable. In the worst case, we would have to remove all the internal nodes in the parse tree and add all the internal nodes of the generalized gold, so our normalization factor \u03b9 is defined as follows, where |\u03c0| is the size 4 of \u03c0. \u03b9(parse1,gold3) = |parse1| + |gold3|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "We now define the score of parse1 as follows: 5 1 \u2212 \u03b4 new (parse1,gold1,gold3) \u03b9(parse1,gold3) Figure 2 summarizes the steps in the evaluation procedure we defined so far.", "cite_spans": [], "ref_spans": [ { "start": 95, "end": 103, "text": "Figure 2", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "We start off with two versions of the treebank, TB1 and TB2, which are parsed separately and provide their own gold standards and parse hypotheses in a labeled dependencies format. All dependency trees are then converted into functional trees, and we compute the generalization of each pair of gold trees for each sentence in the data. This provides the generalized gold standard for all experiments, here marked as gold3. 
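A compact sketch of the per-sentence computation is given below. It assumes edit scripts are represented as sets of hashable operations (as in the sketch above) and abstracts the concrete tree-edit-distance routine behind the edit_script, cost and size arguments; it is an illustration of the formulas, not the implementation used in the experiments.

```python
def delta_new(parse, gold_native, gold_gen, edit_script, cost):
    '''Error of a parse against the generalized gold, discounting edits that
    the parser's own native gold would also need, i.e. edits that reflect
    annotation choices rather than parse errors.'''
    es_parse = edit_script(parse, gold_gen)          # ES*(parse1, gold3), assumed optimal
    es_native = edit_script(gold_native, gold_gen)   # ES*(gold1, gold3)
    return cost(es_parse) - cost(es_parse & es_native)

def sentence_score(parse, gold_native, gold_gen, edit_script, cost, size):
    norm = size(parse) + size(gold_gen)              # worst case: delete all, then insert all
    if norm == 0:                                    # root-and-leaves-only trees
        return 1.0
    return 1.0 - delta_new(parse, gold_native, gold_gen, edit_script, cost) / norm

# Usage, per sentence: gold3 is the generalization of the two native golds (Step 2).
#   score1 = sentence_score(parse1, gold1, gold3, edit_script, cost, size)
#   score2 = sentence_score(parse2, gold2, gold3, edit_script, cost, size)
```

Corpus-level figures are then obtained by aggregating these per-sentence quantities over the test set, as described next.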
6 We finally compute the distances \u03b4 new (parse1,gold1,gold3) and \u03b4 new (parse2,gold2,gold3) using the different tree edit distances that are now available, and we repeat the procedure for each sentence in the test set.", "cite_spans": [ { "start": 423, "end": 424, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "To normalize the scores for an entire test set of size n we can take the arithmetic mean of the scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "|test-set| i=1 score(parse1 i ,gold1 i ,gold3 i ) |test-set|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "Alternatively we can globally average of all edit distance costs, normalized by the maximally possible edits on parse trees turned into generalized trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "1 \u2212 |test-set| i=1 \u03b4 new (parse1 i ,gold1 i ,gold3 i ) |test-set| i=1 \u03b9(parse1 i ,gold3 i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "The latter score, global averaging over the entire test set, is the metric we use in our evaluation procedure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposal: Cross-Annotation Evaluation in Three Simple Steps", "sec_num": "3" }, { "text": "We demonstrate the application of our procedure to comparing dependency parsing results on different versions of the Penn Treebank (Marcus et al., 1993) .", "cite_spans": [ { "start": 131, "end": 152, "text": "(Marcus et al., 1993)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "The Data We use data from the PTB, converted into dependency structures using the LTH software, a general purpose tool for constituency-todependency conversion (Johansson and Nugues, 2007) . We use LTH to implement the five different annotation standards detailed in Table 3 . 6 Generalization is an associative and commutative operation, so it can be extended for n experiments in any order. \u03b4 (gold 1,go ld3) \u03b4 (pa rse 2,g old 3) Table 2 : Cross-experiment dependency parsing evaluation for the MST parser trained on multiple schemes. We report standard LAS scores and TEDEVAL global average metrics. Boldface results outperform the rest of the results reported in the same row. 
The \u2020 sign marks pairwise results where the difference is not statistically significant.", "cite_spans": [ { "start": 160, "end": 188, "text": "(Johansson and Nugues, 2007)", "ref_id": "BIBREF12" }, { "start": 277, "end": 278, "text": "6", "ref_id": null } ], "ref_spans": [ { "start": 267, "end": 274, "text": "Table 3", "ref_id": null }, { "start": 432, "end": 439, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "The LTH conversion default settings OldLTH", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ID Description Default", "sec_num": null }, { "text": "The conversion used in Johansson and Nugues (2007) CoNLL07 The conversion used in the CoNLL shared task (Nivre et al., 2007a) Lexical Same as CoNLL, but selecting only lexical heads when a choice exists Functional Same as CoNLL, but selecting only functional heads when a choice exists Table 3 : LTH conversion schemes used in the experiments. The LTH conversion settings in terms of the complete feature-value pairs associated with the LTH parameters in different schemes are detailed in the supplementary material.", "cite_spans": [ { "start": 23, "end": 50, "text": "Johansson and Nugues (2007)", "ref_id": "BIBREF12" }, { "start": 104, "end": 125, "text": "(Nivre et al., 2007a)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 286, "end": 293, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "ID Description Default", "sec_num": null }, { "text": "The Default, OldLTH and CoNLL schemes mainly differ in their coordination structure, and the Functional and Lexical schemes differ in their selection of a functional and a lexical head, respectively. All schemes use the same inventory of labels. 7 The LTH parameter settings for the different schemes are elaborated in the supplementary material.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ID Description Default", "sec_num": null }, { "text": "The Setup We use two different parsers: (i) Malt-Parser (Nivre et al., 2007b) with the arc eager algorithm as optimized for English in (Nivre et al., 2010) and (ii) MSTParser with the second-order projective model of McDonald and Pereira (2006) . Both parsers were trained on the different instances of sections 2-21 of the PTB obeying the different annotation schemes in Table 3 . Each trained model was used to parse section 23. All non-projective dependencies in the training and gold sets were projectivized prior to training and parsing using the algorithm of Nivre and Nilsson (2005) . A more principled treatment of non-projective dependency trees is an important topic for future research. We evaluated the parses using labeled and unlabeled attachment scores, and using our TEDEVAL software package.", "cite_spans": [ { "start": 56, "end": 77, "text": "(Nivre et al., 2007b)", "ref_id": "BIBREF24" }, { "start": 135, "end": 155, "text": "(Nivre et al., 2010)", "ref_id": "BIBREF25" }, { "start": 217, "end": 244, "text": "McDonald and Pereira (2006)", "ref_id": "BIBREF16" }, { "start": 565, "end": 589, "text": "Nivre and Nilsson (2005)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 372, "end": 379, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "ID Description Default", "sec_num": null }, { "text": "Evaluation Our TEDEVAL software package implements the pipeline described in Section 3. 
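For readers who wish to experiment with the tree-edit-distance component independently of TEDEVAL, off-the-shelf implementations of the Zhang and Shasha (1989) algorithm are available; the snippet below uses the third-party Python zss package purely as an illustration and is not part of the toolchain used here.

```python
# Illustration only: tree edit distance with the zss package (Zhang-Shasha).
from zss import Node, simple_distance

t1 = Node('ROOT').addkid(Node('sbj').addkid(Node('we')))
t2 = Node('ROOT').addkid(Node('obj').addkid(Node('we')))
print(simple_distance(t1, t2))   # 1: a single relabeling (sbj -> obj)
```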
We convert all parse and gold trees into functional trees using the algorithm defined in Section 3, and for each pair of parsing experiments we calculate a shared gold standard using generalization determined through a chart-based greedy algorithm. 8 Our scoring procedure uses the TED algorithm defined by Zhang and Shasha (1989) . 9 The unlabeled score is obtained by assigning cost(e) = 0 for every e relabeling operation. To calculate pairwise statistical significance we use a shuffling test with 10,000 iterations (Cohen, 1995) . A sample of all files in the evaluation pipeline for a subset of 10 PTB sentences is available in the supplementary materials. 10 7 In case the labels are not taken from the same inventory, e.g., subjects in one scheme are marked as SUB and in the other marked as SBJ, it is possible define a a set of zero-cost operation types -in such case, to the operation relabel(SUB,SBJ) -in order not to penalize string label discrepancies.", "cite_spans": [ { "start": 395, "end": 418, "text": "Zhang and Shasha (1989)", "ref_id": "BIBREF33" }, { "start": 421, "end": 422, "text": "9", "ref_id": null }, { "start": 608, "end": 621, "text": "(Cohen, 1995)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "ID Description Default", "sec_num": null }, { "text": "8 Our algorithm has space and runtime complexity of O(n 2 ). 9 Available via http://web.science.mq.edu.au/ swan/howtos/treedistance/ 10 The TEDEVAL software package is available via http: //stp.lingfil.uu.se/\u02dctsarfaty/unipar Results Table 1 reports the results for the interand cross-experiment evaluation of parses produced by MaltParser. The left hand side of the table presents the parsing results for a set of experiments in which we compare parsing results trained on the Default, OldLTH and CoNLL07 schemes. In a second set of experiments we compare the CoNLL07, Lexical and Functional schemes. Table 2 reports the evaluation of the parses produced by MSTParser for the same experimental setup. Our goal here is not to compare the parsers, but to verify that the effects of switching from LAS to TEDEVAL are robust across parsing algorithms.", "cite_spans": [], "ref_spans": [ { "start": 233, "end": 240, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 601, "end": 608, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "ID Description Default", "sec_num": null }, { "text": "In each of the tables, the top three groups of four rows compare results of parsed dependency trees trained on a particular scheme against gold trees of the same and the other schemes. The next three groups of two rows report the results for comparing pairwise sets of experiments against a generalized gold using our proposed procedure. In the last group of two rows we compare all parsing results against a single gold obtained through a three-way generalization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ID Description Default", "sec_num": null }, { "text": "As expected, every parser appears to perform at its best when evaluated against the scheme it was trained on. This is the case for both LAS and TEDE-VAL measures and the performance gaps are statistically significant. When moving to pairwise evaluation against a single generalized gold, for instance, when comparing CoNLL07 to the Default settings, there is still a gap in performance, e.g., between OldLTH and CoNLL07, and between OldLTH and Default. This gap is however a lot smaller and is not always statistically significant. 
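The significance judgments reported in this section are based on the paired shuffling test mentioned above (10,000 iterations). As a rough sketch of such a test (an illustration, not the implementation used for the reported numbers), given per-sentence scores of two systems on the same test set:

```python
import random

def shuffle_test(scores_a, scores_b, iterations=10000, seed=0):
    '''Approximate p-value for the observed difference in mean per-sentence
    score between two systems evaluated on the same sentences.'''
    rng = random.Random(seed)
    n = len(scores_a)
    observed = abs(sum(scores_a) - sum(scores_b)) / n
    hits = 0
    for _ in range(iterations):
        diff = 0.0
        for a, b in zip(scores_a, scores_b):
            if rng.random() < 0.5:     # randomly swap the paired scores
                a, b = b, a
            diff += a - b
        if abs(diff) / n >= observed:
            hits += 1
    return (hits + 1) / (iterations + 1)
```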
In fact, when evaluating the effect of linguistically disparate annotation variations such as Lexical and Functional on the performance of MaltParser, Table 1 shows that when using TEDEVAL and a generalized gold the performance gaps are small and statistically insignificant.", "cite_spans": [], "ref_spans": [ { "start": 683, "end": 690, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "ID Description Default", "sec_num": null }, { "text": "Moreover, observed performance trends when evaluating individual experiments on their original training scheme may change when compared against a generalized gold. The Default scheme, for Malt-Parser, appears better than OldLTH when both are evaluated against their training schemes. But looking at the pairwise-evaluated experiments, it is the other way round (the difference is smaller, but statistically significant). In evaluating against a three-way generalization, all the results obtained for different training schemes are on a par with one another, with minor gaps in performance, rarely statistically significant. This suggests that apparent performance trends between experiments when evaluating with respect to the training schemes may be misleading.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ID Description Default", "sec_num": null }, { "text": "These observations are robust across parsing algorithms. In each of the tables, results obtained against the training schemes show significant differences whereas applying our cross-experimental procedure shows small to no gaps in performance across different schemes. Annotation variants which seem to have crucial effects have a relatively small influence when parsed structures are brought into the same formal and theoretical common ground for comparison. Of course, it may be the case that one parser is better trained on one scheme while the other utilizes better another scheme, but objective performance gaps can only be observed when they are compared against shared linguistic content.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ID Description Default", "sec_num": null }, { "text": "This paper addresses the problem of crossexperiment evaluation. As it turns out, this problem arises in NLP in different shapes and forms; when evaluating a parser against different annotation schemes, when evaluating parsing performance across parsers and different formalisms, and when comparing parser performance across languages. We consider our contribution successful if after reading it the reader develops a healthy suspicion to blunt comparison of numbers across experiments, or better yet, across different papers. Cross-experiment comparison should be a careful and well thoughtthrough endeavor, in which we retain as much information as we can from the parsed structures, avoid lossy conversions, and focus on an object of evaluation which is agreed upon by all variants.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Extensions", "sec_num": "5" }, { "text": "Our proposal introduces one way of doing so in a streamlined, efficient and formally worked out way. While individual components may be further refined or improved, the proposed setup and implementation can be straightforwardly applied to crossparser and cross-framework evaluation. In the future we plan to use this procedure for comparing constituency and dependency parsers. 
A conversion from constituency-based trees into functional trees is straightforward to define: simply replace the node labels with the grammatical function of their dominating arc -and the rest of the pipeline follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Extensions", "sec_num": "5" }, { "text": "A pre-condition for cross-framework evaluation is that all representations encode the same set of grammatical relations by, e.g., annotating arcs in dependency trees or decorating nodes in constituency trees. For some treebanks this is already the case (Nivre and Megyesi, 2007; Skut et al., 1997; Hinrichs et al., 2004) while for others this is still lacking. Recent studies (Briscoe et al., 2002; de Marneffe et al., 2006) suggest that evaluation through a single set of grammatical relations as the common denominator is a linguistically sound and practically useful way to go. To guarantee extensions for crossframework evaluation it would be fruitful to make sure that resources use the same set of grammatical relation labels across different formal representation types. Moreover, we further aim to inquire whether we can find a single set of grammatical relation labels that can be used across treebanks for multiple languages. This would then pave the way for the development of cross-language evaluation procedures.", "cite_spans": [ { "start": 253, "end": 278, "text": "(Nivre and Megyesi, 2007;", "ref_id": "BIBREF20" }, { "start": 279, "end": 297, "text": "Skut et al., 1997;", "ref_id": "BIBREF31" }, { "start": 298, "end": 320, "text": "Hinrichs et al., 2004)", "ref_id": "BIBREF11" }, { "start": 376, "end": 398, "text": "(Briscoe et al., 2002;", "ref_id": "BIBREF2" }, { "start": 399, "end": 424, "text": "de Marneffe et al., 2006)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Extensions", "sec_num": "5" }, { "text": "We propose an end-to-end procedure for comparing dependency parsing results across experiments based on three steps: (i) converting dependency trees to functional trees, (ii) generalizing functional trees to harmonize information from different sources, and (iii) using distance-based metrics that take the different sources into account. When applied to parsing results of different dependency schemes, dramatic gaps observed when comparing parsing results obtained in isolation decrease or dissolve completely when using our proposed pipeline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "If a dependency tree d is projective, than for all v \u2208 V d the terminals in span(v) form a contiguous segment of S. The current discussion assumes that all trees are projective. We comment on non-projective dependencies in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that the wildcard symbol * is equal to any other symbol. 
In case the node labels consist of complex feature structures made of attribute-value lists, we replace label(n) = label(m) in the subsumption definition with label(n) label(m) in the sense of(Shieber, 1986).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The size of a dependency tree, either parse or gold, is always fixed by the number of terminals.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Following common practice, we equate size |\u03c0| with the number of nodes in \u03c0, discarding the terminals and root node.5 If the trees have only root and leaves, \u03b9 = 0, score := 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Acknowledgments We thank the developers of the LTH and TED software who made their code available for our use. We thank Richard Johansson for providing us with the LTH parameter settings of existing dependency schemes. We thank Ari Rappoport, Omri Abend, Roy Schwartz and members of the NLP lab at the Hebrew University of Jerusalem for stimulating discussion. We finally thank three anonymous reviewers for useful comments on an earlier draft. The research reported in the paper was partially funded by the Swedish Research Council.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A survey on tree edit distance and related. problems", "authors": [ { "first": "Philip", "middle": [], "last": "Bille", "suffix": "" } ], "year": 2005, "venue": "Theoretical Computer Science", "volume": "337", "issue": "", "pages": "217--239", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Bille. 2005. A survey on tree edit distance and related. problems. Theoretical Computer Science, 337:217-239.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Procedure for quantitatively comparing the syntactic coverage of English grammars", "authors": [ { "first": "Ezra", "middle": [], "last": "Black", "suffix": "" }, { "first": "Steven", "middle": [ "P" ], "last": "Abney", "suffix": "" }, { "first": "D", "middle": [], "last": "Flickenger", "suffix": "" }, { "first": "Claudia", "middle": [], "last": "Gdaniec", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "P", "middle": [], "last": "Harrison", "suffix": "" }, { "first": "Donald", "middle": [], "last": "Hindle", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Ingria", "suffix": "" }, { "first": "Frederick", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "Judith", "middle": [ "L" ], "last": "Klavans", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Liberman", "suffix": "" }, { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "Tomek", "middle": [], "last": "Strzalkowski", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the workshop on Speech and Natural Language, HLT", "volume": "", "issue": "", "pages": "306--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ezra Black, Steven P. Abney, D. Flickenger, Claudia Gdaniec, Ralph Grishman, P. Harrison, Donald Hin- dle, Robert Ingria, Frederick Jelinek, Judith L. Kla- vans, Mark Liberman, Mitchell P. 
Marcus, Salim Roukos, Beatrice Santorini, and Tomek Strzalkowski. 1991. Procedure for quantitatively comparing the syn- tactic coverage of English grammars. In E. Black, ed- itor, Proceedings of the workshop on Speech and Nat- ural Language, HLT, pages 306-311. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Relational evaluation schemes", "authors": [ { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" }, { "first": "John", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Copestake", "suffix": "" } ], "year": 2002, "venue": "Proceedings of LREC Workshop\"Beyond Parseval -Towards improved evaluation measures for parsing systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ted Briscoe, John Carroll, Jonathan Graham, and Ann Copestake. 2002. Relational evaluation schemes. In Proceedings of LREC Workshop\"Beyond Parseval -Towards improved evaluation measures for parsing systems\".", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "CoNLL-X shared task on multilingual dependency parsing", "authors": [ { "first": "Sabine", "middle": [], "last": "Buchholz", "suffix": "" }, { "first": "Erwin", "middle": [], "last": "Marsi", "suffix": "" } ], "year": 2006, "venue": "Proceedings of CoNLL-X", "volume": "", "issue": "", "pages": "149--164", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceedings of CoNLL-X, pages 149-164.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Evaluating the impact of alternative dependency graph encodings on solving event extraction tasks", "authors": [ { "first": "Ekaterina", "middle": [], "last": "Buyko", "suffix": "" }, { "first": "Udo", "middle": [], "last": "Hahn", "suffix": "" } ], "year": 2010, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "982--992", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ekaterina Buyko and Udo Hahn. 2010. Evaluating the impact of alternative dependency graph encodings on solving event extraction tasks. In Proceedings of EMNLP, pages 982-992.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Parser evaluation: a survey and a new proposal", "authors": [ { "first": "John", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Sanfilippo", "suffix": "" } ], "year": 1998, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "447--454", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Carroll, Ted Briscoe, and Antonio Sanfilippo. 1998. Parser evaluation: a survey and a new proposal. 
In Proceedings of LREC, pages 447-454.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Parsing to stanford dependencies: Trade-offs between speed and accuracy", "authors": [ { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2010, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Cer, Marie-Catherine de Marneffe, Daniel Juraf- sky, and Christopher D. Manning. 2010. Parsing to stanford dependencies: Trade-offs between speed and accuracy. In Proceedings of LREC.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Robust constituent-to-dependency conversion for English", "authors": [ { "first": "D", "middle": [], "last": "Jinho", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Choi", "suffix": "" }, { "first": "", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2010, "venue": "Proceedings of TLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jinho D. Choi and Martha Palmer. 2010. Robust constituent-to-dependency conversion for English. In Proceedings of TLT.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Empirical Methods for Artificial Intelligence", "authors": [ { "first": "Paul", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Cohen. 1995. Empirical Methods for Artificial In- telligence. The MIT Press.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Generating typed dependency parses from phrase structure parses", "authors": [ { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Maccartney", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2006, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "449--454", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed de- pendency parses from phrase structure parses. In Pro- ceedings of LREC, pages 449-454.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Tree-distance and some other variants of evalb", "authors": [ { "first": "Martin", "middle": [], "last": "Emms", "suffix": "" } ], "year": 2008, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Emms. 2008. Tree-distance and some other vari- ants of evalb. 
In Proceedings of LREC.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Recent development in linguistic annotations of the T\u00fcBa-D/Z Treebank", "authors": [ { "first": "Erhard", "middle": [], "last": "Hinrichs", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "Karin", "middle": [], "last": "Naumann", "suffix": "" }, { "first": "Heike", "middle": [], "last": "Telljohan", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Trushkina", "suffix": "" } ], "year": 2004, "venue": "Proceedings of TLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erhard Hinrichs, Sandra K\u00fcbler, Karin Naumann, Heike Telljohan, and Julia Trushkina. 2004. Recent develop- ment in linguistic annotations of the T\u00fcBa-D/Z Tree- bank. In Proceedings of TLT.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Extended constituent-to-dependency conversion for English", "authors": [ { "first": "Richard", "middle": [], "last": "Johansson", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Nugues", "suffix": "" } ], "year": 2007, "venue": "Proceedings of NODALIDA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Johansson and Pierre Nugues. 2007. Extended constituent-to-dependency conversion for English. In Proceedings of NODALIDA.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Towards a dependency-oriented evaluation for partial parsing", "authors": [ { "first": "Sandra", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "Heike", "middle": [], "last": "Telljohann", "suffix": "" } ], "year": 2002, "venue": "Proceedings of LREC Workshop\"Beyond Parseval -Towards improved evaluation measures for parsing systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sandra K\u00fcbler and Heike Telljohann. 2002. Towards a dependency-oriented evaluation for partial parsing. In Proceedings of LREC Workshop\"Beyond Parseval -Towards improved evaluation measures for parsing systems\".", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Dependency Parsing. Number 2 in Synthesis Lectures on Human Language Technologies", "authors": [ { "first": "Sandra", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sandra K\u00fcbler, Ryan McDonald, and Joakim Nivre. 2009. Dependency Parsing. Number 2 in Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Building a large annotated corpus of English: The Penn Treebank", "authors": [ { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated cor- pus of English: The Penn Treebank. 
Computational Linguistics, 19:313-330.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Online learning of approximate dependency parsing algorithms", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2006, "venue": "Proceedings of EACL", "volume": "", "issue": "", "pages": "81--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald and Fernando Pereira. 2006. On- line learning of approximate dependency parsing al- gorithms. In Proceedings of EACL, pages 81-88.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Online large-margin training of dependency parsers", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Koby", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "91--98", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of ACL, pages 91-98.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Dependency Syntax: Theory and Practice", "authors": [ { "first": "Igor", "middle": [], "last": "Mel", "suffix": "" }, { "first": "'", "middle": [], "last": "\u010cuk", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Igor Mel'\u010duk. 1988. Dependency Syntax: Theory and Practice. State University of New York Press.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Task-oriented evaluation of syntactic parsers and their representations", "authors": [ { "first": "Yusuke", "middle": [], "last": "Miyao", "suffix": "" }, { "first": "Rune", "middle": [], "last": "Saetre", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Sagae", "suffix": "" }, { "first": "Takuya", "middle": [], "last": "Matsuzaki", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "46--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yusuke Miyao, Rune Saetre, Kenji Sagae, Takuya Mat- suzaki, and Jun'ichi Tsujii. 2008. Task-oriented eval- uation of syntactic parsers and their representations. In Proceedings of ACL, pages 46-54.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Bootstrapping a Swedish Treebank using cross-corpus harmonization and annotation projection", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Beata", "middle": [], "last": "Megyesi", "suffix": "" } ], "year": 2007, "venue": "Proceedings of TLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre and Beata Megyesi. 2007. Bootstrapping a Swedish Treebank using cross-corpus harmonization and annotation projection. 
In Proceedings of TLT.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Pseudo projective dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Nilsson", "suffix": "" } ], "year": 2005, "venue": "Proceeding of ACL", "volume": "", "issue": "", "pages": "99--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre and Jens Nilsson. 2005. Pseudo projective dependency parsing. In Proceeding of ACL, pages 99- 106.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Deterministic dependency parsing of English text", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Scholz", "suffix": "" } ], "year": 2004, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "64--70", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre and Mario Scholz. 2004. Deterministic dependency parsing of English text. In Proceedings of COLING, pages 64-70.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "The CoNLL 2007 shared task on dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Nilsson", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Deniz", "middle": [], "last": "Yuret", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL", "volume": "", "issue": "", "pages": "915--932", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Johan Hall, Sandra K\u00fcbler, Ryan McDon- ald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007a. The CoNLL 2007 shared task on dependency parsing. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 915-932.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Maltparser: A languageindependent system for data-driven dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Nilsson", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Atanas", "middle": [], "last": "Chanev", "suffix": "" }, { "first": "G\u00fclsen", "middle": [], "last": "Eryigit", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "Svetoslav", "middle": [], "last": "Marinov", "suffix": "" }, { "first": "Erwin", "middle": [], "last": "Marsi", "suffix": "" } ], "year": 2007, "venue": "Natural Language Engineering", "volume": "13", "issue": "1", "pages": "1--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Jens Nilsson, Johan Hall, Atanas Chanev, G\u00fclsen Eryigit, Sandra K\u00fcbler, Svetoslav Marinov, and Erwin Marsi. 2007b. Maltparser: A language- independent system for data-driven dependency pars- ing. 
Natural Language Engineering, 13(1):1-41.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Evaluation of dependency parsers on unbounded dependencies", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Rimell", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "G\u00f3mez-Rodr\u00edguez", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "813--821", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Laura Rimell, Ryan McDonald, and Carlos G\u00f3mez-Rodr\u00edguez. 2010. Evaluation of dependency parsers on unbounded dependencies. pages 813-821.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "The Simple Truth about Dependency and Phrase Structure Representations: An Opinion Piece", "authors": [ { "first": "Owen", "middle": [], "last": "Rambow", "suffix": "" } ], "year": 2010, "venue": "Proceedings of HLT-ACL", "volume": "", "issue": "", "pages": "337--340", "other_ids": {}, "num": null, "urls": [], "raw_text": "Owen Rambow. 2010. The Simple Truth about Depen- dency and Phrase Structure Representations: An Opin- ion Piece. In Proceedings of HLT-ACL, pages 337- 340.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Why is it so difficult to compare treebanks? Tiger and T\u00fcBa-D/Z revisited", "authors": [ { "first": "Ines", "middle": [], "last": "Rehbein", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" } ], "year": 2007, "venue": "Proceedings of TLT", "volume": "", "issue": "", "pages": "115--126", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ines Rehbein and Josef van Genabith. 2007. Why is it so difficult to compare treebanks? Tiger and T\u00fcBa-D/Z revisited. In Proceedings of TLT, pages 115-126.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Evaluating parser accuracy using edit distance", "authors": [ { "first": "Brian", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2002, "venue": "Proceedings of LREC Workshop\"Beyond Parseval -Towards improved evaluation measures for parsing systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brian Roark. 2002. Evaluating parser accuracy us- ing edit distance. In Proceedings of LREC Work- shop\"Beyond Parseval -Towards improved evaluation measures for parsing systems\".", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Neutralizing linguistically problematic annotations in unsupervised dependency parsing evaluation", "authors": [ { "first": "Roy", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Omri", "middle": [], "last": "Abend", "suffix": "" } ], "year": 2011, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "663--672", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roy Schwartz, Omri Abend, Roi Reichart, and Ari Rap- poport. 2011. Neutralizing linguistically problematic annotations in unsupervised dependency parsing eval- uation. In Proceedings of ACL, pages 663-672.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "An Introduction to Unification-Based Grammars. 
Center for the Study of Language and Information", "authors": [ { "first": "M", "middle": [], "last": "Stuart", "suffix": "" }, { "first": "", "middle": [], "last": "Shieber", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stuart M. Shieber. 1986. An Introduction to Unification- Based Grammars. Center for the Study of Language and Information.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "An annotation scheme for free word-order languages", "authors": [ { "first": "Wojciech", "middle": [], "last": "Skut", "suffix": "" }, { "first": "Brigitte", "middle": [], "last": "Krenn", "suffix": "" }, { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "" }, { "first": "Hans", "middle": [], "last": "Uszkoreit", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the fifth conference on Applied natural language processing", "volume": "", "issue": "", "pages": "88--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wojciech Skut, Brigitte Krenn, Thorsten Brants, and Hans Uszkoreit. 1997. An annotation scheme for free word-order languages. In Proceedings of the fifth con- ference on Applied natural language processing, pages 88-95.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Statistical dependency analysis with support vector machines", "authors": [ { "first": "Hiroyasu", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2003, "venue": "Proceeding of IWPT", "volume": "", "issue": "", "pages": "195--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proceeding of IWPT, pages 195-206.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Simple fast algorithms for the editing distance between trees and related problems", "authors": [ { "first": "Kaizhong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Dennis", "middle": [], "last": "Shasha", "suffix": "" } ], "year": 1989, "venue": "In SIAM Journal of Computing", "volume": "18", "issue": "", "pages": "1245--1262", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaizhong Zhang and Dennis Shasha. 1989. Simple fast algorithms for the editing distance between trees and related problems. In SIAM Journal of Computing, vol- ume 18, pages 1245-1262.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Heads, bases, and functors", "authors": [ { "first": "Arnold", "middle": [ "M" ], "last": "Zwicky", "suffix": "" } ], "year": 1993, "venue": "Heads in Grammatical Theory", "volume": "", "issue": "", "pages": "292--315", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arnold M. Zwicky. 1993. Heads, bases, and functors. In G.G. Corbett, N. Fraser, and S. McGlashan, editors, Heads in Grammatical Theory, pages 292-315. Cam- bridge University Press.", "links": null } }, "ref_entries": { "FIGREF2": { "text": "Calculating cross-experiment LAS results main head, and let the auxiliaries create a verb chain with different levels of projection. Each annotation decision dictates a different direction of the arcs and imposes its own internal division into phrases.", "uris": null, "type_str": "figure", "num": null }, "FIGREF4": { "text": "d the set of nodes accessible by it through the reflexive transitive closure of the arc relation A d . 
The function span :", "uris": null, "type_str": "figure", "num": null }, "FIGREF6": { "text": "relabel-node: change the label of node v in \u03c0. delete-node: delete a non-root node v in \u03c0 with parent u, making the children of v the children of u, inserted in the place of v as a subsequence in the left-to-right order of the children of u. insert-node: insert a node v as a child of u in \u03c0, making it the parent of a consecutive subsequence of the children of u.", "uris": null, "type_str": "figure", "num": null }, "FIGREF7": { "text": "The evaluation pipeline. Different versions of the treebank go into different experiments, resulting in different parse and gold files. All trees are transformed into functional trees. All gold files enter generalization to yield a new gold. The different \u03b4 arcs represent the different tree distances used for calculating the TED-based scores.", "uris": null, "type_str": "figure", "num": null }, "TABREF1": { "content": "
Train Gold | Default | Old LTH | CoNLL07
Default | UAS 0.9173, LAS 0.8833, U-TED 0.9513, L-TED 0.9249 | UAS 0.6085, LAS 0.4780, U-TED 0.8903, L-TED 0.7727 | UAS 0.7709, LAS 0.6414, U-TED 0.9236, L-TED 0.8424
Old LTH | UAS 0.6078, LAS 0.4809, U-TED 0.8960, L-TED 0.7823 | UAS 0.8952, LAS 0.8471, U-TED 0.9550, L-TED 0.9224 | UAS 0.6415, LAS 0.5669, U-TED 0.9096, L-TED 0.8170
CoNLL07 | UAS 0.7767, LAS 0.6504, U-TED 0.9289, L-TED 0.8502 | UAS 0.6517, LAS 0.5725, U-TED 0.9087, L-TED 0.8159 | UAS 0.8991, LAS 0.8709, U-TED 0.9479, L-TED 0.9208
Train Gold | CoNLL07 | Functional | Lexical
CoNLL07 | UAS 0.8991, LAS 0.8709, U-TED 0.9479, L-TED 0.9208 | UAS 0.8077, LAS 0.7902, U-TED 0.9373, L-TED 0.8955 | UAS 0.7018, LAS 0.6804, U-TED 0.9221, L-TED 0.8505
Functional | UAS 0.8083, LAS 0.7895, U-TED 0.9356, L-TED 0.8929 | UAS 0.8978, LAS 0.8782, U-TED 0.9476, L-TED 0.9226 | UAS 0.6150, LAS 0.5975, U-TED 0.9092, L-TED 0.8218
Lexical | UAS 0.6997, LAS 0.6835, U-TED 0.9259, L-TED 0.8593 | UAS 0.6161, LAS 0.6034, U-TED 0.9152, L-TED 0.8340 | UAS 0.8826, LAS 0.8491, U-TED 0.9483, L-TED 0.9160
Default-oldLTH U-TED L-TED Default-CoNLL U-TED 0.9474 \u2020 0.9533 0.9289 L-TED 0.9281 OldLTH-CoNLL U-TED L-TED0.9515 0.9224 0.9479 0.92340.9460 \u2020 0.9238 0.9493 0.9258CoNLL-Functional U-TED L-TED CoNLL-Lexical U-TED L-TED Functional-Lexical U-TED L-TED0.9479 \u2020 0.9209 0.9497 0.92280.9487 \u2020 0.9237 0.9504 0.92580.9483 0.9161 0.9483 0.9161
Default-OldLTH-CoNLL U-TED 0.9492 \u2020 L-TED 0.92980.9461 0.9241 \u20200.9480 \u2020 0.9258 \u2020CoNLL-Functional-Lexical U-TED L-TED0.9498 0.92290.9504 \u2020 0.92580.9483 \u2020 0.9161
", "text": "Cross-experiment dependency parsing evaluation for MaltParser trained on multiple schemes. We report standard LAS scores and TEDEVAL global average metrics. Boldface results outperform the rest of the results reported in the same row. The \u2020 sign marks pairwise results where the difference is not statistically significant.", "type_str": "table", "html": null, "num": null } } } }