{ "paper_id": "P05-1013", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:38:31.420264Z" }, "title": "Pseudo-Projective Dependency Parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "", "affiliation": { "laboratory": "", "institution": "V\u00e4xj\u00f6 University", "location": { "postCode": "SE-35195", "settlement": "V\u00e4xj\u00f6", "country": "Sweden" } }, "email": "nivre@msi.vxu.se" }, { "first": "Jens", "middle": [], "last": "Nilsson", "suffix": "", "affiliation": { "laboratory": "", "institution": "V\u00e4xj\u00f6 University", "location": { "postCode": "SE-35195", "settlement": "V\u00e4xj\u00f6", "country": "Sweden" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. We show how a datadriven deterministic dependency parser, in itself restricted to projective structures, can be combined with graph transformation techniques to produce non-projective structures. Experiments using data from the Prague Dependency Treebank show that the combined system can handle nonprojective constructions with a precision sufficient to yield a significant improvement in overall parsing accuracy. This leads to the best reported performance for robust non-projective parsing of Czech.", "pdf_parse": { "paper_id": "P05-1013", "_pdf_hash": "", "abstract": [ { "text": "In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. We show how a datadriven deterministic dependency parser, in itself restricted to projective structures, can be combined with graph transformation techniques to produce non-projective structures. 
Experiments using data from the Prague Dependency Treebank show that the combined system can handle non-projective constructions with a precision sufficient to yield a significant improvement in overall parsing accuracy. This leads to the best reported performance for robust non-projective parsing of Czech.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "It is sometimes claimed that one of the advantages of dependency grammar over approaches based on constituency is that it allows a more adequate treatment of languages with variable word order, where discontinuous syntactic constructions are more common than in languages like English (Mel'\u010duk, 1988; Covington, 1990). However, this argument is only plausible if the formal framework allows non-projective dependency structures, i.e. structures where a head and its dependents may correspond to a discontinuous constituent. From the point of view of computational implementation this can be problematic, since the inclusion of non-projective structures makes the parsing problem more complex and therefore compromises efficiency and in practice also accuracy and robustness. Thus, most broad-coverage parsers based on dependency grammar have been restricted to projective structures. This is true of the widely used link grammar parser for English (Sleator and Temperley, 1993), which uses a dependency grammar of sorts, the probabilistic dependency parser of Eisner (1996), and more recently proposed deterministic dependency parsers (Yamada and Matsumoto, 2003). It is also true of the adaptation of the Collins parser for Czech (Collins et al., 1999) and the finite-state dependency parser for Turkish by Oflazer (2003). This is in contrast to dependency treebanks, e.g. 
Prague Dependency Treebank (Haji\u010d et al., 2001b), Danish Dependency Treebank (Kromann, 2003), and the METU Treebank of Turkish (Oflazer et al., 2003), which generally allow annotations with non-projective dependency structures. The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective treebanks is often neglected because of the relative scarcity of problematic constructions. While the proportion of sentences containing non-projective dependencies is often 15-25%, the total proportion of non-projective arcs is normally only 1-2%. As long as the main evaluation metric is dependency accuracy per word, with state-of-the-art accuracy mostly below 90%, the penalty for not handling non-projective constructions is almost negligible. Still, from a theoretical point of view, projective parsing of non-projective structures has the drawback that it rules out perfect accuracy even as an asymptotic goal. There exist a few robust broad-coverage parsers that produce non-projective dependency structures, notably Tapanainen and J\u00e4rvinen (1997) and Wang and Harper (2004) for English, Foth et al. (2004) for German, and Holan (2004) for Czech. In addition, there are several approaches to non-projective dependency parsing that are still to be evaluated in the large (Covington, 1990; Kahane et al., 1998; Duchier and Debusmann, 2001; Holan et al., 2001; Hellwig, 2003). 
Finally, since non-projective constructions often involve long-distance dependencies, the problem is closely related to the recovery of empty categories and non-local dependencies in constituency-based parsing (Johnson, 2002; Dienes and Dubey, 2003; Jijkoun and de Rijke, 2004; Cahill et al., 2004; Levy and Manning, 2004; Campbell, 2004) .", "cite_spans": [ { "start": 285, "end": 300, "text": "(Mel'\u010duk, 1988;", "ref_id": "BIBREF22" }, { "start": 301, "end": 317, "text": "Covington, 1990)", "ref_id": "BIBREF5" }, { "start": 949, "end": 978, "text": "(Sleator and Temperley, 1993)", "ref_id": "BIBREF28" }, { "start": 1062, "end": 1075, "text": "Eisner (1996)", "ref_id": "BIBREF9" }, { "start": 1138, "end": 1166, "text": "(Yamada and Matsumoto, 2003;", "ref_id": "BIBREF31" }, { "start": 1235, "end": 1257, "text": "(Collins et al., 1999)", "ref_id": "BIBREF3" }, { "start": 1312, "end": 1326, "text": "Oflazer (2003)", "ref_id": "BIBREF27" }, { "start": 1406, "end": 1427, "text": "(Haji\u010d et al., 2001b)", "ref_id": null }, { "start": 1457, "end": 1472, "text": "(Kromann, 2003)", "ref_id": "BIBREF20" }, { "start": 1508, "end": 1530, "text": "(Oflazer et al., 2003)", "ref_id": "BIBREF26" }, { "start": 2443, "end": 2473, "text": "Tapanainen and J\u00e4rvinen (1997)", "ref_id": "BIBREF29" }, { "start": 2478, "end": 2500, "text": "Wang and Harper (2004)", "ref_id": "BIBREF30" }, { "start": 2514, "end": 2532, "text": "Foth et al. 
(2004)", "ref_id": "BIBREF10" }, { "start": 2549, "end": 2561, "text": "Holan (2004)", "ref_id": "BIBREF16" }, { "start": 2696, "end": 2713, "text": "(Covington, 1990;", "ref_id": "BIBREF5" }, { "start": 2714, "end": 2734, "text": "Kahane et al., 1998;", "ref_id": "BIBREF19" }, { "start": 2735, "end": 2763, "text": "Duchier and Debusmann, 2001;", "ref_id": "BIBREF8" }, { "start": 2764, "end": 2783, "text": "Holan et al., 2001;", "ref_id": "BIBREF15" }, { "start": 2784, "end": 2798, "text": "Hellwig, 2003)", "ref_id": "BIBREF14" }, { "start": 3011, "end": 3026, "text": "(Johnson, 2002;", "ref_id": "BIBREF18" }, { "start": 3027, "end": 3050, "text": "Dienes and Dubey, 2003;", "ref_id": "BIBREF7" }, { "start": 3051, "end": 3078, "text": "Jijkoun and de Rijke, 2004;", "ref_id": "BIBREF17" }, { "start": 3079, "end": 3099, "text": "Cahill et al., 2004;", "ref_id": "BIBREF0" }, { "start": 3100, "end": 3123, "text": "Levy and Manning, 2004;", "ref_id": "BIBREF21" }, { "start": 3124, "end": 3139, "text": "Campbell, 2004)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we show how non-projective dependency parsing can be achieved by combining a datadriven projective parser with special graph transformation techniques. First, the training data for the parser is projectivized by applying a minimal number of lifting operations (Kahane et al., 1998) and encoding information about these lifts in arc labels. When the parser is trained on the transformed data, it will ideally learn not only to construct projective dependency structures but also to assign arc labels that encode information about lifts. By applying an inverse transformation to the output of the parser, arcs with non-standard labels can be lowered to their proper place in the dependency graph, giving rise to non-projective structures. 
We call this pseudo-projective dependency parsing, since it is based on a notion of pseudo-projectivity (Kahane et al., 1998).", "cite_spans": [ { "start": 275, "end": 296, "text": "(Kahane et al., 1998)", "ref_id": "BIBREF19" }, { "start": 855, "end": 876, "text": "(Kahane et al., 1998)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the paper is structured as follows. In section 2 we introduce the graph transformation techniques used to projectivize and deprojectivize dependency graphs, and in section 3 we describe the data-driven dependency parser that is the core of our system. We then evaluate the approach in two steps. First, in section 4, we evaluate the graph transformation techniques in themselves, with data from the Prague Dependency Treebank and the Danish Dependency Treebank. In section 5, we then evaluate the entire parsing system by training and evaluating on data from the Prague Dependency Treebank.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We assume that the goal in dependency parsing is to construct a labeled dependency graph of the kind depicted in Figure 1. Formally, we define dependency graphs as follows:", "cite_spans": [], "ref_spans": [ { "start": 113, "end": 121, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Dependency Graph Transformations", "sec_num": "2" }, { "text": "1. Let R = {r 1 , . . . , r m } be the set of permissible dependency types (arc labels).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Graph Transformations", "sec_num": "2" }, { "text": "W = w 1 \u2022 \u2022 \u2022 w n is a labeled directed graph D = (W, A)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A dependency graph for a string of words", "sec_num": "2." }, { "text": ", where (a) W is the set of nodes, i.e. 
word tokens in the input string, ordered by a linear precedence relation <, (b) A is a set of labeled arcs (w i , r, w j ), where w i , w j \u2208 W and r \u2208 R, (c) for every w j \u2208 W , there is at most one arc (w i , r, w j ) \u2208 A. If (w i , r, w j ) \u2208 A, we say that w i is the head of w j and w j a dependent of w i . In the following, we use the notation w i r \u2192 w j to mean that (w i , r, w j ) \u2208 A; we also use w i \u2192 w j to denote an arc with unspecified label and w i \u2192 * w j for the reflexive and transitive closure of the (unlabeled) arc relation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A dependency graph for a string of words", "sec_num": "2." }, { "text": "The dependency graph in Figure 1 satisfies all the defining conditions above, but it fails to satisfy the condition of projectivity (Kahane et al., 1998):", "cite_spans": [ { "start": 132, "end": 152, "text": "(Kahane et al., 1998", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 24, "end": 32, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "A dependency graph for a string of words", "sec_num": "2." }, { "text": "1. An arc w i \u2192 w k is projective iff, for every word w j occurring between w i and w k in the string (w i < w j < w k or w i > w j > w k ), w i \u2192 * w j . 2. A dependency graph D = (W, A) is projective iff every arc in A is projective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A dependency graph for a string of words", "sec_num": "2."
}, { "text": "The arc connecting the head jedna (one) to the dependent Z (out-of) spans the token je (is), which is not dominated by jedna. As observed by Kahane et al. (1998) , any (nonprojective) dependency graph can be transformed into a projective one by a lifting operation, which replaces each non-projective arc w j \u2192 w k by a projective arc w i \u2192 w k such that w i \u2192 * w j holds in the original graph. Here we use a slightly different notion of lift, applying to individual arcs and moving their head upwards one step at a time:", "cite_spans": [ { "start": 141, "end": 161, "text": "Kahane et al. (1998)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "A dependency graph for a string of words", "sec_num": "2." }, { "text": "LIFT(w j \u2192 w k ) = w i \u2192 w k if w i \u2192 w j undefined otherwise", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A dependency graph for a string of words", "sec_num": "2." }, { "text": "Intuitively, lifting an arc makes the word w k dependent on the head w i of its original head w j (which is unique in a well-formed dependency graph), unless w j is a root in which case the operation is undefined (but then w j \u2192 w k is necessarily projective if the dependency graph is well-formed).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A dependency graph for a string of words", "sec_num": "2." }, { "text": "Projectivizing a dependency graph by lifting nonprojective arcs is a nondeterministic operation in the general case. However, since we want to preserve as much of the original structure as possible, we are interested in finding a transformation that involves a minimal number of lifts. 
Even this may be nondeterministic, in case the graph contains several non-projective arcs whose lifts interact, but we use the following algorithm to construct a minimal projective transformation D' = (W, A') of a (non-projective) dependency graph D = (W, A):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A dependency graph for a string of words", "sec_num": "2." }, { "text": "PROJECTIVIZE(W, A) 1 A' \u2190 A 2 while (W, A') is non-projective 3 a \u2190 SMALLEST-NONP-ARC(A') 4 A' \u2190 (A' \u2212 {a}) \u222a {LIFT(a)} 5 return (W, A')", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A dependency graph for a string of words", "sec_num": "2." }, { "text": "The function SMALLEST-NONP-ARC returns the non-projective arc with the shortest distance from head to dependent (breaking ties from left to right). Applying the function PROJECTIVIZE to the graph in Figure 1 yields the graph in Figure 2, where the problematic arc pointing to Z has been lifted from the original head jedna to the ancestor je. Using the terminology of Kahane et al. (1998), we say that jedna is the syntactic head of Z, while je is its linear head in the projectivized representation.", "cite_spans": [ { "start": 369, "end": 389, "text": "Kahane et al. (1998)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 199, "end": 207, "text": "Figure 1", "ref_id": null }, { "start": 228, "end": 236, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "A dependency graph for a string of words", "sec_num": "2." }, { "text": "Unlike Kahane et al. (1998), we do not regard a projectivized representation as the final target of the parsing process. Instead, we want to apply an inverse transformation to recover the underlying (non-projective) dependency graph. In order to facilitate this task, we extend the set of arc labels to encode information about lifting operations. 
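As a rough illustration, the projectivization procedure above can be sketched in Python; the arc representation ((head, label, dependent) triples over 1-indexed word positions, with 0 as an artificial root) and all helper names are our own assumptions, not the original implementation:

```python
# Sketch of PROJECTIVIZE over arcs given as (head, label, dependent) triples.

def dominates(node, ancestor, parent):
    # reflexive-transitive closure of the head relation
    while node != 0 and node != ancestor:
        node = parent.get(node, 0)
    return node == ancestor

def is_projective_arc(arc, parent):
    head, _, dep = arc
    lo, hi = min(head, dep), max(head, dep)
    return all(dominates(w, head, parent) for w in range(lo + 1, hi))

def projectivize(arcs):
    arcs = list(arcs)
    while True:
        parent = {d: h for h, _, d in arcs}
        nonp = [a for a in arcs if not is_projective_arc(a, parent)]
        if not nonp:
            return arcs
        # SMALLEST-NONP-ARC: shortest head-dependent distance, ties left to right
        a = min(nonp, key=lambda x: (abs(x[0] - x[2]), min(x[0], x[2])))
        head, label, dep = a
        # LIFT: reattach the dependent to the head of its head
        arcs[arcs.index(a)] = (parent[head], label, dep)
```

Lifting the smallest non-projective arc first keeps the number of lifts low in the typical case where lifts do not interact. 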
In principle, it would be possible to encode the exact position of the syntactic head in the label of the arc from the linear head, but this would give a potentially infinite set of arc labels and would make the training of the parser very hard. In practice, we can therefore expect a trade-off such that increasing the amount of information encoded in arc labels will cause an increase in the accuracy of the inverse transformation but a decrease in the accuracy with which the parser can construct the labeled representations. To explore this trade-off, we have performed experiments with three different encoding schemes (plus a baseline), which are described schematically in Table 1. The baseline simply retains the original labels for all arcs, regardless of whether they have been lifted or not, and the number of distinct labels is therefore simply the number n of distinct dependency types. 2 In the first encoding scheme, called Head, we use a new label d\u2191h for each lifted arc, where d is the dependency relation between the syntactic head and the dependent in the non-projective representation, and h is the dependency relation that the syntactic head has to its own head in the underlying structure. Using this encoding scheme, the arc from je to Z in Figure 2 would be assigned the label AuxP\u2191Sb (signifying an AuxP that has been lifted from a Sb). In the second scheme, Head+Path, we in addition modify the label of every arc along the lifting path from the syntactic to the linear head so that if the original label is p the new label is p\u2193. Thus, the arc from je to jedna will be labeled Sb\u2193 (to indicate that there is a syntactic head below it). In the third and final scheme, denoted Path, we keep the extra information on path labels but drop the information about the syntactic head of the lifted arc, using the label d\u2191 instead of d\u2191h (AuxP\u2191 instead of AuxP\u2191Sb).", "cite_spans": [ { "start": 7, "end": 27, "text": "Kahane et al. 
(1998)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 1029, "end": 1036, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 1615, "end": 1623, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "A dependency graph for a string of words", "sec_num": "2." }, { "text": "As can be seen from the last column in Table 1 , both Head and Head+Path may theoretically lead to a quadratic increase in the number of distinct arc labels (Head+Path being worse than Head only by a constant factor), while the increase is only linear in the case of Path. On the other hand, we can expect Head+Path to be the most useful representation for reconstructing the underlying non-projective dependency graph. In approaching this problem, a variety of different methods are conceivable, including a more or less sophisticated use of machine learning. In the present study, we limit ourselves to an algorithmic approach, using a deterministic breadthfirst search. The details of the transformation procedure are slightly different depending on the encoding schemes:", "cite_spans": [], "ref_spans": [ { "start": 39, "end": 46, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "A dependency graph for a string of words", "sec_num": "2." }, { "text": "\u2022 Head: For every arc of the form w i d\u2191h \u2212\u2192 w n , we search the graph top-down, left-to-right, breadth-first starting at the head node w i . If we find an arc w l h \u2212\u2192 w m , called a target arc, we", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A dependency graph for a string of words", "sec_num": "2." }, { "text": "replace w i d\u2191h \u2212\u2192 w n by w m d \u2212\u2192 w n ; otherwise we replace w i d\u2191h \u2212\u2192 w n by w i d \u2212\u2192 w n (i.e.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A dependency graph for a string of words", "sec_num": "2." }, { "text": "we let the linear head be the syntactic head). 
In section 4 we evaluate these transformations with respect to projectivized dependency treebanks, and in section 5 they are applied to parser output. Before we turn to the evaluation, however, we need to introduce the data-driven dependency parser used in the latter experiments. (Table 2: Features used in predicting the next parser action.)", "cite_spans": [], "ref_spans": [ { "start": 205, "end": 212, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "A dependency graph for a string of words", "sec_num": "2." }, { "text": "In the experiments below, we employ a data-driven deterministic dependency parser producing labeled projective dependency graphs, 3 previously tested on Swedish and English (Nivre and Scholz, 2004). The parser builds dependency graphs by traversing the input from left to right, using a stack to store tokens that are not yet complete with respect to their dependents. At each point during the derivation, the parser has a choice between pushing the next input token onto the stack -with or without adding an arc from the token on top of the stack to the token pushed -and popping a token from the stack -with or without adding an arc from the next input token to the token popped. More details on the parsing algorithm can be found in Nivre (2003). The choice between different actions is in general nondeterministic, and the parser relies on a memory-based classifier, trained on treebank data, to predict the next action based on features of the current parser configuration. Table 2 shows the features used in the current version of the parser. At each point during the derivation, the prediction is based on six word tokens: the two topmost tokens on the stack, and the next four input tokens. 
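The four choices just described can be rendered schematically as follows; the action names and the oracle interface are placeholders for exposition (in the actual parser, each action is predicted by the classifier from the features described below):

```python
# Schematic stack-based parser loop with four actions; arcs are
# (head, dependent) pairs and next_action is a placeholder oracle.

def parse(tokens, next_action):
    stack, arcs, i = [], [], 0
    while i < len(tokens):
        action = next_action(stack, tokens[i:], arcs)
        if action == 'right-arc':    # push, adding arc from stack top to token
            arcs.append((stack[-1], tokens[i]))
            stack.append(tokens[i]); i += 1
        elif action == 'shift':      # push without adding an arc
            stack.append(tokens[i]); i += 1
        elif action == 'left-arc':   # pop, adding arc from next input to token
            arcs.append((tokens[i], stack.pop()))
        else:                        # 'reduce': pop without adding an arc
            stack.pop()
    return arcs
```
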
For each token, three types of features may be taken into account: the word form; the part-of-speech assigned by an automatic tagger; and labels on previously assigned dependency arcs involving the token -the arc from its head and the arcs to its leftmost and rightmost dependent, respectively. Except for the leftmost dependent of the next input token, dependency type features are limited to tokens on the stack.", "cite_spans": [ { "start": 173, "end": 197, "text": "(Nivre and Scholz, 2004)", "ref_id": "BIBREF23" }, { "start": 737, "end": 749, "text": "Nivre (2003)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 980, "end": 987, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Memory-Based Dependency Parsing", "sec_num": "3" }, { "text": "The prediction based on these features is a k-nearest neighbor classification, using the IB1 algorithm and k = 5, the modified value difference metric (MVDM) and class voting with inverse distance weighting, as implemented in the TiMBL software package (Daelemans et al., 2003). More details on the memory-based prediction can be found in Nivre and Scholz (2004).", "cite_spans": [ { "start": 252, "end": 276, "text": "(Daelemans et al., 2003)", "ref_id": "BIBREF6" }, { "start": 343, "end": 366, "text": "Nivre and Scholz (2004)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Memory-Based Dependency Parsing", "sec_num": "3" }, { "text": "The first experiment uses data from two dependency treebanks. The Prague Dependency Treebank (PDT) consists of more than 1M words of newspaper text, annotated on three levels: the morphological, analytical and tectogrammatical levels (Haji\u010d, 1998). Our experiments all concern the analytical annotation, and the first experiment is based only on the training part. The Danish Dependency Treebank (DDT) comprises about 100K words of text selected from the Danish PAROLE corpus, with annotation of primary and secondary dependencies (Kromann, 2003). 
The entire treebank is used in the experiment, but only primary dependencies are considered. 4 In all experiments, punctuation tokens are included in the data but omitted in evaluation scores.", "cite_spans": [ { "start": 234, "end": 247, "text": "(Haji\u010d, 1998)", "ref_id": "BIBREF13" }, { "start": 532, "end": 547, "text": "(Kromann, 2003)", "ref_id": "BIBREF20" }, { "start": 643, "end": 644, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiment 1: Treebank Transformation", "sec_num": "4" }, { "text": "In the first part of the experiment, dependency graphs from the treebanks were projectivized using the algorithm described in section 2. As shown in Table 3, the proportion of sentences containing some non-projective dependency ranges from about 15% in DDT to almost 25% in PDT. However, the overall percentage of non-projective arcs is less than 2% in PDT and less than 1% in DDT. Table 3 also shows the distribution of non-projective arcs with respect to the number of lifts required. It is worth noting that, although non-projective constructions are less frequent in DDT than in PDT, they seem to be more deeply nested, since only about 80% can be projectivized with a single lift, while almost 95% of the non-projective arcs in PDT only require a single lift.", "cite_spans": [], "ref_spans": [ { "start": 149, "end": 156, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 383, "end": 390, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Experiment 1: Treebank Transformation", "sec_num": "4" }, { "text": "In the second part of the experiment, we applied the inverse transformation based on breadth-first search under the three different encoding schemes. The results are given in Table 4. As expected, the most informative encoding, Head+Path, gives the highest accuracy with over 99% of all non-projective arcs being recovered correctly in both data sets. 
However, it can be noted that the results for the least informative encoding, Path, are almost comparable, while the third encoding, Head, gives substantially worse results for both data sets. We also see that the increase in the size of the label sets for Head and Head+Path is far below the theoretical upper bounds given in Table 1 . The increase is generally higher for PDT than for DDT, which indicates a greater diversity in non-projective constructions.", "cite_spans": [], "ref_spans": [ { "start": 175, "end": 182, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 680, "end": 687, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experiment 1: Treebank Transformation", "sec_num": "4" }, { "text": "The second experiment is limited to data from PDT. 5 The training part of the treebank was projectivized under different encoding schemes and used to train memory-based dependency parsers, which were run on the test part of the treebank, consisting of 7,507 sentences and 125,713 tokens. 6 The inverse transformation was applied to the output of the parsers and the result compared to the gold standard test set. Table 5 shows the overall parsing accuracy attained with the three different encoding schemes, compared to the baseline (no special arc labels) and to training directly on non-projective dependency graphs. Evaluation metrics used are Attachment Score (AS), i.e. the proportion of tokens that are attached to the correct head, and Exact Match (EM), i.e. the proportion of sentences for which the dependency graph exactly matches the gold standard. 
In the labeled version of these metrics (L), both heads and arc labels must be correct, while the unlabeled version (U) only considers heads.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 2: Memory-Based Parsing", "sec_num": "5" }, { "text": "The first thing to note is that projectivizing helps in itself, even if no encoding is used, as seen from the fact that the projective baseline outperforms the non-projective training condition by more than half a percentage point on attachment score, although the gain is much smaller with respect to exact match. The second main result is that the pseudo-projective approach to parsing (using special arc labels to guide an inverse transformation) gives a further improvement of about one percentage point on attachment score. With respect to exact match, the improvement is even more noticeable, which shows quite clearly that even if non-projective dependencies are rare on the token level, they are nevertheless important for getting the global syntactic structure correct.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 2: Memory-Based Parsing", "sec_num": "5" }, { "text": "All improvements over the baseline are statistically significant beyond the 0.01 level. (Table 6: Precision, recall and F-measure for non-projective arcs.) By contrast, when we turn to a comparison of the three encoding schemes it is hard to find any significant differences, and the overall impression is that it makes little or no difference which encoding scheme is used, as long as there is some indication of which words are assigned their linear head instead of their syntactic head by the projective parser. 
This may seem surprising, given the experiments reported in section 4, but the explanation is probably that the non-projective dependencies that can be recovered at all are of the simple kind that only requires a single lift, where the encoding of path information is often redundant. It is likely that the more complex cases, where path information could make a difference, are beyond the reach of the parser in most cases.", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 96, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Experiment 2: Memory-Based Parsing", "sec_num": "5" }, { "text": "However, if we consider precision, recall and F-measure on non-projective dependencies only, as shown in Table 6, some differences begin to emerge. The most informative scheme, Head+Path, gives the highest scores, although with respect to Head the difference is not statistically significant, while the least informative scheme, Path -with almost the same performance on treebank transformation -is significantly lower (p < 0.01). On the other hand, given that all schemes have similar parsing accuracy overall, this means that the Path scheme is the least likely to introduce errors on projective arcs.", "cite_spans": [], "ref_spans": [ { "start": 104, "end": 111, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Experiment 2: Memory-Based Parsing", "sec_num": "5" }, { "text": "The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers. Although the best published result for the Collins parser is 80% UAS (Collins, 1999), this parser reaches 82% when trained on the entire training data set, and an adapted version of Charniak's parser (Charniak, 2000) performs at 84% (Jan Haji\u010d, pers. comm.). 
However, the accuracy is considerably higher than previously reported results for robust non-projective parsing of Czech, with a best performance of 73% UAS (Holan, 2004) .", "cite_spans": [ { "start": 197, "end": 212, "text": "(Collins, 1999)", "ref_id": "BIBREF4" }, { "start": 329, "end": 345, "text": "(Charniak, 2000)", "ref_id": "BIBREF2" }, { "start": 545, "end": 558, "text": "(Holan, 2004)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment 2: Memory-Based Parsing", "sec_num": "5" }, { "text": "Compared to related work on the recovery of long-distance dependencies in constituency-based parsing, our approach is similar to that of Dienes and Dubey (2003) in that the processing of non-local dependencies is partly integrated in the parsing process, via an extension of the set of syntactic categories, whereas most other approaches rely on postprocessing only. However, while Dienes and Dubey recognize empty categories in a pre-processing step and only let the parser find their antecedents, we use the parser both to detect dislocated dependents and to predict either the type or the location of their syntactic head (or both) and use post-processing only to transform the graph in accordance with the parser's analysis.", "cite_spans": [ { "start": 137, "end": 160, "text": "Dienes and Dubey (2003)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment 2: Memory-Based Parsing", "sec_num": "5" }, { "text": "We have presented a new method for non-projective dependency parsing, based on a combination of data-driven projective dependency parsing and graph transformation techniques. 
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The dependency graph has been modified to make the final period a dependent of the main verb instead of being a dependent of a special root node for the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that this is a baseline for the parsing experiment only (Experiment 2). For Experiment 1 it is meaningless as a baseline, since it would result in 0% accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The graphs satisfy all the well-formedness conditions given in section 2 except (possibly) connectedness. For robustness reasons, the parser may output a set of dependency trees instead of a single tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "If secondary dependencies had been included, the dependency graphs would not have satisfied the well-formedness conditions formulated in section 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Preliminary experiments using data from DDT indicated that the limited size of the treebank creates a severe sparse data problem with respect to non-projective constructions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The part-of-speech tagging used in both training and testing was the uncorrected output of an HMM tagger distributed with the treebank; cf.Haji\u010d et al. 
(2001a).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported in part by the Swedish Research Council (621-2002-4207). Memory-based classifiers for the experiments were created using TiMBL (Daelemans et al., 2003) . Special thanks to Jan Haji\u010d and Matthias Trautner Kromann for assistance with the Czech and Danish data, respectively, and to Jan Haji\u010d, Tom\u00e1\u0161 Holan, Dan Zeman and three anonymous reviewers for valuable comments on a preliminary version of the paper.", "cite_spans": [ { "start": 151, "end": 175, "text": "(Daelemans et al., 2003)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Long-distance dependency resolution in automatically acquired wide-coverage PCFG-based LFG approximations", "authors": [ { "first": "A", "middle": [], "last": "Cahill", "suffix": "" }, { "first": "M", "middle": [], "last": "Burke", "suffix": "" }, { "first": "R", "middle": [], "last": "O'donovan", "suffix": "" }, { "first": "J", "middle": [], "last": "Van Genabith", "suffix": "" }, { "first": "A", "middle": [], "last": "Way", "suffix": "" } ], "year": 2004, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cahill, A., Burke, M., O'Donovan, R., Van Genabith, J. and Way, A. 2004. Long-distance dependency resolution in automatically acquired wide-coverage PCFG-based LFG ap- proximations. In Proceedings of ACL.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Using linguistic principles to recover empty categories", "authors": [ { "first": "R", "middle": [], "last": "Campbell", "suffix": "" } ], "year": 2004, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Campbell, R. 2004. 
Using linguistic principles to recover empty categories. In Proceedings of ACL.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A maximum-entropy-inspired parser", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2000, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charniak, E. 2000. A maximum-entropy-inspired parser. In Proceedings of NAACL.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A statistical parser for Czech", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" }, { "first": "J", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "E", "middle": [], "last": "Brill", "suffix": "" }, { "first": "L", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "C", "middle": [], "last": "Tillmann", "suffix": "" } ], "year": 1999, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins, M., Haji\u010d, J., Brill, E., Ramshaw, L. and Tillmann, C. 1999. A statistical parser for Czech. In Proceedings of ACL.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Head-Driven Statistical Models for Natural Language Parsing", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins, M. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Parsing discontinuous constituents in dependency grammar", "authors": [ { "first": "M", "middle": [ "A" ], "last": "Covington", "suffix": "" } ], "year": 1990, "venue": "Computational Linguistics", "volume": "16", "issue": "", "pages": "234--236", "other_ids": {}, "num": null, "urls": [], "raw_text": "Covington, M. A. 
1990. Parsing discontinuous constituents in dependency grammar. Computational Linguistics, 16:234- 236.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "TiMBL: Tilburg Memory Based Learner, version 5.0, Reference Guide", "authors": [ { "first": "W", "middle": [], "last": "Daelemans", "suffix": "" }, { "first": "J", "middle": [], "last": "Zavrel", "suffix": "" }, { "first": "K", "middle": [], "last": "Van Der Sloot", "suffix": "" }, { "first": "", "middle": [], "last": "Van Den", "suffix": "" }, { "first": "A", "middle": [], "last": "Bosch", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daelemans, W., Zavrel, J., van der Sloot, K. and van den Bosch, A. 2003. TiMBL: Tilburg Memory Based Learner, version 5.0, Reference Guide. Technical Report ILK 03-10, Tilburg University, ILK.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Deep syntactic processing by combining shallow methods", "authors": [ { "first": "P", "middle": [], "last": "Dienes", "suffix": "" }, { "first": "A", "middle": [], "last": "Dubey", "suffix": "" } ], "year": 2003, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dienes, P. and Dubey, A. 2003. Deep syntactic processing by combining shallow methods. In Proceedings of ACL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Topological dependency trees: A constraint-based account of linear precedence", "authors": [ { "first": "D", "middle": [], "last": "Duchier", "suffix": "" }, { "first": "R", "middle": [], "last": "Debusmann", "suffix": "" } ], "year": 2001, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duchier, D. and Debusmann, R. 2001. Topological dependency trees: A constraint-based account of linear precedence. 
In Proceedings of ACL.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Three new probabilistic models for dependency parsing: An exploration", "authors": [ { "first": "J", "middle": [ "M" ], "last": "Eisner", "suffix": "" } ], "year": 1996, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eisner, J. M. 1996. Three new probabilistic models for depen- dency parsing: An exploration. In Proceedings of COLING.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A broad-coverage parser for German based on defeasible constraints", "authors": [ { "first": "K", "middle": [], "last": "Foth", "suffix": "" }, { "first": "M", "middle": [], "last": "Daum", "suffix": "" }, { "first": "W", "middle": [], "last": "Menzel", "suffix": "" } ], "year": 2004, "venue": "Proceedings of KONVENS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Foth, K., Daum, M. and Menzel, W. 2004. A broad-coverage parser for German based on defeasible constraints. In Pro- ceedings of KONVENS.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Serial combination of rules and statistics: A case study in Czech tagging", "authors": [ { "first": "J", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "P", "middle": [], "last": "Krbec", "suffix": "" }, { "first": "K", "middle": [], "last": "Oliva", "suffix": "" }, { "first": "P", "middle": [], "last": "Kveton", "suffix": "" }, { "first": "V", "middle": [], "last": "Petkevic", "suffix": "" } ], "year": 2001, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haji\u010d, J., Krbec, P., Oliva, K., Kveton, P. and Petkevic, V. 2001. Serial combination of rules and statistics: A case study in Czech tagging. 
In Proceedings of ACL.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Building a syntactically annotated corpus: The Prague Dependency Treebank", "authors": [ { "first": "J", "middle": [], "last": "Haji\u010d", "suffix": "" } ], "year": 1998, "venue": "Issues of Valency and Meaning", "volume": "", "issue": "", "pages": "106--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haji\u010d, J. 1998. Building a syntactically annotated corpus: The Prague Dependency Treebank. In Issues of Valency and Meaning, pages 106-132. Karolinum.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Dependency unification grammar", "authors": [ { "first": "P", "middle": [], "last": "Hellwig", "suffix": "" } ], "year": 2003, "venue": "Dependency and Valency", "volume": "", "issue": "", "pages": "593--635", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hellwig, P. 2003. Dependency unification grammar. In Depen- dency and Valency, pages 593-635. Walter de Gruyter.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Word-order relaxations and restrictions within a dependency grammar", "authors": [ { "first": "T", "middle": [], "last": "Holan", "suffix": "" }, { "first": "V", "middle": [], "last": "Kubo\u0148", "suffix": "" }, { "first": "M", "middle": [], "last": "Pl\u00e1tek", "suffix": "" } ], "year": 2001, "venue": "Proceedings of IWPT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Holan, T., Kubo\u0148, V. and Pl\u00e1tek, M. 2001. Word-order re- laxations and restrictions within a dependency grammar. In Proceedings of IWPT.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Tvorba zavislostniho syntaktickeho analyzatoru", "authors": [ { "first": "T", "middle": [], "last": "Holan", "suffix": "" } ], "year": 2004, "venue": "Proceedings of MIS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Holan, T. 2004. 
Tvorba zavislostniho syntaktickeho analyza- toru. In Proceedings of MIS'2004.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Enriching the output of a parser using memory-based learning", "authors": [ { "first": "V", "middle": [], "last": "Jijkoun", "suffix": "" }, { "first": "M", "middle": [], "last": "De Rijke", "suffix": "" } ], "year": 2004, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jijkoun, V. and de Rijke, M. 2004. Enriching the output of a parser using memory-based learning. In Proceedings of ACL.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A simple pattern-matching algorithm for recovering empty nodes and their antecedents", "authors": [ { "first": "M", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2002, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johnson, M. 2002. A simple pattern-matching algorithm for re- covering empty nodes and their antecedents. In Proceedings of ACL.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Pseudoprojectivity: A polynomially parsable non-projective dependency grammar", "authors": [ { "first": "S", "middle": [], "last": "Kahane", "suffix": "" }, { "first": "A", "middle": [], "last": "Nasr", "suffix": "" }, { "first": "O", "middle": [], "last": "Rambow", "suffix": "" } ], "year": 1998, "venue": "Proceedings of ACL-COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kahane, S., Nasr, A. and Rambow, O. 1998. Pseudo- projectivity: A polynomially parsable non-projective depen- dency grammar. 
In Proceedings of ACL-COLING.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The Danish Dependency Treebank and the DTAG treebank tool", "authors": [ { "first": "M", "middle": [ "T" ], "last": "Kromann", "suffix": "" } ], "year": 2003, "venue": "Proceedings of TLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kromann, M. T. 2003. The Danish Dependency Treebank and the DTAG treebank tool. In Proceedings of TLT 2003.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Deep dependencies from context-free statistical parsers: Correcting the surface dependency approximation", "authors": [ { "first": "R", "middle": [], "last": "Levy", "suffix": "" }, { "first": "C", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2004, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Levy, R. and Manning, C. 2004. Deep dependencies from context-free statistical parsers: Correcting the surface depen- dency approximation. In Proceedings of ACL.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Dependency Syntax: Theory and Practice", "authors": [ { "first": "I", "middle": [], "last": "Mel'\u010duk", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mel'\u010duk, I. 1988. Dependency Syntax: Theory and Practice. State University of New York Press.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Deterministic dependency parsing of English text", "authors": [ { "first": "J", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "M", "middle": [], "last": "Scholz", "suffix": "" } ], "year": 2004, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nivre, J. and Scholz, M. 2004. Deterministic dependency pars- ing of English text. 
In Proceedings of COLING.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Memory-based dependency parsing", "authors": [ { "first": "J", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "J", "middle": [], "last": "Hall", "suffix": "" }, { "first": "J", "middle": [], "last": "Nilsson", "suffix": "" } ], "year": 2004, "venue": "Proceedings of CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nivre, J., Hall, J. and Nilsson, J. 2004. Memory-based depen- dency parsing. In Proceedings of CoNLL.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "An efficient algorithm for projective dependency parsing", "authors": [ { "first": "J", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2003, "venue": "Proceedings of IWPT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nivre, J. 2003. An efficient algorithm for projective depen- dency parsing. In Proceedings of IWPT.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Building a Turkish treebank", "authors": [ { "first": "K", "middle": [], "last": "Oflazer", "suffix": "" }, { "first": "B", "middle": [], "last": "Say", "suffix": "" }, { "first": "D", "middle": [ "Z" ], "last": "Hakkani-T\u00fcr", "suffix": "" }, { "first": "G", "middle": [], "last": "T\u00fcr", "suffix": "" } ], "year": 2003, "venue": "Treebanks: Building and Using Parsed Corpora", "volume": "", "issue": "", "pages": "261--277", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oflazer, K., Say, B., Hakkani-T\u00fcr, D. Z. and T\u00fcr, G. 2003. Building a Turkish treebank. In Treebanks: Building and Using Parsed Corpora, pages 261-277. 
Kluwer Academic Publishers.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Dependency parsing with an extended finitestate approach", "authors": [ { "first": "K", "middle": [], "last": "Oflazer", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "", "pages": "515--544", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oflazer, K. 2003. Dependency parsing with an extended finite- state approach. Computational Linguistics, 29:515-544.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Parsing English with a link grammar", "authors": [ { "first": "D", "middle": [], "last": "Sleator", "suffix": "" }, { "first": "D", "middle": [], "last": "Temperley", "suffix": "" } ], "year": 1993, "venue": "Proceedings of IWPT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sleator, D. and Temperley, D. 1993. Parsing English with a link grammar. In Proceedings of IWPT.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "A non-projective dependency parser", "authors": [ { "first": "P", "middle": [], "last": "Tapanainen", "suffix": "" }, { "first": "T", "middle": [], "last": "J\u00e4rvinen", "suffix": "" } ], "year": 1997, "venue": "Proceedings of ANLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tapanainen, P. and J\u00e4rvinen, T. 1997. A non-projective depen- dency parser. In Proceedings of ANLP.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "A statistical constraint dependency grammar (CDG) parser", "authors": [ { "first": "W", "middle": [], "last": "Wang", "suffix": "" }, { "first": "M", "middle": [ "P" ], "last": "Harper", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Workshop in Incremental Parsing (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang, W. and Harper, M. P. 2004. 
A statistical constraint dependency grammar (CDG) parser. In Proceedings of the Workshop in Incremental Parsing (ACL).", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Statistical dependency analysis with support vector machines", "authors": [ { "first": "H", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Y", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2003, "venue": "Proceedings of IWPT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yamada, H. and Matsumoto, Y. 2003. Statistical dependency analysis with support vector machines. In Proceedings of IWPT.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Head+Path: Same as Head, but the search only follows arcs of the form w j p\u2193 \u2212\u2192 w k and a target arc must have the form w l h\u2193 \u2212\u2192 w m ; if no target arc is found, Head is used as backoff. \u2022 Path: Same as Head+Path, but a target arc must have the form w l p\u2193 \u2212\u2192 w m and no outgoing arcs of the form w m p \u2193 \u2212\u2192 w o ; no backoff.", "type_str": "figure", "uris": null, "num": null }, "TABREF2": { "text": "Encoding schemes (d = dependent, h = syntactic head, p = path; n = number of dependency types)", "content": "
Scheme       d      p     # Labels
Baseline     d      p     n
Head         d↑h    p     n(n + 1)
Head+Path    d↑h    p↓    2n(n + 1)
Path         d↑     p↓    4n
", "type_str": "table", "html": null, "num": null }, "TABREF4": { "text": "Non-projective sentences and arcs in PDT and DDT (NonP = non-projective)", "content": "
Data set                    Head         H+P          Path
PDT training (28 labels)    92.3 (230)   99.3 (314)   97.3 (84)
DDT total (54 labels)       92.3 (123)   99.8 (147)   98.3 (99)
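The percentages in this table are exact-recovery rates over the non-projective arcs. A minimal sketch of the metric, assuming arcs are represented as (head, dependent, label) triples with hypothetical labels:

```python
def recovery(gold_arcs, parsed_arcs):
    """Percentage of gold non-projective arcs recovered correctly:
    an arc counts only if head, dependent, and label all match."""
    gold = set(gold_arcs)
    return 100.0 * len(gold & set(parsed_arcs)) / len(gold)

# Toy example: 3 of 4 gold arcs recovered (one wrong dependent).
gold = [(0, 2, "Obj"), (5, 1, "Adv"), (3, 6, "Sb"), (4, 7, "Atr")]
out  = [(0, 2, "Obj"), (5, 1, "Adv"), (3, 6, "Sb"), (4, 8, "Atr")]
print(f"{recovery(gold, out):.1f}")  # prints 75.0
```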
", "type_str": "table", "html": null, "num": null }, "TABREF5": { "text": "Percentage of non-projective arcs recovered correctly (number of labels in parentheses) columns in", "content": "", "type_str": "table", "html": null, "num": null } } } }