{
"paper_id": "D12-1044",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:23:45.780318Z"
},
"title": "Dynamic Programming for Higher Order Parsing of Gap-Minding Trees",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania Philadelphia",
"location": {
"postCode": "19104",
"region": "PA"
}
},
"email": "epitler@seas.upenn.edu"
},
{
"first": "Sampath",
"middle": [],
"last": "Kannan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania Philadelphia",
"location": {
"postCode": "19104",
"region": "PA"
}
},
"email": "kannan@seas.upenn.edu"
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania Philadelphia",
"location": {
"postCode": "19104",
"region": "PA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We introduce gap inheritance, a new structural property on trees, which provides a way to quantify the degree to which intervals of descendants can be nested. Based on this property, two new classes of trees are derived that provide a closer approximation to the set of plausible natural language dependency trees than some alternative classes of trees: unlike projective trees, a word can have descendants in more than one interval; unlike spanning trees, these intervals cannot be nested in arbitrary ways. The 1-Inherit class of trees has exactly the same empirical coverage of natural language sentences as the class of mildly nonprojective trees, yet the optimal scoring tree can be found in an order of magnitude less time. Gap-minding trees (the second class) have the property that all edges into an interval of descendants come from the same node, and thus an algorithm which uses only single intervals can produce trees in which a node has descendants in multiple intervals.",
"pdf_parse": {
"paper_id": "D12-1044",
"_pdf_hash": "",
"abstract": [
{
"text": "We introduce gap inheritance, a new structural property on trees, which provides a way to quantify the degree to which intervals of descendants can be nested. Based on this property, two new classes of trees are derived that provide a closer approximation to the set of plausible natural language dependency trees than some alternative classes of trees: unlike projective trees, a word can have descendants in more than one interval; unlike spanning trees, these intervals cannot be nested in arbitrary ways. The 1-Inherit class of trees has exactly the same empirical coverage of natural language sentences as the class of mildly nonprojective trees, yet the optimal scoring tree can be found in an order of magnitude less time. Gap-minding trees (the second class) have the property that all edges into an interval of descendants come from the same node, and thus an algorithm which uses only single intervals can produce trees in which a node has descendants in multiple intervals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Dependency parsers vary in what space of possible tree structures they search over when parsing a sentence. One commonly used space is the set of projective trees, in which every node's descendants form a contiguous interval in the input sentence. Finding the optimal tree in the set of projective trees can be done efficiently (Eisner, 2000) , even when the score of a tree depends on higher order factors (McDonald and Pereira, 2006; Carreras, 2007; . However, the projectivity assumption is too strict for all natural language dependency trees; for example, only 63.6% of Dutch sentences from the CoNLL-X training set are projective (Table 1) .",
"cite_spans": [
{
"start": 328,
"end": 342,
"text": "(Eisner, 2000)",
"ref_id": "BIBREF9"
},
{
"start": 407,
"end": 435,
"text": "(McDonald and Pereira, 2006;",
"ref_id": "BIBREF17"
},
{
"start": 436,
"end": 451,
"text": "Carreras, 2007;",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 636,
"end": 645,
"text": "(Table 1)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "At the other end of the spectrum, some parsers search over all spanning trees, a class of structures much larger than the set of plausible linguistic structures. The maximum scoring directed spanning tree can be found efficiently when the score of a tree depends only on edge-based factors (McDonald et al., 2005b) . However, it is NP-hard to extend MST to include sibling or grandparent factors (McDonald and Pereira, 2006; McDonald and Satta, 2007) . MSTbased non-projective parsers that use higher order factors (Martins et al., 2009; , utilize different techniques than the basic MST algorithm. In addition, learning is done over a relaxation of the problem, so the inference procedures at training and at test time are not identical.",
"cite_spans": [
{
"start": 290,
"end": 314,
"text": "(McDonald et al., 2005b)",
"ref_id": "BIBREF20"
},
{
"start": 396,
"end": 424,
"text": "(McDonald and Pereira, 2006;",
"ref_id": "BIBREF17"
},
{
"start": 425,
"end": 450,
"text": "McDonald and Satta, 2007)",
"ref_id": "BIBREF18"
},
{
"start": 515,
"end": 537,
"text": "(Martins et al., 2009;",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose two new classes of trees between projective trees and the set of all spanning trees. These two classes provide a closer approximation to the set of plausible natural language dependency trees: unlike projective trees, a word can have descendants in more than one interval; unlike spanning trees, these intervals cannot be nested in arbitrary ways. We introduce gap inheritance, a new structural property on trees, which provides a way to quantify the degree to which these intervals can be nested. Different levels of gap inheritance define each of these two classes (Section 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The 1-Inherit class of trees (Section 4) has exactly the same empirical coverage (Table 1) of natural language sentences as the class of mildly non-projective trees (Bodirsky et al., 2005 ), yet the optimal scoring tree can be found in an order of magnitude less time (Section 4.1).",
"cite_spans": [
{
"start": 165,
"end": 187,
"text": "(Bodirsky et al., 2005",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 81,
"end": 90,
"text": "(Table 1)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Gap-minding trees (the second class) have the property that all edges into an interval of descendants come from the same node. Non-contiguous intervals are therefore decoupled given this single node, and thus an algorithm which uses only single intervals (as in projective parsing) can produce trees in which a node has descendants in multiple intervals (as in mildly non-projective parsing (G\u00f3mez-Rodr\u00edguez et al., 2011) ). A procedure for finding the optimal scoring tree in this space is given in Section 5, which can be searched in yet another order of magnitude faster than the 1-Inherit class. Unlike the class of spanning trees, it is still tractable to find the optimal tree in these new spaces when higher order factors are included. An extension which finds the optimal scoring gap-minding tree with scores over pairs of adjacent edges (grandparent scoring) is given in Section 6. These gapminding algorithms have been implemented in practice and empirical results are presented in Section 7.",
"cite_spans": [
{
"start": 391,
"end": 421,
"text": "(G\u00f3mez-Rodr\u00edguez et al., 2011)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we review some relevant definitions from previous work that characterize degrees of non-projectivity. We also review how well these definitions cover empirical data from six languages: Arabic, Czech, Danish, Dutch, Portuguese, and Swedish. These are the six languages whose CoNLL-X shared task data are either available open source 1 or from the LDC 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "A dependency tree is a rooted, directed spanning tree that represents a set of dependencies between words in a sentence. 3 The tree has one artificial root node and vertices that correspond to the words in an input sentence w 1 , w 2 ,...,w n . There is an edge from h to m if m depends on (or modifies) h. Definition 1. The projection of a node is the set of words in the subtree rooted at it (including itself).",
"cite_spans": [
{
"start": 121,
"end": 122,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "A tree is projective if, for every node in the tree, that node's projection forms a contiguous interval in the input sentence order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "A tree is non-projective if the above does not hold, i.e., there exists at least one word whose descendants do not form a contiguous interval.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Definition 2. A gap of a node v is a non-empty, maximal interval that does not contain any words in the projection of v but lies between words that are in the projection of v. The gap degree of a node is the number of gaps it has. The gap degree of a tree is the maximum of the gap degrees of its vertices. (Bodirsky et al., 2005) Note that a projective tree will have gap degree 0. Two subtrees interleave if there are vertices l 1 , r 1 from one subtree and l 2 , r 2 from the other such that",
"cite_spans": [
{
"start": 307,
"end": 330,
"text": "(Bodirsky et al., 2005)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "l 1 < l 2 < r 1 < r 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Definition 3. A tree is well-nested if no two disjoint subtrees interleave (Bodirsky et al., 2005) .",
"cite_spans": [
{
"start": 75,
"end": 98,
"text": "(Bodirsky et al., 2005)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Definition 4. A mildly non-projective tree has gap degree at most one and is well-nested.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
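{
"text": "To make Definitions 1-4 concrete, the following is a small illustrative sketch (ours, not from the paper) that checks them for a dependency tree given as a parent array; the function names and the parent-array encoding (parent[w] is the parent of word w, words are numbered 1..n, the artificial root is 0) are our own assumptions.\n\nfrom collections import defaultdict\n\ndef projections(parent):\n    # Definition 1: each word belongs to its own projection and to every ancestor's\n    proj = defaultdict(set)\n    for w in range(1, len(parent)):\n        a = w\n        while a != 0:\n            proj[a].add(w)\n            a = parent[a]\n    return proj\n\ndef gap_degree(proj, v):\n    # Definition 2: number of maximal empty intervals strictly inside v's projection span\n    words = sorted(proj[v])\n    return sum(1 for a, b in zip(words, words[1:]) if b > a + 1)\n\ndef interleave(s, t):\n    # two disjoint vertex sets interleave iff an s-t-s-t (or t-s-t-s) pattern occurs\n    labels = []\n    for pos in sorted(s | t):\n        lab = 's' if pos in s else 't'\n        if not labels or labels[-1] != lab:\n            labels.append(lab)\n    return len(labels) >= 4\n\ndef is_mildly_nonprojective(parent):\n    # Definition 4: gap degree at most one (Definition 2) and well-nested (Definition 3)\n    proj = projections(parent)\n    nodes = list(proj)\n    if any(gap_degree(proj, v) > 1 for v in nodes):\n        return False\n    return not any(interleave(proj[u], proj[v])\n                   for u in nodes for v in nodes\n                   if u < v and not (proj[u] & proj[v]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},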
{
"text": "Mildly non-projective trees are of both theoretical and practical interest, as they correspond to derivations in Lexicalized Tree Adjoining Grammar (Bodirsky et al., 2005) and cover the overwhelming majority of sentences found in treebanks for Czech and Danish (Kuhlmann and Nivre, 2006) . Table 1 shows the proportion of mildly nonprojective sentences for Arabic, Czech, Danish, Dutch, Portuguese, and Swedish, ranging from 95.4% of Portuguese sentences to 99.9% of Arabic sentences. 4 This definition covers a substantially larger set of sentences than projectivity does -an assumption of projectivity covers only 63.6% (Dutch) to 90.2% (Swedish) of examples (Table 1) .",
"cite_spans": [
{
"start": 148,
"end": 171,
"text": "(Bodirsky et al., 2005)",
"ref_id": "BIBREF0"
},
{
"start": 261,
"end": 287,
"text": "(Kuhlmann and Nivre, 2006)",
"ref_id": "BIBREF15"
},
{
"start": 485,
"end": 486,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 290,
"end": 297,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 661,
"end": 670,
"text": "(Table 1)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Empirically, natural language sentences seem to be mostly mildly non-projective trees, but mildly nonprojective trees are quite expensive to parse (O(n 7 ) (G\u00f3mez-Rodr\u00edguez et al., 2011)). The parsing complexity comes from the fact that the definition allows two non-contiguous intervals of a projection to be tightly coupled, with an unbounded number of edges passing back and forth between the two intervals; however, this type of structure seems unusual for natural language. We therefore investigate if we can define further structural properties that are both appropriate for describing natural language trees and which admit more efficient parsing algorithms. Let us first consider an example of a tree which both has gap degree at most one and satisfies wellnestedness, yet appears to be an unrealistic structure for a natural language syntactic tree. Consider a tree which is rooted at node x n+2 , which has one child, node",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap Inheritance",
"sec_num": "3"
},
{
"text": "x n+1 , whose projection is [x 1 , x n+1 ] \u222a [x n+3 , x 2n+2 ], with n children (x 1 , .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap Inheritance",
"sec_num": "3"
},
{
"text": ".., x n ), and each child x i has a child at x 2n\u2212i+3 . This tree is wellnested, has gap degree 1, but all n of x n+1 's children have edges into the other projection interval.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap Inheritance",
"sec_num": "3"
},
{
"text": "We introduce a further structural restriction in this section, and show that trees satisfying our new property can be parsed more efficiently with no drop in empirical coverage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap Inheritance",
"sec_num": "3"
},
{
"text": "Definition 5. A child is gap inheriting if its parent has gap degree 1 and it has descendants on both sides of its parent's gap. The inheritance degree of a node is the number of its children which inherit its gap. The inheritance degree of a tree is the maximum inheritance degree over all its nodes. Figure 1 gives examples of trees with varying degrees of gap inheritance. Each projection of a node with a gap is shown with two matching rectangles. If a child has a projection rectangle nested inside each of the parent's projection rectangles, then that child inherits the parent's gap. Figure 1 (a) shows a mildly projective tree (with inheritance degree 2), with both node 2 and node 11 inheriting their parent (node 3)'s gap (note that both the dashed and dotted rectangles each show up inside both of the solid rectangles). there is now only one pair of rectangles (the dotted ones) which show up in both of the solid ones. Figure 1 (c) shows a tree with inheritance degree 0: while there are gaps, each set of matching rectangles is contained within a single rectangle (projection interval) of its parent, i.e., the two dashed rectangles of node 2's projection are contained within the left interval of node 3; the two dotted rectangles of node 12's projection are contained within the right interval of node 3, etc.",
"cite_spans": [],
"ref_spans": [
{
"start": 302,
"end": 310,
"text": "Figure 1",
"ref_id": null
},
{
"start": 591,
"end": 599,
"text": "Figure 1",
"ref_id": null
},
{
"start": 932,
"end": 940,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Gap Inheritance",
"sec_num": "3"
},
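{
"text": "Continuing the illustrative sketch from Section 2 (again ours, not the paper's), the gap inheritance degree of Definition 5 can be computed from the same parent array and the projections() helper defined there; single_gap() and inheritance_degree() are hypothetical names.\n\nfrom collections import defaultdict\n\ndef single_gap(proj, v):\n    # the unique gap (lo, hi) of v if v has gap degree exactly 1, else None\n    words = sorted(proj[v])\n    gaps = [(a + 1, b - 1) for a, b in zip(words, words[1:]) if b > a + 1]\n    return gaps[0] if len(gaps) == 1 else None\n\ndef inheritance_degree(parent, proj):\n    children = defaultdict(list)\n    for w in range(1, len(parent)):\n        children[parent[w]].append(w)\n    degree = 0\n    for v in range(1, len(parent)):\n        gap = single_gap(proj, v)\n        if gap is None:\n            continue   # gap inheritance is only defined for parents of gap degree 1\n        lo, hi = gap\n        inheriting = sum(1 for c in children[v]\n                         if min(proj[c]) < lo and max(proj[c]) > hi)\n        degree = max(degree, inheriting)\n    return degree",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap Inheritance",
"sec_num": "3"
},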
{
"text": "We now ask:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap Inheritance",
"sec_num": "3"
},
{
"text": "1. How often does gap inheritance occur in the parses of natural language sentences found in treebanks?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap Inheritance",
"sec_num": "3"
},
{
"text": "2. Furthermore, how often are there multiple gap inheriting children of the same node (inheritance degree at least two)? Table 1 shows what proportion of mildly nonprojective trees have the added property of gap inheritance degree 0 (Mild+0-Inherit) or have gap inheritance degree 1 (Mild+1-Inherit). Over all six languages, there are no examples of multiple gap inheritance -Mild+1-Inherit has exactly the same empirical coverage as the unrestricted set of mildly non-projective trees.",
"cite_spans": [],
"ref_spans": [
{
"start": 121,
"end": 128,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Gap Inheritance",
"sec_num": "3"
},
{
"text": "There are some reasons from syntactic theory why we might expect at most one child to inherit its parent's gap. Traditional Government and Binding theories of syntax (Chomsky, 1981) assume that there is an underlying projective (phrase structure) tree, and that gaps primarily arise through movement of (a) Mildly Non-Projective: The projections (set of descendants) of both node 2 (the dashed red rectangles) and node 11 (dotted magenta) appear in both of node 3's intervals (the solid blue rectangles). (b) Mild+1-Inherit: Only node 2 inherits node 3's gap: the dashed red rectangles appear in each of the two solid blue rectangles. (c) Mild+0-Inherit: Even though node 3 has children with gaps (node 2 and node 12), neither of them inherit node 3's gap. There are several nodes with gaps, but every node with a gap is properly contained within just one of its parent's intervals.",
"cite_spans": [
{
"start": 166,
"end": 181,
"text": "(Chomsky, 1981)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mild+1-Inherit Trees",
"sec_num": "4"
},
{
"text": "Figure 1: Rectangles that match in color and style indicate the two projection intervals of a node, separated by a gap. In all three trees, node 3's two projection intervals are shown in the two solid blue rectangles. The number of children which inherit its gap vary, however; in 1(a), two children have descendants within both sides; in 1(b) only one child has descendants on both sides; in 1(c), none of its children do.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mild+1-Inherit Trees",
"sec_num": "4"
},
{
"text": "subtrees (constituents). One of the fundamental assumptions of syntactic theory is that movement is upward in the phrase structure tree. 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mild+1-Inherit Trees",
"sec_num": "4"
},
{
"text": "Consider one movement operation and its effect on the gap degree of all other nodes in the tree: (a) it should have no effect on the gap degree of the nodes in the subtree itself, (b) it can create a gap for an ancestor node if it moves out of its projection interval, and (c) it can create a gap for a non-ancestor node if it moves in to its projection interval. Now consider which cases can lead to gap inheritance: in case (b), there is a single path from the ancestor to the root of the subtree, so the parent of the subtree will have no gap inheritance and any higher ancestors will have a single child inherit the gap created by this movement. In case (c), it is possible for there to be multiple children that inherit this newly created gap if multiple children had descendents on both sides. However, the assumption of upward movement in the phrase structure tree should rule out movement into the projection interval of a non-ancestor. Therefore, under these syntactic assumptions, we would expect at most one child to inherit a parent's gap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mild+1-Inherit Trees",
"sec_num": "4"
},
{
"text": "Finding the optimal Mild+1-Inherit tree can be done by bottom-up constructing the tree for each node and its descendants. We can maintain subtrees with two intervals (two endpoints each) and one root (O(n 5 ) space). Consider the most complicated possible case: a parent that has a gap, a (single) child which inherits the gap, and additional children. An example of this is seen with the parent node 3 in Figure 1 ",
"cite_spans": [],
"ref_spans": [
{
"start": 406,
"end": 415,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parsing Mild+1-Inherit Trees",
"sec_num": "4.1"
},
{
"text": "(b).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Mild+1-Inherit Trees",
"sec_num": "4.1"
},
{
"text": "This subtree can be constructed by first starting with the child spanning the gap, updating its root index to be the parent, and then expanding the interval indices to the left and right to include the other children. In each case, only one index needs to be updated at a time, so the optimal tree can be found in O(n 6 ) time. In the Figure 1(b) example, the subtree rooted at 3 would be built by starting with the intervals [1, 2] \u222a [12, 13] rooted at 2, first adding the edge from 2 to 3 (so the root is updated to 3), then adding an edge from 3 to 4 to extend the left interval to [1, 5] , and then adding an edge from 3 to 11 to extend the right interval to [8, 13] . The subtree corresponds to the completed item [1, 5] \u222a [8, 13] rooted at 3.",
"cite_spans": [
{
"start": 585,
"end": 588,
"text": "[1,",
"ref_id": null
},
{
"start": 589,
"end": 591,
"text": "5]",
"ref_id": null
},
{
"start": 663,
"end": 666,
"text": "[8,",
"ref_id": null
},
{
"start": 667,
"end": 670,
"text": "13]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Mild+1-Inherit Trees",
"sec_num": "4.1"
},
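{
"text": "As a small illustration (ours, not the paper's pseudocode), the worked example above can be written as a trace over items of the form (left interval, right interval, root); each derivation step changes exactly one of the five indices, which is what keeps the search within O(n^6) time.\n\nitem = ((1, 2), (12, 13), 2)   # completed subtree rooted at 2 over [1,2] and [12,13]\nitem = ((1, 2), (12, 13), 3)   # add the edge 3 -> 2: only the root index changes\nitem = ((1, 5), (12, 13), 3)   # add the edge 3 -> 4: the left interval grows to [1,5]\nitem = ((1, 5), (8, 13), 3)    # add the edge 3 -> 11: the right interval grows to [8,13]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Mild+1-Inherit Trees",
"sec_num": "4.1"
},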
{
"text": "This procedure corresponds to G\u00f3mez-Rodr\u00edguez et al. (2011)'s O(n 7 ) algorithm for parsing mildly non-projective structures if the most expensive step (Combine Shrinking Gap Centre) is dropped; this step would only ever be needed if a parent node has more than one child inheriting its gap. This is also similar in spirit to the algorithm described in Satta and Schuler (1998) for parsing a restricted version of TAG, in which there are some limitations on adjunction operations into the spines of trees. 6 That algorithm has similar steps and items, with the root portion of the item replaced with a node in a phrase structure tree (which may be a nonterminal).",
"cite_spans": [
{
"start": 353,
"end": 377,
"text": "Satta and Schuler (1998)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Mild+1-Inherit Trees",
"sec_num": "4.1"
},
{
"text": "The algorithm in the previous section used O(n 5 ) space and O(n 6 ) time. While more efficient than parsing in the space of mildly projective trees, this is still probably not practically implementable. Part of the difficulty lies in the fact that gap inheritance causes the two non-contiguous projection intervals to be coupled.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap-minding Trees",
"sec_num": "5"
},
{
"text": "Definition 6. A tree is called gap-minding 7 if it has gap degree at most one, is well-nested, and has gap inheritance degree 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap-minding Trees",
"sec_num": "5"
},
{
"text": "Gap-minding trees still have good empirical coverage (between 90.4% for Dutch and 97.7% for Swedish). We now turn to the parsing of gapminding trees and show how a few consequences of its definition allow us to use items ranging over only one interval.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap-minding Trees",
"sec_num": "5"
},
{
"text": "In Figure 1(c) , notice how each rectangle has edges incoming from exactly one node. This is not unique to this example; all projection intervals in a gap-minding tree have incoming edges from exactly one node outside the interval.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 14,
"text": "Figure 1(c)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Gap-minding Trees",
"sec_num": "5"
},
{
"text": "Claim 1. Within a gap-minding tree, consider any node n with a gap (i.e., n's projection forms two non-contiguous intervals",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap-minding Trees",
"sec_num": "5"
},
{
"text": "[x i , x j ] \u222a [x k , x l ]).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap-minding Trees",
"sec_num": "5"
},
{
"text": "Let p be the parent of n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap-minding Trees",
"sec_num": "5"
},
{
"text": "1. For each of the intervals of n's projection: (a) If the interval contains n, the only edge incoming to that interval is from p to n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap-minding Trees",
"sec_num": "5"
},
{
"text": "6 That algorithm has a running time of O(Gn 5 ), where as written G would likely add a factor of n 2 with bilexical selectional preferences; this can be lowered to n using the same technique as in Eisner and Satta (2000) for non-restricted TAG.",
"cite_spans": [
{
"start": 197,
"end": 220,
"text": "Eisner and Satta (2000)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gap-minding Trees",
"sec_num": "5"
},
{
"text": "7 The terminology is a nod to the London Underground but imagines parents admonishing children to mind the gap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap-minding Trees",
"sec_num": "5"
},
{
"text": "(b) If the interval does not contain n, all edges incoming to that interval come from n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap-minding Trees",
"sec_num": "5"
},
{
"text": "([x j+1 , x k\u22121 ]):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "(a) If the interval contains p, then the only edge incoming is from p's parent to p (b) If the interval does not contain p, then all edges incoming to that interval come from p.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "As a consequence of the above, Proof. (Part 1): Assume there was a directed edge (x, y) such that y is inside a projection interval of n and x is not inside the same interval, and x = y = n. y is a descendant of n since it is contained in n's projection. Since there is a directed edge from x to y, x is y's parent, and thus x must also be a descendant of n and therefore in another of n's projection intervals. Since x and y are in different intervals, then whichever child of n that x and y are descended from would have inherited n's gap, leading to a contradiction. (Part 2): First, suppose there existed a set of nodes in n's gap which were not descended from p. Then p has a gap over these nodes. (p clearly has descendants on each side of the gap, because all descendants of n are also descendants of p). n, p's child, would then have descendants on both sides of p's gap, which would violate the property of no gap inheritance. It is also not possible for there to be edges incoming from other descendants of p outside the gap, as that would imply another child of p being ill-nested with respect to n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "[x i , x j ] \u222a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "From the above, we can build gap-minding trees using only single intervals, potentially with a single node outside of the interval. Our objective is to find the maximum scoring gap-minding tree, in which the score of a tree is the sum of the scores of its edges. Let Score(p, x) indicate the score of the directed edge from p to x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "Therefore, the main type of sub-problems we will use are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "1. C[i, j, p]: The maximum score of any gapminding tree, rooted at p, with vertices [i, j] \u222a {p} (p may or may not be within [i, j] ).",
"cite_spans": [
{
"start": 125,
"end": 131,
"text": "[i, j]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "This improves our space requirement, but not necessarily the time requirement. For example, if we built up the subtree in Figure 1(c) by concatenating the three intervals [1, 5] rooted at 3, [6, 7] rooted at 6, and [8, 13] rooted at 3, and add the edge 6 \u2192 3, we would still need 6 indices to describe this operation (the four interval endpoints and the two roots), and so we have not yet improved the running time over the Inherit-1 case.",
"cite_spans": [
{
"start": 191,
"end": 194,
"text": "[6,",
"ref_id": null
},
{
"start": 195,
"end": 197,
"text": "7]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 122,
"end": 133,
"text": "Figure 1(c)",
"ref_id": null
}
],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "By part 2, we can concatenate one interval of a child with its gap, knowing that the gap is entirely descended from the child's parent, and forget the concatenation split point between the parent's other descendants and this side of the child. This allows us to substitute all operations involving 6 indices with two operations involving just 5 indices. For example, in Figure 1(c) , we could first merge [6, 7] rooted at 6 with [8, 13] rooted at 3 to create an interval [6, 13] and say that it is descended from 6, with the rightmost side descended from its child 3. That step required 5 indices. The following step would merge this concatenated interval ([6, 13] rooted at 6 and 3) with [1, 5] rooted at 3. This step also requires only 5 indices.",
"cite_spans": [
{
"start": 471,
"end": 474,
"text": "[6,",
"ref_id": null
},
{
"start": 475,
"end": 478,
"text": "13]",
"ref_id": null
},
{
"start": 656,
"end": 664,
"text": "([6, 13]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 370,
"end": 381,
"text": "Figure 1(c)",
"ref_id": null
}
],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "Our helper subtype we make use of is then: Consider an optimum scoring gap-minding tree T rooted at p with vertices V = [i, j] \u222a {p} and edges E, where E = \u2205. The form of the dynamic program may depend on whether:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "2. D[i, j, p,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "\u2022 p is within (i, j) (I) or external to [i, j] (E) 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "8 In the discussion we will assume that p = i and p = j, since any optimum solution with V = [i, j] \u222a {i} and a root at i will be equivalent to V = [i + 1, j] \u222a {i} rooted at i (and similarly for p = j).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "We can exhaustively enumerate all possibilities for T by considering all valid combinations of the following binary cases:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "\u2022 p has a single child (S) or multiple children (M)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "\u2022 i and j are descended from the same child of p (C) or different children of p (D)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "Note that case (S/D) is not possible: i and j cannot be descended from different children of p if p has only a single child. We therefore need to find the maximum scoring tree over the three cases of S/C, M/C, and M/D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "Claim 2. Let T be the optimum scoring gapminding tree rooted at p with vertices V = [i, j] \u222a {p}. Then T and its score are derived from one of the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "S/C If p has a single child x in T , then if p \u2208 (i, j) (I), T 's score is Score(p, x) + C[i, p\u22121, x] + C[p + 1, j, x]; if p / \u2208 [i, j] (E), T 's score is Score(p, x) + C[i, j, x].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "M/C If p has multiple children in T and i and j are descended from the same child x in T , then there is a split point k such that T 's score is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "Score(p, x)+C[i, k, x]+D[k + 1, j, p, x, T]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "if x is on the left side of its own gap, and T 's score is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "Score(p, x) + C[k, j, x] + D[i, k \u2212 1, p, x, F] if",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "x is on the right side.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "M/D If p has multiple children in T and i and j are descended from different children in T , then there is a split point k such that T 's score is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "C[i, k, p] + C[k + 1, j, p].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "T has the maximum score over each of the above cases, for all valid choices of x and k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "Proof. Case S/C: If p has exactly one child x, then the tree can be decomposed into the edge from p to x and the subtree rooted at x. If p is outside the interval, then the maximum scoring such tree is clearly Score(p, x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "+ C[i, j, x].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "If p is inside, then x has a gap across p, and so using Claim 1, the maximum scoring tree rooted at p with a single child x has score of Score(p, x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "+ C[i, p \u2212 1, x] + C[p + 1, j, x].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "Case M/C: If there are multiple children and the endpoints are descended from the same child x, then the child x has to have gap degree 1. x itself is on either the left or right side of its gap. For the moment, assume x is in the left interval. By Claim 1, we can split up the score of the tree as the score of the edge from p to x (Score(p, x) ), the score of the subtree corresponding to the projection of x to the left of its gap (C[i, k, x] ), and the score of the subtrees rooted at p with its remaining children and the subtree rooted at x corresponding to the right side of x's projection (D[k + 1, j, p, x, T] ). The case in which x is on the right side of its gap is symmetric.",
"cite_spans": [],
"ref_spans": [
{
"start": 333,
"end": 345,
"text": "(Score(p, x)",
"ref_id": null
},
{
"start": 434,
"end": 445,
"text": "(C[i, k, x]",
"ref_id": null
},
{
"start": 597,
"end": 618,
"text": "(D[k + 1, j, p, x, T]",
"ref_id": null
}
],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "Case M/D: If there are multiple children and the endpoints are descended from different children of p, then there must exist a split point k that partitions the children of p into two non-empty sets, such that each child's projection is either entirely on the left or entirely on the right of the split point. We show one such split point to demonstrate that there always exists at least one. Let x be the child of p that i is descended from, and let x l and x r be x's leftmost and right descendants, respectively. 9 Consider all the children of p (whose projections taken together partition [i, j] \u2212 {p}). No child can have descendants both to the left of x r and to the right of x r , because otherwise that child and x would be ill-nested. Therefore we can split up the interval at x r to have two gap-minding trees, both rooted at p. The score of T is then the sum of the scores of the best subtree rooted at p over [i, k] (C[i, k, p] ) and the score of the best subtree rooted at p over",
"cite_spans": [
{
"start": 921,
"end": 927,
"text": "[i, k]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 928,
"end": 939,
"text": "(C[i, k, p]",
"ref_id": null
}
],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "[k+1, j] (C[k + 1, j, p]).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "The above cases cover all non-empty gapminding trees, so the maximum will be found.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "Using Claim 2 to Devise an Algorithm The above claim showed that any problem of type C can be decomposed into subproblems of types C and D. From the definition of D, any problem of type D can clearly be decomposed into two problems of type C -simply split the interval at the split point known to exist and assign p or x as the roots for each side of the interval, as prescribed by the boolean b:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "D(i, j, p, x, T) = max k C[i, k, p] + C[k + 1, j, x] D(i, j, p, x, F) = max k C[i, k, x] + C[k + 1, j, p]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
{
"text": "Algorithm 1 makes direct use of the above claims. Note that in every gap-minding tree referred to in the cases above, all vertices that were not the root formed a single interval. Algorithm 1 builds up trees in increasing sizes of [i, j] \u222a {p}. The tree in C[i, j, p] corresponds to the maximum of four subroutines: SingleChild (S/C), EndpointsDiff (M/D), EndsFromLeftChild (M/C), and EndsFrom-RightChild (M/C). The D subproblems are filled in with the subroutine Max2Subtrees, which uses the above discussion. The maximum score of any gapminding tree is then found in C[1, n, 0], and the tree itself can be found using backpointers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},
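{
"text": "The following is a compact memoized sketch (ours, not the paper's or MSTParser's implementation) of the recurrences behind Algorithm 1, following Claim 2 and the D decomposition above; the function max_gap_minding_tree and the score callback are assumed names, and backpointers for recovering the tree are omitted.\n\nfrom functools import lru_cache\n\ndef max_gap_minding_tree(n, score):\n    # score(p, c) is the score of the directed edge p -> c; words are 1..n, 0 is the root\n    NEG = float('-inf')\n\n    @lru_cache(maxsize=None)\n    def D(i, j, p, x, left_is_p):\n        # cover [i, j] with two gap-minding trees split at some k: the left part is\n        # rooted at p if left_is_p (else at x) and the right part at the other root\n        l_root, r_root = (p, x) if left_is_p else (x, p)\n        best = NEG\n        for k in range(i, j):\n            if i <= r_root <= k or k + 1 <= l_root <= j:\n                continue   # a designated root may not be an inner vertex of the other part\n            best = max(best, C(i, k, l_root) + C(k + 1, j, r_root))\n        return best\n\n    @lru_cache(maxsize=None)\n    def C(i, j, p):\n        # best gap-minding tree rooted at p with vertex set [i, j] union {p}\n        if i > j:\n            return 0.0\n        if p == i:      # footnote 8: a boundary root is equivalent to an external one\n            return C(i + 1, j, p)\n        if p == j:\n            return C(i, j - 1, p)\n        inside = i < p < j\n        best = NEG\n        for x in range(i, j + 1):\n            if x == p:\n                continue\n            # S/C: p has the single child x\n            if inside:\n                best = max(best, score(p, x) + C(i, p - 1, x) + C(p + 1, j, x))\n            else:\n                best = max(best, score(p, x) + C(i, j, x))\n            # M/C: both endpoints descend from x, whose gap holds p and its other children\n            for k in range(i, j):\n                if x <= k and not (i <= p <= k):        # x is left of its own gap\n                    best = max(best, score(p, x) + C(i, k, x) + D(k + 1, j, p, x, True))\n                if x > k and not (k + 1 <= p <= j):     # x is right of its own gap\n                    best = max(best, score(p, x) + C(k + 1, j, x) + D(i, k, p, x, False))\n        # M/D: the endpoints descend from different children of p\n        for k in range(i, j):\n            best = max(best, C(i, k, p) + C(k + 1, j, p))\n        return best\n\n    return C(1, n, 0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For the gap interval",
"sec_num": "2."
},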
{
"text": "If the input is assumed to be the complete graph (any word can have any other word as its parent), then the above algorithm takes O(n 5 ) time. The most expensive steps are M/C, which take O(n 2 ) time to fill in each of the O(n 3 ) C cells. and solving a D subproblem, which takes O(n) time on each of the O(n 4 ) possible such problems. Pruning: In practice, the set of edges considered (m) is not necessarily O(n 2 ). Many edges can be ruled out beforehand, either based on the distance in the sentence between the two words (Eisner and Smith, 2010) , the predictions of a local ranker (Martins et al., 2009) , or the marginals computed from a simpler parsing model (Carreras et al., 2008 ).",
"cite_spans": [
{
"start": 528,
"end": 552,
"text": "(Eisner and Smith, 2010)",
"ref_id": "BIBREF8"
},
{
"start": 589,
"end": 611,
"text": "(Martins et al., 2009)",
"ref_id": "BIBREF16"
},
{
"start": 669,
"end": 691,
"text": "(Carreras et al., 2008",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Runtime analysis",
"sec_num": "5.1"
},
{
"text": "If we choose a pruning strategy such that each word has at most k potential parents (incoming edges), then the running time drops to O(kn 4 ). The five indices in an M/C step were: i, j, k, p, and x. As there must be an edge from p to x, and x only has k possible parents, there are now only O(kn 4 ) valid such combinations. Similarly, each D subproblem (which ranges over i, j, k, p, x) may only come into existence because of an edge from p to x, so again the runtime of these such steps drops to O(kn 4 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Runtime analysis",
"sec_num": "5.1"
},
{
"text": "The ability to define slightly non-local features has been shown to improve parsing performance. In this section, we assume a grandparent-factored model, where the score of a tree is now the sum over scores of (g, p, c) triples, where (g, p) and (p, c) are both directed edges in the tree. Let Score(g, p, c) indicate the score of this grandparent-parent-child triple. We now show how to extend the above algorithm to find the maximum scoring gap-minding tree with grandparent scoring. Our two subproblems are now C [i, j, p, g] and D [i, j, p, x, b, g] ; each subproblem has been augmented with an additional grandparent index g, which has the meaning that g is p's parent. Note that g must be outside of the interval [i, j] (if it were not, a cycle would be introduced). Edge scores are now computed over (g, p, x) triples. In particular, claim 2 is modified:",
"cite_spans": [
{
"start": 516,
"end": 528,
"text": "[i, j, p, g]",
"ref_id": null
},
{
"start": 535,
"end": 553,
"text": "[i, j, p, x, b, g]",
"ref_id": null
},
{
"start": 807,
"end": 816,
"text": "(g, p, x)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extension to Grandparent Factorizations",
"sec_num": "6"
},
{
"text": "Claim 3. Let T be the optimum scoring gapminding tree rooted at p with vertices V = [i, j] \u222a {p}, where p \u2208 (i, j) (I), with a grandparent index g (g / \u2208 V ). Then T and its score are derived from one of the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension to Grandparent Factorizations",
"sec_num": "6"
},
{
"text": "S/C If p has a single child x in T , then if p \u2208 (i, j) (I), T 's score is Score(g, p, x) + C[i, p\u22121, x, p]+C[p + 1, j, x, p]; if p / \u2208 [i, j] (E), T 's score is Score(g, p, x) + C[i, j, x, p].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension to Grandparent Factorizations",
"sec_num": "6"
},
{
"text": "M/C If p has multiple children in T and i and j are descended from the same child x in T , then there is a split point k such that T 's score is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension to Grandparent Factorizations",
"sec_num": "6"
},
{
"text": "Score(g, p, x) + C[i, k, x, p] + D[k + 1, j, p, x, T, g] if",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension to Grandparent Factorizations",
"sec_num": "6"
},
{
"text": "x is on the left side of its own gap, and T 's score is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension to Grandparent Factorizations",
"sec_num": "6"
},
{
"text": "Score(g, p, x) + C[k, j, x, p] + D[i, k \u2212 1, p, x, F, g] if",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension to Grandparent Factorizations",
"sec_num": "6"
},
{
"text": "x is on the right side.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension to Grandparent Factorizations",
"sec_num": "6"
},
{
"text": "M/D If p has multiple children in T and i and j are descended from different children in T , then there is a split point k such that T 's score is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension to Grandparent Factorizations",
"sec_num": "6"
},
{
"text": "C[i, k, p, g] + C[k + 1, j, p, g].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension to Grandparent Factorizations",
"sec_num": "6"
},
{
"text": "T has the maximum score over each of the above cases, for all valid choices of x and k. Note that for subproblems rooted at p, g is the grandparent index, while for subproblems rooted at x, p is the updated grandparent index. The D subproblems with the grandparent index are shown below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension to Grandparent Factorizations",
"sec_num": "6"
},
{
"text": "D(i, j, p, x, T, g) = max k C[i, k, p, g] + C[k + 1, j, x, p] D(i, j, p, x, F, g) = max k C[i, k, x, p] + C[k + 1, j, p, g]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension to Grandparent Factorizations",
"sec_num": "6"
},
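{
"text": "A minimal sketch (ours) of how the helper subproblem changes once the grandparent index is threaded through, mirroring the two recurrences above; C is the grandparent-augmented table C[i, j, p, g] passed in as a callable, and the validity guards from the first-order sketch in Section 5 are omitted for brevity.\n\ndef D_g(C, i, j, p, x, left_is_p, g):\n    # g is p's parent; inside x's subtrees the grandparent index becomes p\n    best = float('-inf')\n    for k in range(i, j):\n        if left_is_p:\n            best = max(best, C(i, k, p, g) + C(k + 1, j, x, p))\n        else:\n            best = max(best, C(i, k, x, p) + C(k + 1, j, p, g))\n    return best",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension to Grandparent Factorizations",
"sec_num": "6"
},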
{
"text": "We have added another index which ranges over n, so without pruning, we have now increased the running time to O(n 6 ). However, every step now includes both a g and a p (and often an x), so there is at least one implied edge in every step. If pruning is done in such a way that each word has at most k parents, then each word's set of grandparent and parent possibilities is at most k 2 . To run all of the S/C steps, we therefore need O(k 2 n 3 ) time; for all of the M/C steps, O(k 2 n 4 ) time; for all of the M/D steps, O(kn 4 ); for all of the D subproblems, O(k 2 n 4 ). The overall running time is therefore O(k 2 n 4 ), and we have shown that when edges are sufficiently pruned, grandparent factors add only an extra factor of k, and not a full extra factor of n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension to Grandparent Factorizations",
"sec_num": "6"
},
{
"text": "The space of projective trees is strictly contained within the space of gap-minding trees which is strictly contained within spanning trees. Which space is most appropriate for natural language parsing may depend on the particular language and the type and frequencies of non-projective structures found in it. In this section we compare the parsing accuracy across languages for a parser which uses either the Eisner algorithm (projective), MST (spanning trees), or MaxGapMindingTree (gap-minding trees) as its decoder for both training and inference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},
{
"text": "We implemented both the basic gap-minding algorithm and the gap-minding algorithm with grandparent scoring as extensions to MSTParser 10 . MST-Parser (McDonald et al., 2005b; McDonald et al., 2005a) uses the Margin Infused Relaxed Algorithm (Crammer and Singer, 2003) for discriminative training. Training requires a decoder which produces the highest scoring tree (in the space of valid trees) under the current model weights. This same decoder is then used to produce parses at test time. MSTParser comes packaged with the Eisner algorithm (for projective trees) and MST (for spanning trees). MSTParser also includes two second order models: one of which is a projective decoder that also scores siblings (Proj+Sib) and the other of which produces non-projective trees by rearranging edges after producing a projective tree (Proj+Sib+Rearr). We add a further decoder with the algorithm presented here for gap minding trees, and plan to make the extension publicly available. The gap-minding decoder has both an edge-factored implementation and a version which scores grandparents as well. 11 The gap-minding algorithm is much more efficient when edges have been pruned so that each word has at most k potential parents. We use the weights from the trained MST models combined with the Matrix Tree Theorem (Smith and Smith, 2007; Koo et al., 2007; McDonald and Satta, 2007) to produce marginal probabilities of each edge. We wanted to be able to both achieve the running time bound and yet take advantage of the fact that the size of the set of reasonable parent choices is variable. We therefore use a hybrid pruning strategy: each word's set of potential parents is the smaller of a) the top k parents (we chose k = 10) or b) the set of parents whose probabilities are above a threshold (we chose th = .001). The running time for the gap-minding algorithm is then O(kn 4 ); with the grandparent features the gap-minding running time is O(k 2 n 4 ).",
"cite_spans": [
{
"start": 139,
"end": 174,
"text": "MST-Parser (McDonald et al., 2005b;",
"ref_id": null
},
{
"start": 175,
"end": 198,
"text": "McDonald et al., 2005a)",
"ref_id": "BIBREF19"
},
{
"start": 241,
"end": 267,
"text": "(Crammer and Singer, 2003)",
"ref_id": "BIBREF6"
},
{
"start": 1091,
"end": 1093,
"text": "11",
"ref_id": null
},
{
"start": 1307,
"end": 1330,
"text": "(Smith and Smith, 2007;",
"ref_id": "BIBREF23"
},
{
"start": 1331,
"end": 1348,
"text": "Koo et al., 2007;",
"ref_id": "BIBREF13"
},
{
"start": 1349,
"end": 1374,
"text": "McDonald and Satta, 2007)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},
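{
"text": "A small sketch (ours, not part of MSTParser) of the hybrid pruning strategy just described: each word keeps whichever is smaller of its top-k parents and the parents whose marginal probability clears the threshold; marginals[child][parent] is assumed to hold the edge marginals produced by the Matrix Tree Theorem computation.\n\ndef prune_parents(marginals, k=10, threshold=0.001):\n    allowed = {}\n    for child, probs in marginals.items():\n        by_prob = sorted(probs, key=probs.get, reverse=True)\n        top_k = set(by_prob[:k])\n        above = {p for p, pr in probs.items() if pr > threshold}\n        allowed[child] = top_k if len(top_k) < len(above) else above\n    return allowed",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},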
{
"text": "The training and test sets for the six languages come from the CoNLL-X shared task. 12 We train the gap-minding algorithm on sentences of length at most 100 13 (the vast majority of sentences). The projective and MST models are trained on all sentences and are run without any pruning. The Czech training set is much larger than the others and so for Czech only the first 10,000 training sentences were used. Testing is on the full test set, with no length restrictions.",
"cite_spans": [
{
"start": 84,
"end": 86,
"text": "12",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},
{
"text": "The results are shown in Table 2 . The first three lines show the first order gap-minding decoder compared with the first order projective and MST de- 11 The grandparent features used were identical to the features provided within MSTParser for the second-order sibling parsers, with one exception -many features are conjoined with a direction indicator, which in the projective case has only two possibilities. We replaced this two-way distinction with a sixway distinction of the six possible orders of the grandparent, parent, and child.",
"cite_spans": [
{
"start": 151,
"end": 153,
"text": "11",
"ref_id": null
}
],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},
{
"text": "12 MSTParser produces labeled dependencies on CoNLL formatted input. We replace all labels in the training set with a single dummy label to produce unlabeled dependency trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},
{
"text": "13 Because of long training times, the gap-minding with grandparent models for Portuguese and Swedish were trained on only sentences up to 50 words. coders. The gap-minding decoder does better than the projective decoder on Czech, Danish, and Dutch, the three languages with the most non-projectivity, even though it was at a competitive disadvantage in terms of both pruning and (on languages with very long sentences) training data. The gap-minding decoder with grandparent features is better than the projective decoder with sibling features on all six of the languages. On some languages, the local search decoder with siblings has the absolute highest accuracy in Table 2 ; on other languages (Czech and Swedish) the gap-minding+grandparents has the highest accuracy. While not directly comparable because of the difference in features, the promising performance of the gap-minding+grandparents decoder shows that the space of gap-minding trees is larger than the space of projective trees, yet unlike spanning trees, it is tractable to find the best tree with higher order features. It would be interesting to extend the gap-minding algorithm to include siblings as well.",
"cite_spans": [],
"ref_spans": [
{
"start": 669,
"end": 676,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},
{
"text": "Gap inheritance, a structural property on trees, has implications both for natural language syntax and for natural language parsing. We have shown that the mildly non-projective trees present in natural language treebanks all have zero or one children inherit each parent's gap. We also showed that the assumption of 1 gap inheritance removes a factor of n from parsing time, and the further assumption of 0 gap inheritance removes yet another factor of n. The space of gap-minding trees provides a closer fit to naturally occurring linguistic structures than the space of projective trees, and unlike spanning trees, the inclusion of higher order factors does not substantially increase the difficulty of finding the maximum scoring tree in that space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "http://ilk.uvt.nl/conll/free_data.html 2 LDC catalogue numbers LDC2006E01 and LDC2006E02 3 Trees are a reasonable assumption for most, but not all, linguistic structures. Parasitic gaps are an example in which a word perhaps should have multiple parents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "While some of the treebank structures are ill-nested or have a larger gap degree because of annotation decisions, some linguistic constructions in German and Czech are ill-nested or require at least two gaps under any reasonable representation (Chen-Main and Joshi, 2010; Chen-Main and Joshi, 2012).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The Proper Binding Condition(Fiengo, 1977) asserts that a moved element leaves behind a trace (unpronounced element), which must be c-commanded(Reinhart, 1976) by the corresponding pronounced material in its final location. Informally, c-commanded means that the first node is descended from the lowest ancestor of the other that has more than one child.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that x l = i by construction, and xr = j (because the endpoints are descended from different children).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://sourceforge.net/projects/mstparser/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Aravind Joshi for comments on an earlier draft. This material is based upon work supported under a National Science Foundation Graduate Research Fellowship.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Algorithm 1: MaxGapMindingTree Init: \u2200 i\u2208[1,n] C[i, i, i] = 0 for size = 0 to n \u2212 1 do for i = 1 to n \u2212 size do j = i + size / * Endpoint parents * / if size > 0 then ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Wellnested drawings as models of syntactic structure",
"authors": [
{
"first": "M",
"middle": [],
"last": "Bodirsky",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Kuhlmann",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "M\u00f6hl",
"suffix": ""
}
],
"year": 2005,
"venue": "Tenth Conference on Formal Grammar and Ninth Meeting on Mathematics of Language",
"volume": "",
"issue": "",
"pages": "88--89",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Bodirsky, M. Kuhlmann, and M. M\u00f6hl. 2005. Well- nested drawings as models of syntactic structure. In In Tenth Conference on Formal Grammar and Ninth Meeting on Mathematics of Language, pages 88-1. University Press.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Tag, dynamic programming, and the perceptron for efficient, featurerich parsing",
"authors": [
{
"first": "X",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Koo",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of CoNLL",
"volume": "",
"issue": "",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Carreras, M. Collins, and T. Koo. 2008. Tag, dynamic programming, and the perceptron for efficient, feature- rich parsing. In Proceedings of CoNLL, pages 9-16. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Experiments with a higher-order projective dependency parser",
"authors": [
{
"first": "X",
"middle": [],
"last": "Carreras",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL",
"volume": "7",
"issue": "",
"pages": "957--961",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Carreras. 2007. Experiments with a higher-order projective dependency parser. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL, vol- ume 7, pages 957-961.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unavoidable illnestedness in natural language and the adequacy of tree local-mctag induced dependency structures",
"authors": [
{
"first": "J",
"middle": [],
"last": "Chen-Main",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Tenth International Workshop on Tree Adjoining Grammar and Related Formalisms",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Chen-Main and A. Joshi. 2010. Unavoidable ill- nestedness in natural language and the adequacy of tree local-mctag induced dependency structures. In Proceedings of the Tenth International Workshop on Tree Adjoining Grammar and Related Formalisms (TAG+ 10).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A dependency perspective on the adequacy of tree local multicomponent tree adjoining grammar",
"authors": [
{
"first": "J",
"middle": [],
"last": "Chen-Main",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
}
],
"year": 2012,
"venue": "In Journal of Logic and Computation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Chen-Main and A.K. Joshi. 2012. A depen- dency perspective on the adequacy of tree local multi- component tree adjoining grammar. In Journal of Logic and Computation. (to appear).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Lectures on Government and Binding. Dordrecht: Foris",
"authors": [
{
"first": "N",
"middle": [],
"last": "Chomsky",
"suffix": ""
}
],
"year": 1981,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Chomsky. 1981. Lectures on Government and Bind- ing. Dordrecht: Foris.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Ultraconservative online algorithms for multiclass problems",
"authors": [
{
"first": "K",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "951--991",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Crammer and Y. Singer. 2003. Ultraconservative on- line algorithms for multiclass problems. Journal of Machine Learning Research, 3:951-991, March.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A faster parsing algorithm for lexicalized tree-adjoining grammars",
"authors": [
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 5th Workshop on Tree-Adjoining Grammars and Related Formalisms (TAG+5)",
"volume": "",
"issue": "",
"pages": "14--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Eisner and G. Satta. 2000. A faster parsing algorithm for lexicalized tree-adjoining grammars. In Proceed- ings of the 5th Workshop on Tree-Adjoining Grammars and Related Formalisms (TAG+5), pages 14-19.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Favor short dependencies: Parsing with soft and hard constraints on dependency length",
"authors": [
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "Trends in Parsing Technology: Dependency Parsing, Domain Adaptation, and Deep Parsing",
"volume": "",
"issue": "",
"pages": "121--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Eisner and N.A. Smith. 2010. Favor short dependen- cies: Parsing with soft and hard constraints on depen- dency length. In Harry Bunt, Paola Merlo, and Joakim Nivre, editors, Trends in Parsing Technology: Depen- dency Parsing, Domain Adaptation, and Deep Parsing, chapter 8, pages 121-150. Springer.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bilexical grammars and their cubictime parsing algorithms",
"authors": [
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2000,
"venue": "Advances in Probabilistic and Other Parsing Technologies",
"volume": "",
"issue": "",
"pages": "29--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Eisner. 2000. Bilexical grammars and their cubic- time parsing algorithms. In Harry Bunt and Anton Nijholt, editors, Advances in Probabilistic and Other Parsing Technologies, pages 29-62. Kluwer Academic Publishers, October.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "On trace theory",
"authors": [
{
"first": "R",
"middle": [],
"last": "Fiengo",
"suffix": ""
}
],
"year": 1977,
"venue": "Linguistic Inquiry",
"volume": "8",
"issue": "1",
"pages": "35--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Fiengo. 1977. On trace theory. Linguistic Inquiry, 8(1):35-61.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Dependency parsing schemata and mildly non-projective dependency parsing",
"authors": [
{
"first": "C",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Weir",
"suffix": ""
}
],
"year": 2011,
"venue": "Computational Linguistics",
"volume": "37",
"issue": "3",
"pages": "541--586",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. G\u00f3mez-Rodr\u00edguez, J. Carroll, and D. Weir. 2011. De- pendency parsing schemata and mildly non-projective dependency parsing. Computational Linguistics, 37(3):541-586.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Efficient third-order dependency parsers",
"authors": [
{
"first": "T",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Koo and M. Collins. 2010. Efficient third-order de- pendency parsers. In Proceedings of ACL, pages 1-11.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Structured prediction models via the matrix-tree theorem",
"authors": [
{
"first": "T",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Globerson",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Koo, A. Globerson, X. Carreras, and M. Collins. 2007. Structured prediction models via the matrix-tree theo- rem. In Proceedings of EMNLP-CoNLL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Dual decomposition for parsing with nonprojective head automata",
"authors": [
{
"first": "T",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "A",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Jaakkola",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Sontag",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1288--1298",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Koo, A.M. Rush, M. Collins, T. Jaakkola, and D. Son- tag. 2010. Dual decomposition for parsing with non- projective head automata. In Proceedings of EMNLP, pages 1288-1298.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Mildly nonprojective dependency structures",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kuhlmann",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of COLING/ACL",
"volume": "",
"issue": "",
"pages": "507--514",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Kuhlmann and J. Nivre. 2006. Mildly non- projective dependency structures. In Proceedings of COLING/ACL, pages 507-514.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Concise integer linear programming formulations for dependency parsing",
"authors": [
{
"first": "A",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "E",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "342--350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A.F.T. Martins, N.A. Smith, and E.P. Xing. 2009. Con- cise integer linear programming formulations for de- pendency parsing. In Proceedings of ACL, pages 342- 350.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Online learning of approximate dependency parsing algorithms",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. McDonald and F. Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Pro- ceedings of EACL, pages 81-88.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "On the complexity of non-projective data-driven dependency parsing",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 10th International Conference on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "121--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. McDonald and G. Satta. 2007. On the complexity of non-projective data-driven dependency parsing. In Proceedings of the 10th International Conference on Parsing Technologies, pages 121-132.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Online large-margin training of dependency parsers",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "91--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. McDonald, K. Crammer, and F. Pereira. 2005a. On- line large-margin training of dependency parsers. In Proceedings of ACL, pages 91-98.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Non-projective dependency parsing using spanning tree algorithms",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Ribarov",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of HLT-EMNLP",
"volume": "",
"issue": "",
"pages": "523--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. McDonald, F. Pereira, K. Ribarov, and J. Haji\u010d. 2005b. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of HLT-EMNLP, pages 523-530.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The Syntactic Domain of Anaphora",
"authors": [
{
"first": "T",
"middle": [],
"last": "Reinhart",
"suffix": ""
}
],
"year": 1976,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Reinhart. 1976. The Syntactic Domain of Anaphora. Ph.D. thesis, Massachusetts Institute of Technology.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Restrictions on tree adjoining languages",
"authors": [
{
"first": "G",
"middle": [],
"last": "Satta",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Schuler",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of COLING-ACL",
"volume": "",
"issue": "",
"pages": "1176--1182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Satta and W. Schuler. 1998. Restrictions on tree ad- joining languages. In Proceedings of COLING-ACL, pages 1176-1182.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Probabilistic models of nonprojective dependency trees",
"authors": [
{
"first": "D",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D.A. Smith and N.A. Smith. 2007. Probabilistic models of nonprojective dependency trees. In Proceedings of EMNLP-CoNLL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Figure 1(b) shows a tree with inheritance degree 1:",
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"text": "x, b]: The maximum score of any set of two gap-minding trees, one rooted at p, one rooted at x, with vertices[i, j] \u222a {p, x} (x / \u2208 [i, j], pmay or may not be in [i, j]), such that for some k, vertices [i, k] are in the tree rooted at p if b = true (and at x if b = false), and vertices [k + 1, j] are in the tree rooted at x (p).",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"html": null,
"type_str": "table",
"text": "",
"num": null,
"content": "
: The number of sentences from the CoNLL-X training sets whose parse trees fall into each of the above |
classes. The two new classes of structures, Mild+0-Inherit and Mild+1-Inherit, have more coverage of empirical data |
than projective structures, yet can be parsed faster than mildly non-projective structures. Parsing times assume an edge- |
based factorization with no pruning of edges. The corresponding algorithms for Mild+1-Inherit and Mild+0-Inherit |
are in Sections 4 and 5. |
"
},
"TABREF2": {
"html": null,
"type_str": "table",
"text": "{n} forms a gap-minding tree rooted at n, [x k , x l ] \u222a {n} also forms a gap-minding tree rooted at n, and [x j+1 , x k\u22121 ] \u222a {p} forms a gap-minding tree rooted at p.",
"num": null,
"content": ""
},
"TABREF3": {
"html": null,
"type_str": "table",
"text": "80.0 88.2 79.8 87.4 86.9 MST 78.0 80.4 88.1 84.6 86.7 86.2 Gap-Mind 77.6 80.8 88.6 83.9 86.8 86.0 Proj+Sib 78.2 80.0 88.9 81.1 87.5 88.1 +Rearr 78.5 81.3 89.3 85.4 88.2 87.7 GM+Grand 78.3 82.1 89.1 84.6 87.7 88.5",
"num": null,
"content": ""
},
"TABREF4": {
"html": null,
"type_str": "table",
"text": "Unlabeled Attachment Scores on the CoNLL-X shared task test set.",
"num": null,
"content": ""
}
}
}
}