|
{ |
|
"paper_id": "N09-1038", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:43:47.593124Z" |
|
}, |
|
"title": "Minimal-length linearizations for mildly context-sensitive dependency trees", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [ |
|
"Albert" |
|
], |
|
"last": "Park", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "yapark@ucsd.edu" |
|
}, |
|
{ |
|
"first": "Roger", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "rlevy@ling.ucsd.edu" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The extent to which the organization of natural language grammars reflects a drive to minimize dependency length remains little explored. We present the first algorithm polynomial-time in sentence length for obtaining the minimal-length linearization of a dependency tree subject to constraints of mild context sensitivity. For the minimally contextsensitive case of gap-degree 1 dependency trees, we prove several properties of minimallength linearizations which allow us to improve the efficiency of our algorithm to the point that it can be used on most naturallyoccurring sentences. We use the algorithm to compare optimal, observed, and random sentence dependency length for both surface and deep dependencies in English and German. We find in both languages that analyses of surface and deep dependencies yield highly similar results, and that mild contextsensitivity affords very little reduction in minimal dependency length over fully projective linearizations; but that observed linearizations in German are much closer to random and farther from minimal-length linearizations than in English.", |
|
"pdf_parse": { |
|
"paper_id": "N09-1038", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The extent to which the organization of natural language grammars reflects a drive to minimize dependency length remains little explored. We present the first algorithm polynomial-time in sentence length for obtaining the minimal-length linearization of a dependency tree subject to constraints of mild context sensitivity. For the minimally contextsensitive case of gap-degree 1 dependency trees, we prove several properties of minimallength linearizations which allow us to improve the efficiency of our algorithm to the point that it can be used on most naturallyoccurring sentences. We use the algorithm to compare optimal, observed, and random sentence dependency length for both surface and deep dependencies in English and German. We find in both languages that analyses of surface and deep dependencies yield highly similar results, and that mild contextsensitivity affords very little reduction in minimal dependency length over fully projective linearizations; but that observed linearizations in German are much closer to random and farther from minimal-length linearizations than in English.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "This paper takes up the relationship between two hallmarks of natural language dependency structure. First, there seem to be qualitative constraints on the relationship between the dependency structure of the words in a sentence and their linear ordering. In particular, this relationship seems to be such that any natural language sentence, together with its dependency structure, should be generable by a mildly context-sensitivity formalism (Joshi, 1985) , in particular a linear context-free rewrite system in which the right-hand side of each rule has a distinguished head (Pollard, 1984; Vijay-Shanker et al., 1987; Kuhlmann, 2007) . This condition places strong constraints on the linear contiguity of word-word dependency relations, such that only limited classes of crossing context-free dependency structures may be admitted.", |
|
"cite_spans": [ |
|
{ |
|
"start": 444, |
|
"end": 457, |
|
"text": "(Joshi, 1985)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 578, |
|
"end": 593, |
|
"text": "(Pollard, 1984;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 594, |
|
"end": 621, |
|
"text": "Vijay-Shanker et al., 1987;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 622, |
|
"end": 637, |
|
"text": "Kuhlmann, 2007)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The second constraint is a softer preference for words in a dependency relation to occur in close proximity to one another. This constraint is perhaps best documented in psycholinguistic work suggesting that large distances between governors and dependents induce processing difficulty in both comprehension and production (Hawkins, 1994 (Hawkins, , 2004 Gibson, 1998; Jaeger, 2006) . Intuitively there is a relationship between these two constraints: consistently large dependency distances in a sentence would require many crossing dependencies. However, it is not the case that crossing dependencies always mean longer dependency distances. For example, (1) below has no crossing dependencies, but the distance between arrived and its dependent Yesterday is large. The overall dependency length of the sentence can be reduced by extraposing the relative clause who was wearing a hat, resulting in (2), in which the dependency Yesterday\u2192arrived crosses the dependency woman\u2190who.", |
|
"cite_spans": [ |
|
{ |
|
"start": 323, |
|
"end": 337, |
|
"text": "(Hawkins, 1994", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 338, |
|
"end": 354, |
|
"text": "(Hawkins, , 2004", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 355, |
|
"end": 368, |
|
"text": "Gibson, 1998;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 369, |
|
"end": 382, |
|
"text": "Jaeger, 2006)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(1) Yesterday a woman who was wearing a hat arrived.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(2) Yesterday a woman arrived who was wearing a hat.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "There has been some recent work on dependency length minimization in natural language sentences (Gildea and Temperley, 2007) , but the relationship between the precise constraints on available linearizations and dependency length minimization remains little explored. In this paper, we introduce the first efficient algorithm for obtaining linearizations of dependency trees that minimize overall dependency lengths subject to the constraint of mild context-sensitivity, and use it to investigate the relationship between this constraint and the distribution of dependency length actually observed in natural languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 96, |
|
"end": 124, |
|
"text": "(Gildea and Temperley, 2007)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the last few years there has been a resurgence of interest in computation on dependency-tree structures for natural language sentences, spurred by work such as McDonald et al. (2005a,b) showing that working with dependency-tree syntactic representations in which each word in the sentence corresponds to a node in the dependency tree (and vice versa) can lead to algorithmic benefits over constituency-structure representations. The linearization of a dependency tree is simply the linear order in which the nodes of the tree occur in a surface string. There is a broad division between two classes of linearizations: projective linearizations that do not lead to any crossing dependencies in the tree, and non-projective linearizations that involve at least one crossing dependency pair. Example (1), for example, is projective, whereas Example (2) is non-projective due to the crossing between the Yes-terday\u2192arrived and woman\u2190who dependencies. Beyond this dichotomy, however, the homomorphism from headed tree structures to dependency structures (Miller, 2000) can be used together with work on the mildly context-sensitive formalism linear context-free rewrite systems (LCFRSs) (Vijay-Shanker et al., 1987) to characterize various classes of mildly non-projective dependency-tree linearizations (Kuhlmann and Nivre, 2006) . The LCFRSs are an infinite sequence of classes of formalism for generating surface strings through derivation trees in a rule-based context-free rewriting system. The i-th LCFRS class (for i = 0, 1, 2, . . . ) imposes the con- straint that every node in the derivation tree maps to to a collection of at most i+1 contiguous substrings. The 0-th class of LCFRS, for example, corresponds to the context-free grammars, since each node in the derivation tree must map to a single contiguous substring; the 1st class of LCFRS corresponds to Tree-Adjoining Grammars (Joshi et al., 1975) , in which each node in the derivation tree must map to at most a pair of contiguous substrings; and so forth. The dependency trees induced when each rewrite rule in an i-th order LCFRS distinguish a unique head can similarly be characterized by being of gap-degree i, so that i is the maximum number of gaps that may appear between contiguous substrings of any subtree in the dependency tree (Kuhlmann and M\u00f6hl, 2007) . The dependency tree for Example (2), for example, is of gap-degree 1. Although there are numerous documented cases in which projectivity is violated in natural language, there are exceedingly few documented cases in which the documented gap degree exceeds 1 (though see, for example, Kobele, 2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 188, |
|
"text": "McDonald et al. (2005a,b)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1052, |
|
"end": 1066, |
|
"text": "(Miller, 2000)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1185, |
|
"end": 1213, |
|
"text": "(Vijay-Shanker et al., 1987)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1302, |
|
"end": 1328, |
|
"text": "(Kuhlmann and Nivre, 2006)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1891, |
|
"end": 1911, |
|
"text": "(Joshi et al., 1975)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 2305, |
|
"end": 2330, |
|
"text": "(Kuhlmann and M\u00f6hl, 2007)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 2617, |
|
"end": 2630, |
|
"text": "Kobele, 2006)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Projective and mildly non-projective dependency-tree linearizations", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Even under the strongest constraint of projectivity, the number of possible linearizations of a dependency tree is exponential in both sentence length and arity (the maximum number of dependencies for any word). As pointed out by Gildea and Temperley (2007) , however, finding the unconstrained minimal-length linearization is a well-studied problem with an O(n 1.6 ) solution (Chung, 1984) . However, this approach does not take into account constraints of projectivity or mild context-sensitivity. Gildea and Temperley themselves introduced a novel efficient algorithm for finding the minimized dependency length of a sentence subject to the constraint that the linearization is projective. Their algorithm can perhaps be most simply understood by making three observations. First, the total depen- Figure 2 : Dependency length factorization for efficient projective linearization, using the dependency subtree of Figure 1 dency length of a projective linearization can be written as", |
|
"cite_spans": [ |
|
{ |
|
"start": 230, |
|
"end": 257, |
|
"text": "Gildea and Temperley (2007)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 377, |
|
"end": 390, |
|
"text": "(Chung, 1984)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 801, |
|
"end": 809, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 916, |
|
"end": 924, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Finding minimal dependency-length linearizations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "w i \u23a1 \u23a2 \u23a3D(wi, E i ) + w j dep \u2192 w i D(w i , E j ) \u23a4 \u23a5 \u23a6 (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Finding minimal dependency-length linearizations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where E i is the boundary of the contiguous substring corresponding to the dependency subtree rooted at w i which stands between w i and its governor, and D(w i , E j ) is the distance from w i to E j , with the special case of D(w root , E root ) = 0 (Figures 1 and 2 ). Writing the total dependency length this way makes it clear that each term in the outer sum can be optimized independently, and thus one can use dynamic programming to recursively find optimal subtree orderings from the bottom up. Second, for each subtree, the optimal ordering can be obtained by placing dependent subtrees on alternating sides of w from inside out in order of increasing length. Third, the total dependency lengths between any words withing an ordering stays the same when the ordering is reversed, letting us assume that D(w i , E i ) will be the length to the closest edge. These three observations lead to an algorithm with worst-case complexity of O(n log m) time, where n is sentence length and m is sentence arity. (The log m term arises from the need to sort the daughters of each node into descending order of length.)", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 252, |
|
"end": 269, |
|
"text": "(Figures 1 and 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Finding minimal dependency-length linearizations", |
|
"sec_num": "3" |
|
}, |
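
{

"text": "To make the factorization concrete, the following is a minimal Python sketch of the projective minimization just described; it is illustrative only (the experiments in Section 5 used a Java implementation), and the encoding of the tree as a dict from each head to its list of dependents is an assumption made for the example.\n\ndef min_projective_dl(deps, head, is_root=True):\n    # Returns (internal_total, word_count) for the subtree at head.\n    subs = [min_projective_dl(deps, d, False) for d in deps.get(head, [])]\n    subs.sort(key=lambda ts: ts[1])      # increasing subtree size\n    total = sum(t for t, _ in subs)\n    side = [0, 0]                        # words already placed on each side\n    for i, (_, size) in enumerate(subs):\n        k = i % 2                        # alternate sides, from inside out\n        total += side[k] + 1             # D(w_i, E_j): reach the near edge\n        side[k] += size\n    if not is_root:\n        total += min(side)               # D(w_i, E_i): exit the lighter side\n    return total, 1 + side[0] + side[1]\n\n# Example: head 2 with dependents 0 and 5; 5 has dependents 3 and 4.\n# min_projective_dl({2: [0, 5], 5: [3, 4]}, 2)  # -> (5, 5)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Finding minimal dependency-length linearizations",

"sec_num": "3"

},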
|
{ |
|
"text": "When limited subclasses of nonprojectivity are admitted, however, the problem becomes more difficult because total dependency length can no longer be written in such a simple form as in Equation (1). Intuitively, the size of the effect on dependency length of a decision to order a given subtree discontiguously, as in a woman. . . who was wearing a hat in Example (2), cannot be calculated without consulting the length of the string that the discontiguous subtree would be wrapped around. Nevertheless, for any limited gap degree, it is possible to use a different factorization of dependency length that keeps computation polynomial in sentence length. We introduce this factorization in the next section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Finding minimal dependency-length linearizations", |
|
"sec_num": "3" |
|
}, |
|
|
{ |
|
"text": "We begin by defining some terms. We use the word component to refer to a full linearization of a subtree in the case where it is realized as a single contiguous string, or to refer to any of of the contiguous substrings produced when a subtree is realized discontiguously. We illustrate the factorization for gap-degree 1, so that any subtree has at most two components. We refer to the component containing the head of the subtree as the head component, the remaining component as the dependent component, and for any given (head component, dependent component) pair, we use pair component to refer to the other component in the pair. We refer to the two components of dependent d j as d j1 and d j2 respectively, and assume that d j1 is the head component. When dependencies can cross, total dependency length cannot be factorized as simply as in Equation (1) for the projective case. However, we can still make use of a more complex factorization of the total dependency length as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Minimization with limited gap degree", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "w i \u23a1 \u23a2 \u23a3 D(w i , E i ) + w j dep \u2192 w i D(w i , E j ) + l j k j \u23a4 \u23a5 \u23a6 (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Minimization with limited gap degree", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "where l j is the number of links crossing between the two components of d j , and k j is the distance added between these two components by the partial linearization at w i . Figure 3 illustrates an example of such a partial linearization, where k 2 is |d 31 | + |d 32 | due to the fact that the links between d 21 and d 22 have to cross both components of d 3 . The factorization in Equation 2allows us to use dynamic programming to find minimal-length linearizations, so that worst-case complexity is polynomial rather than exponential in sentence length. However, the additional term in the factorization means that we need to track the number of links l crossing between the two components of the subtree S i headed by w i and the component lengths |c 1 | and |c 2 |. Additionally, the presence of crossing dependencies means that Gildea and Temperley's proof that ordering dependent components from the inside out in order of increasing length no longer goes through. This means that at each node w i we need to hold on to the minimal-length partial linearization for each combination of the following quantities:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 175, |
|
"end": 183, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Minimization with limited gap degree", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 |c 2 | (which also determines |c 1 |);", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Minimization with limited gap degree", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 the number of links l between c 1 and c 2 ;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Minimization with limited gap degree", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 and the direction of the link between w i and its governor.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Minimization with limited gap degree", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We shall refer to a combination of these factors as a status set. The remainder of this section describes a dynamic-programming algorithm for finding optimal linearizations based on the factorization in Equation 2, and continues with several further findings leading to optimizations that make the algorithm tractable for naturally occurring sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Minimization with limited gap degree", |
|
"sec_num": "4" |
|
}, |
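
{

"text": "As an illustration of the state space just described, the following Python sketch encodes one dynamic-programming state per subtree; the field names are our own, not the paper's notation.\n\nfrom dataclasses import dataclass\n\n@dataclass(frozen=True)\nclass StatusSet:\n    # One dynamic-programming state for a gap-degree-1 subtree.\n    dep_len: int      # p = |c_2|, length of the dependent component\n    links: int        # l, links crossing between the two pair components\n    gov_right: bool   # direction of the link from the head to its governor\n\n# The DP table maps each tree node to the best total internal dependency\n# length found for each reachable status set, e.g.\n# table = {node: {StatusSet(dep_len=3, links=2, gov_right=True): 17}}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Minimization with limited gap degree",

"sec_num": "4"

},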
|
{ |
|
"text": "Our first algorithm takes a tree and recursively finds the optimal orderings for each possible status set of each of its child subtrees, which it then uses to calculate the optimal ordering of the tree. To calculate the optimal orderings for each possible status set of a subtree S, we use the brute-force method of choosing all combinations of one status set from each child subtree, and for each combination, we try all possible orderings of the components of the child subtrees, calculate all possible status sets for S, and store the minimal dependency value for each appearing status set of S. The number of possible length pairings |c 1 |, |c 2 | and number of crossing links l are each bounded above by the sentence length n, so that the maximum number of status sets at each node is bounded above by n 2 . Since the sum of the status sets of all child subtrees is also bounded by n 2 , the maximum number of status set combinations is bounded by ( n 2 m ) m (obtainable from the inequality of arithmetic and geometric means). There are (2m+1)!m possible arrangements of head word and dependent components into two components. Since there are n nodes in the tree and each possible combination of status sets from each dependent sub tree must be tried, this algorithm has worst-case complexity of O((2m + 1)!mn( n 2 m ) m ). This algorithm could be generalized for mildly context-sensitive linearizations polynomial in sentence length for any gap degree desired, by introducing additional l terms denoting the number of links between pairs of components. However, even for gap degree 1 this bound is incredibly large, and as we show in Figure 7 , algorithm 1 is not computationally feasible for batch processing sentences of arity greater than 5.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1642, |
|
"end": 1650, |
|
"text": "Figure 7", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Algorithm 1", |
|
"sec_num": "4.1" |
|
}, |
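
{

"text": "A schematic rendering of the brute-force core of Algorithm 1 in Python (a sketch under assumed data structures, not the experimental code): each child subtree contributes a table from status sets to best internal dependency lengths, and we enumerate one choice per child before trying component arrangements.\n\nimport itertools\n\ndef child_state_combinations(child_tables):\n    # child_tables: one {status_set: best_internal_dl} dict per child.\n    # Yields every way of fixing one (status set, cost) pair per child;\n    # a full implementation would then try all arrangements of the\n    # chosen components into the two components of the parent subtree\n    # and record the minimum cost per resulting parent status set.\n    return itertools.product(*(table.items() for table in child_tables))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Algorithm 1",

"sec_num": "4.1"

},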
|
{ |
|
"text": "We now show how to speed up our algorithm by proving by contradiction that for any optimal ordering which minimizes the total dependency length with the two-cluster constraint, for any given subtree S and its child subtree C, the pair components c 1 and c 2 of a child subtree C must be placed on opposite sides of the head h of subtree S.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 2", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Let us assume that for some dependency tree structure, there exists an optimal ordering where c 1 and c 2 are on the same side of h. Let us refer to the ordered set of words between c 1 and c 2 as v. None of the words in v will have dependency links to any of the words in c 1 and c 2 , since the dependencies of the words in c 1 and c 2 are either between themselves or the one link to h, which is not between the two components by our assumption. There will be j 1 \u2265 0 links from v going over c 1 , j 2 \u2265 0 dependency links from v going over c 2 , and l \u2265 1 links between c 1 and c 2 . Without loss of generality, let us assume that h is on the right side of c 2 . Let us consider the effect on total dependency length of swapping c 1 with v, so that the linear ordering is v c 1 c 2 \u227a h. The total dependency length of the new word ordering changes by \u2212j 1 |c 1 |\u2212l|v|+j 2 |c 1 | if c 2 is the head component, and decreases by another |v| if c 1 is the head component. Thus the total change in dependency length is less than or equal to", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 2", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "(j 2 \u2212 j 1 )|c 1 | \u2212 l \u00d7 |v| < (j 2 \u2212 j 1 )|c 1 | (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 2", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "If instead we swap places of v with c 2 instead of c 1 so that we have c 1 c 2 v \u227a h, we find that the total change in dependency length is less than or equal to", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 2", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "(j 1 \u2212 j 2 )|c 2 | \u2212 (l \u2212 1)|v| \u2264 (j 1 \u2212 j 2 )|c 2 | (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 2", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "It is impossible for the right-hand sides of (3) and 4to be positive at the same time, so swapping v with either c 1 or c 2 must lead to a linearization with lower overall dependency length. But this is a contradiction to our original assumption, so we see that for any optimal ordering, all split child subtree components c 1 and c 2 of the child subtree of S must be placed on opposite sides of the head h. This constraint allows us to simplify our algorithm for finding the minimal-length linearization. Instead of going through all logically possible orderings of components of the child subtrees, we can now decide on which side the head component will be on, and go through all possible orderings for each side. This changes the factorial part of our algorithm run time from (2m + 1)!m to 2 m (m!) 2 m, giving us O(2 m (m!) 2 mn( n 2 m ) m ), greatly reducing actual processing time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 2", |
|
"sec_num": "4.2" |
|
}, |
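
{

"text": "The opposite-sides result shrinks the search space as follows; here is a small Python sketch (with an assumed encoding of each child as a pair of components, not the paper's code) of the remaining choices: for each child we pick only which side its head component takes, rather than arranging all 2m+1 pieces freely.\n\nimport itertools\n\ndef side_assignments(children):\n    # children: list of (head_component, dependent_component) pairs.\n    for bits in itertools.product((0, 1), repeat=len(children)):\n        left = [hc if b == 0 else dc for (hc, dc), b in zip(children, bits)]\n        right = [dc if b == 0 else hc for (hc, dc), b in zip(children, bits)]\n        yield left, right",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Algorithm 2",

"sec_num": "4.2"

},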
|
{ |
|
"text": "We now present two more findings for further increasing the efficiency of the algorithm. First, we look at the status sets which need to be stored for the dynamic programming algorithm. In the straightforward approach we first presented, we stored the optimal dependency lengths for all cases of possible status sets. We now know that we only need to consider cases where the pair components are on opposite sides. This means the direction of the link from the head to the parent will always be toward the inside direction of the pair components, so we can re-define the status set as (p, l) where p is again the length of the dependent component, and l is the number of links between the two pair components. If the p values for sets s 1 and s 2 are equal, s 1 has a smaller number of links than s 2 (l s 1 \u2264 l s 2 ) and s 1 has a smaller or equal total dependency length to s 2 , then replacing the components of s 2 with s 1 will always give us the same or more optimal total dependency length. Thus, we do not have to store instances of these cases for our algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 3", |
|
"sec_num": "4.3" |
|
}, |
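
{

"text": "The first pruning rule can be rendered as a short Python sketch (an illustration under the (p, l) encoding of status sets assumed above, not the experimental code):\n\ndef prune(states):\n    # states: {(p, l): t} mapping each status set to its best total\n    # internal dependency length t. A state is dropped if some other\n    # state with the same p has no more crossing links and no larger t.\n    kept = {}\n    for (p, l), t in sorted(states.items()):\n        dominated = any(p2 == p and l2 <= l and t2 <= t\n                        for (p2, l2), t2 in kept.items())\n        if not dominated:\n            kept[(p, l)] = t\n    return kept\n\n# The second rule, proven next, prunes further: with equal l, a state\n# with larger p > 0 and no larger t dominates one with smaller p.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Algorithm 3",

"sec_num": "4.3"

},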
|
{ |
|
"text": "Next, we prove by contradiction that for any two status sets s 1 and s 2 , if p s 1 > p s 2 > 0, l s 1 = l s 2 , and the TOTAL INTERNAL DEPENDENCY LENGTH t 1 of s 1 -defined as the sum in Equation (2) over only those words inside the subtree headed by h-is less than or equal to t 2 of s 2 , then using s 1 will be at least as good as s 2 , so we can ignore s 2 . Let us suppose that the optimal linearization can use s 2 but not s 1 . Then in the optimal linearization, the two pair components c s 2 ,1 and c s 2 ,2 of s 2 are on opposite sides of the parent head h. WLOG, let us assume that components c s 1 ,1 and c s 2 ,1 are the dependent components. Let us denote the total number of links going over c s 2 ,1 as j 1 and the words between c s 2 ,1 and c s 2 ,2 as v (note that v must contain h). If we swap c s 2 ,1 with v, so that c s 2 ,1 lies adjacent to c s 2 ,2 , then there would be j 2 +1 links going over c s 2 ,1 . By moving c s 2 ,1 from opposite sides of the head to be right next to c s 2 ,2 , the total dependency length of the sentence changes by \u2212j", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 3", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "1 |c s 2 ,1 |\u2212l s 2 |v|+(j 2 +1)|c s 2 ,1 |.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 3", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Since the ordering was optimal, we know that", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 3", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "(j 2 \u2212 j 1 + 1)|c s 2 ,1 | \u2212 l s 2 |v| \u2265 0", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 3", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Since l > 0, we can see that j 1 \u2212 j 2 \u2264 0. Now, instead of swapping v with c s 2 ,1 , let us try substituting the components from s 1 instead of s 2 . The change of the total dependency length of the sentence will be:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 3", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "j 1 \u00d7 (|c s 1 ,1 | \u2212 |c s 2 ,1 |) + j 2 \u00d7 (|c s 1 ,2 | \u2212|c s 2 ,2 |) + t 1 \u2212 t 2 = (j 1 \u2212 j 2 ) \u00d7 (p s 1 \u2212 p s 2 ) + (t 1 \u2212 t 2 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 3", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Since j 1 \u2212 j 2 \u2264 0 and p s 1 > p s 2 , the first term is less than or equal to 0 and since t 1 \u2212 t 2 \u2264 0, the total dependency length will have been be equal or This finding greatly reduces the number of status sets we need to store and check higher up in the algorithm. The worst-case complexity remains O(2 m m! 2 mn( n 2 m ) m ), but the actual runtime is reduced by several orders of magnitude.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 3", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Our last optimization is on the ordering among the child subtree components on each side of the subtree head h. The initially proposed algorithm went through all combinations of possible orderings to find the optimal dependency length for each status set. By the first optimization in section 4.2 we have shown that we only need to consider the orderings in which the components are on opposite sides of the head. We now look into the ordering of the components on each side of the head. We first define the rank value r for each component c as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 4", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "|c| # links between c and its pair component+I (c) where I(c) is the indicator function having value 1 if c is a head component and 0 otherwise . Using this definition, we prove by contradiction that the ordering of the components from the head outward must be in order of increasing rank value.", |
|
"cite_spans": [ |
|
{ |
|
"start": 47, |
|
"end": 50, |
|
"text": "(c)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 4", |
|
"sec_num": "4.4" |
|
}, |
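
{

"text": "In Python, the rank value can be written as follows (a sketch; the argument names are ours, not the paper's):\n\ndef rank(length, links_to_pair, is_head_component):\n    # r(c) = |c| / (l_c + I(c)). The denominator is at least 1: a\n    # dependent component has l_c >= 1 links to its pair component,\n    # and a head component adds I(c) = 1.\n    return length / (links_to_pair + (1 if is_head_component else 0))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Algorithm 4",

"sec_num": "4.4"

},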
|
{ |
|
"text": "Let us suppose that at some subtree S headed by h and with head component C 1 and dependent component C 2 , there is an optimal linearization in which there exist two components c i and c i+1 of immediate subtrees of S such that c i is closer to h, the com- We shall denote the number of links between each component and its pair component as l i , l i+1 . Let l i = l i + I(c i ) and l i+1 = l i+1 + I(c i+1 ). There are two cases to consider: either (1) c i and c i+1 are within the same component of S, or (2) c i is at the edge of C 1 nearest C 2 and c i+1 is at the edge of C 2 neareast C 1 . Consider case 1, and let us swap c i with c i+1 ; this affects only the lengths of links involving connections to c i or c i+1 . The total dependency length of the new linearization will change by", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 4", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "\u2212l i+1 |c i | + l i |c i+1 | = \u2212l i l i+1 (r i \u2212 r i+1 ) < 0", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 4", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "This is a contradiction to the assumption that we had an optimal ordering. Now consider case 2, which is illustrated in Figure 4 . We denote the number of links going over c i and c i+1 , excluding links to c i , c i+1 as \u03b1 1 and \u03b1 2 respectively, and the length of words between the edges of C 1 and C 2 as k. Let us move c i+1 to the outermost position of C 1 , as shown in Figure 5 . Since the original linearization was optimal, we have:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 128, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 376, |
|
"end": 384, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Algorithm 4", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "\u2212\u03b1 2 |c i+1 | + \u03b1 1 |c i+1 | \u2212 l i+1 k \u2265 0 (\u03b1 1 \u2212 \u03b1 2 )|c i+1 | \u2265 l i+1 k (\u03b1 1 \u2212 \u03b1 2 )r i+1 \u2265 k", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 4", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Let us also consider the opposite case of moving c i to the inner edge of C 2 , as shown in Figure ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 98, |
|
"text": "Figure", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Algorithm 4", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "\u2212\u03b1 1 |c i | + \u03b1 2 |c i | + l i k \u2265 0 (\u03b1 2 \u2212 \u03b1 1 )|c i | \u2265 \u2212l i k (\u03b1 1 \u2212 \u03b1 2 )r i \u2264 k", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 4", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "But this is a contradiction, since r i > r i+1 . Combining the two cases, we can see that regardless of where the components may be split, in an optimal ordering the components going outwards from the head must have an increasing rank value. This result allows us to simplify our algorithm greatly, because we no longer need to go through all combinations of orderings. Once it has been decided which components will come on each side of the head, we can sort the components by rank value and place them from the head out. This reduces the factorial component of the algorithm's complexity to m log m, and the overall worst-case complexity to O(nm 2 log m( 2n 2 m ) m ). Although this is still exponential in the arity of the tree, nearly all sentences encountered in treebanks have an arity low enough to make the algorithm tractable and even very efficient, as we show in the following section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 4", |
|
"sec_num": "4.4" |
|
}, |
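
{

"text": "Given the result above, a side's components can simply be sorted; here is a minimal sketch reusing the rank function sketched earlier (the (length, links_to_pair, is_head) triples are an assumed encoding):\n\ndef order_outward(side_components):\n    # Lay out the components assigned to one side of the head from the\n    # head outward, in order of increasing rank value.\n    return sorted(side_components, key=lambda c: rank(c[0], c[1], c[2]))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Algorithm 4",

"sec_num": "4.4"

},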
|
{ |
|
"text": "Using the above algorithm, we calculated minimal dependency lengths for English sentences from the WSJ portion of the Penn Treebank, and for German sentences from the NEGRA corpus. The English-German comparison is of interest because word order is freer, and crossing dependencies more common, in German than in English (Kruijff and Vasishth, 2003) . We extracted dependency trees from these corpora using the head rules of Collins (1999) for English, and the head rules of Levy and Manning (2004) for German. Two dependency trees were extracted from each sentence, the surface tree extracted by using the head rules on the context-free tree representation (i.e. no crossing dependencies), and the deep tree extracted by first returning discontinuous dependents (marked by *T* and *ICH* in WSJ, and by *T* in the Penn-format version of NEGRA) before applying head rules. Figure 7 shows the average time it takes to calculate the minimal dependency length with crossing dependencies for WSJ sentences using the unoptimized algorithm of Section 4.1 and the fully optimized algorithm of Section 4.4. Timing tests were implemented and performed using Java 1.6.0 10 on a system running Linux 2.6.18-6-amd64 with a 2.0 GHz Intel Xeon processor and 16 gigs of memory, run on a single core. We can see from Figure 7 that the straight-forward dynamic programming algorithm takes many more magnitudes of time than our optimized algorithm, making it infeasible to calculate the minimal dependency length for larger sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 320, |
|
"end": 348, |
|
"text": "(Kruijff and Vasishth, 2003)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 424, |
|
"end": 438, |
|
"text": "Collins (1999)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 474, |
|
"end": 497, |
|
"text": "Levy and Manning (2004)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 871, |
|
"end": 879, |
|
"text": "Figure 7", |
|
"ref_id": "FIGREF5" |
|
}, |
|
{ |
|
"start": 1299, |
|
"end": 1307, |
|
"text": "Figure 7", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Empirical results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The results we present below were obtained with the fully optimized algorithm from the sentences with a maximum arity of 10, using 49,176 of the 49,208 WSJ sentences and 20,563 of the 20,602 NEGRA sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Summary results over all sentences from each corpus are shown in Table 1 . We can see that for both corpora, the oberved dependency length is smaller than the dependency length of random orderings, even when the random ordering is subject to the projectivity constraint. Relaxing the projectivity constraint by allowing crossing dependencies introduces a slightly lower optimal dependency length. The average sentence dependency lengths for the three random orderings are significantly higher than the observed values. It is interesting to note that the random orderings given the projectivity constraint and the two-cluster constraint have very similar dependency lengths, where as a total random ordering increases the dependency length significantly. NEGRA generally has shorter sentences than WSJ, so we need a more detailed picture of dependency length as a function of sentence length; this is shown in Figure 8 . As in Table 1 , we see that English, which has less crossing dependency structures than German, has observed DL closer to optimal DL and farther from random DL. We also see that the random and observed DLs behave very similarly across different sentence lengths in English and German, but observed DL grows faster in German. Perhaps surprisingly, optimal projective DL and gap-degree 1 DL tend to be very similar even for longer sentences. The picture as a function of sentence arity is largely the same (Figure 9 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 65, |
|
"end": 72, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 909, |
|
"end": 917, |
|
"text": "Figure 8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 926, |
|
"end": 933, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1424, |
|
"end": 1433, |
|
"text": "(Figure 9", |
|
"ref_id": "FIGREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Empirical results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this paper, we have presented an efficient dynamic programming algorithm which finds minimumlength dependency-tree linearizations subject to constraints of mild context-sensitivity. For the gapdegree 1 case, we have proven several properties of these linearizations, and have used these properties to optimize our algorithm. This made it possible to find minimal dependency lengths for sentences from the English Penn Treebank WSJ and German NE-GRA corpora. The results show that for both languages, using surface dependencies and deep dependencies lead to generally similar conclusions, but that minimal lengths for deep dependencies are consistently slightly higher for English and slightly lower for German. This may be because German has many more crossing dependencies than English. Another finding is that the difference between average sentence DL does not change much between optimizing for the projectivity constraint and the twocluster constraint: projectivity seems to give natural language almost all the flexibility it needs to minimize DL. For both languages, the observed linearization is much closer in DL to optimal linearizations than to random linearizations; but crucially, we see that English is closer to the optimal linearization and farther from random linearization than German. This finding is resonant with the fact that German has richer morphology and overall greater variability in observed word order, and with psycholinguistic results suggesting that dependencies of greater linear distance do not always pose the same increased processing load in German sentence comprehension as they do in English (Konieczny, 2000) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 1635, |
|
"end": 1652, |
|
"text": "(Konieczny, 2000)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "On optimal linear arrangements of trees", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"R K" |
|
], |
|
"last": "Chung", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1984, |
|
"venue": "Computers and Mathematics with Applications", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "43--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chung, F. R. K. (1984). On optimal linear arrange- ments of trees. Computers and Mathematics with Applications, 10:43-60.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Head-Driven Statistical Models for Natural Language Parsing", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Collins, M. (1999). Head-Driven Statistical Models for Natural Language Parsing. PhD thesis, Uni- versity of Pennsylvania.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Linguistic complexity: Locality of syntactic dependencies", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Gibson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Cognition", |
|
"volume": "68", |
|
"issue": "", |
|
"pages": "1--76", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gibson, E. (1998). Linguistic complexity: Locality of syntactic dependencies. Cognition, 68:1-76.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Optimizing grammars for minimum dependency length", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Gildea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Temperley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gildea, D. and Temperley, D. (2007). Optimizing grammars for minimum dependency length. In Proceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A Performance Theory of Order and Constituency", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Hawkins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hawkins, J. A. (1994). A Performance Theory of Order and Constituency. Cambridge.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Efficiency and Complexity in Grammars", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Hawkins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hawkins, J. A. (2004). Efficiency and Complexity in Grammars. Oxford University Press.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Redundancy and Syntactic Reduction in Spontaneous Speech", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Jaeger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jaeger, T. F. (2006). Redundancy and Syntactic Re- duction in Spontaneous Speech. PhD thesis, Stan- ford University, Stanford, CA.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "How much context-sensitivity is necessary for characterizing structural descriptions -Tree Adjoining Grammars", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Joshi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1985, |
|
"venue": "Natural Language Processing -Theoretical, Computational, and Psychological Perspectives", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joshi, A. K. (1985). How much context-sensitivity is necessary for characterizing structural descrip- tions -Tree Adjoining Grammars. In Dowty, D., Karttunen, L., and Zwicky, A., editors, Nat- ural Language Processing -Theoretical, Com- putational, and Psychological Perspectives. Cam- bridge.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Tree adjunct grammars", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Takahashi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1975, |
|
"venue": "Journal of Computer and System Sciences", |
|
"volume": "10", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joshi, A. K., Levy, L. S., and Takahashi, M. (1975). Tree adjunct grammars. Journal of Computer and System Sciences, 10(1).", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Generating Copies: An investigation into Structural Identity in Language and Grammar", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Kobele", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kobele, G. M. (2006). Generating Copies: An inves- tigation into Structural Identity in Language and Grammar. PhD thesis, UCLA.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Locality and parsing complexity", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Konieczny", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Journal of Psycholinguistic Research", |
|
"volume": "29", |
|
"issue": "6", |
|
"pages": "627--645", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Konieczny, L. (2000). Locality and parsing com- plexity. Journal of Psycholinguistic Research, 29(6):627-645.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Quantifying word order freedom in natural language: Implications for sentence processing", |
|
"authors": [ |
|
{ |
|
"first": "G.-J", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Kruijff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Vasishth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the Architectures and Mechanisms for Language Processing conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kruijff, G.-J. M. and Vasishth, S. (2003). Quantify- ing word order freedom in natural language: Im- plications for sentence processing. Proceedings of the Architectures and Mechanisms for Language Processing conference.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Dependency Structures and Lexicalized Grammars", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kuhlmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kuhlmann, M. (2007). Dependency Structures and Lexicalized Grammars. PhD thesis, Saarland Uni- versity.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Mildly context-sensitive dependency languages", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kuhlmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "M\u00f6hl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kuhlmann, M. and M\u00f6hl, M. (2007). Mildly context-sensitive dependency languages. In Pro- ceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Mildly nonprojective dependency structures", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kuhlmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of COLING/ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kuhlmann, M. and Nivre, J. (2006). Mildly non- projective dependency structures. In Proceedings of COLING/ACL.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Deep dependencies from context-free statistical parsers: correcting the surface dependency approximation", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Levy, R. and Manning, C. (2004). Deep depen- dencies from context-free statistical parsers: cor- recting the surface dependency approximation. In Proceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Online large-margin training of dependency parsers", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Crammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "McDonald, R., Crammer, K., and Pereira, F. (2005a). Online large-margin training of depen- dency parsers. In Proceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Non-projective dependency parsing using spanning tree algorithms", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Ribarov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Haji\u010d", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "McDonald, R., Pereira, F., Ribarov, K., and Haji\u010d, J. (2005b). Non-projective dependency parsing using spanning tree algorithms. In Proceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Strong Generative Capacity: The Semantics of Linguistic Formalism", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Miller, P. (2000). Strong Generative Capacity: The Semantics of Linguistic Formalism. Cambridge.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Generalized Phrase Structure Grammars, Head Grammars, and Natural Languages", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Pollard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1984, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pollard, C. (1984). Generalized Phrase Structure Grammars, Head Grammars, and Natural Lan- guages. PhD thesis, Stanford.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Characterizing structural descriptions produced by various grammatical formalisms", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Vijay-Shanker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Weir", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Joshi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vijay-Shanker, K., Weir, D. J., and Joshi, A. K. (1987). Characterizing structural descriptions produced by various grammatical formalisms. In Proceedings of ACL.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Sample dependency subtree forFigure 2" |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Factorizing dependency length at node w i of a mildly context-sensitive dependency tree. This partial linearization of head with dependent components makes c 1 the head component and leads to l = 2 links crossing between c 1 and c 2 ." |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Initial setup for latter part of optimization proof in section 4.4. To the far left is the head h of subtree S. The component pair C 1 and C 2 makes up S, and g is the governor of h. The length of the substring v between C 1 and C 2 is k. c i and c i+1 are child subtree components." |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Moving c i+1 to C 1" |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Moving c i to C 2 have decreased. But this contradicts our assumption that only s 2 can be part of an optimal ordering." |
|
}, |
|
"FIGREF5": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Timing comparison of first and fully optimized algorithms ponents have rank values r i and r i+1 respectively, r i > r i+1 , and no other component of the immediate subtrees of S intervenes between c i and c i+1 ." |
|
}, |
|
"FIGREF7": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Average sentence DL as a function of sentence arity. Legend is ordered top curve to bottom curve." |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Average sentence DL as a function of sentence length. Legend is ordered top curve to bottom curve.", |
|
"html": null, |
|
"content": "<table><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Unconstrained Random</td><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>2\u2212component Random</td><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Projective Random</td><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Observed</td><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Average sentence DL</td><td/><td/><td>Projective Optimal 2\u2212component Optimal</td><td/><td/><td/></tr><tr><td>Average sentence DL</td><td>0 400 100 200 300</td><td>2 Unconstrained Random 3 4 5 English/Surface 2\u2212component Random Projective Random Observed Figure 8: 1 Projective Optimal 2\u2212component Optimal</td><td>6</td><td>7</td><td>8</td><td>Average sentence DL</td><td>0 400 100 200 300</td><td>1</td><td>2 Unconstrained Random 3 4 5 English/Deep 2\u2212component Random Projective Random Observed Projective Optimal 2\u2212component Optimal</td><td>6</td><td>7</td><td>8</td><td>Average sentence DL</td><td>0 400 100 200 300</td><td>1</td><td>2 Unconstrained Random 3 4 5 German/Surface 2\u2212component Random Projective Random Observed Projective Optimal 2\u2212component Optimal</td><td>6</td><td>7</td><td>8</td><td>Average sentence DL</td><td>0 400 100 200 300</td><td>1</td><td>2 Unconstrained Random 3 4 5 German/Deep 2\u2212component Random Projective Random Observed Projective Optimal 2\u2212component Optimal</td><td>6</td><td>7</td><td>8</td></tr><tr><td/><td/><td>Sentence Arity</td><td/><td/><td/><td/><td/><td/><td>Sentence Arity</td><td/><td/><td/><td/><td/><td/><td>Sentence Arity</td><td/><td/><td/><td/><td/><td/><td>Sentence Arity</td><td/><td/><td/></tr></table>" |
|
} |
|
} |
|
} |
|
} |