{
"paper_id": "E95-1013",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:31:29.938755Z"
},
"title": "Literal Movement Grammars",
"authors": [
{
"first": "Annius",
"middle": [
"V"
],
"last": "Groenink",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CWI",
"location": {
"addrLine": "Kruislaan 413",
"postCode": "1098 SJ",
"settlement": "Amsterdam",
"country": "The Netherlands"
}
},
"email": "avg@cwi@nl"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Literal movement grammars (LMGs) provide a general account of extraposition phenomena through an attribute mechanism allowing top-down displacement of syntactical information. LMGs provide a simple and efficient treatment of complex linguistic phenomena such as cross-serial dependencies in German and Dutch-separating the treatment of natural language into a parsing phase closely resembling traditional contextfree treatment, and a disambiguation phase which can be carried out using matching, as opposed to full unification employed in most current grammar formalisms of linguistical relevance.",
"pdf_parse": {
"paper_id": "E95-1013",
"_pdf_hash": "",
"abstract": [
{
"text": "Literal movement grammars (LMGs) provide a general account of extraposition phenomena through an attribute mechanism allowing top-down displacement of syntactical information. LMGs provide a simple and efficient treatment of complex linguistic phenomena such as cross-serial dependencies in German and Dutch-separating the treatment of natural language into a parsing phase closely resembling traditional contextfree treatment, and a disambiguation phase which can be carried out using matching, as opposed to full unification employed in most current grammar formalisms of linguistical relevance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The motivation for the introduction of the literal movement grammars presented in this paper is twofold. The first motivation is to examine whether, and in which ways, the use of unification is essential to automated treatment of natural language. Unification is an expensive operation, and pinpointing its precise role in NLP may give access to more efficient treatment of language than in most (Prolog-based) scientific applications known today. The second motivation is the desire to apply popular computer-science paradigms, such as the theory of attribute grammars and modular equational specification, to problems in linguistics. These formal specification techniques, far exceeding the popular Prolog in declarativity, may give new insight into the formal properties of natural language, and facilitate prototyping for large language applications in the same way as they are currently being used to facilitate prototyping of programming language tools.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For an extensive illustration of how formal specification techniques can be made useful in the treatment of natural language, see (Newton, 1993) which describes the abstract specification of several accounts of phrase structure, features, movement, modularity and *This work is supported by SION grant 612-317-420 of the Netherlands Organization for Scientific Research ~wo).",
"cite_spans": [
{
"start": 130,
"end": 144,
"text": "(Newton, 1993)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "parametrization so as to abstract away from the exact language being modelled. The specification language (ASL) used by Newton is a very powerful formalism. The class of specification formalisms we have in mind includes less complex, equational techniques such as ASF+SDF (Bergstra et al., 1989 ) (van Deursen, 1992 which can be applied in practice by very efficient execution as a term rewriting system.",
"cite_spans": [
{
"start": 272,
"end": 294,
"text": "(Bergstra et al., 1989",
"ref_id": "BIBREF2"
},
{
"start": 295,
"end": 315,
"text": ") (van Deursen, 1992",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "90",
"sec_num": null
},
{
"text": "Literal movement grammars are a straightforward extension of context-free grammars. The derivation trees of an LMG analysis can be easily transformed into trees belonging to a context-free backbone which gives way to treatment by formal specification systems. In order to obtain an efficient implementation, some restrictions on the general form of the formalism are necessary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "90",
"sec_num": null
},
{
"text": "Equational specification systems such as the ASF+SDF system operate through sets of equations over signatures that correspond to arbitrary forms of context-free grammar. An attempt at an equational specification of a grammar based on contextfree phrase structure rules augmented with feature constraints may be to use the context-free backbone as a signature, and then implement further analysis through equations over this signature. This seems entirely analoguous to the static semantics of a programming language: the language itself is context-free, and the static semantics are defined in terms of functions over the constructs of the language. In computer-science applications it is irrelevant whether the evaluation of these functions is carried out during the parsing phase (I-pass treatment), or afterwards (2-pass treatment). This is not a trivial property of computer languages: a computer language with static semantics restrictions is a context-sensitive sublanguage of a context-free language that is either unambiguous or has the finite ambiguity property: for any input sentence, there is only a finite number of possible context-free analyses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structural Context Sensitivity in Natural Language",
"sec_num": "1.1"
},
{
"text": "In section 1.3 we will show that due to phenomena of extraposition or discontinuous constituency exhibited by natural languages, a context-free backbone for a sufficiently rich fragment of natural language no longer has the property of finite ambiguity. Hence an initial stage of sentence processing cannot be based on a purely context-free analysis. The LMG formalism presented in this paper attempts to eliminate infinite ambiguity by providing an elementary, but adequate treatment of movement. Experience in practice suggests that after relocating displaced constituents, a further analysis based on feature unification no longer exploits unbounded structural embedding. Therefore it seems that after LMGanalysis, there is no need for unification, and further analysis can be carried out through functional matching techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structural Context Sensitivity in Natural Language",
"sec_num": "1.1"
},
{
"text": "We aim to present a grammar formalism that t~ is sufficiently powerful to model relevant fragments of natural language, at least large enough for simple applications such as an interface to a database system over a limited domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aims",
"sec_num": "1.2"
},
{
"text": "t, is sufficiently elementary to act as a front-end to computer-scientific tools that operate on contextfree languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aims",
"sec_num": "1.2"
},
{
"text": "t~ has a (sufficiently large) subclass that allows efficient implementation through standard (Earleybased) left-to-right parsing techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aims",
"sec_num": "1.2"
},
{
"text": "Three forms of movement in Dutch will be a leading thread throughout this paper. We will measure the adequacy of a grammar formalism in terms of its ability to give a unified account of these three phenomena.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Requirements",
"sec_num": "1.3"
},
{
"text": "Topicalization The (leftward) movement of the objects of the verb phrase, as in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Requirements",
"sec_num": "1.3"
},
{
"text": "(1) [Which book]/ did John forget to return el to the library?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Requirements",
"sec_num": "1.3"
},
{
"text": "Dutch sentence structure The surface order of sentences in Dutch takes three different forms: the finite verb appears inside the verb phrase in relative clauses; before the verb phrase in declarative clauses, and before the subject in questions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Requirements",
"sec_num": "1.3"
},
{
"text": "(2) ... dat Jan [vP Marie kuste ] (3) Jan kustei [vP Marie el ] (4) kustei Jan [,ca Marie ei ] ?",
"cite_spans": [
{
"start": 16,
"end": 33,
"text": "[vP Marie kuste ]",
"ref_id": null
},
{
"start": 49,
"end": 63,
"text": "[vP Marie el ]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Requirements",
"sec_num": "1.3"
},
{
"text": "We think of these three (surface) forms a s merely being different representations of the same (deep) structure, and will take this deep structure to be the form (2) that does not show movement. Note that this analysis (after relocation of the extraposed objects) is structurally equal to the corresponding English VP. The accounts of Dutch in this paper will consistently assign \"deep structures\" to sentences of Dutch which correspond to the underlying structure as it appears in English. Similar accounts can be given for other languages--so as to get a uniform treatment of a group of similar (European) languages such as German, French and Italian.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Requirements",
"sec_num": "1.3"
},
{
"text": "If we combine the above three analyses, the final analysis of (3) will become",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-serial dependencies",
"sec_num": null
},
{
"text": "Although this may look like an overcomplication, this abundant use of movement is essential in any uniform treatment of Dutch verb constructions. Hence it turns out to occur in practice that a verb phrase has no lexical expansion at all, when a sentence shows both object and verb extraposition. Therefore, as conjectured in the introduction, a 2-pass treatment of natural language based on a context-free backbone will in general fail-as there are infinitely many ways of building an empty verb phrase from a number of empty constituents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Jan kustei Mariej [w el ej ]",
"sec_num": null
},
{
"text": "There is evidence that suggests that the typical human processing of movement is to first locate displaced information (the filler), and then find the logical location (the trace), to substitute that information. It also seems that by and large, displaced information appears earlier than (or left of) its logical position, as in all examples given in the previous section. The typical unificationbased approach to such movement is to structurally analyse the displaced constituent, and use this analysed information in the treatment of the rest of the sentence. This method is called gap-threading; see (Alshawi, 1992 ). If we bear in mind that a filler is usually found to the left of the corresponding trace, it is worth taking into consideration to develop a way of deferring treatment of syntactical data. E.g. for example sentence 1 this means that upon finding the displaced constituent which book, we will not evaluate that constituent, but rather remember during the treatment of the remaining part of the sentence, that this data is still to be fitted into a logical place. This is not a new idea.",
"cite_spans": [
{
"start": 604,
"end": 618,
"text": "(Alshawi, 1992",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Definition and Examples",
"sec_num": "2"
},
{
"text": "A number of nonconcatenative grammar formalisms has been put forward, such as head-wrapping grammars (HG) (Pollard, 1984) , extraposition grammars (XG) (Pereira, 1981) . and tree adjoining grammars (TAG) (Kroch and Joshi, 1986) . A discussion of these formalisms as alternatives to the LMG formalism is given in section 4.",
"cite_spans": [
{
"start": 106,
"end": 121,
"text": "(Pollard, 1984)",
"ref_id": "BIBREF9"
},
{
"start": 152,
"end": 167,
"text": "(Pereira, 1981)",
"ref_id": null
},
{
"start": 204,
"end": 227,
"text": "(Kroch and Joshi, 1986)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Definition and Examples",
"sec_num": "2"
},
{
"text": "Lessons in parsing by hand in high school (e.g. in English or Latin classes) informally illustrate the purpose of literal movement grammars: as opposed to the traditional linguistic point of view that there is only one head which dominates a phrase, constituents of a sentence have several key components. A verb phrase for example not only has its finite verb, but also one or more objects. It is precisely these key components that can be subject to movement. Now when such a key component is found outside the consitituent it belongs to, the LMG formalism implements a simple mechanism to pass the component down the derivation tree, where it is picked up by the constituent that contains its trace.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition and Examples",
"sec_num": "2"
},
{
"text": "It is best to think of LMGs versus context-free grammars as a predicate version of the (propositional) paradigm of context-free grammars, in that nonterminals can have arguments. If we call the general class of such grammars predicate grammars, the distinguishing feature of LMG with respect to other predicate grammar formalisms such as indexed grammars I (Weir, 1988) (Aho, 1968) is the ability of binding or quantification in the right hand side of a phrase structure rule. 1 2.1 Definition We fix disjoint sets N, T, V of nonterminal symbols, terminal symbols and variables. We will write A, B, C... to denote nonterminal symbols, a, b, c... to denote terminal symbols, and x, y, z for variables. A sequence ala2. \u2022 \u2022 a,~ or a E T* is called a (terminal) word or string. We will use the symbols a, b, e for terminal words. (Note the use of bold face for sequences.) 1 2.2 Definition (term) A sequence tlt2...t~ or t E (V U T)* is called a term. If a term consists of variables only, we call it a vector and usually write x. 1 2.3 Definition (similarity type) A (partial) function # mapping N to the natural numbers is called a similarity type. 1 2.4 Definition (predicate) Let # be a similarity type, A E N and n = /~(A), and for 1 <_ i <_ n, let ti be a term. Then a predicate qa of type # is a terminal a (a terminal predicate) or a syntactical unit of the form A ( t l , t 2, \u2022 \u2022., t,~ ), called a nonterminal predicate. If all t~ = xl are vectors, we say that = A(a~l, ~e2,... , a~n) is apattern.",
"cite_spans": [
{
"start": 357,
"end": 369,
"text": "(Weir, 1988)",
"ref_id": null
},
{
"start": 370,
"end": 381,
"text": "(Aho, 1968)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Definition and Examples",
"sec_num": "2"
},
{
"text": "Informally, we think of the arguments of a nonterminal as terminal words. A predicate A(x) then stands for a constituent A where certain information with terminal yield x has been extraposed (i.e. found outside the constituent), and must hence be left out of the A constituent itself. 1 2.5 Definition (item) Let/z be a similarity type, ~p a predicate of type #, and t a term. Then an item of type # is a syntactical unit of one of the following forms:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition and Examples",
"sec_num": "2"
},
{
"text": "1 Indexed grammars are a weak form of monadic predicate grammar, as a nonterminal can have at most one argument.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition and Examples",
"sec_num": "2"
},
{
"text": "1. qo (a nonterminal or terminal predicate) 2. x:~ (a quantifier item) 3. ~/t (a slash item)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition and Examples",
"sec_num": "2"
},
{
"text": "We will use \u00a2, qJ to denote items, and a,/3, 3' to denote sequences of items. 1 2.6 Definition Let /z be a similarity type. A rewrite rule R of type/2 is a syntactical unit qo ---, qbl (I)2 \u2022 ' \u2022 qb,~ where qo is a pattern of type #, and for I < i < n, ~i is an item of type #.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition and Examples",
"sec_num": "2"
},
{
"text": "A literal movement grammar is a triple (#, S, P)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition and Examples",
"sec_num": "2"
},
{
"text": "where # is a similarity type, S E N, #(S) = 0 and P is a set of rewrite rules of type #.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition and Examples",
"sec_num": "2"
},
{
"text": "Items on the right hand side of a rule can either refer to variables, as in the following rule:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition and Examples",
"sec_num": "2"
},
{
"text": "A(x, yz) -~ BO/x a/y C(z)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition and Examples",
"sec_num": "2"
},
{
"text": "or bind new variables, as the first two items in A 0 ---, x:B 0 y:C(x) D(y).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition and Examples",
"sec_num": "2"
},
{
"text": "A slash item such as B()/x means that x should be used instead of the actual \"input\" to recognize the nonterminal predicate B(). I.e. the terminal word x should be recognized as B0, and the item BO/x itself will recognize the empty string. A quantifier item x:B() means that a constituent B() is recognized from the input, and the variable x, when used elsewhere in the rule, will stand for the part of the input recognized. illustrates more intuitively how displaced information (the two a symbols in this case) is 'moved back down' into the tree, until it gets 'consumed' by a slash item. It also shows how we can extract a context-free 'deep structure' for further analysis by, for example, formal specification tools: if we transform the tree, as shown in figure 3, by removing quantified (extraposed) data, and abstracting away from the parameters, we see that the grammar, in a sense, works by transforming the language anbnc n to the context-free language (ab)ncn. Figure 4 shows how we can derive a context free 'backbone grammar' from the original grammar.",
"cite_spans": [],
"ref_spans": [
{
"start": 972,
"end": 980,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Definition and Examples",
"sec_num": "2"
},
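{
"text": "To make the item mechanics concrete, here is a minimal Python recognizer sketch for the a^n b^n c^n example. The rule set is a hypothetical reconstruction in the spirit of example 2.8, not the paper's exact grammar: S() -> x:AS() T(x); AS() -> a AS() | e; T(yz) -> A()/y b T(z) c; T(e) -> e; A() -> a. The quantifier item binds x to the block of a's; each slash item A()/y then accounts for one displaced a per b..c pair while consuming no input.

def parse_AS(s, i):
    # Predicate AS() (assumed name): yield every j such that s[i:j] is in a*.
    yield i
    while i < len(s) and s[i] == 'a':
        i += 1
        yield i

def parse_T(s, i, arg):
    # T(arg): consume one b and one c per symbol of the bound vector arg.
    if arg == '':                       # T(e) -> e
        yield i
        return
    y, z = arg[0], arg[1:]              # split the vector yz
    if y != 'a':                        # slash item A()/y: y must derive from A()
        return
    if i < len(s) and s[i] == 'b':      # terminal item b
        for j in parse_T(s, i + 1, z):  # recurse on the remaining vector z
            if j < len(s) and s[j] == 'c':
                yield j + 1             # terminal item c

def recognize(s):
    # S() -> x:AS() T(x): try every binding of x to a prefix in a*.
    for j in parse_AS(s, 0):            # quantifier item x:AS()
        if any(k == len(s) for k in parse_T(s, j, s[0:j])):
            return True
    return False

assert recognize('abc') and recognize('aabbcc') and recognize('')
assert not recognize('aabbc') and not recognize('abcc') and not recognize('ba')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition and Examples",
"sec_num": "2"
},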
{
"text": "Dutch) The following LMG captures precisely the three basic types of extraposition defined in section 1.3: the three Dutch verb orders, topicalization and cross-serial verb-object dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example (cross-serial dependencies in",
"sec_num": "12.9"
},
{
"text": "S'(e) -~ dat NP VP(e,e) S'(e) * n: arguments: the first is used to fill verb traces, the second is treated as a list of noun phrases to which more noun phrases can be appended. A V' is similar to a VP except that it uses the list of noun phrases in its second argument to fill noun phrase traces rather than adding to it. Figure 5 shows how this grammar accepts the sentence",
"cite_spans": [],
"ref_spans": [
{
"start": 322,
"end": 330,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "s ~ s'(~)",
"sec_num": null
},
{
"text": "NP S'(n) S'(n) -~ v:V NP VP(v,n) S'(e) ~ NP v:V VP(v,e) re(v, n) -~ m:NP W(v,,~m) VP(v,,~) --, V'(v, n) V(c, ~) --, v7 V(v, ~) --, Vt/v V(\u00a2,n) -, VT NP/n V'(v, n) ~ VT/v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "s ~ s'(~)",
"sec_num": null
},
{
"text": "We see that it is analyzed as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Marie zag Fred Anne kussen.",
"sec_num": null
},
{
"text": "which as anticipated in section 1.3 has precisely the basic, context-free underlying structure of the corresponding English sentence Mary saw Fred kiss Anne ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Marie zag i Fredj Annek IV' ei ej [V, kussen e~ ]]",
"sec_num": null
},
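{
"text": "Since the grammar above is reconstructed from a garbled source, the following Python sketch is a hedged illustration rather than the paper's method: it implements only the declarative rule S'(e) -> NP v:V VP(v,e) over an assumed toy lexicon, threading the verb filler v and the pending noun-phrase list n through VP and V' as described above.

NP = {'Jan', 'Marie', 'Fred', 'Anne'}
VI = {'lacht'}                 # intransitive verbs (assumed lexicon)
VT = {'kuste', 'kussen'}       # transitive verbs
VR = {'zag', 'zien'}           # verb-raising verbs

def parse_Vp(w, i, v, n):
    # V'(v, n): yield end positions in the word list w, starting at i.
    if v is None and not n and i < len(w) and w[i] in VI:
        yield i + 1                                # V'(e,e) -> VI
    if v in VI and not n:
        yield i                                    # V'(v,e) -> VI/v
    if n:
        first, rest = n[0], n[1:]
        if not rest and first in NP:
            if v is None and i < len(w) and w[i] in VT:
                yield i + 1                        # V'(e,n) -> VT NP/n
            if v in VT:
                yield i                            # V'(v,n) -> VT/v NP/n
        if rest and first in NP:
            if v is None and i < len(w) and w[i] in VR:
                yield from parse_Vp(w, i + 1, None, rest)  # V'(e,nm) -> VR NP/n V'(e,m)
            if v in VR:
                yield from parse_Vp(w, i, None, rest)      # V'(v,nm) -> VR/v NP/n V'(e,m)

def parse_VP(w, i, v, n):
    # VP(v, n): append noun phrases to the pending list, or hand over to V'.
    if i < len(w) and w[i] in NP:                  # VP(v,n) -> m:NP VP(v,nm)
        yield from parse_VP(w, i + 1, v, n + (w[i],))
    yield from parse_Vp(w, i, v, n)                # VP(v,n) -> V'(v,n)

def recognize_declarative(w):
    # S'(e) -> NP v:V VP(v,e): the verb-second declarative order.
    V = VI | VT | VR
    return (len(w) >= 2 and w[0] in NP and w[1] in V
            and any(j == len(w) for j in parse_VP(w, 2, w[1], ())))

assert recognize_declarative('Marie zag Fred Anne kussen'.split())
assert not recognize_declarative('Marie zag Fred Anne'.split())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example (cross-serial dependencies in",
"sec_num": "2.9"
},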
{
"text": "The LMG formalism in its unrestricted form is shown to be Turing complete in (Groenink, 1995a) . But the grammars presented in this paper satisfy a number of vital properties that allow for efficient parsing techniques. Before building up material for a complexity result, notice the following proposition, which shows, using only part of the strength of the formalism, that the literal movement grammars are closed under intersection.",
"cite_spans": [
{
"start": 77,
"end": 94,
"text": "(Groenink, 1995a)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Properties",
"sec_num": "3"
},
{
"text": "1 3.1 Proposition (intersection) Given two literal movement grammars G1 ---(#1,$1, P1) and Gz = (tzz, $2, Pz) such that dom(#l) n dom(#2) = O, we can construct the grammar GI = (#1 U #z U {(S, 0)}, S, P1 U P2 U {R}) where we add the rule R: so -~ =S,O Sz()/x Clearly, GI recognizes precisely those sentences which are recognized by both G1 and Gz.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Properties",
"sec_num": "3"
},
{
"text": "We can use this knowledge in example 2.9 to restrict movement of verbs to verbs of finite morphology, by adding a nonterminal VFIN, replacing the quantifier items v:V that locate verb fillers with v:VFIN, where VFIN generates all finite verbs. Any extraposed verb will then be required to be in the intersection of VFIN and one of the verb types VI, VT or VR, reducing possible ambiguity and improving the efficiency of left-to-right recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Properties",
"sec_num": "3"
},
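{
"text": "A sketch of how the rule in proposition 3.1 behaves: the quantifier item x:S1() binds x to the material recognized as S1(), and since the slash item S2()/x recognizes the empty string, only the binding that covers the whole input can succeed, so the rule amounts to a conjunction of the two recognizers. The regular-expression stand-ins below are for illustration only, not part of the paper.

import re

def in_L1(s):
    # Stand-in for G1's start symbol S1: the language a^n b^n c*.
    m = re.fullmatch(r'(a*)(b*)(c*)', s)
    return bool(m) and len(m.group(1)) == len(m.group(2))

def in_L2(s):
    # Stand-in for G2's start symbol S2: the language a* b^n c^n.
    m = re.fullmatch(r'(a*)(b*)(c*)', s)
    return bool(m) and len(m.group(2)) == len(m.group(3))

def in_intersection(s):
    # Rule R: S() -> x:S1() S2()/x, with x necessarily bound to all of s.
    return in_L1(s) and in_L2(s)

assert in_intersection('aabbcc')       # a^n b^n c^n lies in both languages
assert not in_intersection('aabbc')    # in L1 (c* is free) but not in L2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Properties",
"sec_num": "3"
},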
{
"text": "The following properties allow us to define restrictions of the LMG formalism whose recognition problem has a polynomial time complexity. 1 3.2 Definition (non-combinatorial) An LMG is non-combinatorial if every argument of a nonterminal on the RHS of a rule is a single variable (i.e. we do not allow composite terms within predicates). If G is a non-combinatorial LMG, then any terminal string occurring (either as a sequence of items or inside a predicate) in a full G-derivation is a substring of the derived string. The grammar of example 2.8 is noncombinatorial; the grammar of example 2.9 is not (the offending rule is the first VP production). For example, the following rule is left binding: DO/y E(u,z) but these ones are not:",
"cite_spans": [
{
"start": 701,
"end": 712,
"text": "DO/y E(u,z)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Properties",
"sec_num": "3"
},
{
"text": "A(xyz, v) ~ u:B(v) C(v)/x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Properties",
"sec_num": "3"
},
{
"text": "because in (a), x is bound right of its use; in (b), the item A(x) is not of the form qo/x and in (e), the variables in the vector zyz occur in the wrong order (zzy).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(a) g(y) ---* C(x) x:D(y) (b) A(xy) ---* A(x) B(y) (c) A(xyz)~ A(z) BO/x CO/y",
"sec_num": null
},
{
"text": "Ifa grammar satisfies condition 1, then for any derivable string, there is a derivation such that the modus ponens and elimination rules are always applied to the leftmost item that is not a terminal. Furthermore, the :E rule can be simplified to :E G The proof tree in example 2.8 (figure 1) is an example of such a derivation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(a) g(y) ---* C(x) x:D(y) (b) A(xy) ---* A(x) B(y) (c) A(xyz)~ A(z) BO/x CO/y",
"sec_num": null
},
{
"text": "Condition 2 eliminates the nondeterminism in finding the right instantiation for rules with multiple variable patterns in their LHS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(a) g(y) ---* C(x) x:D(y) (b) A(xy) ---* A(x) B(y) (c) A(xyz)~ A(z) BO/x CO/y",
"sec_num": null
},
{
"text": "Both grammars from section 2 are left-binding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(a) g(y) ---* C(x) x:D(y) (b) A(xy) ---* A(x) B(y) (c) A(xyz)~ A(z) BO/x CO/y",
"sec_num": null
},
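{
"text": "The non-combinatorial property of definition 3.2 is purely syntactic, so it can be checked mechanically. Below is a small checker over a toy rule encoding of our own devising (tuples for items, lists of symbols for predicate arguments); the failing example is the first VP production of example 2.9, which the text above singles out as the offending rule.

VARIABLES = {'x', 'y', 'z', 'u', 'v', 'n', 'm'}    # assumed variable alphabet

def non_combinatorial(rule):
    # A rule is (lhs, rhs_items); items are ('pred', name, [args]),
    # ('quant', var, item) or ('slash', item, term), args being symbol lists.
    def args_of(item):
        if item[0] == 'pred':
            return item[2]
        if item[0] == 'quant':
            return args_of(item[2])
        if item[0] == 'slash':
            return args_of(item[1])    # the slashed term is not an argument
        return []
    _, rhs = rule
    return all(len(arg) == 1 and arg[0] in VARIABLES
               for item in rhs for arg in args_of(item))

# A(x, yz) -> B()/x a/y C(z): every RHS predicate argument is one variable.
ok_rule = (('A', [['x'], ['y', 'z']]),
           [('slash', ('pred', 'B', []), ['x']),
            ('slash', ('pred', 'a', []), ['y']),
            ('pred', 'C', [['z']])])
# VP(v, n) -> m:NP VP(v, nm): the composite term nm violates the property.
bad_rule = (('VP', [['v'], ['n']]),
            [('quant', 'm', ('pred', 'NP', [])),
             ('pred', 'VP', [['v'], ['n', 'm']])])
assert non_combinatorial(ok_rule) and not non_combinatorial(bad_rule)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Properties",
"sec_num": "3"
},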
{
"text": "An LMG G is left-recursive if there exists an instantiated nonterminal G predicate qa such that there is a derivation of ~o ~ ~pc~ for any sequence of items c~.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.4 Definition (left-recursive)",
"sec_num": "1"
},
{
"text": "The following two rules show that left-recursion in LMG is not always immediately apparent: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.4 Definition (left-recursive)",
"sec_num": "1"
},
{
"text": "A(y) ~ BO/Y A(e)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.4 Definition (left-recursive)",
"sec_num": "1"
},
{
"text": "We now show that the recognition problem for an arbitrary left-binding, non-combinatorial LMG has a polynomial worst-case time complexity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "/E",
"sec_num": null
},
{
"text": "1 3.5 Theorem (polynomial complexity) Let G be a LMG of similarity type # that is noncombinatorial, left binding and not left-recursive. Let m be the maximum number of items on the right hand side of rules in G, and let p be the greatest arity of predicates occurring in G. Then the worst case time complexity of the recognition problem for G does not exceed O(IGIm(1 + p)nl+'~+2P), where n is the size of the input string ala2\" \u2022 .a,~.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "/E",
"sec_num": null
},
{
"text": "Proof (sketch) We adopt the memoizing recursive descent algorithm presented in (Leermakers, 1993) . As G is not left-binding, the terminal words associated with variables occurring in the grammar rules can be fully determined while proceeding through sentence and rules from left to right. Because the grammar is non-combinatorial, the terminal words substituted in the argument positions of a nonterminal are always substrings of the input sentence, and can hence be represented as a pair of integers. The recursive descent algorithm recursively computes set-valued recognition functions of the form:",
"cite_spans": [
{
"start": 79,
"end": 97,
"text": "(Leermakers, 1993)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "/E",
"sec_num": null
},
{
"text": "[~o](i) = {jl~o ~ ai+l\"\" .aj}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "/E",
"sec_num": null
},
{
"text": "where instead of a nonterminal as in the contextfree case, qo is any instantiated nonterminal predicate A (bl,..., b,~) . As bl,...,b,~ i, (tl, r,) ,..., r,,)) = {jlA (ah+ 1...a,,,,...,at.+l...ar~) ai+ 1 \u2022 \u2022. aj } Where # = #(A) < p. The arguments i, ll,...,l~, and rl,. \u2022., r t, are integer numbers ranging from 0 to n -1 and 1 to n respectively. Once a result of such a recognition function has been computed, it is stored in a place where it can be retrieved in one atomic operation. The number of such results to be stored is O(n) for each possible nonterminal and each possible combination of, at most 1 + 2p, arguments; so the total space complexity is O(IGIn2+2p). Much of the extra complication w.r.t, the contextfree case is coped with at compile time; for example, if there is one rule for nonterminal A: [] | 3.6 Remark If all nonterminals in the grammar are nullary (p = 0), then the complexity result coincides with the values found for the context-free recursive descent algorithm (Leermakers, 1993) . Nullary LMG includes the context-free case, but still allows movement local to a rule; the closure result 3.1 still holds for this class of grammars. As all we can do with binding and slashing local to a rule is intersection, the nullary LMGs must be precisely the closure of the context-free grammars under finite intersection.",
"cite_spans": [
{
"start": 106,
"end": 119,
"text": "(bl,..., b,~)",
"ref_id": null
},
{
"start": 122,
"end": 135,
"text": "As bl,...,b,~",
"ref_id": null
},
{
"start": 167,
"end": 197,
"text": "(ah+ 1...a,,,,...,at.+l...ar~)",
"ref_id": null
},
{
"start": 995,
"end": 1013,
"text": "(Leermakers, 1993)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 136,
"end": 147,
"text": "i, (tl, r,)",
"ref_id": null
}
],
"eq_spans": [],
"section": "/E",
"sec_num": null
},
{
"text": "A(x,,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "/E",
"sec_num": null
},
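{
"text": "A runnable sketch of the memoizing recursive-descent scheme for the nullary (context-free) case, to make the shape of the memo table concrete; the toy grammar is assumed, not taken from the paper. An instantiated LMG predicate would extend the memo key (sym, i) with up to 2p substring boundaries, which is where the extra n^{2p} factors in the time and space bounds of theorem 3.5 come from.

from functools import lru_cache

def make_recognizer(grammar, s):
    # grammar maps each nonterminal to a list of right-hand sides (item lists).
    @lru_cache(maxsize=None)
    def ends(sym, i):
        # [sym](i) = the set of all j such that sym derives s[i:j].
        if sym not in grammar:                     # terminal item
            return frozenset({i + 1}) if i < len(s) and s[i] == sym else frozenset()
        result = set()
        for rhs in grammar[sym]:
            frontier = {i}
            for item in rhs:                       # left-to-right over the items
                frontier = {j for k in frontier for j in ends(item, k)}
            result |= frontier
        return frozenset(result)
    return ends

GRAMMAR = {'S': [['a', 'S', 'b'], []]}             # S -> a S b | e (assumed)
assert 4 in make_recognizer(GRAMMAR, 'aabb')('S', 0)
assert 3 not in make_recognizer(GRAMMAR, 'aab')('S', 0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "/E",
"sec_num": null
},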
{
"text": "These results can be extended to more efficient algorithms which can cope with left-recursive grammars such as memoizing recursive ascent (Leermakers, 1993) . A very simple improvement is obtained by bilinearizing the grammar (which is possible if it is left binding), giving a worst case complexity of o(Ic[(1 + p)n3+2,).",
"cite_spans": [
{
"start": 138,
"end": 156,
"text": "(Leermakers, 1993)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "/E",
"sec_num": null
},
{
"text": "A natural question to ask is whether the LMG formalism (for the purpose of embedding in equational specification systems, or eliminating unification as a stage of sentence processing) really has an advantage over existing mildly context-sensitive approaches to movement. Other non-concatenative formalisms are head-wrapping grammars (HG) (Pollard, 1984) , extraposition grammars (XG) (Pereira, 1981) and various exotic forms of tree adjoining grammar (Kroch and Joshi, 1986) . For overviews see (Weir, 1988) , (Vijay-Shanker et al., 1986) and (van Noord, 1993) . The most applicable of these formalisms for our purposes seem to be HG and XG, as both of these show good results in modeling movement phenomena, and both are similar in appearance to context-free grammars; as in LMG, a context-free grammar has literally the same representation when expressed in HG or XG. Hence it is to be expected that incorporating these approaches into a system based on a context-free front-end will not require a radical change of perspective.",
"cite_spans": [
{
"start": 338,
"end": 353,
"text": "(Pollard, 1984)",
"ref_id": "BIBREF9"
},
{
"start": 384,
"end": 399,
"text": "(Pereira, 1981)",
"ref_id": null
},
{
"start": 451,
"end": 474,
"text": "(Kroch and Joshi, 1986)",
"ref_id": "BIBREF5"
},
{
"start": 495,
"end": 507,
"text": "(Weir, 1988)",
"ref_id": null
},
{
"start": 510,
"end": 538,
"text": "(Vijay-Shanker et al., 1986)",
"ref_id": "BIBREF12"
},
{
"start": 543,
"end": 560,
"text": "(van Noord, 1993)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other Approaches to Separation of Movement",
"sec_num": "4"
},
{
"text": "A notion that plays an important role in various forms of Linguistic theory is that of a head. Although there is a great variation in the form and function of heads in different theories, in general we might say that the head of a constituent is the key component of that constituent. The head grammar formalism, introduced by Pollard in (Pollard, 1984) divides a constituent into three components: a left context, a terminal head and a right context. In a HG rewrite rule these parts of a constituent can be addressed separately when building a constituent from a number of subconstituents. An accurate and elegant account of Dutch crossserial dependencies using HG is sketched in (Pollard, 1984) . However, we have not been able to construct head grammars that are able to model verb movement, cross-serial dependencies and topicalization at the same time. For every type of constituent, there is only one head, and hence only one element of the constituent that can be the subject to movement. 3",
"cite_spans": [
{
"start": 338,
"end": 353,
"text": "(Pollard, 1984)",
"ref_id": "BIBREF9"
},
{
"start": 682,
"end": 697,
"text": "(Pollard, 1984)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Head Grammars",
"sec_num": "4.1"
},
{
"text": "Whereas head grammars provide for an account of verb fronting and cross-serial dependencies, Pereira, 3However, a straightforward extension of head grammars defined in (Groenink, 1995a) which makes use of arbitrary tupies, rather than dividing constituents into three components, is (1) capable of representing the three target phenomena of Dutch all at once and (2) weakly equivalent to a (strongly limiting) restriction of literal movement grammars. Head grammars and their generalizations, being linear contextfree rewriting systems (Weir, 1988) , have been shown to have polynomial complexity.",
"cite_spans": [
{
"start": 168,
"end": 185,
"text": "(Groenink, 1995a)",
"ref_id": "BIBREF3"
},
{
"start": 536,
"end": 548,
"text": "(Weir, 1988)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extraposition Grammars",
"sec_num": "4.2"
},
{
"text": "introducing extraposition grammars in (Pereira, 1981) , is focused on displacement of noun phrases in English. Extraposition grammars are in appearance very similar to context-free grammars, but allow for larger patterns on the left hand side of PS rules. This makes it possible to allow a topicalized NP only if somewhere to its right there is an unfilled trace:",
"cite_spans": [
{
"start": 38,
"end": 53,
"text": "(Pereira, 1981)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extraposition Grammars",
"sec_num": "4.2"
},
{
"text": "S --~ Topic S Topic . . . XP --* NP",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraposition Grammars",
"sec_num": "4.2"
},
{
"text": "While XG allows for elegant accounts of cross-serial dependencies and topicalization, it seems again hard to simultaneously account for verb and noun movement, especially if the bracketing constraint introduced in (Pereira, 1981) , which requires that XG derivation graphs have a planar representation, is not relaxed. 4",
"cite_spans": [
{
"start": 214,
"end": 229,
"text": "(Pereira, 1981)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extraposition Grammars",
"sec_num": "4.2"
},
{
"text": "Furthermore, the practical application of XG seems to be a problem. First, it is not obvious how we should interpret XG derivation graphs for further analysis. Second, as Pereira points out, it is nontrivial to make the connection between the XG formalism and standard (e.g. Earley-based) parsing strategies so as to obtain truly efficient implementations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraposition Grammars",
"sec_num": "4.2"
},
{
"text": "We have presented the LMG formalism, examples of its application, and a complexity result for a constrained subclass of the formalism. Example 2.9 shows that an LMG can give an elegant account of movement phenomena. The complexity result 3.5 is primarily intended to give an indication of how the recognition problem for LMG relates to that for arbitrary context free grammars. It should be noted that the result in this paper only applies to non-combinatorial LMGs, excluding for instance the grammar of example 2.9 as presented here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "There are other formalisms (HG and XG) which provide sensible accounts of the three movement phenomena sketched in section 1.3, but altogether do not seem to be able to model all phenomena at once. In (Groenink, 1995b) we give a more detailed analysis of what is and is not possible in these formalisms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "1. The present proof of polynomial complexity does not cover a very large class of literal movement grammars. It is to be expected that larger, Turing complete, classes will be formally intractable but behave reasonably in practice. It is worthwile to look at possible practical implementations for larger classes of LMGs, and investigate the (theoretical and practical) performance of these systems on various representative grammars.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": null
},
{
"text": "2. Efficient treatment of LMG strongly depends on the left-binding property of the grammars, which 4Theoretically simultaneous treatment of the three movement phenomena is not impossible in XG (a technique similar topit-stopping in GB allows one to wrap extrapositions over natural bracketing islands), but grammars and derivations become very hard to understand. seems to restrict grammars to treatment of leftward extraposition. In reality, a smaller class of rightward movement phenomena will also need to be treated. It is shown in (Groenink, 1995b ) that these can easily be circumvented in left-binding LMG, by introducing artificial, \"parasitic\" extraposition.",
"cite_spans": [
{
"start": 536,
"end": 552,
"text": "(Groenink, 1995b",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": null
}
],
"back_matter": [
{
"text": "I would like to thank Jasper Kamperman, Ren6 Leermakers, Jan van Eijck and Eelco Visser for their enthousiasm, for carefully reading this paper, and for many general and technical comments that have contributed a great deal to its consistency and readability. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Indexed Grammars -an Extension to Context-free grammars",
"authors": [
{
"first": "A",
"middle": [
"V"
],
"last": "Aho",
"suffix": ""
}
],
"year": 1968,
"venue": "JACM",
"volume": "15",
"issue": "",
"pages": "647--671",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A.V. Aho. 1968. Indexed Grammars -an Extension to Context-free grammars. JACM, 15:647-671.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Core Language Engine",
"authors": [],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiyan Alshawi, editor. 1992. The Core Language Engine. MIT Press.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Algebraic Specification",
"authors": [
{
"first": "J",
"middle": [
"A"
],
"last": "Bergstra",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Heering",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Klint",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.A. Bergstra, J. Heering, and P. Klint, editors. 1989. Algebraic Specification. ACM Press Frontier Se- ries. The ACM Press in co-operation with Addison- Wesley.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Accounts of Movement--a Formal Comparison",
"authors": [
{
"first": "V",
"middle": [],
"last": "Annius",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Groenink",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annius V. Groenink. 1995a. Accounts of Movement--a Formal Comparison. Unpublished manuscript.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Mechanisms for Movement",
"authors": [
{
"first": "V",
"middle": [],
"last": "Annius",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Groenink",
"suffix": ""
}
],
"year": 1994,
"venue": "Linguistics In the Netherlands) meeting",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annius V. Groenink. 1995b. Mechanisms for Move- ment. Paper presented at the 5th CLIN (Compu- tational Linguistics In the Netherlands) meeting, November 1994.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Analyzing Extraposition in a TAG",
"authors": [
{
"first": "A",
"middle": [
"S"
],
"last": "Kroch",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
}
],
"year": 1986,
"venue": "Syntax and Semantics: Discontinuous Constituents",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A.S. Kroch and A.K. Joshi. 1986. Analyzing Extra- position in a TAG. In Ojeda Huck, editor, Syntax and Semantics: Discontinuous Constituents. Acad. Press, New York.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Ren6 Leermakers. 1993. The Functional Treatment of Parsing",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ren6 Leermakers. 1993. The Functional Treatment of Parsing. Kluwer, The Netherlands.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Formal Specification of Grammar",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Newton",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Newton. 1993. Formal Specification of Grammar. Ph.D. thesis, University of Edinburgh.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Generalized Phrase Structure Grammars, Head Grammars, and Natural Language",
"authors": [
{
"first": "Carl",
"middle": [
"J"
],
"last": "Pollard",
"suffix": ""
}
],
"year": 1984,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carl J. Pollard. 1984. Generalized Phrase Struc- ture Grammars, Head Grammars, and Natural Lan- guage. Ph.D. thesis, Standford University.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Published in revised form in Van Deursen, Executable Language Definitions--Case Studies and Origin Tracking",
"authors": [
{
"first": "Arie",
"middle": [],
"last": "Van Deursen",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arie van Deursen. 1992. Specification and Genera- tion of a A-calculus environment. Technical report, CWI, Amsterdam. Published in revised form in Van Deursen, Executable Language Definitions--Case Studies and Origin Tracking, PhD Thesis, Univer- sity of Amsterdam, 1994.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Reversibility in Natural Language",
"authors": [
{
"first": "",
"middle": [],
"last": "Gertjan Van Noord",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gertjan van Noord. 1993. Reversibility in Natural Language. Ph.D. thesis, Rijksuniversiteit Gronin- gen.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Tree Adjoining and Head Wrapping",
"authors": [
{
"first": "K",
"middle": [],
"last": "Vijay-Shanker",
"suffix": ""
},
{
"first": "David",
"middle": [
"J"
],
"last": "Weir",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
}
],
"year": 1986,
"venue": "11th int. conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Vijay-Shanker, David J. Weir, and A.K. Joshi. 1986. Tree Adjoining and Head Wrapping. In 11th int. conference on Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "..., x,~) ~ ~1(I)2 ,.. ~rn be a rewrite rule, then an instantiation of R is the syntactical entity obtained by substituting for each i and for each variable x E xl a terminal word a~. A grammar derives the string a iff S 0 =~ a where G ===~ is a relation between predicates and sequences of items defined inductively by the following axioms and inference rules: 2 G a~a G qo ==* a when qo --* a is an instantiation of a rule in G qo ~ /3 A(tl,...,t,~) 7 A(tl,...,t,~) that [a/x] in the :E rule is not an item, but stands for the substitution of a for z.(a'~b'~c '*) The following, very elementary LMG recognizes the trans-context free language anbnc n : Informal tree analysis.",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "NP/n 12(\u00a2,nm) ---, VR NP/n 12(e,m) V(v, nm) ---* VR/v gP/n V(e,m' has one argument which is used, if nonempty, to fill a noun phrase trace. A VP has two",
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"uris": null,
"text": "Backbone grammar.",
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"uris": null,
"text": "indicated in figure 5 by terminal words in bold face. Note that arbitrary verbs are recognized by a quanti-Derivation of a Dutch sentence fier item v:V, and only when, further down the tree, a trace is filled with such a verb in items such as VR/v, its subcategorization types VI, VT and VR start playing a role.",
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"uris": null,
"text": "(left-binding) An LMG G is leftbinding when 1. W.r.t. argument positions, an item in the RHS of a rule only depends on variables bound in items to its left. 2. For any vector x ~ \u2022 \u2022 \u2022 x,~ of n > 1 variables on the LHS, each of xl upto xn-~ occurs in exactly one item, which is of the form qo/xl. Furthermore, for each 1 < I < k < n the item referring to xz appears left of any item referring to x~.",
"type_str": "figure"
},
"FIGREF6": {
"num": null,
"uris": null,
"text": "x2) ~ x3:Ba(xj) B2() B3(x3)/x2then the code for[g](i, (ll, r,), (12, r2)) will beresult := empty for kl E [B1](i, (/1, rl)result return resultThe extra effort remaining at parse time is in copying arguments and an occasional extra comparison (the if statement in the example), taking rn(1 + p) steps everytime the innermost for statement is reached, and the fact that not O(n), but O(n l+2p) argumentvalue pairs need to be memoized. Merging the results in a RHS sequence of rn items can be done in O(m(1 + p)n ~-1) time. The result is a set of O(n) size. As there are at most O(IGln 1+2p) results to be computed, the overall time complexity of the algorithm is O(IGIm(1 + p)nl+m+2P).",
"type_str": "figure"
},
"TABREF1": {
"content": "<table><tr><td>formulate this as</td></tr><tr><td>[A](</td></tr></table>",
"html": null,
"num": null,
"type_str": "table",
"text": "are continuous substrings of the input sentence ala2 \u2022 \u2022 \u2022 an, we can re-"
}
}
}
}