{ "paper_id": "P19-1008", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:21:43.304780Z" }, "title": "Semantic expressive capacity with bounded memory", "authors": [ { "first": "Antoine", "middle": [], "last": "Venant", "suffix": "", "affiliation": { "laboratory": "", "institution": "Saarland University", "location": {} }, "email": "" }, { "first": "Alexander", "middle": [], "last": "Koller", "suffix": "", "affiliation": { "laboratory": "", "institution": "Saarland University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We investigate the capacity of mechanisms for compositional semantic parsing to describe relations between sentences and semantic representations. We prove that in order to represent certain relations, mechanisms which are syntactically projective must be able to remember an unbounded number of locations in the semantic representations, where nonprojective mechanisms need not. This is the first result of this kind, and has consequences both for grammar-based and for neural systems.", "pdf_parse": { "paper_id": "P19-1008", "_pdf_hash": "", "abstract": [ { "text": "We investigate the capacity of mechanisms for compositional semantic parsing to describe relations between sentences and semantic representations. We prove that in order to represent certain relations, mechanisms which are syntactically projective must be able to remember an unbounded number of locations in the semantic representations, where nonprojective mechanisms need not. This is the first result of this kind, and has consequences both for grammar-based and for neural systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Semantic parsers which translate a sentence into a semantic representation compositionally must recursively compute a partial semantic representation for each node of a syntax tree. 
These partial semantic representations usually contain placeholders at which arguments and modifiers are attached in later composition steps. Approaches to semantic parsing differ in whether they assume that the number of placeholders is bounded or not. Lambda calculus (Montague, 1974; Blackburn and Bos, 2005) assumes that the number of placeholders (lambda-bound variables) can grow unboundedly with the length and complexity of the sentence. By contrast, many methods which are based on unification (Copestake et al., 2001) or graph merging (Courcelle and Engelfriet, 2012; Chiang et al., 2013) assume a fixed set of placeholders, i.e. the number of placeholders is bounded.", "cite_spans": [ { "start": 452, "end": 468, "text": "(Montague, 1974;", "ref_id": "BIBREF26" }, { "start": 469, "end": 493, "text": "Blackburn and Bos, 2005)", "ref_id": "BIBREF6" }, { "start": 685, "end": 709, "text": "(Copestake et al., 2001)", "ref_id": "BIBREF10" }, { "start": 727, "end": 759, "text": "(Courcelle and Engelfriet, 2012;", "ref_id": "BIBREF11" }, { "start": 760, "end": 780, "text": "Chiang et al., 2013)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Methods based on bounded placeholders are popular both in the design of hand-written grammars (Bender et al., 2002) and in semantic parsing for graphs (Peng et al., 2015; Groschwitz et al., 2018) . However, it is not clear that all relations between language and semantic representations can be expressed with a bounded number of placeholders. The situation is particularly challenging when one insists that the compositional analysis is projective in the sense that each composition step must combine adjacent substrings of the input sentence. In this case, it may be impossible to combine a semantic predicate with a distant argument immediately, forcing the composition mechanism to use up a placeholder to remember the argument position. 
If many predicates have distant arguments, this may exceed the bounded \"memory capacity\" of the compositional mechanism.", "cite_spans": [ { "start": 94, "end": 115, "text": "(Bender et al., 2002)", "ref_id": "BIBREF5" }, { "start": 151, "end": 170, "text": "(Peng et al., 2015;", "ref_id": null }, { "start": 171, "end": 195, "text": "Groschwitz et al., 2018)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we show that there are relations between sentences and semantic representations which can be described by compositional mechanisms which are bounded and non-projective, but not by ones which are bounded and projective. To our knowledge, this is the first result on expressive capacity with respect to semantics -in contrast to the extensive literature on the expressive capacity of mechanisms which describe just the string languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "More precisely, we prove that tree-adjoining grammars can describe string-graph relations using the HR graph algebra (Courcelle and Engelfriet, 2012) with two sources (bounded, nonprojective) which cannot be described using linear monadic context-free tree grammars and the HR algebra with k sources, for any fixed k (bounded, projective). This result is especially surprising because TAG and linear monadic CFTGs describe the same string languages; thus the difference lies only in the projectivity of the syntactic analysis.", "cite_spans": [ { "start": 117, "end": 149, "text": "(Courcelle and Engelfriet, 2012)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We further prove that given certain assumptions on the alignment between tokens in the sentence and edges in the graph, no generative device for projective syntax trees can simulate TAG with two sources. 
This has practical consequences for the design of transition-based semantic parsers (whether grammar-based or neural).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Plan of the paper. We will first explain the linguistic background in Section 2 and lay the formal foundations in Section 3. We will then prove the reduced semantic expressive capacity for aligned generative devices in Section 4 and for CFTGs in Section 5. We conclude with a discussion of the practical impact of our findings (Section 6).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Principle of Compositionality, which is widely accepted in theoretical semantics, states that the meaning of a natural-language expression can be determined from the meanings of its immediate subexpressions and the way in which the subexpressions were combined. Implementations of this principle usually assume that there is some sort of syntax tree which describes the grammatical structure of a sentence. A semantic representation is then calculated by bottom-up evaluation of this syntax tree, starting with semantic representations of the individual words and then recursively computing a semantic representation for each node from those of its children.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compositional semantic construction", "sec_num": "2" }, { "text": "Mechanisms for semantic composition will usually keep track of places at which semantic arguments are still missing or modifiers can still be attached. For instance, when combining the semantic representations for \"John\" and \"sleeps\" in a derivation of \"John sleeps\", the \"subject\" argument of \"sleeps\" is filled with the meaning of \"John\". 
The compositional mechanism therefore assigns a semantic representation to \"sleeps\" which has an unfilled placeholder for the subject.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compositional mechanisms", "sec_num": "2.1" }, { "text": "The exact nature of the placeholder depends on the compositional mechanism. There are two major classes in the literature. Lambda-style compositional mechanisms use a list of placeholders. For instance, lambda calculus, as used e.g. in Montague Grammar (Montague, 1974) , CCG (Steedman, 2001 ), or linear-logic-based approaches in LFG (Dalrymple et al., 1995) might represent \"sleeps\" as \u03bbx.sleep(x). Placeholders are lambda-bound variables (here: x).", "cite_spans": [ { "start": 253, "end": 269, "text": "(Montague, 1974)", "ref_id": "BIBREF26" }, { "start": 276, "end": 291, "text": "(Steedman, 2001", "ref_id": null }, { "start": 335, "end": 359, "text": "(Dalrymple et al., 1995)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Compositional mechanisms", "sec_num": "2.1" }, { "text": "By contrast, unification-style compositional mechanisms use names for placeholders. For example, a simplified form of the Semantic Algebra used in HPSG (Copestake et al., 2001) might represent \"sleeps\" as a feature structure with a named placeholder (e.g. subj). 
Named placeholders are also used in the HR algebra (Courcelle and Engelfriet, 2012) and its derivatives, like Hyperedge Replacement Grammars (Drewes et al., 1997; Chiang et al., 2013) and the AM algebra (Groschwitz et al., 2018) .", "cite_spans": [ { "start": 152, "end": 176, "text": "(Copestake et al., 2001)", "ref_id": "BIBREF10" }, { "start": 291, "end": 323, "text": "(Courcelle and Engelfriet, 2012)", "ref_id": "BIBREF11" }, { "start": 381, "end": 402, "text": "(Drewes et al., 1997;", "ref_id": "BIBREF15" }, { "start": 403, "end": 423, "text": "Chiang et al., 2013)", "ref_id": "BIBREF8" }, { "start": 443, "end": 468, "text": "(Groschwitz et al., 2018)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Compositional mechanisms", "sec_num": "2.1" }, { "text": "A fundamental difference between lambda-style and unification-style compositional mechanisms is in their \"memory capacity\": the number of placeholders in a lambda-style mechanism can grow unboundedly with the length and complexity of the sentence (e.g. by functional composition of lambda terms), whereas in a unification-style mechanism, the placeholders are fixed in advance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Boundedness and projectivity", "sec_num": "2.2" }, { "text": "There is an informal intuition that unbounded memory is needed especially when an unbounded number of semantic predicates can be far away from their arguments in the sentence, and the syntax formalism does not allow these predicates to combine immediately with the arguments. For illustration, consider the two derivations of the following Swiss German sentence from Shieber (1985) in Fig. 1: (1) ( The lexical semantic representation of each verb comes with a placeholder for its object (o 1 , o 2 , o 3 ) and, in the case of \"l\u00f6nd\" and \"h\u00e4lfed\", also one for its verb complement (v). The derivation in Fig. 
1a immediately combines each verb with its complements; the placeholders that are used at each node never grow beyond the ones the verbs originally had. However, this derivation combines verbs with nouns which are not adjacent in the string, which is not allowed in many grammar formalisms. If we limit ourselves to combining only adjacent substrings (projectively, see Fig. 1b ), we must remember the placeholders for all the verbs at the same time if we want to obtain the correct predicate-argument structure. Thus, the number of placeholders grows with the length of the sentence; this is only possible with a lambda-style compositional mechanism.", "cite_spans": [ { "start": 375, "end": 381, "text": "(1985)", "ref_id": null }, { "start": 397, "end": 398, "text": "(", "ref_id": null } ], "ref_spans": [ { "start": 385, "end": 392, "text": "Fig. 1:", "ref_id": "FIGREF0" }, { "start": 604, "end": 611, "text": "Fig. 1a", "ref_id": "FIGREF0" }, { "start": 979, "end": 986, "text": "Fig. 1b", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Boundedness and projectivity", "sec_num": "2.2" }, { "text": "There is scattered evidence in the literature for this tension between bounded memory and projectivity. Chiang et al. (2013) report (of a compositional mechanism based on the HR algebra, unification-style) that a bounded number of placeholders suffices to derive the graphs in the AMR version of the Geoquery corpus, but Groschwitz et al. (2018) find that this requires non-projective derivations in 37% of the AMRBank training data (Banarescu et al., 2013) . Approaches to semantic construction with treeadjoining grammar either perform semantic composition along the TAG derivation tree using unification (non-projective, unification-style) (Gardent and Kallmeyer, 2003) or along the TAG derived tree using linear logic (projective, lambda-style) (Frank and van Genabith, 2001 ). 
Bender (2008) discusses the challenges involved in modeling the predicate-argument structure of a language with very free word order (Wambaya) with projective syntax. While the Wambaya noun phrase does not seem to require the projective grammar to collect unbounded numbers of unfilled arguments as in Fig. 1b , Bender notes that her projective analysis still requires a more flexible handling of semantic arguments than the HPSG Semantic Algebra (unification-style) supports.", "cite_spans": [ { "start": 104, "end": 124, "text": "Chiang et al. (2013)", "ref_id": "BIBREF8" }, { "start": 321, "end": 345, "text": "Groschwitz et al. (2018)", "ref_id": "BIBREF19" }, { "start": 433, "end": 457, "text": "(Banarescu et al., 2013)", "ref_id": "BIBREF3" }, { "start": 643, "end": 672, "text": "(Gardent and Kallmeyer, 2003)", "ref_id": "BIBREF18" }, { "start": 749, "end": 778, "text": "(Frank and van Genabith, 2001", "ref_id": "BIBREF17" }, { "start": 782, "end": 795, "text": "Bender (2008)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 1084, "end": 1091, "text": "Fig. 1b", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Boundedness and projectivity", "sec_num": "2.2" }, { "text": "In this paper, we define a notion of semantic expressive capacity and prove the first formal results about the relationship between projectivity and bounded memory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Boundedness and projectivity", "sec_num": "2.2" }, { "text": "Let N 0 = {0, 1, . . .} be the nonnegative integers. A signature is a finite set \u03a3 of function symbols f , each of which has been assigned a nonnegative integer called its rank. We write \u03a3 n for the symbols of rank n. Given a signature \u03a3, we say that all constants a \u2208 \u03a3 0 are trees over \u03a3; further, if f \u2208 \u03a3 n and t 1 , . . . 
, t n are trees over \u03a3, then", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formal background", "sec_num": "3" }, { "text": "f (t 1 , . . . , t n )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formal background", "sec_num": "3" }, { "text": "is also a tree. We write T \u03a3 for the set of all trees over \u03a3. We define the height ht(t) of a tree t = f (t 1 , . . . , t n ) to be 1 + max ht(t i ), and ht(c) = 1 for c \u2208 \u03a3 0 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formal background", "sec_num": "3" }, { "text": "Let X / \u2208 \u03a3, and let \u03a3 X = \u03a3 \u222a {X} (with X as a constant of rank 0). Then we call a tree C \u2208 T \u03a3 X a context if it contains exactly one occurrence of X, and write C \u03a3 for the set of all contexts. A context can be seen as a tree with exactly one hole. If t \u2208 T \u03a3 , we write C[t] for the tree in T \u03a3 that is obtained by replacing X with t.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formal background", "sec_num": "3" }, { "text": "Given a string w \u2208 W * , we write |w| a for the number of times that a \u2208 W occurs in w.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formal background", "sec_num": "3" }, { "text": "We take a very general view on how semantic representations for strings are constructed compositionally. To this end, we define a notion of \"grammar\" which encompasses more devices for describing languages than just traditional grammars, such as transition-based parsers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammars for strings and trees", "sec_num": "3.1" }, { "text": "We say that a tree grammar G over the signature \u03a3 is any finite device that defines a language L(G) \u2286 T \u03a3 . 
For instance, regular tree grammars (Comon et al., 2007) are tree grammars, and context-free grammars can also be seen as tree grammars defining the language of parse trees.", "cite_spans": [ { "start": 144, "end": 164, "text": "(Comon et al., 2007)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Grammars for strings and trees", "sec_num": "3.1" }, { "text": "We say that a string grammar G = (G, yd) over the signature \u03a3 and the alphabet W is a pair consisting of a tree grammar G over \u03a3 and a yield function yd : T \u03a3 \u2192 W * which maps trees to strings over W (Weir, 1988) . A string grammar defines a language L(G) = {yd(t) | t \u2208 L(G)} \u2286 W * . We call the trees t \u2208 L(G) derivations.", "cite_spans": [ { "start": 200, "end": 212, "text": "(Weir, 1988)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Grammars for strings and trees", "sec_num": "3.1" }, { "text": "A particularly common yield function is the function yd pr , defined as yd pr (f (t 1 , . . . , t n )) = yd pr (t 1 ) \u2022 . . . \u2022 yd pr (t n ) if n > 0 and yd pr (c) = c if c has rank 0. This yield function simply concatenates the words at the leaves of t. Applied to the phrase-structure tree t in Fig. 2c , yd pr (t) is the Swiss German sentence in (1). Context-free grammars can be characterized as string grammars that combine a regular tree grammar with yd pr . By contrast, we can model tree-adjoining grammars (TAG, Joshi and Schabes, 1997) by choosing a tree grammar G that describes derivation trees as in Fig. 2b . The yd function could then substitute and adjoin the elementary trees as specified by the derivation tree (see Fig. 2a ) and then read off the words from the resulting derived tree in Fig. 2c .", "cite_spans": [], "ref_spans": [ { "start": 297, "end": 304, "text": "Fig. 2c", "ref_id": "FIGREF1" }, { "start": 613, "end": 620, "text": "Fig. 2b", "ref_id": "FIGREF1" }, { "start": 734, "end": 741, "text": "Fig. 
2a", "ref_id": "FIGREF1" }, { "start": 807, "end": 814, "text": "Fig. 2c", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Grammars for strings and trees", "sec_num": "3.1" }, { "text": "We say that a string grammar is projective if its yield function is yd pr . Context-free grammars as construed above are clearly projective. Tree-adjoining grammars are not projective: For instance, the yield of the subtree below \"aastriiche\" in Fig. 2b consists of the two separate strings \"es Huus\" and \"aastriiche\", which are then wrapped around \"l\u00f6nd h\u00e4lfed\" further up in the derivation. If the grammar is projective, then for any context C there exist two strings left(C) and right(C) such that for any tree t, yd(", "cite_spans": [], "ref_spans": [ { "start": 246, "end": 253, "text": "Fig. 2b", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Grammars for strings and trees", "sec_num": "3.1" }, { "text": "C[t]) = left(C)\u2022yd(t)\u2022 right(C).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammars for strings and trees", "sec_num": "3.1" }, { "text": "Below, we will talk about linear monadic context-free tree grammars (LM-CFTGs; Rounds (1969), Comon et al. (2007) ). An LM-CFTG is a quadruple G = (N, \u03a3, R, S), where N is a ranked signature of nonterminals of rank at most one, \u03a3 is a ranked signature of terminals, S \u2208 N 0 is the start symbol, and R is a finite set of production rules of one of the forms", "cite_spans": [ { "start": 94, "end": 113, "text": "Comon et al. 
(2007)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Context-free tree languages", "sec_num": "3.2" }, { "text": "\u2022 A \u2192 t with A \u2208 N 0 and t \u2208 T V \u2022 A(t) \u2192 C[t] with A \u2208 N 1 and C \u2208 C V , where V = N \u222a \u03a3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-free tree languages", "sec_num": "3.2" }, { "text": "The trees in L(G) \u2286 T \u03a3 are obtained by expanding S with production rules. Nonterminals of rank zero are expanded by replacing them with trees. Nonterminals of rank one must have exactly one child in the tree; they are replaced by a context, and the variable in the context is replaced by the subtree below the child.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-free tree languages", "sec_num": "3.2" }, { "text": "We can extend an LM-CFTG G to a string grammar G = (G, yd pr ). Then LM-CFTG is weakly equivalent to TAG (Kepser and Rogers, 2011) ; that is, LM-CFTG and TAG generate the same class of string languages. Intuitively, the weakly equivalent LM-CFTG directly describes the language of derived trees of the TAG grammar (cf. Fig. 2c ). Notice that LM-CFTG is projective.", "cite_spans": [ { "start": 105, "end": 130, "text": "(Kepser and Rogers, 2011)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 319, "end": 326, "text": "Fig. 2c", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Context-free tree languages", "sec_num": "3.2" }, { "text": "Below, we will make crucial use of the following pumping lemma for LM-CFTLs:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-free tree languages", "sec_num": "3.2" }, { "text": "Lemma 1 (Maibaum (1978) ). Let G be an LM-CFTG. 
There exists a constant p \u2208 N 0 such that for any t \u2208 L(G) with ht(t) > p, there exists a decomposition", "cite_spans": [ { "start": 8, "end": 23, "text": "(Maibaum (1978)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Context-free tree languages", "sec_num": "3.2" }, { "text": "t = C 1 [C 2 [C 3 [C 4 [t 5 ]]]] with ht(C 2 [C 3 [C 4 [X]]]) \u2264 p and ht(C 2 ) + ht(C 4 ) > 0 such that for any i \u2208 N 0 , C 1 [v i [t 5 ]] \u2208 L(G), where we let v 0 = C 3 and v i+1 = C 2 [v i [C 4 [X]]].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-free tree languages", "sec_num": "3.2" }, { "text": "We call p the pumping height of L(G).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-free tree languages", "sec_num": "3.2" }, { "text": "The specific unification-style semantic algebra we use in this paper is the HR algebra (Courcelle and Engelfriet, 2012) . This choice encompasses much of the recent literature on compositional semantic parsing with graphs, based e.g. on Hyperedge Replacement Grammars (Chiang et al., 2013; Peng et al., 2015; Koller, 2015) and the AM algebra (Groschwitz et al., 2018) . The values of the HR algebra are s-graphs: directed, edge-labeled graphs, some of whose nodes may be designated as sources, written in angle brackets. S-graphs can be combined using the forget, rename, and merge operations. Rename ren a\u2192b changes an a-source node into a b-source node. Forget f a makes it so the a-source node in the s-graph is no longer a source node. Merge || combines two s-graphs while unifying nodes with the same source annotation. 
For instance, the s-graphs rt ARG1 \u2212\u2212\u2192 o and o", "cite_spans": [ { "start": 87, "end": 119, "text": "(Courcelle and Engelfriet, 2012)", "ref_id": "BIBREF11" }, { "start": 268, "end": 289, "text": "(Chiang et al., 2013;", "ref_id": "BIBREF8" }, { "start": 290, "end": 308, "text": "Peng et al., 2015;", "ref_id": null }, { "start": 309, "end": 322, "text": "Koller, 2015)", "ref_id": "BIBREF23" }, { "start": 342, "end": 367, "text": "(Groschwitz et al., 2018)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "The HR algebra", "sec_num": "3.3" }, { "text": "Hans are merged into", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The HR algebra", "sec_num": "3.3" }, { "text": "rt ARG0 \u2212\u2212\u2192 o", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The HR algebra", "sec_num": "3.3" }, { "text": "Hans . The HR algebra uses operation symbols from a ranked signature \u2206 to describe s-graphs syntactically. \u2206 contains symbols for merge (rank 2) and the forget and rename operations (rank 1). It also contains constants (symbols of rank 0) which denote s-graphs of the form a f \u2212 \u2192 b and a f , where a, b are sources and f is an edge label. Terms t \u2208 T \u2206 over this signature evaluate recursively to s-graphs t , as usual in an algebra. Each instance of the HR algebra uses a fixed, finite set of k source names which can be used in the constant s-graphs and the rename and forget operations. The class of graphs which can be expressed as values of terms over the algebra increases with k. We write H k for the HR algebra with k source names (and some set of edge labels).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The HR algebra", "sec_num": "3.3" }, { "text": "Let G be an s-graph, and let G be a subgraph of G, i.e. a subset of its edges. We call a node a boundary node of G if it is incident both to an edge in G and to an edge that is not in G . 
For instance, the s-graph in Fig. 2e is a subgraph of the one in Fig. 2d ; the boundary nodes are drawn shaded in (d). The following lemma holds: Lemma 2. Let G = C[t] be an s-graph, and let G be a subgraph of G such that the s-graph t contains the same edges as G . Then every boundary node in G is a source in t .", "cite_spans": [], "ref_spans": [ { "start": 217, "end": 224, "text": "Fig. 2e", "ref_id": "FIGREF1" }, { "start": 253, "end": 260, "text": "Fig. 2d", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "The HR algebra", "sec_num": "3.3" }, { "text": "Finally, we extend string grammars to compositionally relate strings with semantic representations. Let G = (G, yd) be a string grammar. The tree grammar G generates a language L(G) \u2286 T \u03a3 of trees. We will map each tree t \u2208 L(G) into a term h(t) over some algebra A over a signature \u2206 using a linear tree homomorphism (LTH) h : T \u03a3 \u2192 T \u2206 (Comon et al., 2007) , i.e. by compositional bottom-up evaluation. This defines a relation between strings and values of A:", "cite_spans": [ { "start": 338, "end": 358, "text": "(Comon et al., 2007)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Grammars with semantic interpretations", "sec_num": "3.4" }, { "text": "REL(G, h, A) = {(yd(t), h(t) A ) | t \u2208 L(G)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammars with semantic interpretations", "sec_num": "3.4" }, { "text": "For instance, A could be some HR algebra H k ; then REL(G, h, H k ) will be a binary relation between strings and s-graphs. 
In this case, we abbreviate h(t) as graph(t).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammars with semantic interpretations", "sec_num": "3.4" }, { "text": "If we look at an entire class G of string grammars and a fixed algebra, this defines a class of such relations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammars with semantic interpretations", "sec_num": "3.4" }, { "text": "L(G, A) = {REL(G, h, A) | G \u2208 G, h LTH }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammars with semantic interpretations", "sec_num": "3.4" }, { "text": "In the example in Fig. 2 , we can define a linear homomorphism h to map the derivation tree t in (b) to a term h(t) which evaluates to the s-graph shown in (d). At the top of this term, the s-graphs at the \"chind\" and \"h\u00e4lfed\" (f,g) nodes are combined into (d) by h(l\u00f6nd):", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 24, "text": "Fig. 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Grammars with semantic interpretations", "sec_num": "3.4" }, { "text": "h(l\u00f6nd) = rt let || f o ( rt ARG1 \u2212\u2212\u2192 o || ren rt\u2192o (G (f ) )) || f o ( rt ARG2 \u2212\u2212\u2192 o || ren rt\u2192o (G (g) ))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammars with semantic interpretations", "sec_num": "3.4" }, { "text": "This non-projective derivation produces the s-graph in (d) using only two sources, rt and o. By contrast, a homomorphic interpretation of the projective tree (c) has to use at least four sources, as the intermediate result in (e) illustrates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammars with semantic interpretations", "sec_num": "3.4" }, { "text": "We will now investigate the ability of projective grammar formalisms (G, H k ) to express L(TAG, H 2 ). 
We will define a relation CSD \u2208 L(TAG, H 2 ) and prove that CSD cannot be generated by projective grammar formalisms with bounded k. We show this first for arbitrary projective G, under certain assumptions on the alignment of words and graph edges. In Section 5, we drop these assumptions, but focus on G = LM-CFTG.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projective cross-serial dependencies", "sec_num": "4" }, { "text": "To construct CSD, consider the string language", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The relation CSD", "sec_num": "4.1" }, { "text": "CSD s = {A n B m C n D m | m, n \u2265 1}, where A = {a k a k k | k \u2265 0},", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The relation CSD", "sec_num": "4.1" }, { "text": "and analogously for B, C, D. An example string in CSD s is a aa b b b c c d d. Note that k can be chosen independently for each segment. Every string w \u2208 CSD s can be uniquely described by m, n, and a sequence K(w) = (K (a) , K (b) , K (c) , K (d) ) of numbers specifying the k's used in each segment, where K (a) , K (c) each contain n numbers and K (b) , K (d) contain m numbers. In the example, we have n = 1, m = 2, and", "cite_spans": [ { "start": 228, "end": 231, "text": "(b)", "ref_id": null }, { "start": 244, "end": 247, "text": "(d)", "ref_id": null }, { "start": 318, "end": 321, "text": "(c)", "ref_id": null }, { "start": 359, "end": 362, "text": "(d)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The relation CSD", "sec_num": "4.1" }, { "text": "K(w) = ((2), (1, 0), (1), (0, 0)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The relation CSD", "sec_num": "4.1" }, { "text": "We associate a graph G w with each string w \u2208 CSD s by the construction illustrated in Fig. 3 . 
For each 1 \u2264 i \u2264 n, we define the i-th a-block to be the graph consisting of nodes u c \u2212 \u2192 v with a further outgoing a-edge from u. In addition, u connects to a linear chain of K c-edges. G w consists of a linear chain of the n a-blocks, followed by the m b-blocks (defined analogously). We let CSD = {(w, G w ) | w \u2208 CSD s }. Note that CSD is a more intricate version of the cross-serial dependency language. CSD can be generated by a TAG grammar along the lines of the one from Section 3.4, using a HR algebra with two sources; thus CSD \u2208 L(TAG, H 2 ).", "cite_spans": [], "ref_spans": [ { "start": 87, "end": 93, "text": "Fig. 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "The relation CSD", "sec_num": "4.1" }, { "text": "The characteristic feature of CSD is that edges which are close together in the graph (e.g. the a and c edge in an a-block) correspond to symbols that can be distant in the string (e.g. a and c tokens). Projective grammars cannot combine predicates (a) and arguments (c) directly because of their distance in the string; intuitively, they must keep track of either the c's or the a's for a long time, which cannot be done with a bounded k.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CSD with bounded blocks", "sec_num": "4.2" }, { "text": "Before we go into exploiting this intuition, we first note that its correctness depends on the details of the construction of CSD, in particular the ability to select arbitrary and independent K (x) for the different x \u2208 {a, b, c, d}. Consider the derivation t on the left of Fig. 4 with its projective yield abbcdd; this is the case ((0), (0, 0), (0), (0, 0)) of CSD, corresponding to the CSD graph G 1 shown in Fig. 4 (a) . 
We can map t to this graph by applying the following linear tree homomorphism h into H_2:", "cite_spans": [], "ref_spans": [ { "start": 276, "end": 282, "text": "Fig. 4", "ref_id": "FIGREF4" }, { "start": 413, "end": 423, "text": "Fig. 4 (a)", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "CSD with bounded blocks", "sec_num": "4.2" }, { "text": "h(*1) = f_s(x_1 || ren_{rt\u2192s}(x_2)), h(*0) = x_1, h(b) = d \u2190\u2212 b_rt \u2212\u2192 s, h(b!) = d \u2190\u2212 b_rt, h(a) = c \u2190\u2212 a_rt \u2212\u2192 s", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CSD with bounded blocks", "sec_num": "4.2" }, { "text": "A derivation of the form *0(t_1, t_2) evaluates to the same graph as t_1; the graph value of t_2 is ignored. Thus if we assume that the subtree of t for cdd evaluates to some arbitrary graph G_0, the complete derivation t evaluates to G_1. Some intermediate results are shown on the right of Fig. 4 . If we let CSD_0 be the subset of CSD where all K^(x) are zero, we can generalize this construction into an LM-CFTG which generates CSD_0. Thus, CSD_0 can be generated by a projective grammar that is interpreted into H_2. But note that the derivation in Fig. 4 is unnatural in that the symbols in the string are not generated by the same derivation steps that generate the graph nodes that intuitively correspond to them; for instance, the graphs generated for the d tokens are completely irrelevant. Below, we prevent unnatural constructions like this in two ways. We will first assume that string symbols and graph nodes must be aligned (Thm. 1). Then we will assume that the K^(x) can be arbitrary, which allows us to drop the alignment assumption (Thm. 2).", "cite_spans": [], "ref_spans": [ { "start": 299, "end": 305, "text": "Fig. 4", "ref_id": "FIGREF4" }, { "start": 563, "end": 569, "text": "Fig. 
4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "CSD with bounded blocks", "sec_num": "4.2" }, { "text": "Let R \u2287 CSD_0 be some relation containing at least the string-graph pairs of CSD_0, e.g. CSD itself. Assume that R is generated by a projective grammar (G, h) with G = (G, yd_pr) and a fixed number k of sources, i.e. we have R = REL(G, h, H_k). We will derive a contradiction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "k-distant trees", "sec_num": "4.3" }, { "text": "Given a pair (w, G_w) \u2208 R, we say that two edges e, f in G_w are equivalent, e \u2261 f, if they belong to the same block. We call a derivation tree t \u2208 T = L(G) k-distant if t has a subtree t\u2032 such that we can find k edges e_1, . . . , e_k \u2208 graph(t\u2032) with e_i \u2262 e_j for all i \u2260 j, and k further edges e\u2032_1, . . . , e\u2032_k \u2208 G_w \\ graph(t\u2032) such that e_i \u2261 e\u2032_i for all i. For such trees, we have the following lemma. Lemma 3. A k-distant tree has a subtree t\u2032 such that graph(t\u2032) has at least k sources. Proof. Let BK_i be the i-th block in G_w; we let 1 \u2264 i \u2264 m + n and do not distinguish between a- and b-blocks. Let t\u2032 be the subtree of t claimed by the definition of k-distant trees. For each i, let E_i = BK_i \u2229 graph(t\u2032) be the edges in the i-th block generated by t\u2032, and let", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "k-distant trees", "sec_num": "4.3" }, { "text": "E\u2032_i = BK_i \\ E_i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "k-distant trees", "sec_num": "4.3" }, { "text": "By definition, E_i and E\u2032_i are both non-empty for at least k blocks. Each of these blocks is weakly connected, and thus contains at least one node u_i which is incident both to an edge in E_i and to an edge in E\u2032_i. This node is a boundary node of graph(t\u2032). Because u_1, . . .
, u_k are all distinct, it follows from Lemma 2 that graph(t\u2032) has at least k sources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "k-distant trees", "sec_num": "4.3" }, { "text": "We also note the following lemma about derivations of projective string grammars, which follows from the inability of projective grammars to combine distant tokens. We write Sep = {a/c, c/a, b/d, d/b}. Lemma 4. Let G = (G, yd) be a projective string grammar. For any r \u2208 N_0 there exists s \u2208 N_0 such that any t \u2208 L(G) with yd(t) \u2208 a^* b^s c^s d^* has a subtree t\u2032 such that yd(t\u2032) contains r occurrences of x and no occurrences of y, for some x/y \u2208 Sep.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "k-distant trees", "sec_num": "4.3" }, { "text": "A consequence of Lemma 3 is that if certain string-graph pairs in CSD_0 can only be expressed with k+1-distant trees, then R (which contains these pairs as well) is not in L(G, H_k), because H_k only admits k sources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projectivity and alignments", "sec_num": "4.4" }, { "text": "However, as we saw in Section 4.2, pairs in CSD_0 can have unexpected projective derivations which make do with a low number of sources. So let us assume for now that the string grammar G and the tree homomorphism h produce tokens and edge labels that fit together. Let us call G, h aligned if for all constants c \u2208 \u03a3_0, graph(c) is a graph containing a single edge with label yd(c). The derivation in Fig. 4 cannot be generated by an aligned grammar because the graph for the token b contains a d-edge. We write L^\u2194(G, A) = {REL(G, h, A) | G \u2208 G and G, h aligned} for the class of string-semantics relations which can be generated with aligned grammars.", "cite_spans": [], "ref_spans": [ { "start": 402, "end": 408, "text": "Fig. 
4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Projectivity and alignments", "sec_num": "4.4" }, { "text": "Under this assumption, it is easy to see that no relation including CSD_0 (hence, in particular, CSD itself) can be expressed with a projective grammar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projectivity and alignments", "sec_num": "4.4" }, { "text": "Theorem 1. Let G be any class of projective string grammars and R \u2287 CSD_0. For any k, R \u2209 L^\u2194(G, H_k).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projectivity and alignments", "sec_num": "4.4" }, { "text": "Proof. Assume that there is a G = (G, yd_pr) \u2208 G and an LTH h such that R = REL(G, h, H_k). Given k, choose s \u2208 N_0 such that every tree t \u2208 T = L(G) with yd(t) = a^s b^s c^s d^s has a subtree t\u2032 such that yd(t\u2032) contains k + 1 occurrences of x and no occurrences of y, for some x/y \u2208 Sep. Such an s exists according to Lemma 4. We can choose t such that (yd(t), graph(t)) \u2208 CSD_0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projectivity and alignments", "sec_num": "4.4" }, { "text": "Because G, h are aligned, graph(t\u2032) contains no y-edge and at least k + 1 x-edges. Each of these x-edges is non-equivalent to all the others, and equivalent to a y-edge in graph(t) \\ graph(t\u2032), so t is k+1-distant. It follows from Lemma 3 that t has a subtree t\u2033 such that graph(t\u2033) has at least k + 1 sources, in contradiction to the assumption that G, h uses only k sources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projectivity and alignments", "sec_num": "4.4" }, { "text": "Thm. 1 is a powerful result which shows that CSD cannot be generated by any device for generating projective derivations with bounded placeholder memory, provided that tokens and edges can be assumed to be aligned. We will now drop this assumption and prove that CSD cannot be generated with a fixed set of placeholders by an LM-CFTG, regardless of alignment.
The basic proof idea is to enforce a weak form of alignment through the interaction of the pumping lemma with very long x\u0304-chains. The result is remarkable in that LM-CFTG and TAG are weakly equivalent; they differ only in whether they must derive the strings projectively or not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expressive capacity of LM-CFTG", "sec_num": "5" }, { "text": "Theorem 2. CSD \u2209 L(LM-CFTG, H_k), for any k.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expressive capacity of LM-CFTG", "sec_num": "5" }, { "text": "Assume that CSD = REL(G, h, H_k), for some k, with G = (G, yd) an LM-CFTG. Deriving a contradiction from this assumption hinges on a somewhat technical concept of asynchronous derivations, which have to do with how the nodes generating edge labels such as a are distributed over a derivation tree. We prove that all asynchronous derivations of certain elements of CSD are distant (Lemma 5), and that all LM-CFTG derivations of CSD are asynchronous (Lemma 6), which proves Thm. 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Asynchronous derivations", "sec_num": "5.1" }, { "text": "In what follows, let T = L(G). For any tree or context t and symbol x, we write n^t_x as a shorthand for | yd(t) |_x, e^t_x for the number of x-edges in graph(t), and m^t_x for the maximum length of a string in x^* which is also a substring of yd(t).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Asynchronous derivations", "sec_num": "5.1" }, { "text": "Definition 1 (x, y, l-asynchronous derivation). 
Let x/y \u2208 Sep, l > 0, and t \u2208 T. We call t an x, y, l-asynchronous derivation iff there is a decomposition t = C[t\u2032] such that e^{t\u2032}_{y\u0304} \u2265 n^t_{y\u0304} \u2212 n^t_x l \u2212 m^t_{y\u0304} and e^{t\u2032}_{x\u0304} \u2264 n^t_x l + m^t_{x\u0304} (l + 1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Asynchronous derivations", "sec_num": "5.1" }, { "text": "We call the pair (C, t\u2032) an x, y, l-asynchronous split of t.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Asynchronous derivations", "sec_num": "5.1" }, { "text": "Lemma 5. For any k, l > 0, there is a pair o_{k,l} = (w_{k,l}, G_{k,l}) \u2208 CSD such that every x, y, l-asynchronous t with o_{k,l} = (yd(t), graph(t)) is k-distant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Asynchronous derivations", "sec_num": "5.1" }, { "text": "Proof. For x \u2208 {a, b, c, d} and m \u2208 N_0, let x^(m) denote the word \u27e8^m x\u0304^m \u27e9^m. Let r = s = 3l + k + 2 and o_{k,l} = (w_{k,l}, G_{k,l}) be the unique element of CSD such that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Asynchronous derivations", "sec_num": "5.1" }, { "text": "w_{k,l} = (a a^(s))^r (b b^(s))^r (c c^(s))^r (d d^(s))^r.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Asynchronous derivations", "sec_num": "5.1" }, { "text": "Let t be an a, c, l-asynchronous derivation of o_{k,l}; the other choices of x/y \u2208 Sep are analogous.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Asynchronous derivations", "sec_num": "5.1" }, { "text": "By definition, we can split t = C[t\u2032] such that graph(t\u2032) has at most q_a = lr + (l+1)s = (2l+1)s a\u0304-edges and at least q_c = rs \u2212 rl \u2212 s = (2l+k+1)s c\u0304-edges. Notice first that graph(t\u2032) contains at most 2l + 1 different complete a-blocks of G_{k,l}, because each a-block contains s a\u0304-edges.
Having 2l + 2 of them would require (2l + 2)s a\u0304-edges, which is more than the q_a a\u0304-edges that graph(t\u2032) can contain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Asynchronous derivations", "sec_num": "5.1" }, { "text": "Next, consider 2l + k distinct a-blocks of G_{k,l}. These blocks contain a total of (2l + k)s < (2l + k + 1)s = q_c c\u0304-edges. Hence, the c\u0304-edges of graph(t\u2032) cannot be contained within only 2l + k distinct blocks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Asynchronous derivations", "sec_num": "5.1" }, { "text": "So we can find at least 2l + k + 1 c\u0304-edges in graph(t\u2032) which are pairwise non-equivalent. There are at least k edges among these which are equivalent to an edge in G_{k,l} \\ graph(t\u2032), because graph(t\u2032) contains at most 2l + 1 complete a-blocks of G_{k,l}. Thus, t is k-distant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Asynchronous derivations", "sec_num": "5.1" }, { "text": "So far, we have not used the assumption that T is an LM-CFTL. We will now exploit the pumping lemma to show that all derivation trees of an LM-CFTG for CSD must be asynchronous.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM-CFTG derivations are asynchronous", "sec_num": "5.2" }, { "text": "Lemma 6. If T is an LM-CFTL, then there exists l_0 \u2208 N_0 such that for every t \u2208 T, there exists x/y \u2208 Sep such that t is x, y, l_0-asynchronous.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM-CFTG derivations are asynchronous", "sec_num": "5.2" }, { "text": "We prove this lemma by appealing to a class of derivation trees in which predicate and argument tokens are generated in separate parts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM-CFTG derivations are asynchronous", "sec_num": "5.2" }, { "text": "Definition 2 (x, y, l-separated derivation). Let x/y \u2208 Sep. 
A tree t \u2208 T_\u2206 is x, y, l-separated if we can write t = C_x[C_0[t_y]] such that | yd(t_y) |_x = 0, | yd(C_x) |_y = 0, and | yd(C_0) |_x \u2264 l. The triple (C_x, C_0, t_y) is called an l-separation of t. We call an l-separation minimal if there is no other l-separation of t with a smaller C_0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM-CFTG derivations are asynchronous", "sec_num": "5.2" }, { "text": "Intuitively, we can use the pumping lemma to systematically remove some contexts from a t \u2208 T. From the shape of CSD, we can conclude certain alignments between the strings and graphs generated by these contexts, and establish bounds on the number of x\u0304- and y\u0304-edges generated by the lower part of a separated derivation. The full proof is in the appendix; we sketch the main ideas here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM-CFTG derivations are asynchronous", "sec_num": "5.2" }, { "text": "Let p denote the pumping height of T. There is a maximal number of string tokens and edges that a context of height at most p can generate under the given yield function and homomorphism. We call this number l_0 in the rest of the proof. Lemma 7. For t \u2208 T, let r^t_{y\u0304} be the length of the maximal substring of yd(t) consisting only of y\u0304-tokens and containing the rightmost occurrence of y\u0304 in yd(t). If t is x, y, l_0-separated, the following two bounds obtain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM-CFTG derivations are asynchronous", "sec_num": "5.2" }, { "text": "There exists a minimal x, y, l_0-separation D_x[D_0[t_y]] of t such that, letting t_0 = D_0[t_y], e^{t_0}_{y\u0304} \u2265 n^t_{y\u0304} \u2212 n^t_x l_0 \u2212 r^t_{y\u0304}.
Moreover, for any x, y, l_0-separation t = E_x[E_0[t^1_y]], letting t_1 = E_0[t^1_y], we have e^{t_1}_{x\u0304} \u2264 n^{t_1}_{x\u0304} + n^t_x l_0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM-CFTG derivations are asynchronous", "sec_num": "5.2" }, { "text": "Proof (sketch). The two statements must be proved in separate inductions on the height of t, although these mostly follow similar steps. We therefore focus here only on the crucial parts of the (slightly trickier) bound on e^{t_0}_{y\u0304}. Let D_x[D_0[t_y]] be a minimal x, y, l_0-separation of t and t_0 = D_0[t_y].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM-CFTG derivations are asynchronous", "sec_num": "5.2" }, { "text": "Base case: If ht(t) \u2264 p, we have n^t_{y\u0304} \u2264 l_0. We also have n^t_x > 0, so n^t_{y\u0304} \u2212 n^t_x l_0 \u2212 r^t_{y\u0304} \u2264 0 \u2264 e^{t_0}_{y\u0304}. Induction step: If ht(t) > p, we apply Lemma 1 to t to yield a decomposition t = C_1[C_2[C_3[C_4[t_5]]]], where t\u2032 = C_1[C_3[t_5]] \u2208 T, ht(t\u2032) < ht(t) and ht(C_2[C_3[C_4]]) \u2264 p.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM-CFTG derivations are asynchronous", "sec_num": "5.2" }, { "text": "We first observe that t\u2032 is x, y, l_0-separated. 
By induction, there exists a minimal separation t\u2032 = C_x[C_0[t_y]], with t\u2032_0 = C_0[t_y] validating the bound on e^{t\u2032_0}_{y\u0304}. Because of pumping considerations, we need to distinguish only three configurations of C_2 and C_4. We present only the most difficult case here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM-CFTG derivations are asynchronous", "sec_num": "5.2" }, { "text": "In this case, C_2 and C_4 generate only one kind of bar symbol, y\u0304, and brackets. One needs to examine all possible ways in which C_2, C_4 and t\u2032_0 may overlap. We detail the reasoning in the case where t\u2032_0 does not overlap with C_2 or C_4. Then projectivity of the yield and the definition of CSD impose that the y\u0304-tokens generated by C_2[C_4] contribute to the rightmost y\u0304-chain, i.e. r^t_{y\u0304} = r^{t\u2032}_{y\u0304} + n^{C_2[C_4]}_{y\u0304}. Hence e^{t_0}_{y\u0304} \u2265 e^{t\u2032_0}_{y\u0304} \u2265 n^{t\u2032}_{y\u0304} \u2212 n^{t\u2032}_x l_0 \u2212 r^{t\u2032}_{y\u0304} = n^t_{y\u0304} \u2212 n^{C_2[C_4]}_{y\u0304} \u2212 n^t_x l_0 \u2212 r^t_{y\u0304} + n^{C_2[C_4]}_{y\u0304} = n^t_{y\u0304} \u2212 n^t_x l_0 \u2212 r^t_{y\u0304}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM-CFTG derivations are asynchronous", "sec_num": "5.2" }, { "text": "Lemma 8. For any t \u2208 T, if t is x, y, l_0-separated then t is x, y, l_0-asynchronous. Proof. 
By Lemma 7, there is a minimal x, y, l_0-separation t = D_x[D_0[t_y]] such that, for t_0 = D_0[t_y], the bound on e^{t_0}_{y\u0304} and the bound on e^{t_0}_{x\u0304} both obtain. Observe that r^t_{y\u0304} \u2264 m^t_{y\u0304} by definition, and since t_0 generates at most l_0 x-tokens, by projectivity it generates at most (l_0 + 1) m^t_{x\u0304} x\u0304-tokens (one sequence of at most m^t_{x\u0304} of them between each occurrence of x and the next, plus possibly one before the first and one after the last). Thus t is x, y, l_0-asynchronous. Lemma 9. For any t \u2208 T, t is x, y, l_0-separated for some x/y \u2208 Sep.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM-CFTG derivations are asynchronous", "sec_num": "5.2" }, { "text": "Proof (sketch). 
The proof proceeds by induction on the height of t.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM-CFTG derivations are asynchronous", "sec_num": "5.2" }, { "text": "If ht(t) \u2264 p, then | yd(t) |_z \u2264 l_0 for any z \u2208 {a, b, c, d}, hence t is trivially x, y, l_0-separated for some x/y \u2208 Sep.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM-CFTG derivations are asynchronous", "sec_num": "5.2" }, { "text": "If ht(t) > p, Lemma 1 yields a decomposition", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM-CFTG derivations are asynchronous", "sec_num": "5.2" }, { "text": "t = C_1[C_2[C_3[C_4[t_5]]]], where t\u2032 = C_1[C_3[t_5]] \u2208 T, ht(t\u2032) < ht(t) and ht(C_2[C_3[C_4]]) \u2264 p.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM-CFTG derivations are asynchronous", "sec_num": "5.2" }, { "text": "By induction, t\u2032 is x, y, l_0-separated for some x/y \u2208 Sep. Let us assume x/y = a/c; the other cases are analogous. The challenge is to conclude that t is x, y, l_0-separated after the reinsertion of C_2 and C_4 into t\u2032.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM-CFTG derivations are asynchronous", "sec_num": "5.2" }, { "text": "If C_2 and C_4 generate no a- or c-token, the distribution of a- and c-tokens in the tree is not affected, hence t is a, c, l_0-separated. Otherwise, due to pumping considerations, we need to distinguish three possible configurations regarding the shape of the yields of C_2 and C_4. We present one here; see the appendix for the others, which are in the same spirit.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM-CFTG derivations are asynchronous", "sec_num": "5.2" }, { "text": "We consider the case where left(C_2) contains some a-token and no b-, c-, or d-tokens, and yd(C_4) contains some c-token. Assume left(C_4) contains some c. It follows that all b-tokens are generated by C_3. 
So t has fewer than l_0 b-tokens; by the definition of CSD it then also has fewer than l_0 d-tokens, so (C_1, C_2[C_3[C_4]], t_5) is a d, b, l_0-separation. Assume now that right(C_4) contains some c. It follows that t_5 generates no d-token and C_1 generates no b-token.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM-CFTG derivations are asynchronous", "sec_num": "5.2" }, { "text": "Hence (C_1, C_2[C_3[C_4]], t_5) is a b, d, l_0-separation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM-CFTG derivations are asynchronous", "sec_num": "5.2" }, { "text": "This concludes the proof of Lemma 6 and Thm. 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM-CFTG derivations are asynchronous", "sec_num": "5.2" }, { "text": "We have established a notion of expressive capacity in compositional semantic parsing. We have proved that non-projective grammars can express sentence-meaning relations with bounded memory that projective ones cannot. This answers an old question in the design of compositional systems: assuming projective syntax, lambda-style compositional mechanisms can be more expressive than unification-style ones, which have bounded \"memory\" for unfilled arguments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "From a theoretical perspective, the stronger result of this paper is perhaps Thm. 2, which shows without further assumptions that weakly equivalent grammar formalisms can differ in their semantic expressive capacity. However, Thm. 1 may have a clearer practical impact on the development of compositional semantic parsers. 
Consider, for instance, the case of CCG, a lexicalized grammar formalism that has been widely used in semantic parsing (Bos, 2008; Artzi et al., 2015; Lewis et al., 2016) . While a potentially infinite set of syntactic categories can be used in the parses of a single CCG grammar, CCG derivations are still projective in our sense. Thus, if one assumes that derivations should be aligned (which is natural for a lexicalized grammar), Thm. 1 implies that CCG with lambda-style semantic composition is more semantically expressive than with unification-style composition. Indeed, lambda-style compositional mechanisms are the dominant approach in CCG (Steedman, 2001; Baldridge and Kruijff, 2002; Artzi et al., 2015) .", "cite_spans": [ { "start": 443, "end": 454, "text": "(Bos, 2008;", "ref_id": "BIBREF7" }, { "start": 455, "end": 474, "text": "Artzi et al., 2015;", "ref_id": "BIBREF1" }, { "start": 475, "end": 494, "text": "Lewis et al., 2016)", "ref_id": "BIBREF24" }, { "start": 973, "end": 989, "text": "(Steedman, 2001;", "ref_id": null }, { "start": 990, "end": 1018, "text": "Baldridge and Kruijff, 2002;", "ref_id": "BIBREF2" }, { "start": 1019, "end": 1038, "text": "Artzi et al., 2015)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Furthermore, under the alignment assumptions of Section 4, no unification-style compositional mechanism can describe string-meaning relations like CSD. This includes neural models. For instance, most transition-based parsers (Nivre, 2008; Andor et al., 2016; Dyer et al., 2016) are projective, in that the parsing operations can only concatenate two substrings on the top of the stack if they are adjacent in the string. 
Such transition systems can therefore not be extended to transition-based semantic parsers (Damonte et al., 2017) without (a) losing expressive capacity, (b) giving up compositionality, (c) adding mechanisms for non-projectivity (G\u00f3mez-Rodr\u00edguez et al., 2018), or (d) using a lambda-style semantic algebra. Our result thus clarifies the tradeoffs involved in building an effective and accurate semantic parser.", "cite_spans": [ { "start": 225, "end": 238, "text": "(Nivre, 2008;", "ref_id": null }, { "start": 239, "end": 258, "text": "Andor et al., 2016;", "ref_id": "BIBREF0" }, { "start": 259, "end": 277, "text": "Dyer et al., 2016)", "ref_id": "BIBREF16" }, { "start": 511, "end": 533, "text": "(Damonte et al., 2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "We have focused on whether a grammar formalism is projective or not, while holding the semantic algebra fixed. In the future, it would be interesting to explore how a unification-style compositional mechanism can be converted to a lambda-style mechanism with unbounded placeholders. This would allow us to specify and train semantic parsers using such abstractions, while benefiting from the efficiency of projective parsers. A Details of the proof of Theorem 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Lemma 4. Let G = (G, yd) be a projective string grammar. For any r \u2208 N_0 there exists s \u2208 N_0 such that any t \u2208 L(G) with yd(t) \u2208 a^* b^s c^s d^* has a subtree t\u2032 such that yd(t\u2032) contains r occurrences of x and no occurrences of y, for some x/y \u2208 Sep.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Proof. Depending on yd, one can always choose s > r such that any t with | yd(t) | > 2s has at least one strict subtree t\u2032 with | yd(t\u2032) | \u2265 2r. The lemma follows by induction over the height of t. 
It is trivially true for height 1. For the induction step, consider that w = yd(t\u2032) must have at least r occurrences of some letter because of projectivity and the shape of yd(t); assume it is a, as the other cases are analogous. If w has no occurrences of c, we are done. Otherwise, by projectivity, w contains all the b's, i.e. w \u2208 a^* b^s c^+ d^*. In this case, either w contains s > r occurrences of b and no occurrences of d, in which case we are again done; or it contains an occurrence of d, so that w \u2208 a^* b^s c^s d^* has the shape required by the lemma, and we can apply the induction hypothesis to identify a subtree t\u2033 of t\u2032 with r occurrences of some x and none of the corresponding y; and t\u2033 is also a subtree of t.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "In all of the following, we assume that for some k \u2208 N_0 we have CSD = REL(G, h, H_k), where G = (G, yd) is an LM-CFTG (hence projective, i.e. yd = yd_pr). We let T = L(G) and p be the pumping height of T.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Details of the proof of Theorem 2", "sec_num": null }, { "text": "Let us extend the domain of yd to contexts: for a context C, we let yd(C) = left(C) \u2022 right(C).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.1 Terminology", "sec_num": null }, { "text": "We say that a string s is balanced if, for any z \u2208 {a, b, c, d} and any position i in s such that s_i = z\u0304, there are two encompassing positions k \u2264 i \u2264 l such that s_{[k,l]} \u2208 { \u27e8^n z\u0304^n \u27e9^n | n \u2208 N_0 }. We say that a tree or a context is balanced if its yield is balanced. 
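The balancedness condition can be made concrete with a small checker. The sketch below is illustrative only (it is not from the paper): it assumes the ASCII encoding convention used earlier ('<' / '>' for the bracket tokens, uppercase letters for barred tokens) and checks the reading in which every maximal run of a barred token sits inside matching bracket runs:

```python
import re

def is_balanced(s):
    """Check that every maximal run of a barred token Z (uppercase in this
    encoding) is immediately enclosed as ... <^n Z^n >^n ... (illustrative
    reading of the balancedness definition)."""
    for m in re.finditer(r'([A-Z])\1*', s):  # maximal same-letter runs
        n = m.end() - m.start()
        if not (s[:m.start()].endswith('<' * n)
                and s[m.end():].startswith('>' * n)):
            return False
    return True
```

For instance, is_balanced('a<<AA>>b<B>b') holds, while dropping a bracket, as in 'a<AA>>', breaks it; this mirrors how pumping an unmatched barred token destroys balancedness in the proof below.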
By construction, all trees of T are balanced.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.1 Terminology", "sec_num": null }, { "text": "For t \u2208 T, a pumping decomposition of t is a 5-tuple (C_1, C_2, C_3, C_4, t_5), consisting of four contexts C_1, . . . , C_4 and one tree t_5, such that t = C_1[C_2[C_3[C_4[t_5]]]], ht(C_2[C_3[C_4]]) \u2264 p, ht(C_2) + ht(C_4) > 0, and for any i \u2208 N_0, C_1[v_i[t_5]] \u2208 T, where we let v_0 = C_3 and v_{i+1} = C_2[v_i[C_4[X]]].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.1 Terminology", "sec_num": null }, { "text": "Lemma 10. Let t \u2208 T with ht(t) > p, and consider a pumping decomposition t = C_1[C_2[C_3[C_4[t_5]]]]. Let s = left(C_2) \u2022 left(C_4) \u2022 right(C_4) \u2022 right(C_2) = yd(C_2[C_4]).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "The two following propositions obtain:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "\u2022 For any (x, y) \u2208 {(a, c), (b, d)}, |s|_x = |s|_y. \u2022 Let t\u2032 = C_1[C_3[t_5]]. For u \u2208 T_\u2206 and z \u2208 {a, b, c, d, a\u0304, b\u0304, c\u0304, d\u0304}, let e^u_z denote the number of z-edges in graph(u). It holds for any z \u2208 {a, b, c, d, a\u0304, b\u0304, c\u0304, d\u0304} that e^t_z = e^{t\u2032}_z + |s|_z. Proof.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "Let (x, y) \u2208 {(a, c), (b, d)}. 
t \u2208 T, so yd(t) \u2208 CSD_s, which entails | yd(t) |_x = | yd(t) |_y. (1) Since t\u2032 \u2208 T by construction, we have (yd(t\u2032), graph(t\u2032)) \u2208 CSD. From there, | yd(t\u2032) |_x = | yd(t\u2032) |_y. (2) But | yd(t) |_{x,y} = | yd(t\u2032) |_{x,y} + |s|_{x,y}. Plugging this into (1) yields | yd(t\u2032) |_x + |s|_x = | yd(t\u2032) |_y + |s|_y. Simplifying using (2), we find |s|_x = |s|_y, which establishes the first point. For the second point, since (yd(t), graph(t)) \u2208 CSD, by definition of CSD we have e^t_z = | yd(t) |_z = | yd(t\u2032) |_z + |s|_z. Similarly, since (yd(t\u2032), graph(t\u2032)) \u2208 CSD, we have e^{t\u2032}_z = | yd(t\u2032) |_z. Hence e^t_z = e^{t\u2032}_z + |s|_z.
We will now present a pair of lemmas stating, in formal terms, that decompositions t =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "C 1 [C 2 [C 3 [C 4 [t 5 ]]]]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "provided by the pumping lemma all fall within a small number of configurations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "\u2022 First, in the case where the 'pumpable' contexts C 2 and C 4 generate only 'bar' tokens and brackets in {a, b, c, d, , } * , we show that yd(C 2 ) \u2208 { , } * , so that only C 4 is actually pumping 'bar' tokens of some kind. Moreover, t 5 generates only 'bar' tokens and brackets as well. \u2022 Second, we explore the alternative, where the 'pumpable' contexts generate some of the 'core' tokens in {a, b, c, d}, say -for the sake of this informal presentation -some a-tokens. By Lemma 10, they must generate as many c-tokens, for which we can again distinguish three possible configurations: 1. a's and c's are respectively generated on different sides of a single context (C 2 and/or C 4 ), but then neither C 2 nor C 4 generates any b or d-tokens. 2. C 2 generates both a and d-tokens (on the left and right sides respectively) and no b and c-tokens, while C 4 ensures generation of the corresponding b and c-tokens (on the left and right sides respectively). 3. Or else, one of C 2 , C 4 generates the a-tokens and no b, c or d while the other generates the corresponding c-tokens and no a, b or d. Below follows the formal presentation of these lemmas:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "Lemma 11. 
Let t \u2208 T with ht(t) > p, and consider a pumping decomposition t =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "C 1 [C 2 [C 3 [C 4 [t 5 ]]]] such that for all z \u2208 {a, b, c, d}, | yd(C 2 [C 4 ])| z = 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "There is x \u2208 {a, b, c, d} such that all of the following hold:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "1. yd(C 2 ) \u2208 { , } * and yd(C 4 ) \u2208 { , x, } * . 2. Either left(C 4 ) \u2208 {x} * and | left(C 3 )| z = 0 for any z \u2208 {a, b, c, d}, or symmetrically, right(C 4 ) \u2208 {x} * and | right(C 3 )| z = 0 for any z \u2208 {a, b, c, d}. 3. | yd(t 5 )| z = 0 for any z \u2208 {a, b, c, d}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "Proof. First point: Let s = left(C 1 ) and n 0 = |s| . Let y \u2208 {a, b, c, d} and assume y \u2208 left(C 2 ). Pumping C 2 -C 4 n 0 + 1 times yields a tree t n 0 +1 \u2208 T such that s \u2022 left(C 2 ) n 0 +1 is a prefix of yd(t n 0 +1 ). We thus see that t n 0 +1 is not balanced, which is in contradiction with t n 0 +1 \u2208 T. A symmetric argument establishes that y / \u2208 right(C 2 ). Assume now that there are two distinct x, y \u2208 {a, b, c, d} such that x \u2208 yd(C 4 ) and y \u2208 yd(C 4 ). Notice that, since C 4 does not contain non-bar tokens, if x and y occur on the same side of C 4 (for instance left(C 4 ) = x y ) then t / \u2208 T because no string in CSD S admits left(C 4 ) as a substring, whereas yd(t) does. So x and y must occur on distinct sides. 
It follows that C 4 does not generate tokens in { , } either: if for instance", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "left(C 4 ) = u \u2022 \u2022 x \u2022 v for some strings u and v in {x , } * , u\u2022 \u2022x\u2022v\u2022u\u2022 \u2022x\u2022v would be a substring of C 1 [C 2 [C 2 [C 3 [C 4 [C 4 [t 5 ]]]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "]]] \u2208 T which again is a contradiction. Let now n 1 = | yd(t 5 )| . Pumping C 2 -C 4 n 1 + 1 times yields a tree t n 1 +1 \u2208 T with a substring of the form x n 1 +1 yd(t 5 )y n 1 +1 (up to x/y symmetry) which cannot be balanced, yielding a final contradiction. Second point: yd(C 4 ) / \u2208 { , } * , because otherwise pumping C 2 and C 4 more times than the maximum number of occurrences of a bar token in yd(t) would yield an unbalanced tree. So there is an x such that x \u2208 left(C 4 ) or x \u2208 right(C 4 ). Assume for contradiction that a different token occurs on the same side of C 4 ; then", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "C 1 [C 2 [C 2 [C 3 [C 4 [C 4 [t 5 ]]]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "]]] \u2208 T contains a substring that cannot be found in any string of CSD, yielding a contradiction. So left(C 4 ) \u2208 {x} * or right(C 4 ) \u2208 {x} * . Assume left(C 4 ) \u2208 {x} * ; the other case is symmetric. Assume for contradiction that | left(C 3 )| z > 0 for some z \u2208 {a, b, c, d}. Let n 2 = | yd(C 3 )| . 
Pumping C 2 -C 4 n 2 + 1 times yields a tree t n 2 +1 \u2208 T such that (by projectivity) yd(t n 2 +1 ) has a substring of the form z", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "\u2022 u \u2022 x n 2 +1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "where |u| \u2264 n 2 . Hence t n 2 +1 \u2208 T is not balanced, yielding a contradiction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "Third point: Assume for contradiction that | yd(t 5 )| z > 0. Assume that left(C 4 ) \u2208 {x} * , the case right(C 4 ) \u2208 {x} * is symmetric, and point 2 ensures that these two cases are exhaustive. Let n 3 = |t 5 | and consider the tree t n 3 +1 \u2208 T obtained by pumping C 2 -C 4 n 3 + 1 times. By projectivity, yd(t n 3 +1 ) has a substring of the form", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "x k \u2022 u \u2022 z \u2022 v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "with k \u2265 n 3 + 1 and |u| \u2264 n 3 . Hence t n 3 +1 is not balanced and t n 3 +1 / \u2208 T, yielding a contradiction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "Lemma 12. Let t \u2208 T with ht(t) > p, and consider a pumping decomposition t", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "= C 1 [C 2 [C 3 [C 4 [t 5 ]]]]. 
Let (x, y, X, Y ) \u2208 {(a, c, A, C), (b, d, B, D)} such that | yd(C 2 [C 4 ])| x \u2260 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "One of the following obtains:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "1. For some (i, j) \u2208 {(2, 4), (4, 2)}, left(", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "C i ) \u2208 X + , right(C i ) \u2208 Y + , left(C j ) \u2208 X * and right(C j ) \u2208 Y * . 2. left(C 2 ) \u2208 A + , right(C 2 ) \u2208 D + , left(C 4 ) \u2208 B + and right(C 4 ) \u2208 C + . 3. Either left(C 2 ) \u2208 X + , right(C 2 ) = and left(C 4 ) \u2022 right(C 4 ) \u2208 Y + , or symmetrically left(C 2 ) = , right(C 2 ) \u2208 Y + and left(C 4 ) \u2022 right(C 4 ) \u2208 X + .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "Proof. All these observations follow easily from the first point of Lemma 10 (governing the relative number of occurrences of a, c-tokens on one hand and b, d-tokens on the other hand), projectivity, and the following observation: no single side of C 2 or C 4 can generate two different kinds of tokens in {a, b, c, d} or be unbalanced. Otherwise pumping would (from projectivity) ensure that the resulting tree has a substring of a shape impossible for CSD (for example, if both a and b-tokens occur on the same side of C 2 , pumping once produces a substring a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "\u2022 u \u2022 b \u2022 v \u2022 a \u2022 u \u2022 b \u2022 v). B.3 Separation Lemma 13. Let t = C 1 [C 2 [C 3 [C 4 [t 5 ]]]] \u2208 T and t = C 1 [C 3 [t 5 ]] \u2208 T. If t is x, y, l-separated then so is t . Proof. 
Consider an x, y, l-separation of t: t = D x [D 0 [t y ]].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "Let C x , C 0 and t y be respectively obtained by removing all nodes from C 2 or C 4 from D x , D 0 and t y . One easily checks that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "t = C x [C 0 [t y ]]. Moreover, | yd(C x )| y \u2264 | yd(D x )| y = 0, | yd(C 0 )| x \u2264 | yd(D 0 )| x \u2264 l and | yd(t y )| x \u2264 | yd(t y )| x = 0. Hence t is x, y, l-separated. B.4 Minimality argument Lemma 14. Let t = C 1 [C 2 [C 3 [C 4 [t 5 ]]]] \u2208 T and t = C 1 [C 3 [t 5 ]] \u2208 T such that t is x, y, l-separated. By Lemma 13, t is separated. Let D x [D 0 [t y ]] be a minimal separation of t and C x [C 0 [t y ]] be a minimal separation of t . D 0 [t y ] contains all nodes of C 0 [t y ].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "Proof. Assume for contradiction that a node \u03c0 of C 0 is not in D 0 . It must then be in D x or t y . Assume that it is in D x ; the case where it is in t y is analogous. Since \u03c0 is not in D 0 , there is a non-trivial subcontext", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "D x of D x rooted at \u03c0, i.e. D x = D x [D x ] with ht(D x ) > 0. Let C x , C x be obtained by removing all nodes from C 2 or C 4 from D x and D x respectively. By definition of D x , | yd(D x [D x ])| y = 0, hence | yd(C x [C x ])| y = 0. Further observe that we have C x [C 0 ] = C x [C x [C 0 ]] for some subcontext C 0 of C 0 . Since \u03c0 is not in C 2 or C 4 , ht(C x ) > 0 thus ht(C 0 ) < ht(C 0 ). 
But letting E x = C x [C x ], E x [C 0 [t y ]]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "is then an x, y, l-separation of t which contradicts the assumed minimality of", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "C x [C 0 [t y ]].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Pumping considerations", "sec_num": null }, { "text": "For any tree or context t and symbol x, let us write n t", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x as a shorthand for | yd(t)|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x , e t x for the number of x-edges generated by t and r t", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x the length of the rightmost maximal substring of yd(t) consisting of only x-tokens (more formally, r t x = |s| x , where s is the unique substring such that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "yd(t) = u \u2022 s \u2022 v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "where s \u2208 x * , if u is non-empty its last token is not x, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "|v| x = 0).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "There is a maximal number of string tokens and edges that a context of height at most p can generate under the considered yield and homomorphism. 
We call this number l 0 and focus from now on l 0 -separated and l 0 -asynchronous derivations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "Below are the proofs of the two statements of Lemma 7 of the main paper (respectively, 7-1 and 7-2). Lemma 7-1. If t \u2208 T is x, y, l 0 -separated and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "t = D x [D 0 [t y ]] is an x, y, l 0 -separation of t, then for t 0 = D 0 [t y ] we have e t 0 x \u2264 n t 0 x + n t x l 0 . (x bound)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "Proof. We prove the result by induction over the pair (ht(t 0 ), ht(t)) (with lexicographic ordering). Base Case Assume ht(t 0 ) \u2264 p. Then e t 0 \u2264 l 0 . Since yd(t) \u2208 CSD s , n t", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x > 0, thus n t 0 x +n t x l 0 \u2265 l 0 which ensures the bound.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "Induction step If h(t 0 ) > p then h(t) \u2265 h(t 0 ) > p. We apply Lemma 1 to t to yield", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "a decomposition t = C 1 [C 2 [C 3 [C 4 [t 5 ]]]], where t = C 1 [C 3 [t 5 ]] \u2208 T, ht(t ) < ht(t) and ht(C 2 [C 3 [C 4 ]]) \u2264 p. 
Notice that t 0 cannot overlap with C 2 [C 3 [C 4 ]] without overlapping with C 1 or t 5 as well, for otherwise h(t 0 ) \u2264 p.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "As in the proof of Lemma 13, letting C x , C 0 , t y be obtained by removing all nodes from C 2 and C 4 from D x , D 0 and t y respectively, we obtain an x, y, l-separation t = C x [C 0 [t y ]]. We let t 0 = C 0 [t y ] and distinguish between possible configurations for C 2 and C 4 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "Case 0 If neither C 2 nor C 4 generates any x-token, we find by induction", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "e t 0 x \u2264 n t 0 x + n t x l 0 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "Moreover, we have e t 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x = e t 0 x , n t 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x = n t 0 x and n t x \u2264 n t", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x which concludes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "Case 1 In this case Lemma 11 applies, i.e., C 2 and C 4 generate only some z-tokens and brackets. The only subcase not already covered by Case 0 is the one where z = x. Notice that n t x = n t x . 
By induction, e t 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x \u2264 n t 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x + n t x l 0 . If t 0 does not overlap with C 2 or C 4 , we have e t 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x = e t 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x and n t 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x = n t 0 x which ensures the bound. Otherwise t 0 overlaps with C 4 . If all nodes of t 0 are contained in C 4 [t 5 ], then by Lemma 11, t 0 generates no y-token. By separation, neither does t, which contradicts t \u2208 CSD. Hence t 0 contains all nodes of C 4 . Then by Lemma 11 again, n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "C 2 [C 4 ] x = n C 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x , hence n t 0 x = n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "t 0 x + n C 2 [C 4 ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x and e t 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x \u2264 e t 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x + n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "C 2 [C 4 ] x", "cite_spans": [], "ref_spans": [], "eq_spans": [], 
"section": "B.5 Inductive bounds", "sec_num": null }, { "text": "which yields", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "e t 0 x \u2264 e t 0 x + n C 2 [C 4 ] x \u2264 n t 0 x + n t x l 0 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "Case 2 In this case Lemma 12 applies and at least one of C 2 -C 4 generates some token z \u2208 {a, b, c, d}. The only subcase not already dealt with in Case 0 is the one where we can set z = x. We thus get inductively:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "e t 0 x \u2264 n t 0 x + n t x l 0 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "Since C 2 or C 4 generates at least some x-token, we have n t x \u2265 n t x + 1. Moreover e t 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x \u2264 e t 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x + l 0 since C 2 [C 4 ] generates at most l 0 x-edges, and n t 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x \u2265 n t 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x . So we have e t 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x \u2264 n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "t 0 x +n t x l 0 +l 0 \u2264 n t 0 x +n t x l 0 , concluding the proof. Lemma 7-2. 
If t \u2208 T is x, y, l 0 -separated then there exists a minimal x, y, l 0 -separation D x [D 0 [t y ]] of t such that, letting t 0 = D 0 [t y ], we have e t 0 y \u2265 n t y \u2212 n t x l 0 \u2212 r t y (y bound)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "Proof. We prove the result by induction over the height of t. t is x, y, l 0 -separated, so let us consider", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "D x [D 0 [t y ]] a minimal x, y, l 0 -separation of t. Let t 0 = D 0 [t y ].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "Base Case Assume ht(t) \u2264 p. Then n t y \u2264 l 0 . Since yd(t) \u2208 CSD s , n t", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x > 0. Moreover, 0 \u2264 e t 0 y and n t y \u2264 l 0 . So n t y \u2212 n t x l 0 \u2212 r t y \u2264 0 \u2264 e t 0 y which ensures the bound.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "Induction step If h(t) > p, we apply Lemma 1 to t to yield a decomposition t =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "C 1 [C 2 [C 3 [C 4 [t 5 ]]]], where t = C 1 [C 3 [t 5 ]] \u2208 T, ht(t ) < ht(t) and ht(C 2 [C 3 [C 4 ]]) \u2264 p. By", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "Lemma 13, t is x, y, l 0 -separated. We let t = C x [C 0 [t y ]] be a minimal separation of t verifying the bound and t 0 = C 0 [t y ]. 
In other words, we have: e", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "t 0 y \u2265 n t y \u2212 n t x l 0 \u2212 r t y .", "eq_num": "(3)" } ], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "By Lemma 14, t 0 = D 0 [t y ] contains all nodes of t 0 . We distinguish cases according to Lemmas 11 and 12. Case 1 Consider first the case where Lemma 11 applies, i.e., C 2 and C 4 generate only one kind of bar token, z, and brackets. We now distinguish cases depending on the value of z. Before this, we emphasize that in all subcases it holds that n t x = n t", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x . subcase i) z \u2260 y. Since all nodes of t 0 are contained in t 0 , we have e t 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x \u2265 e t 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x . Since C 2 and C 4 generate no y-token, we have n t y = n t y and r t y = r t y . Injecting into inequality (3) concludes. subcase ii) z = y. We distinguish the different possible overlaps of C 2 and C 4 with t 0 . 
Notice first that, by minimality, if any C i , i \u2208 {2, 4} overlaps with t 0 then t 0 contains all nodes of C i , for otherwise we would have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "D 0 = D 0 [D 0 ] with D 0 a subcontext of C i such that ht(D 0 ) > 0, and in that case (D x [D 0 ], D 0 , t y ) would be a smaller x, y, l-separation of t since C i (hence D 0 ) does not generate y-tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "Hence, in the case where t 0 overlaps with C 2 , t 0 contains all nodes of C 2 and C 4 . Since t 0 also contains all nodes of t 0 , e t 0 \u2265 e t 0 + e C 2 [C 4 ] = e t 0 + n t y \u2212 n t y . Moreover, r t y \u2265 r t y . We can then conclude using inequality (3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "Consider now the case where t 0 does not overlap with C 2 or C 4 . Since all y-tokens are generated by t 0 , projectivity of the yield and the definition of CSD impose that r t y = r t y + n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "C 2 [C 4 ] y", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": ". We further", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "have e t 0 y \u2265 e t 0 y , and injecting into inequality (3) yields e t 0 y \u2265 n t y \u2212 n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "C 2 [C 4 ] y + n t x l 0 \u2212 r t 0 y + n C 2 [C 4 ] y", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "which simplifies into the desired y bound. 
Finally, in the case where C 2 does not overlap with t 0 but C 4 does, all nodes of C 2 are contained in D x and all nodes of C 4 are contained in t 0 . We must then have | yd(C 3 )| y > 0. Otherwise, there would exist an x, y, l-separation E", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x [E 0 [t y ]] with E x = C 1 [C 2 [C 3 [C 4 ]]]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": ", and ht(E 0 ) < ht(D 0 ). Assume | yd(C 3 )| x > 0. Lemma 11, point 2, ensures that | left(C 3 )| x,y = 0 or | right(C 3 )| x,y = 0. Assume | right(C 3 )| x,y = 0 (the other case is symmetric). We then have both an x and a y generated on the left of C 3 . Since neither C 1 [C 2 ] nor t 5 generate any y-token, projectivity imposes r t y = r t y + n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "C 2 [C 4 ] y", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "and we can conclude as in the previous case. The only remaining subcase is when | yd(C 3 )| x = 0, in which case t is 0-separated, and considering the (minimal) 0-separation (C 1 , X, C 2 [C 3 [C 4 ]]) we can use the same argument as in the case where t 0 encompasses all nodes of C 2 and C 4 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "Case 2 Consider now the remaining case where Lemma 12 applies. If neither C 2 nor C 4 generates any x or y-token, they don't generate x or y-tokens either, and the same reasoning as Case 1 subcase i) applies. Otherwise C 2 [C 4 ] generates at least some x-token. We then have n t", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "x \u2265 n t x + 1. 
Since t 0 contains all nodes from t 0 we further have e t 0 y \u2265 e t 0 y . Finally n t y \u2264 n t y + l 0 . We conclude using inequality (3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.5 Inductive bounds", "sec_num": null }, { "text": "Lemma 8. For any t \u2208 T, if t is x, y, l 0 -separated then t is x, y, l 0 -asynchronous.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.6 Conclusion", "sec_num": null }, { "text": "Proof. By Lemma 7-2, there is a minimal x, y, l 0 -separation t = D x [D 0 [t y ]] such that the y bound obtains for t 0 = D 0 [t y ]. By Lemma 7-1 the x bound obtains for t 0 as well. Observe finally that r t y \u2264 m t y and since t 0 generates at most l 0 x-tokens, by projectivity and definition of CSD, it generates at most (l 0 + 1)m t", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.6 Conclusion", "sec_num": null }, { "text": "x x-tokens (one sequence of m t", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.6 Conclusion", "sec_num": null }, { "text": "x between each occurrence of x and the next, plus possibly one in front of the first and one after the last). Hence, e t 0 y \u2265 n t y \u2212 n t x l 0 \u2212 m t y and e t 0 x \u2264 n t x l 0 + m t x (l 0 + 1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.6 Conclusion", "sec_num": null }, { "text": "and t is x, y, l 0 -asynchronous.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.6 Conclusion", "sec_num": null }, { "text": "Lemma 9. For any t \u2208 T, t is x, y, l 0 -separated for some x/y \u2208 Sep.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.6 Conclusion", "sec_num": null }, { "text": "Proof. The proof proceeds by induction on the height of t.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.6 Conclusion", "sec_num": null }, { "text": "Assume ht(t) \u2264 p. 
Then | yd(t)| z \u2264 l 0 for any z \u2208 {a, b, c, d}, hence t is trivially x, y, l 0 -separated for some x/y \u2208 Sep.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.6 Conclusion", "sec_num": null }, { "text": "If h(t) > p, Lemma 1 yields a decomposition", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.6 Conclusion", "sec_num": null }, { "text": "t = C 1 [C 2 [C 3 [C 4 [t 5 ]]]], where t = C 1 [C 3 [t 5 ]] \u2208", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.6 Conclusion", "sec_num": null }, { "text": "T, ht(t ) < ht(t) and ht(C 2 [C 4 ]) \u2264 p. By induction t is x, y, l 0 -separated for some x/y \u2208 Sep. For the sake of succinctness, let us present the inductive step for x/y = a/c; the reasoning for the other cases is analogous. Let us examine the different possible configurations of C 2 and C 4 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.6 Conclusion", "sec_num": null }, { "text": "Case 1 If Lemma 11 applies, i.e., C 2 and C 4 generate only one kind of bar token, z, and brackets, one checks easily that inserting C 2 and C 4 does not change the distribution of a and c-tokens in the tree, hence t is a, c, l 0 -separated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.6 Conclusion", "sec_num": null }, { "text": "Case 2 If Lemma 12 applies, note first that if C 2 and C 4 generate no a or c-token, we can conclude as in Case 1 as the distribution of a and c-tokens in the tree is not changed either. Otherwise, we assume that C 2 or C 4 generate some a or c-token and distinguish between subcases 1-3 of Lemma 12:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.6 Conclusion", "sec_num": null }, { "text": "Subcase 1 In this case, for some i \u2208 {2, 4} left(C i ) contains an a-token and no b, c or d-token while right(C i ) contains some c-token and no a, b or d-token. Assume i = 2; the case where i = 4 is similar. 
By projectivity and the definition of CSD s it follows that all b-tokens are generated in C 3 [C 4 [t 5 ]] and all d-tokens in C 1 . t is therefore b, d, 0-separated, hence b, d, l 0 -separated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.6 Conclusion", "sec_num": null }, { "text": "Subcase 2 In this case, left(C 2 ) contains some a-token and no b, c, d-token, right(C 2 ) contains some d-token and no a, b, c-token, left(C 4 ) contains some b-token and no a, c, d-token, right(C 4 ) contains some c-token and no a, b, d-token. It follows that t 5 generates no occurrence of a and C 1 no occurrence of c.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.6 Conclusion", "sec_num": null }, { "text": "Since | yd(C 2 [C 3 [C 4 ]])| a \u2264 l 0 , (C 1 , C 2 [C 3 [C 4 ]], t 5 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.6 Conclusion", "sec_num": null }, { "text": "is an a, c, l 0 -separation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.6 Conclusion", "sec_num": null }, { "text": "Subcase 3 Assume left(C 2 ) contains some a-token and no b, c, d-token and that left(C 4 ) contains some c-token. It follows that all b-tokens are generated by C 3 . So yd(t) contains fewer than l 0 b-tokens; by definition of CSD it also contains fewer than l 0 d-tokens, so (C 1 , C 2 [C 3 [C 4 ]], t 5 ) is a d, b, l 0 -separation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.6 Conclusion", "sec_num": null }, { "text": "Assume now left(C 2 ) contains some a-token and no b, c, d-token and that right(C 4 ) contains some c-token. It follows that t 5 generates no d-token and C 1 generates no b-token. 
Hence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.6 Conclusion", "sec_num": null }, { "text": "(C_1, C_2[C_3[C_4]], t_5) is a b, d, l_0-separation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.6 Conclusion", "sec_num": null }, { "text": "The remaining cases are symmetric, exchanging c with a, d with b, and left with right everywhere.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.6 Conclusion", "sec_num": null } ], "back_matter": [ { "text": "We are grateful to Emily Bender, Guy Emerson, Meaghan Fowlie, Jonas Groschwitz, and the participants of the DELPH-IN workshop 2018 for fruitful discussions, and to the anonymous reviewers for their insightful feedback.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Globally normalized transition-based neural networks", "authors": [ { "first": "Daniel", "middle": [], "last": "Andor", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Alberti", "suffix": "" }, { "first": "David", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "Aliaksei", "middle": [], "last": "Severyn", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Presta", "suffix": "" }, { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks.
In Proceedings of ACL.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Broad-coverage CCG Semantic Parsing with AMR", "authors": [ { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG Semantic Parsing with AMR. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Coupling CCG and Hybrid Logic Dependency Semantics", "authors": [ { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" }, { "first": "M", "middle": [], "last": "Geert-Jan", "suffix": "" }, { "first": "", "middle": [], "last": "Kruijff", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Baldridge and Geert-Jan M. Kruijff. 2002. Coupling CCG and Hybrid Logic Dependency Semantics.
In Proceedings of the 40th ACL.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Abstract Meaning Representation for Sembanking", "authors": [ { "first": "Laura", "middle": [], "last": "Banarescu", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Bonial", "suffix": "" }, { "first": "Shu", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Madalina", "middle": [], "last": "Georgescu", "suffix": "" }, { "first": "Kira", "middle": [], "last": "Griffitt", "suffix": "" }, { "first": "Ulf", "middle": [], "last": "Hermjakob", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Schneider", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for Sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Radical nonconfigurationality without shuffle operators: An analysis of Wambaya", "authors": [ { "first": "Emily", "middle": [ "M" ], "last": "Bender", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 15th International Conference on HPSG", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily M. Bender. 2008. Radical non-configurationality without shuffle operators: An analysis of Wambaya.
In Proceedings of the 15th International Conference on HPSG.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The Grammar Matrix: An open-source starter-kit for the rapid development of crosslinguistically consistent broad-coverage precision grammars", "authors": [ { "first": "Emily", "middle": [ "M" ], "last": "Bender", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Flickinger", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Oepen", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the COLING Workshop on Grammar Engineering and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily M. Bender, Dan Flickinger, and Stephan Oepen. 2002. The Grammar Matrix: An open-source starter-kit for the rapid development of cross-linguistically consistent broad-coverage precision grammars. In Proceedings of the COLING Workshop on Grammar Engineering and Evaluation.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Representation and Inference for Natural Language", "authors": [ { "first": "Patrick", "middle": [], "last": "Blackburn", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Bos", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Blackburn and Johan Bos. 2005. Representation and Inference for Natural Language. CSLI Publications.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Wide-coverage semantic analysis with Boxer", "authors": [ { "first": "Johan", "middle": [], "last": "Bos", "suffix": "" } ], "year": 2008, "venue": "Conference Proceedings. College Publications", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johan Bos. 2008. Wide-coverage semantic analysis with Boxer. In Semantics in Text Processing. STEP 2008 Conference Proceedings.
College Publications.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Parsing graphs with hyperedge replacement grammars", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Karl", "middle": [ "Moritz" ], "last": "Hermann", "suffix": "" }, { "first": "Bevan", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, Bevan Jones, and Kevin Knight. 2013. Parsing graphs with hyperedge replacement grammars. In Proceedings of the 51st ACL.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Tree Automata techniques and applications", "authors": [ { "first": "Hubert", "middle": [], "last": "Comon", "suffix": "" }, { "first": "Max", "middle": [], "last": "Dauchet", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Gilleron", "suffix": "" }, { "first": "Florent", "middle": [], "last": "Jacquemard", "suffix": "" }, { "first": "Denis", "middle": [], "last": "Lugiez", "suffix": "" }, { "first": "Sophie", "middle": [], "last": "Tison", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Tommasi", "suffix": "" }, { "first": "Christof", "middle": [], "last": "L\u00f6ding", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hubert Comon, Max Dauchet, R\u00e9mi Gilleron, Florent Jacquemard, Denis Lugiez, Sophie Tison, Marc Tommasi, and Christof L\u00f6ding. 2007. Tree Automata techniques and applications.
Published online at http://tata.gforge.inria.fr/.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "An algebra for semantic construction in constraint-based grammars", "authors": [ { "first": "Ann", "middle": [], "last": "Copestake", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Lascarides", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Flickinger", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 39th ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ann Copestake, Alex Lascarides, and Dan Flickinger. 2001. An algebra for semantic construction in constraint-based grammars. In Proceedings of the 39th ACL.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Graph Structure and Monadic Second-Order Logic, a Language Theoretic Approach", "authors": [ { "first": "Bruno", "middle": [], "last": "Courcelle", "suffix": "" }, { "first": "Joost", "middle": [], "last": "Engelfriet", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bruno Courcelle and Joost Engelfriet. 2012. Graph Structure and Monadic Second-Order Logic, a Language Theoretic Approach. Cambridge University Press.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Linear logic for meaning assembly", "authors": [ { "first": "Vijay", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "", "middle": [], "last": "Saraswat", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Workshop on Computational Logic for Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pereira, and Vijay Saraswat. 1995. Linear logic for meaning assembly.
In Proceedings of the Workshop on Computational Logic for Natural Language Processing.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "An incremental parser for Abstract Meaning Representation", "authors": [ { "first": "Marco", "middle": [], "last": "Damonte", "suffix": "" }, { "first": "Shay", "middle": [ "B" ], "last": "Cohen", "suffix": "" }, { "first": "Giorgio", "middle": [], "last": "Satta", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Damonte, Shay B. Cohen, and Giorgio Satta. 2017. An incremental parser for Abstract Meaning Representation. In Proceedings of the 15th EACL.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Handbook of Graph Grammars and Computing by Graph Transformation", "authors": [ { "first": "Frank", "middle": [], "last": "Drewes", "suffix": "" }, { "first": "Hans-J\u00f6rg", "middle": [], "last": "Kreowski", "suffix": "" }, { "first": "Annegret", "middle": [], "last": "Habel", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "95--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frank Drewes, Hans-J\u00f6rg Kreowski, and Annegret Habel. 1997. Hyperedge replacement graph grammars. In G. Rozenberg, editor, Handbook of Graph Grammars and Computing by Graph Transformation, pages 95-162.
World Scientific.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Recurrent neural network grammars", "authors": [ { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Adhiguna", "middle": [], "last": "Kuncoro", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2016, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of NAACL.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "GlueTag: Linear logic based semantics for LTAG", "authors": [ { "first": "Anette", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the LFG Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anette Frank and Josef van Genabith. 2001. GlueTag: Linear logic based semantics for LTAG. In Proceedings of the LFG Conference.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Semantic construction in feature-based TAG", "authors": [ { "first": "Claire", "middle": [], "last": "Gardent", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Kallmeyer", "suffix": "" } ], "year": 2003, "venue": "Proceedings of EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Claire Gardent and Laura Kallmeyer. 2003. Semantic construction in feature-based TAG.
In Proceedings of EACL.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "AMR dependency parsing with a typed semantic algebra", "authors": [ { "first": "Jonas", "middle": [], "last": "Groschwitz", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Lindemann", "suffix": "" }, { "first": "Meaghan", "middle": [], "last": "Fowlie", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Koller", "suffix": "" } ], "year": 2018, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonas Groschwitz, Matthias Lindemann, Meaghan Fowlie, Mark Johnson, and Alexander Koller. 2018. AMR dependency parsing with a typed semantic algebra. In Proceedings of ACL.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Global transition-based non-projective dependency parsing", "authors": [ { "first": "Carlos", "middle": [], "last": "G\u00f3mez-Rodr\u00edguez", "suffix": "" }, { "first": "Tianze", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2018, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlos G\u00f3mez-Rodr\u00edguez, Tianze Shi, and Lillian Lee. 2018. Global transition-based non-projective dependency parsing. In Proceedings of ACL.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Tree-Adjoining Grammars", "authors": [ { "first": "K", "middle": [], "last": "Aravind", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "", "middle": [], "last": "Schabes", "suffix": "" } ], "year": 1997, "venue": "Handbook of Formal Languages", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aravind K. Joshi and Yves Schabes. 1997. Tree-Adjoining Grammars. In G.
Rozenberg and A. Salomaa, editors, Handbook of Formal Languages, volume 3. Springer-Verlag.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "The equivalence of tree adjoining grammars and monadic linear context-free tree grammars", "authors": [ { "first": "Stephan", "middle": [], "last": "Kepser", "suffix": "" }, { "first": "James", "middle": [], "last": "Rogers", "suffix": "" } ], "year": 2011, "venue": "Journal of Logic, Language and Information", "volume": "20", "issue": "3", "pages": "361--384", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan Kepser and James Rogers. 2011. The equivalence of tree adjoining grammars and monadic linear context-free tree grammars. Journal of Logic, Language and Information, 20(3):361-384.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Semantic construction with graph grammars", "authors": [ { "first": "Alexander", "middle": [], "last": "Koller", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 11th International Conference on Computational Semantics", "volume": "", "issue": "", "pages": "228--238", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Koller. 2015. Semantic construction with graph grammars. In Proceedings of the 11th International Conference on Computational Semantics, pages 228-238.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "LSTM CCG Parsing", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Lewis, Kenton Lee, and Luke Zettlemoyer. 2016. LSTM CCG Parsing.
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Pumping lemmas for term languages", "authors": [ { "first": "T", "middle": [], "last": "Maibaum", "suffix": "" } ], "year": 1978, "venue": "Journal of Computer and System Sciences", "volume": "", "issue": "", "pages": "319--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Maibaum. 1978. Pumping lemmas for term languages. Journal of Computer and System Sciences, pages 319-330.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "The proper treatment of quantification in ordinary English", "authors": [ { "first": "Richard", "middle": [], "last": "Montague", "suffix": "" } ], "year": 1974, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Montague. 1974. The proper treatment of quantification in ordinary English. In R. Thomason, editor, Formal philosophy: Selected papers of Richard Montague. Yale University Press, New Haven.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "[subj: 1 , sem:[pred:sleep, agent: 1 ]]. This is unified with [subj:John]. The placeholders are holes with labels from a fixed set of argument names (a) Nonprojective and (b) projective analysis.", "num": null, "type_str": "figure" }, "FIGREF1": { "uris": null, "text": "Semantic construction with TAG: (a) TAG derivation, (b) derivation tree, (c) derived tree, (d) semantic graph.
(e) s-graph interpretations of the boxed node in (c); (f,g) s-graph interpretations at the boxed nodes in (b).", "num": null, "type_str": "figure" }, "FIGREF2": { "uris": null, "text": "The CSD graph for ((2), (1, 0), (1), (0, 0)); blocks indicated by gray boxes.", "num": null, "type_str": "figure" }, "FIGREF3": { "uris": null, "text": "edges with label a, and v to a linear chain of K (c) i", "num": null, "type_str": "figure" }, "FIGREF4": { "uris": null, "text": "A derivation of ((0), (0,0), (0), (0,0)).", "num": null, "type_str": "figure" }, "TABREF1": { "content": "
William Rounds. 1969. Context-free grammars on trees. In Proceedings of the First Annual ACM Symposium on Theory of Computing (STOC).
Stuart M. Shieber. 1985. Evidence against the context-freeness of natural language. Linguistics and Philosophy, 8(3):333-343.
Mark Steedman. 2001. The Syntactic Process. MIT Press, Cambridge, MA.
David Weir. 1988. Characterizing mildly context-sensitive grammar formalisms. Ph.D. thesis, University of Pennsylvania.
", "text": "Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Computational Linguistics, 34(4):513-553. Xiaochang Peng, Linfeng Song, and Daniel Gildea. 2015. A synchronous hyperedge replacement grammar based approach for AMR parsing. In Proceedings of the 19th Conference on Computational Language Learning.", "html": null, "type_str": "table", "num": null } } } }