{ "paper_id": "P07-1023", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:51:25.147082Z" }, "title": "Parsing and Generation as Datalog Queries", "authors": [ { "first": "Makoto", "middle": [], "last": "Kanazawa", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Institute of Informatics", "location": { "addrLine": "2-1-2 Hitotsubashi, Chiyoda-ku", "postCode": "101-8430", "settlement": "Tokyo", "country": "Japan" } }, "email": "kanazawa@nii.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We show that the problems of parsing and surface realization for grammar formalisms with \"context-free\" derivations, coupled with Montague semantics (under a certain restriction) can be reduced in a uniform way to Datalog query evaluation. As well as giving a polynomialtime algorithm for computing all derivation trees (in the form of a shared forest) from an input string or input logical form, this reduction has the following complexity-theoretic consequences for all such formalisms: (i) the decision problem of recognizing grammaticality (surface realizability) of an input string (logical form) is in LOGCFL; and (ii) the search problem of finding one logical form (surface string) from an input string (logical form) is in functional LOGCFL. Moreover, the generalized supplementary magic-sets rewriting of the Datalog program resulting from the reduction yields efficient Earley-style algorithms for both parsing and generation.", "pdf_parse": { "paper_id": "P07-1023", "_pdf_hash": "", "abstract": [ { "text": "We show that the problems of parsing and surface realization for grammar formalisms with \"context-free\" derivations, coupled with Montague semantics (under a certain restriction) can be reduced in a uniform way to Datalog query evaluation. As well as giving a polynomialtime algorithm for computing all derivation trees (in the form of a shared forest) from an input string or input logical form, this reduction has the following complexity-theoretic consequences for all such formalisms: (i) the decision problem of recognizing grammaticality (surface realizability) of an input string (logical form) is in LOGCFL; and (ii) the search problem of finding one logical form (surface string) from an input string (logical form) is in functional LOGCFL. Moreover, the generalized supplementary magic-sets rewriting of the Datalog program resulting from the reduction yields efficient Earley-style algorithms for both parsing and generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The representation of context-free grammars (augmented with features) in terms of definite clause programs is well-known. In the case of a bare-bone CFG, the corresponding program is in the functionfree subset of logic programming, known as Datalog. For example, determining whether a string John found a unicorn belongs to the language of the CFG in Figure 1 is equivalent to deciding whether the Datalog program in Figure 2 together with the database in (1) can derive the query \"?\u2212 S(0, 4).\"", "cite_spans": [], "ref_spans": [ { "start": 351, "end": 359, "text": "Figure 1", "ref_id": null }, { "start": 417, "end": 425, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) John(0, 1). found(1, 2). a(2, 3). unicorn (3, 4) . 
By naive (or seminaive) bottom-up evaluation (see, e.g., Ullman, 1988 ), the answer to such a query can be computed in polynomial time in the size of the database for any Datalog program. By recording rule instances rather than derived facts, a packed representation of the complete set of Datalog derivation trees for a given query can also be obtained in polynomial time by the same technique. Since a Datalog derivation tree uniquely determines a grammar derivation tree, this gives a reduction of context-free recognition and parsing to query evaluation in Datalog.", "cite_spans": [ { "start": 46, "end": 49, "text": "(3,", "ref_id": null }, { "start": 50, "end": 52, "text": "4)", "ref_id": null }, { "start": 112, "end": 124, "text": "Ullman, 1988", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "S \u2192 NP VP VP \u2192 V NP V \u2192 V Conj V NP \u2192 Det N NP \u2192 John V \u2192 found V \u2192 caught Conj \u2192 and Det \u2192 a N \u2192 unicorn Figure 1: A CFG. S(i, j) :\u2212 NP(i, k), VP(k, j). VP(i, j) :\u2212 V(i, k), NP(k, j). V(i, j) :\u2212 V(i, k), Conj(k, l), V(l, j). NP(i, j) :\u2212 Det(i, k), N(k, j). NP(i, j) :\u2212 John(i, j). V(i, j) :\u2212 found(i, j). V(i, j) :\u2212 caught(i, j). Conj(i, j) :\u2212 and(i, j). Det(i, j) :\u2212 a(i, j). N(i, j) :\u2212 unicorn(i, j).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we show that a similar reduction to Datalog is possible for more powerful grammar formalisms with \"context-free\" derivations, such as (multi-component) tree-adjoining grammars (Joshi and Schabes, 1997; Weir, 1988) , IO macro grammars (Fisher, 1968) , and (parallel) multiple contextfree grammars (Seki et al., 1991) . For instance, the TAG in Figure 3 is represented by the Datalog program in Figure 4 . Moreover, the method of reduc- Figure 3 : A TAG with one initial tree (left) and one auxiliary tree (right) tion extends to the problem of tactical generation (surface realization) for these grammar formalisms coupled with Montague semantics (under a certain restriction). Our method essentially relies on the encoding of different formalisms in terms of abstract categorial grammars (de Groote, 2001) .", "cite_spans": [ { "start": 191, "end": 216, "text": "(Joshi and Schabes, 1997;", "ref_id": "BIBREF16" }, { "start": 217, "end": 228, "text": "Weir, 1988)", "ref_id": "BIBREF31" }, { "start": 249, "end": 263, "text": "(Fisher, 1968)", "ref_id": "BIBREF10" }, { "start": 311, "end": 330, "text": "(Seki et al., 1991)", "ref_id": "BIBREF23" }, { "start": 803, "end": 820, "text": "(de Groote, 2001)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 358, "end": 366, "text": "Figure 3", "ref_id": null }, { "start": 408, "end": 416, "text": "Figure 4", "ref_id": "FIGREF2" }, { "start": 450, "end": 458, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "S(p 1 , p 3 ) :\u2212 A(p 1 , p 3 , p 2 , p 2 ). A(p 1 , p 8 , p 4 , p 5 ) :\u2212 A(p 2 , p 7 , p 3 , p 6 ), a(p 1 , p 2 ), b(p 3 , p 4 ), c(p 5 , p 6 ), d(p 7 , p 8 ). 
A(p 1 , p 2 , p 1 , p 2 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The reduction to Datalog makes it possible to apply to parsing and generation sophisticated evaluation techniques for Datalog queries; in particular, an application of generalized supplementary magicsets rewriting (Beeri and Ramakrishnan, 1991) automatically yields Earley-style algorithms for both parsing and generation. The reduction can also be used to obtain a tight upper bound, namely LOGCFL, on the computational complexity of the problem of recognition, both for grammaticality of input strings and for surface realizability of input logical forms.", "cite_spans": [ { "start": 214, "end": 244, "text": "(Beeri and Ramakrishnan, 1991)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "With regard to parsing and recognition of input strings, polynomial-time algorithms and the LOGCFL upper bound on the computational complexity are already known for the grammar formalisms covered by our results (Engelfriet, 1986) ; nevertheless, we believe that our reduction to Datalog offers valuable insights. Concerning generation, our results seem to be entirely new. 1 side of each rule is annotated with a \u03bb-term that tells how the meaning of the left-hand side is composed from the meanings of the right-hand side nonterminals, represented by upper-case variables X 1 , X 2 , . . . (Figure 5 ). 2 The meaning of a sentence is computed from its derivation tree. For example, John found a unicorn has the derivation tree in Figure 6 , and the grammar rules assign its root node the \u03bb-term (\u03bbu.u John)(\u03bbx.(\u03bbuv.\u2203(\u03bby.\u2227(uy)(vy))) unicorn (\u03bby.find y x)), which \u03b2-reduces to the \u03bb-term", "cite_spans": [ { "start": 211, "end": 229, "text": "(Engelfriet, 1986)", "ref_id": "BIBREF9" }, { "start": 603, "end": 604, "text": "2", "ref_id": null } ], "ref_spans": [ { "start": 590, "end": 599, "text": "(Figure 5", "ref_id": null }, { "start": 730, "end": 738, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "S(X 1 X 2 ) \u2192 NP(X 1 ) VP(X 2 ) VP(\u03bbx.X 2 (\u03bby.X 1 yx)) \u2192 V(X 1 ) NP(X 2 ) V(\u03bbyx.X 2 (X 1 yx)(X 3 yx)) \u2192 V(X 1 ) Conj(X 2 ) V(X 3 ) NP(X 1 X 2 ) \u2192 Det(X 1 ) N(X 2 ) NP(\u03bbu.u John e ) \u2192 John V(find e\u2192e\u2192t ) \u2192 found V(catch e\u2192e\u2192t ) \u2192 caught Conj(\u2227 t\u2192t\u2192t ) \u2192 and Det(\u03bbuv.\u2203 (e\u2192t)\u2192t (\u03bby.\u2227 t\u2192t\u2192t (uy)(vy))) \u2192 a N(unicorn e\u2192t ) \u2192 unicorn", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) \u2203(\u03bby.\u2227(unicorn y)(find y John)) encoding the first-order logic formula representing the meaning of the sentence (i.e., its logical form).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Thus, computing the logical form(s) of a sentence involves parsing and \u03bb-term normalization. To find a sentence expressing a given logical form, it suffices to find a derivation tree whose root node is associated with a \u03bb-term that \u03b2-reduces to the given logical form; the desired sentence can simply be read off from the derivation tree. At the heart of both tasks is the computation of the derivation tree(s) that yield the input. 
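As a small illustration of the normalization step (our own sketch, not the paper's; the Python names are assumptions), the λ-terms of Figure 5 can be evaluated directly as closures, with each constant interpreted as a formula builder, so that β-normalizing the root term of Figure 6 amounts to ordinary evaluation:

```python
# Constants of the semantic representation, interpreted as string-building
# functions; evaluating a λ-term then prints its β-normal form.
exists  = lambda p: f"∃(λy.{p('y')})"           # ∃ : (e→t)→t
conj    = lambda a: lambda b: f"∧({a})({b})"     # ∧ : t→t→t
unicorn = lambda x: f"unicorn {x}"               # unicorn : e→t
find    = lambda y: lambda x: f"find {y} {x}"    # find : e→e→t
john    = "John"

# The term assigned to the root of the derivation tree in Figure 6:
# (λu.u John)(λx.(λuv.∃(λy.∧(uy)(vy))) unicorn (λy.find y x))
root = (lambda u: u(john))(
    lambda x: (lambda u: lambda v: exists(lambda y: conj(u(y))(v(y))))
              (unicorn)(lambda y: find(y)(x)))

print(root)   # ∃(λy.∧(unicorn y)(find y John)), i.e. the logical form (2)
```

This toy evaluation handles only the constants of this particular grammar, but it shows the division of labor: parsing supplies the derivation tree, and normalization of the associated λ-term yields the logical form.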
In the case of generation, this may be viewed as parsing the input \u03bb-term with a \"contextfree\" grammar that generates a set of \u03bb-terms (in normal form) ( Figure 7 ), which is obtained from the original CFG with Montague semantics by stripping off terminal symbols. Determining whether a given logical form is surface realizable with the original grammar is equivalent to recognition with the resulting context-free \u03bb-term grammar (CFLG).", "cite_spans": [], "ref_spans": [ { "start": 587, "end": 595, "text": "Figure 7", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "S(X 1 X 2 ) :\u2212 NP(X 1 ), VP(X 2 ). VP(\u03bbx.X 2 (\u03bby.X 1 yx)) :\u2212 V(X 1 ), NP(X 2 ). V(\u03bbyx.X 2 (X 1 yx)(X 3 yx)) :\u2212 V(X 1 ), Conj(X 2 ), V(X 3 ). NP(X 1 X 2 ) :\u2212 Det(X 1 ), N(X 2 ). NP(\u03bbu.u John e ). V(find e\u2192e\u2192t ). V(catch e\u2192e\u2192t ). Conj(\u2227 t\u2192t\u2192t ). Det(\u03bbuv.\u2203 (e\u2192t)\u2192t (\u03bby.\u2227 t\u2192t\u2192t (uy)(vy))). N(unicorn e\u2192t ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In a CFLG such as in Figure 7 , constants appearing in the \u03bb-terms have preassigned types indicated by superscripts. There is a mapping \u03c3 from nonterminals to their types (", "cite_spans": [], "ref_spans": [ { "start": 21, "end": 29, "text": "Figure 7", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u03c3 = {S \u2192 t, NP \u2192 (e \u2192 t) \u2192 t, VP \u2192 e\u2192t, V \u2192 e\u2192e\u2192t, Conj \u2192 t\u2192t\u2192t, Det \u2192 (e\u2192t)\u2192(e\u2192t)\u2192t, N \u2192 e\u2192t}).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A rule that has A on the left-hand side and B 1 , . . . , B n as right-hand side nonterminals has its left-hand side annotated with a well-formed \u03bb-term M that has type \u03c3(A) under the type environment", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "X 1 : \u03c3(B 1 ), . . . , X n : \u03c3(B n ) (in sym- bols, X 1 : \u03c3(B 1 ), . . . , X n : \u03c3(B n ) M : \u03c3(A)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "What we have called a context-free \u03bb-term grammar is nothing but an alternative notation for an abstract categorial grammar (de Groote, 2001) whose abstract vocabulary is second-order, with the restriction to linear \u03bb-terms removed. 3 In the linear case, Salvati (2005) has shown the recognition/parsing complexity to be PTIME, and exhibited an algorithm similar to Earley parsing for TAGs. Second-order linear ACGs are known to be expressive enough to encode well-known mildly context-sensitive grammar formalisms in a straightforward way, including TAGs and multiple context-free grammars (de Groote, 2002; de Groote and Pogodalla, 2004) .", "cite_spans": [ { "start": 233, "end": 234, "text": "3", "ref_id": null }, { "start": 255, "end": 269, "text": "Salvati (2005)", "ref_id": "BIBREF21" }, { "start": 591, "end": 608, "text": "(de Groote, 2002;", "ref_id": "BIBREF13" }, { "start": 609, "end": 639, "text": "de Groote and Pogodalla, 2004)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "S(\u03bby.X 1 (\u03bbz.z)y) :\u2212 A(X 1 ). 
A(\u03bbxy.a o\u2192o (X 1 (\u03bbz.b o\u2192o (x(c o\u2192o z)))(d o\u2192o y))) :\u2212 A(X 1 ). A(\u03bbxy.xy).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For example, the linear CFLG in Figure 8 is an encoding of the TAG in Figure 3 A string-generating grammar coupled with Montague semantics may be represented by a synchronous CFLG, a pair of CFLGs with matching rule sets (de Groote 2001). The transduction between strings and logical forms in either direction consists of parsing the input \u03bb-term with the sourceside grammar and normalizing the \u03bb-term(s) constructed in accordance with the target-side grammar from the derivation tree(s) output by parsing.", "cite_spans": [], "ref_spans": [ { "start": 32, "end": 40, "text": "Figure 8", "ref_id": "FIGREF5" }, { "start": 70, "end": 78, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We show that under a weaker condition than linearity, a CFLG can be represented by a Datalog program, obtaining a tight upper bound (LOGCFL) on the recognition complexity. Due to space limitation, our presentation here is kept at an informal level; formal definitions and rigorous proof of correctness will appear elsewhere.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reduction to Datalog", "sec_num": "3" }, { "text": "We use the grammar in Figure 7 as an example, which is represented by the Datalog program in Figure 9 . Note that all \u03bb-terms in this grammar are almost linear in the sense that they are \u03bbI-terms where any variable occurring free more than once in any subterm must have an atomic type. Our construction is guaranteed to be correct only when this condition is met.", "cite_spans": [ { "start": 93, "end": 101, "text": "Figure 9", "ref_id": null } ], "ref_spans": [ { "start": 22, "end": 30, "text": "Figure 7", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Reduction to Datalog", "sec_num": "3" }, { "text": "Each Datalog rule is obtained from the corresponding grammar rule in the following way. Let M be the \u03bb-term annotating the left-hand side of the grammar rule. We first obtain a principal (i.e., most general) typing of M. 4 In the case of the second rule, this is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reduction to Datalog", "sec_num": "3" }, { "text": "S(p 1 ) :\u2212 NP(p 1 , p 2 , p 3 ), VP(p 2 , p 3 ). VP(p 1 , p 4 ) :\u2212 V(p 2 , p 4 , p 3 ), NP(p 1 , p 2 , p 3 ). V(p 1 , p 4 , p 3 ) :\u2212 V(p 2 , p 4 , p 3 ), Conj(p 1 , p 5 , p 2 ), V(p 5 , p 4 , p 3 ). NP(p 1 , p 4 , p 5 ) :\u2212 Det(p 1 , p 4 , p 5 , p 2 , p 3 ), N(p 2 , p 3 ). NP(p 1 , p 1 , p 2 ) :\u2212 John(p 2 ). V(p 1 , p 3 , p 2 ) :\u2212 find(p 1 , p 3 , p 2 ). V(p 1 , p 3 , p 2 ) :\u2212 catch(p 1 , p 3 , p 2 ). Conj(p 1 , p 3 , p 2 ) :\u2212 \u2227(p 1 , p 3 , p 2 ). Det(p 1 , p 5 , p 4 , p 3 , p 4 ) :\u2212 \u2203(p 1 , p 2 , p 4 ), \u2227(p 2 , p 5 , p 3 ). 
N(p 1 , p 2 ) :\u2212 unicorn(p 1 , p 2 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reduction to Datalog", "sec_num": "3" }, { "text": "X 1 : p 3 \u2192 p 4 \u2192 p 2 , X 2 : (p 3 \u2192 p 2 ) \u2192 p 1 \u03bbx.X 2 (\u03bby.X 1 yx) : p 4 \u2192 p 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reduction to Datalog", "sec_num": "3" }, { "text": "We then remove \u2192 and parentheses from the types in the principal typing and write the resulting sequences of atomic types in reverse. 5 We obtain the Datalog rule by replacing X i and M in the grammar rule with the sequence coming from the type paired with X i and M, respectively. Note that atomic types in the principal typing become variables in the Datalog rule. When there are constants in the \u03bb-term M, they are treated like free variables. In the case of the second-to-last rule, the principal typing is", "cite_spans": [ { "start": 134, "end": 135, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Reduction to Datalog", "sec_num": "3" }, { "text": "\u2203 : (p 4 \u2192 p 2 ) \u2192 p 1 , \u2227 : p 3 \u2192 p 5 \u2192 p 2 \u03bbuv.\u2203(\u03bby.\u2227(uy)(vy)) : (p 4 \u2192 p 3 ) \u2192 (p 4 \u2192 p 5 ) \u2192 p 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reduction to Datalog", "sec_num": "3" }, { "text": "If the same constant occurs more than once, distinct occurrences are treated as distinct free variables. The construction of the database representing the input \u03bb-term is similar, but slightly more complex. A simple case is the \u03bb-term (2), where each constant occurs just once. We compute its principal typing, treating constants as free variables. 6 \u2203 : (4 \u2192 2) \u2192 1, \u2227 : 3 \u2192 5 \u2192 2, unicorn : 4 \u2192 3, find : 4 \u2192 6 \u2192 5 , John : 6 \u2203(\u03bby.\u2227(unicorn y)(find y John)) : 1. 4 To be precise, we must first convert M to its \u03b7-long form relative to the type assigned to it by the grammar. For example, X 1 X 2 in the first rule is converted to X 1 (\u03bbx.X 2 x).", "cite_spans": [ { "start": 465, "end": 466, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Reduction to Datalog", "sec_num": "3" }, { "text": "5 The reason for reversing the sequences of atomic types is to reconcile the \u03bb-term encoding of strings with the convention of listing string positions from left to right in databases like (1). 6 We assume that the input \u03bb-term is in \u03b7-long normal form.", "cite_spans": [ { "start": 194, "end": 195, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Reduction to Datalog", "sec_num": "3" }, { "text": "We then obtain the corresponding database (3) and query (4) from the antecedent and succedent of this judgment, respectively. Note that here we are using 1, 2, 3, . . . as atomic types, which become database constants.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reduction to Datalog", "sec_num": "3" }, { "text": "\u2203(1, 2, 4). \u2227(2, 5, 3). unicorn(3, 4). find(5, 6, 4). John(6).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reduction to Datalog", "sec_num": "3" }, { "text": "( 3)?\u2212 S(1). 4When the input \u03bb-term contains more than one occurrence of the same constant, it is not always correct to simply treat them as distinct free variables, unlike in the case of \u03bb-terms annotating grammar rules. 
Consider the \u03bb-term (5) (John found and caught a unicorn):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reduction to Datalog", "sec_num": "3" }, { "text": "(5) \u2203(\u03bby.\u2227(unicorn y)(\u2227(find y John)(catch y John))).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reduction to Datalog", "sec_num": "3" }, { "text": "Here, the two occurrences of John must be treated as the same variable. The principal typing is (6) and the resulting database is (7). \u2203 : (4 \u2192 2) \u2192 1, \u2227 1 : 3 \u2192 5 \u2192 2, unicorn : 4 \u2192 3, \u2227 2 : 6 \u2192 8 \u2192 5, find : 4 \u2192 7 \u2192 6, John : 7, catch : 4 \u2192 7 \u2192 8 \u2203(\u03bby.\u2227 1 (unicorn y) (\u2227 2 (find y John)(catch y John))) : 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reduction to Datalog", "sec_num": "3" }, { "text": "\u2203(1, 2, 4). \u2227(2, 5, 3). \u2227(5, 8, 6). unicron(3, 4). find(6, 7, 4). John(7). catch(8, 7, 4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reduction to Datalog", "sec_num": "3" }, { "text": "It is not correct to identify the two occurrences of \u2227 in this example. The rule is to identify distinct occurrences of the same constant just in case they occur in the same position within \u03b1-equivalent subterms of an atomic type. This is a necessary condition for those occurrences to originate as one and the same occurrence in the non-normal \u03bb-term at the root of the derivation tree. (As a preprocessing step, it is also necessary to check that distinct occurrences of a bound variable satisfy the same condition, so that the given \u03bb-term is \u03b2-equal to some almost linear \u03bb-term. 7 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reduction to Datalog", "sec_num": "3" }, { "text": "We sketch some key points in the proof of correctness of our reduction. The \u03bb-term N obtained from the input \u03bb-term by replacing occurrences of constants by free variables in the manner described above is the normal form of some almost linear \u03bbterm N . The leftmost reduction from an almost linear \u03bb-term to its normal form must be non-deleting and almost non-duplicating in the sense that when a \u03b2-redex (\u03bbx.P)Q is contracted, Q is not deleted, and moreover it is not duplicated unless the type of x is atomic. We can show that the Subject Expansion Theorem holds for such \u03b2-reduction, so the principal typing of N is also the principal typing of N . By a slight generalization of a result by Aoto (1999) , this typing \u0393 N : \u03b1 must be negatively non-duplicated in the sense that each atomic type has at most one negative occurrence in it. By Aoto and Ono's (1994) generalization of the Coherence Theorem (see Mints, 2000) , it follows that every \u03bbterm P such that \u0393 P : \u03b1 for some \u0393 \u2286 \u0393 must be \u03b2\u03b7-equal to N (and consequently to N).", "cite_spans": [ { "start": 694, "end": 705, "text": "Aoto (1999)", "ref_id": "BIBREF5" }, { "start": 843, "end": 864, "text": "Aoto and Ono's (1994)", "ref_id": "BIBREF6" }, { "start": 910, "end": 922, "text": "Mints, 2000)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Correctness of the reduction", "sec_num": "4" }, { "text": "Given the one-one correspondence between the grammar rules and the Datalog rules, a Datalog derivation tree uniquely determines a grammar derivation tree (see Figure 10 as an example). 
This relation is not one-one, because a Datalog derivation tree contains database constants from the input database. This extra information determines a typing of the \u03bb-term P at the root of the grammar derivation tree (with occurrences of constants in the \u03bb-term corresponding to distinct facts in the database regarded as distinct free variables):", "cite_spans": [], "ref_spans": [ { "start": 159, "end": 168, "text": "Figure 10", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "Correctness of the reduction", "sec_num": "4" }, { "text": "John : 6, find : 4 \u2192 6 \u2192 5, \u2203 : (4 \u2192 2) \u2192 1, \u2227 : 3 \u2192 5 \u2192 2, unicorn : 4 \u2192 3 (\u03bbu.u John) (\u03bbx.(\u03bbuv.\u2203(\u03bby.\u2227(uy)(vy))) unicorn (\u03bby.find y x)) : 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correctness of the reduction", "sec_num": "4" }, { "text": "The antecedent of this typing must be a subset of the antecedent of the principal typing of the \u03bb-term N from which the input database was obtained. By the property mentioned at the end of the preceding paragraph, it follows that the grammar derivation tree is a derivation tree for the input \u03bb-term.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correctness of the reduction", "sec_num": "4" }, { "text": "Conversely, consider the \u03bb-term P (with distinct occurrences of constants regarded as distinct free variables) at the root of a grammar derivation tree for the input \u03bb-term. We can show that there is a substitution \u03b8 which maps the free variables of P to the free variables of the \u03bb-term N used to build the input database such that \u03b8 sends the normal form of P to N. Since P is an almost linear \u03bb-term, the leftmost reduction from P\u03b8 to N is non-deleting and almost non-duplicating. By the Subject Expansion Theorem, the principal typing of N is also the principal typing of P\u03b8, and this together with the grammar derivation tree determines a Datalog derivation tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correctness of the reduction", "sec_num": "4" }, { "text": "Let us call a rule A(M) :\u2212 B 1 (X 1 ), . . . , B n (X n ) in a CFLG an -rule if n = 0 and M does not contain any constants. We can eliminate -rules from an almost linear CFLG by the same method that Kanazawa and Yoshinaka (2005) used for linear grammars, noting that for any \u0393 and \u03b1, there are only finitely many almost linear \u03bb-terms M such that \u0393 M : \u03b1. If a grammar has no -rule, any derivation tree for the input \u03bb-term N that has a \u03bb-term P at its root node corresponds to a Datalog derivation tree whose number of leaves is equal to the number of occurrences of constants in P, which cannot exceed the number of occurrences of constants in N.", "cite_spans": [ { "start": 199, "end": 228, "text": "Kanazawa and Yoshinaka (2005)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Complexity-theoretic consequences", "sec_num": "5" }, { "text": "A Datalog program P is said to have the polynomial fringe property relative to a class D of databases if there is a polynomial p(n) such that for every database D in D of n facts and every query q such that P\u222aD derives q, there is a derivation tree for q whose fringe (i.e., sequence of leaves) is of length at most p(n). 
For such P and D, it is known that { (D, q) | D \u2208 D, P \u222a D derives q } is in the complexity class LOGCFL (Ullman and Van Gelder, 1988; Kanellakis, 1988) .", "cite_spans": [ { "start": 427, "end": 456, "text": "(Ullman and Van Gelder, 1988;", "ref_id": "BIBREF30" }, { "start": 457, "end": 474, "text": "Kanellakis, 1988)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Complexity-theoretic consequences", "sec_num": "5" }, { "text": "We state without proof that the database-query pair (D, q) representing an input \u03bb-term N can be computed in logspace. By padding D with extra useless facts so that the size of D becomes equal to the number of occurrences of constants in N, we obtain a logspace reduction from the set of \u03bb-terms generated by an almost linear CFLG to a set of the form { (D, q) | D \u2208 D, P \u222a D q }, where P has the polynomial fringe property relative to D. This shows that the problem of recognition for an almost linear CFLG is in LOGCFL. By the main result of Gottlob et al. (2002) , the related search problem of finding one derivation tree for the input \u03bb-term is in functional LOGCFL, i.e., the class of functions that can be computed by a logspace-bounded Turing machine with a LOGCFL oracle. In the case of a synchronous almost linear CFLG, the derivation tree found from the source \u03bbterm can be used to compute a target \u03bb-term. Thus, to the extent that transduction back and forth between strings and logical forms can be expressed by a synchronous almost linear CFLG, the search problem of finding one logical form of an input sentence and that of finding one surface realization of an input logical form are both in functional LOGCFL. 8 As a consequence, there are efficient parallel algorithms for these problems.", "cite_spans": [ { "start": 544, "end": 565, "text": "Gottlob et al. (2002)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Complexity-theoretic consequences", "sec_num": "5" }, { "text": "Almost linear CFLGs can represent a substantial fragment of a Montague semantics for English and such \"linear\" grammar formalisms as (multi-component) tree-adjoining grammars (both as string grammars and as tree grammars) and multiple context-free grammars. However, IO macro grammars and parallel multiple context-free grammars cannot be directly represented because representing string copying requires multiple occurrences of a variable of type o \u2192 o. This problem can be solved by switching from strings to trees. We convert the input string into the regular set of binary trees whose yield equals the input string (using c as the sole symbol of rank 2), and turn the grammar into a tree grammar, replacing all instances of string concatenation in the grammar with the tree operation t 1 , t 2 \u2192 c (t 1 , t 2 ) . This way, a string grammar is turned into a tree grammar that generates a set of trees whose image under the yield function is the language of the string grammar. (In the case of an IO macro grammar, the result is an IO contextfree tree grammar (Engelfriet, 1977) .) String copying becomes tree copying, and the resulting grammar can be represented by an almost linear CFLG and hence by a Datalog program. The regular set of all binary trees that yield the input string is represented by a database that is constructed from a deterministic bottom-up finite tree automaton recognizing it. Determinism is important for ensuring correctness of this reduction. 
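One way to obtain such a deterministic automaton (this is our own sketch; the paper does not spell out the construction, and the names below are assumptions) is to take as states the occurrence sets of substrings of the input: a binary tree over the rank-2 symbol c and the terminals reaches the state recording exactly the spans of the input at which its yield occurs. A nonempty state determines that yield uniquely, so the automaton is bottom-up deterministic with only O(n²) useful states.

```python
# Deterministic bottom-up tree automaton recognizing the binary trees
# (over the rank-2 symbol c) whose yield is the input string w.
# A state is the set of spans (i, j) with w[i:j] equal to the yield
# computed so far; a nonempty state determines that yield uniquely,
# so the transition table built below is well defined (deterministic).

def occurrence_states(w):
    """Map each substring u of w to its set of occurrence spans."""
    n = len(w)
    occ = {}
    for i in range(n):
        for j in range(i + 1, n + 1):
            occ.setdefault(w[i:j], set()).add((i, j))
    return {u: frozenset(spans) for u, spans in occ.items()}

def tree_automaton(w):
    occ = occurrence_states(w)
    leaf = {a: occ[a] for a in set(w)}        # leaf transition a -> occ(a)
    binary = {}                               # c(q1, q2) -> q, when defined
    for u, q1 in occ.items():                 # naive O(n^4) enumeration
        for v, q2 in occ.items():
            if u + v in occ:                  # concatenation still a factor
                binary[(q1, q2)] = occ[u + v]
    final = occ[w]                            # accept yields equal to w itself
    return leaf, binary, final

leaf, binary, final = tree_automaton("aabbccdd")
```

Under this construction, every subtree of a tree that yields w reaches a nonempty state, since its yield is a factor of w, and a tree is accepted exactly when its yield is w itself; the database representing the input is then read off from the automaton's transitions.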
Since the database can be computed from the input string in logspace, the complexity-theoretic consequences of the last section carry over here.", "cite_spans": [ { "start": 1062, "end": 1080, "text": "(Engelfriet, 1977)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 802, "end": 814, "text": "(t 1 , t 2 )", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Regular sets of trees as input", "sec_num": "6" }, { "text": "Magic-sets rewriting of a Datalog program allows bottom-up evaluation to avoid deriving useless facts by mimicking top-down evaluation of the original program. The result of the generalized supplementary magic-sets rewriting of Beeri and Ramakrishnan (1991) applied to the Datalog program representing a CFG essentially coincides with the deduction system (Shieber et al., 1995) or uninstantiated parsing system (Sikkel, 1997) for Earley parsing. By applying the same rewriting method to Datalog programs representing almost linear CFLGs, we can obtain efficient parsing and generation algorithms for various grammar formalisms with context-free derivations.", "cite_spans": [ { "start": 228, "end": 257, "text": "Beeri and Ramakrishnan (1991)", "ref_id": "BIBREF7" }, { "start": 356, "end": 378, "text": "(Shieber et al., 1995)", "ref_id": "BIBREF24" }, { "start": 412, "end": 426, "text": "(Sikkel, 1997)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Magic sets and Earley-style algorithms", "sec_num": "7" }, { "text": "We illustrate this approach with the program in Figure 4 , following the presentation of Ullman (1989a; 1989b) . We assume the query to take the form \"?\u2212 S(0, x).\", so that the input database can be processed incrementally. The program is first made safe by eliminating the possibility of deriving nonground atoms:", "cite_spans": [ { "start": 89, "end": 103, "text": "Ullman (1989a;", "ref_id": "BIBREF28" }, { "start": 104, "end": 110, "text": "1989b)", "ref_id": "BIBREF29" } ], "ref_spans": [ { "start": 48, "end": 56, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Magic sets and Earley-style algorithms", "sec_num": "7" }, { "text": "S(p 1 , p 3 ) :\u2212 A(p 1 , p 3 , p 2 , p 2 ). A(p 1 , p 8 , p 4 , p 5 ) :\u2212 A(p 2 , p 7 , p 3 , p 6 ), a(p 1 , p 2 ), b(p 3 , p 4 ), c(p 5 , p 6 ), d(p 7 , p 8 ). A(p 1 , p 8 , p 4 , p 5 ) :\u2212 a(p 1 , p 2 ), b(p 2 , p 4 ), c(p 5 , p 6 ), d(p 6 , p 8 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Magic sets and Earley-style algorithms", "sec_num": "7" }, { "text": "The subgoal rectification removes duplicate arguments from subgoals, creating new predicates as needed:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Magic sets and Earley-style algorithms", "sec_num": "7" }, { "text": "S(p 1 , p 3 ) :\u2212 B(p 1 , p 3 , p 2 ). A(p 1 , p 8 , p 4 , p 5 ) :\u2212 A(p 2 , p 7 , p 3 , p 6 ), a(p 1 , p 2 ), b(p 3 , p 4 ), c(p 5 , p 6 ), d(p 7 , p 8 ). A(p 1 , p 8 , p 4 , p 5 ) :\u2212 a(p 1 , p 2 ), b(p 2 , p 4 ), c(p 5 , p 6 ), d(p 6 , p 8 ). B(p 1 , p 8 , p 4 ) :\u2212 A(p 2 , p 7 , p 3 , p 6 ), a(p 1 , p 2 ), b(p 3 , p 4 ), c(p 4 , p 6 ), d(p 7 , p 8 ). 
B(p 1 , p 8 , p 4 ) :\u2212 a(p 1 , p 2 ), b(p 2 , p 4 ), c(p 4 , p 6 ), d(p 6 , p 8 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Magic sets and Earley-style algorithms", "sec_num": "7" }, { "text": "We then attach to predicates adornments indicating the free/bound status of arguments in top-down evaluation, reordering subgoals so that as many arguments as possible are marked as bound:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Magic sets and Earley-style algorithms", "sec_num": "7" }, { "text": "S bf (p 1 , p 3 ) :\u2212 B bff (p 1 , p 3 , p 2 ). B bff (p 1 , p 8 , p 4 ) :\u2212 a bf (p 1 , p 2 ), A bfff (p 2 , p 7 , p 3 , p 6 ), b bf (p 3 , p 4 ), c bb (p 4 , p 6 ), d bf (p 7 , p 8 ). B bff (p 1 , p 8 , p 4 ) :\u2212 a bf (p 1 , p 2 ), b bf (p 2 , p 4 ), c bf (p 4 , p 6 ), d bf (p 6 , p 8 ). A bfff (p 1 , p 8 , p 4 , p 5 ) :\u2212 a bf (p 1 , p 2 ), A bfff (p 2 , p 7 , p 3 , p 6 ), b bf (p 3 , p 4 ), c bb (p 5 , p 6 ), d bf (p 7 , p 8 ). A bfff (p 1 , p 8 , p 4 , p 5 ) :\u2212 a bf (p 1 , p 2 ), b bf (p 2 , p 4 ), c ff (p 5 , p 6 ), d bf (p 6 , p 8 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Magic sets and Earley-style algorithms", "sec_num": "7" }, { "text": "The generalized supplementary magic-sets rewriting finally gives the following rule set:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Magic sets and Earley-style algorithms", "sec_num": "7" }, { "text": "r 1 : m B(p 1 ) :\u2212 m S(p 1 ). r 2 : S(p 1 , p 3 ) :\u2212 m B(p 1 ), B(p 1 , p 3 , p 2 ). r 3 : sup 2.1 (p 1 , p 2 ) :\u2212 m B(p 1 ), a(p 1 , p 2 ). r 4 : sup 2.2 (p 1 , p 7 , p 3 , p 6 ) :\u2212 sup 2.1 (p 1 , p 2 ), A(p 2 , p 7 , p 3 , p 6 ). r 5 : sup 2.3 (p 1 , p 7 , p 6 , p 4 ) :\u2212 sup 2.2 (p 1 , p 7 , p 3 , p 6 ), b(p 3 , p 4 ). r 6 : sup 2.4 (p 1 , p 7 , p 4 ) :\u2212 sup 2.3 (p 1 , p 7 , p 6 , p 4 ), c(p 4 , p 6 ). r 7 : B(p 1 , p 8 , p 4 ) :\u2212 sup 2.4 (p 1 , p 7 , p 4 ), d(p 7 , p 8 ). r 8 : sup 3.1 (p 1 , p 2 ) :\u2212 m B(p 1 ), a(p 1 , p 2 ). r 9 : sup 3.2 (p 1 , p 4 ) :\u2212 sup 3.1 (p 1 , p 2 ), b(p 2 , p 4 ). r 10 : sup 3.3 (p 1 , p 4 , p 6 ) :\u2212 sup 3.2 (p 1 , p 4 ), c(p 4 , p 6 ). r 11 : B(p 1 , p 8 , p 4 ) :\u2212 sup 3.3 (p 1 , p 4 , p 6 ), d(p 6 , p 8 ). r 12 : m A(p 2 ) :\u2212 sup 2.1 (p 1 , p 2 ). r 13 : m A(p 2 ) :\u2212 sup 4.1 (p 1 , p 2 ). r 14 : sup 4.1 (p 1 , p 2 ) :\u2212 m A(p 1 ), a(p 1 , p 2 ). r 15 : sup 4.2 (p 1 , p 7 , p 3 , p 6 ) :\u2212 sup 4.1 (p 1 , p 2 ), A(p 2 , p 7 , p 3 , p 6 ). r 16 : sup 4.3 (p 1 , p 7 , p 6 , p 4 ) :\u2212 sup 4.2 (p 1 , p 7 , p 3 , p 6 ), b(p 3 , p 4 ). r 17 : sup 4.4 (p 1 , p 7 , p 4 , p 5 ) :\u2212 sup 4.3 (p 1 , p 7 , p 6 , p 4 ), c(p 5 , p 6 ). r 18 : A(p 1 , p 8 , p 4 , p 5 ) :\u2212 sup 4.4 (p 1 , p 7 , p 4 , p 5 ), d(p 7 , p 8 ). r 19 : sup 5.1 (p 1 , p 2 ) :\u2212 m A(p 1 ), a(p 1 , p 2 ). r 20 : sup 5.2 (p 1 , p 4 ) :\u2212 sup 5.1 (p 1 , p 2 ), b(p 2 , p 4 ). r 21 : sup 5.3 (p 1 , p 4 , p 5 , p 6 ) :\u2212 sup 5.2 (p 1 , p 4 ), c(p 5 , p 6 ). r 22 : A(p 1 , p 8 , p 4 , p 5 ) :\u2212 sup 5.3 (p 1 , p 4 , p 5 , p 6 ), d(p 6 , p 8 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Magic sets and Earley-style algorithms", "sec_num": "7" }, { "text": "The following version of chart parsing adds control structure to this deduction system: 1. 
(\uf769\uf76e\uf769\uf774) Initialize the chart to the empty set, the agenda to the singleton {m S(0)}, and n to 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Magic sets and Earley-style algorithms", "sec_num": "7" }, { "text": "2. Repeat the following steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Magic sets and Earley-style algorithms", "sec_num": "7" }, { "text": "(a) Repeat the following steps until the agenda is exhausted: i. Remove a fact from the agenda, called the trigger. ii. Add the trigger to the chart. iii. Generate all facts that are immediate consequences of the trigger together with all facts in the chart, and add to the agenda those generated facts that are neither already in the chart nor in the agenda. (b) (\uf773\uf763\uf761\uf76e) Remove the next fact from the input database and add it to the agenda, incrementing n. If there is no more fact in the input database, go to step 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Magic sets and Earley-style algorithms", "sec_num": "7" }, { "text": "3. If S(0, n) is in the chart, accept; otherwise reject.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Magic sets and Earley-style algorithms", "sec_num": "7" }, { "text": "The following is the trace of the algorithm on input string aabbccdd:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Magic sets and Earley-style algorithms", "sec_num": "7" }, { "text": "1. m S(0) \uf769\uf76e\uf769\uf774 2. m B(0) r 1 , 1 3. a(0, 1) \uf773\uf763\uf761\uf76e 4. sup 2.1 (0, 1) r 3 , 2, 3 5. sup 3.1 (0, 1) r 8 , 2, 3 6. m A(1) r 12 , 4 7. a(1, 2) \uf773\uf763\uf761\uf76e 8. sup 4.1 (1, 2) r 14 , 6, 7 9. sup 5.1 (1, 2) r 19 , 6, 7 10. m A(2) r 13 , 8 11. b(2, 3) \uf773\uf763\uf761\uf76e 12. sup 5. 2 (1, 3) Note that unlike existing Earley-style parsing algorithms for TAGs, the present algorithm is an instantiation of a general schema that applies to parsing with more powerful grammar formalisms as well as to generation with Montague semantics.", "cite_spans": [], "ref_spans": [ { "start": 250, "end": 258, "text": "2 (1, 3)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Magic sets and Earley-style algorithms", "sec_num": "7" }, { "text": "Our reduction to Datalog brings sophisticated techniques for Datalog query evaluation to the problems of parsing and generation, and establishes a tight bound on the computational complexity of recognition for a wide range of grammars. In particular, it shows that the use of higher-order \u03bb-terms for semantic representation need not be avoided for the purpose of achieving computational tractability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "Context-free grammars on \u03bb-termsConsider an augmentation of the grammar inFigure1 with Montague semantics, where the left-hand1 We only consider exact generation, not taking into account the problem of logical form equivalence, which will most likely render the problem of generation computationally intractable(Moore, 2002).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We follow standard notational conventions in typed \u03bbcalculus. Thus, an application M 1 M 2 M 3 (written without parentheses) associates to the left, \u03bbx.\u03bby.M is abbreviated to \u03bbxy.M, and \u03b1 \u2192 \u03b2 \u2192 \u03b3 stands for \u03b1 \u2192 (\u03b2 \u2192 \u03b3). 
We refer the reader toHindley, 1997 or S\u00f8rensen andUrzyczyn, 2006 for standard notions used in simply typed \u03bb-calculus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "A \u03bb-term is a \u03bbI-term if each occurrence of \u03bb binds at least one occurrence of a variable. A \u03bbI-term is linear if no subterm contains more than one free occurrence of the same variable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that the way we obtain a database from an input \u03bb-term generalizes the standard database representation of a string: from the \u03bb-term encoding \u03bbz.a o\u2192o 1 (. . . (a o\u2192o n z) . . . ) of a string a 1 . . . a n , we obtain the database {a 1 (0, 1), . . . , a n (n\u22121, n)}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "If the target-side grammar is not linear, the normal form of the target \u03bb-term cannot be explicitly computed because its size may be exponential in the size of the source \u03bb-term. Nevertheless, a typing that serves to uniquely identify the target \u03bb-term can be computed from the derivation tree in logspace. Also, if the target-side grammar is linear and string-generating, the target string can be explicitly computed from the derivation tree in logspace(Salvati, 2007).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "(uy)(vy))) unicorn (\u03bby", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S((\u03bbu.u John)(\u03bbx.(\u03bbuv.\u2203(\u03bby.\u2227(uy)(vy))) unicorn (\u03bby.find y x)))", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "(uy)(vy))) unicorn (\u03bby", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "NP(\u03bbu.u John) VP(\u03bbx.(\u03bbuv.\u2203(\u03bby.\u2227(uy)(vy))) unicorn (\u03bby.find y x)))", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "V(find) NP((\u03bbuv.\u2203(\u03bby.\u2227(uy)(vy))) unicorn)", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "V(find) NP((\u03bbuv.\u2203(\u03bby.\u2227(uy)(vy))) unicorn)", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "uy)(vy))) N(unicorn", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Det(\u03bbuv.\u2203(\u03bby.\u2227(uy)(vy))) N(unicorn)", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Uniqueness of normal proofs in implicational intuitionistic logic", "authors": [ { "first": "Takahito", "middle": [], "last": "Aoto", "suffix": "" } ], "year": 1999, "venue": "Journal of Logic, Language and Information", "volume": "8", "issue": "", "pages": "217--242", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aoto, Takahito. 1999. Uniqueness of normal proofs in implicational intuitionistic logic. Journal of Logic, Language and Information 8, 217-242.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Uniqueness of normal proofs in {\u2192, \u2227}-fragment of NJ. Research Report IS-RR-94-0024F. 
School of Information Science", "authors": [ { "first": "Takahito", "middle": [], "last": "Aoto", "suffix": "" }, { "first": "Hiroakira", "middle": [], "last": "Ono", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aoto, Takahito and Hiroakira Ono. 1994. Uniqueness of normal proofs in {\u2192, \u2227}-fragment of NJ. Research Re- port IS-RR-94-0024F. School of Information Science, Japan Advanced Institute of Science and Technology.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "On the power of magic", "authors": [ { "first": "Catriel", "middle": [], "last": "Beeri", "suffix": "" }, { "first": "Raghu", "middle": [], "last": "Ramakrishnan", "suffix": "" } ], "year": 1991, "venue": "Journal of Logic Programming", "volume": "10", "issue": "", "pages": "255--299", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beeri, Catriel and Raghu Ramakrishnan. 1991. On the power of magic. Journal of Logic Programming 10, 255-299.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "IO and OI, part I", "authors": [ { "first": "J", "middle": [], "last": "Engelfriet", "suffix": "" }, { "first": "E", "middle": [ "M" ], "last": "Schmidt", "suffix": "" } ], "year": 1977, "venue": "The Journal of Computer and System Sciences", "volume": "15", "issue": "", "pages": "328--353", "other_ids": {}, "num": null, "urls": [], "raw_text": "Engelfriet, J. and E. M. Schmidt. 1977. IO and OI, part I. The Journal of Computer and System Sciences 15, 328-353.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The complexity of languages generated by attribute grammars", "authors": [ { "first": "Joost", "middle": [], "last": "Engelfriet", "suffix": "" } ], "year": 1986, "venue": "SIAM Journal on Computing", "volume": "15", "issue": "", "pages": "70--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Engelfriet, Joost. 1986. The complexity of languages generated by attribute grammars. SIAM Journal on Computing 15, 70-86.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Grammars with Macro-Like Productions", "authors": [ { "first": "Michael", "middle": [ "J" ], "last": "Fisher", "suffix": "" } ], "year": 1968, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fisher, Michael J. 1968. Grammars with Macro-Like Productions. Ph.D. dissertation. Harvard University.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Towards abstract categorial grammars", "authors": [ { "first": "Philippe", "middle": [], "last": "De Groote", "suffix": "" } ], "year": 2001, "venue": "Association for Computational Linguistics, 39th Annual Meeting and 10th Conference of the European Chapter", "volume": "", "issue": "", "pages": "148--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "de Groote, Philippe. 2001. Towards abstract catego- rial grammars. 
In Association for Computational Lin- guistics, 39th Annual Meeting and 10th Conference of the European Chapter, Proceedings of the Conference, pages 148-155.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Tree-adjoining grammars as abstract categorial grammars", "authors": [ { "first": "Philippe", "middle": [], "last": "De Groote", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Sixth International Workshop on Tree Adjoining Grammar and Related Frameworks (TAG+6)", "volume": "", "issue": "", "pages": "145--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "de Groote, Philippe. 2002. Tree-adjoining gram- mars as abstract categorial grammars. In Proceed- ings of the Sixth International Workshop on Tree Ad- joining Grammar and Related Frameworks (TAG+6), pages 145-150. Universit\u00e1 di Venezia.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "On the expressive power of abstract categorial grammars: Representing context-free formalisms", "authors": [ { "first": "Philippe", "middle": [], "last": "De Groote", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Pogodalla", "suffix": "" } ], "year": 2004, "venue": "Journal of Logic, Language and Information", "volume": "13", "issue": "", "pages": "421--438", "other_ids": {}, "num": null, "urls": [], "raw_text": "de Groote, Philippe and Sylvain Pogodalla. 2004. On the expressive power of abstract categorial grammars: Representing context-free formalisms. Journal of Logic, Language and Information 13, 421-438.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Basic Simple Type Theory", "authors": [ { "first": "J", "middle": [], "last": "Hindley", "suffix": "" }, { "first": "", "middle": [], "last": "Roger", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hindley, J. Roger. 1997. Basic Simple Type Theory. Cambridge: Cambridge University Press.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Treeadjoining grammars", "authors": [ { "first": "K", "middle": [], "last": "Aravind", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "", "middle": [], "last": "Schabes", "suffix": "" } ], "year": 1997, "venue": "Grzegoz Rozenberg and Arto Salomaa", "volume": "3", "issue": "", "pages": "69--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aravind K. Joshi and Yves Schabes. 1997. Tree- adjoining grammars. In Grzegoz Rozenberg and Arto Salomaa, editors, Handbook of Formal Languages, Vol. 3, pages 69-123. Berlin: Springer.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Lexicalization of second-order ACGs", "authors": [ { "first": "Makoto", "middle": [], "last": "Kanazawa", "suffix": "" }, { "first": "Ryo", "middle": [], "last": "Yoshinaka", "suffix": "" } ], "year": 2005, "venue": "NII Technical Report. NII-2005-012E. National Institute of Informatics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kanazawa, Makoto and Ryo Yoshinaka. 2005. Lexi- calization of second-order ACGs. NII Technical Re- port. NII-2005-012E. 
National Institute of Informat- ics, Tokyo.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Foundations of Deductive Databases and Logic Programming", "authors": [ { "first": "Paris", "middle": [ "C" ], "last": "Kanellakis", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "547--585", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kanellakis, Paris C. 1988. Logic programming and parallel complexity. In Jack Minker, editor, Foun- dations of Deductive Databases and Logic Program- ming, pages 547-585. Los Altos, CA: Morgan Kauf- mann.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A Short Introduction to Intuitionistic Logic", "authors": [ { "first": "Grigori", "middle": [], "last": "Mints", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mints, Grigori. 2000. A Short Introduction to Intuitionis- tic Logic. New York: Kluwer Academic/Plenum Pub- lishers.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A complete, efficient sentencerealization algorithm for unification grammar", "authors": [ { "first": "Robert", "middle": [ "C" ], "last": "Moore", "suffix": "" } ], "year": 2002, "venue": "Proceedings, International Natural Language Generation Conference", "volume": "", "issue": "", "pages": "41--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moore, Robert C. 2002. A complete, efficient sentence- realization algorithm for unification grammar. In Pro- ceedings, International Natural Language Generation Conference, Harriman, New York, pages 41-48.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Probl\u00e8mes de filtrage et probl\u00e8mes d'analyse pour les grammaires cat\u00e9gorielles abstraites. Doctoral dissertation", "authors": [ { "first": "Sylvain", "middle": [], "last": "Salvati", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Salvati, Sylvain. 2005. Probl\u00e8mes de filtrage et probl\u00e8mes d'analyse pour les grammaires cat\u00e9gorielles abstraites. Doctoral dissertation, l'Institut National Polytechnique de Lorraine.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Encoding second order string ACG with deterministic tree walking transducers", "authors": [ { "first": "Sylvain", "middle": [], "last": "Salvati", "suffix": "" } ], "year": 2007, "venue": "Proceedings of FG 2006: The 11th conference on Formal Grammar", "volume": "", "issue": "", "pages": "143--156", "other_ids": {}, "num": null, "urls": [], "raw_text": "Salvati, Sylvain. 2007. Encoding second order string ACG with deterministic tree walking transducers. In Shuly Wintner, editor, Proceedings of FG 2006: The 11th conference on Formal Grammar, pages 143-156. FG Online Proceedings. Stanford, CA: CSLI Publica- tions.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "On multiple context-free grammars", "authors": [ { "first": "Hiroyuki", "middle": [], "last": "Seki", "suffix": "" }, { "first": "Takashi", "middle": [], "last": "Matsumura", "suffix": "" }, { "first": "Mamoru", "middle": [], "last": "Fujii", "suffix": "" }, { "first": "Tadao", "middle": [], "last": "Kasami", "suffix": "" } ], "year": 1991, "venue": "Theoretical Computer Science", "volume": "88", "issue": "", "pages": "191--229", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seki, Hiroyuki, Takashi Matsumura, Mamoru Fujii, and Tadao Kasami. 1991. 
On multiple context-free gram- mars. Theoretical Computer Science 88, 191-229.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Principles and implementations of deductive parsing", "authors": [ { "first": "Stuart", "middle": [ "M" ], "last": "Shieber", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Schabes", "suffix": "" }, { "first": "Fernando", "middle": [ "C N" ], "last": "Pereira", "suffix": "" } ], "year": 1995, "venue": "Journal of Logic Programming", "volume": "24", "issue": "", "pages": "3--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shieber, Stuart M., Yves Schabes, and Fernando C. N. Pereira. 1995. Principles and implementations of de- ductive parsing. Journal of Logic Programming 24, 3-36.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Parsing Schemata", "authors": [ { "first": "Klaas", "middle": [], "last": "Sikkel", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sikkel, Klaas. 1997. Parsing Schemata. Berlin: Springer.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Lectures on the Curry-Howard Isomorphism", "authors": [ { "first": "Morten", "middle": [], "last": "S\u00f8rensen", "suffix": "" }, { "first": "Pawe\u0142", "middle": [], "last": "Heine", "suffix": "" }, { "first": "", "middle": [], "last": "Urzyczyn", "suffix": "" } ], "year": 2006, "venue": "Amsterdam", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S\u00f8rensen, Morten Heine and Pawe\u0142 Urzyczyn. 2006. Lectures on the Curry-Howard Isomorphism. Ams- terdam: Elsevier.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Principles of Database and Knowledge-Base Systems. Volume I", "authors": [ { "first": "Jeffrey", "middle": [ "D" ], "last": "Ullman", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ullman, Jeffrey D. 1988. Principles of Database and Knowledge-Base Systems. Volume I. Rockville, MD.: Computer Science Press.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Bottom-up beats top-down for Datalog", "authors": [ { "first": "Jeffrey", "middle": [ "D" ], "last": "Ullman", "suffix": "" } ], "year": 1989, "venue": "Proceedings of the Eighth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems", "volume": "", "issue": "", "pages": "140--149", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ullman, Jeffrey D. 1989a. Bottom-up beats top-down for Datalog. In Proceedings of the Eighth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, Philadelphia, pages 140-149.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Principles of Database and Knowledge-Base Systems", "authors": [ { "first": "Jeffrey", "middle": [ "D" ], "last": "Ullman", "suffix": "" } ], "year": 1989, "venue": "The New Technologies", "volume": "II", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ullman, Jeffrey D. 1989b. Principles of Database and Knowledge-Base Systems. Volume II: The New Tech- nologies. 
Rockville, MD.: Computer Science Press.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Parallel complexity of logical query programs", "authors": [ { "first": "Jeffrey", "middle": [ "D" ], "last": "Ullman", "suffix": "" }, { "first": "Allen", "middle": [], "last": "Van Gelder", "suffix": "" } ], "year": 1988, "venue": "Algorithmica", "volume": "3", "issue": "", "pages": "5--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ullman, Jeffrey D. and Allen Van Gelder. 1988. Par- allel complexity of logical query programs. Algorith- mica 3, 5-42.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Characterizing Mildly Context-Sensitive Grammar Formalisms", "authors": [ { "first": "David", "middle": [ "J" ], "last": "Weir", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David J. Weir. 1988. Characterizing Mildly Context- Sensitive Grammar Formalisms. Ph.D. dissertation. University of Pennsylvania.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "The Datalog representation of a CFG.", "uris": null, "type_str": "figure", "num": null }, "FIGREF2": { "text": "The Datalog representation of a TAG.", "uris": null, "type_str": "figure", "num": null }, "FIGREF3": { "text": "Figure 5: A context-free grammar with Montague semantics. S", "uris": null, "type_str": "figure", "num": null }, "FIGREF4": { "text": "A CFLG.", "uris": null, "type_str": "figure", "num": null }, "FIGREF5": { "text": "The CFLG encoding a TAG.", "uris": null, "type_str": "figure", "num": null }, "FIGREF6": { "text": ", where \u03c3(S) = o\u2192o and \u03c3(A) = (o \u2192 o) \u2192 o \u2192 o (see de Groote, 2002 for details of this encoding). In encoding a stringgenerating grammar, a CFLG uses o as the type of string position and o \u2192 o as the type of string. Each terminal symbol is represented by a constant of type o \u2192 o, and a string a 1 . . . a n is encoded by the \u03bb-term \u03bbz.a o\u2192o 1 (. . . (a o\u2192o n z) . . . ), which has type o \u2192 o.", "uris": null, "type_str": "figure", "num": null }, "FIGREF7": { "text": "The Datalog representation of a CFLG.", "uris": null, "type_str": "figure", "num": null }, "FIGREF8": { "text": "A Datalog derivation tree (left) and the corresponding grammar derivation tree (right)", "uris": null, "type_str": "figure", "num": null } } } }