{ "paper_id": "P13-1015", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:33:49.159362Z" }, "title": "Generic binarization for parsing and translation", "authors": [ { "first": "Matthias", "middle": [], "last": "B\u00fcchse", "suffix": "", "affiliation": { "laboratory": "", "institution": "Technische Universit\u00e4t Dresden", "location": {} }, "email": "matthias.buechse@tu-dresden.de" }, { "first": "Alexander", "middle": [], "last": "Koller", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Potsdam", "location": {} }, "email": "koller@ling.uni-potsdam.de" }, { "first": "Heiko", "middle": [], "last": "Vogler", "suffix": "", "affiliation": { "laboratory": "", "institution": "Technische Universit\u00e4t Dresden", "location": {} }, "email": "heiko.vogler@tu-dresden.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Binarization of grammars is crucial for improving the complexity and performance of parsing and translation. We present a versatile binarization algorithm that can be tailored to a number of grammar formalisms by simply varying a formal parameter. We apply our algorithm to binarizing tree-to-string transducers used in syntax-based machine translation.", "pdf_parse": { "paper_id": "P13-1015", "_pdf_hash": "", "abstract": [ { "text": "Binarization of grammars is crucial for improving the complexity and performance of parsing and translation. We present a versatile binarization algorithm that can be tailored to a number of grammar formalisms by simply varying a formal parameter. We apply our algorithm to binarizing tree-to-string transducers used in syntax-based machine translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Binarization amounts to transforming a given grammar into an equivalent grammar of rank 2, i.e., with at most two nonterminals on any righthand side. The ability to binarize grammars is crucial for efficient parsing, because for many grammar formalisms the parsing complexity depends exponentially on the rank of the grammar. It is also critically important for tractable statistical machine translation (SMT). Syntaxbased SMT systems (Chiang, 2007; Graehl et al., 2008) typically use some type of synchronous grammar describing a binary translation relation between strings and/or trees, such as synchronous context-free grammars (SCFGs) (Lewis and Stearns, 1966; Chiang, 2007) , synchronous tree-substitution grammars (Eisner, 2003) , synchronous tree-adjoining grammars (Nesson et al., 2006; DeNeefe and Knight, 2009) , and tree-tostring transducers (Yamada and Knight, 2001; Graehl et al., 2008) . 
These grammars typically have a large number of rules, many of which have rank greater than two.", "cite_spans": [ { "start": 435, "end": 449, "text": "(Chiang, 2007;", "ref_id": "BIBREF3" }, { "start": 450, "end": 470, "text": "Graehl et al., 2008)", "ref_id": "BIBREF12" }, { "start": 639, "end": 664, "text": "(Lewis and Stearns, 1966;", "ref_id": "BIBREF19" }, { "start": 665, "end": 678, "text": "Chiang, 2007)", "ref_id": "BIBREF3" }, { "start": 720, "end": 734, "text": "(Eisner, 2003)", "ref_id": "BIBREF5" }, { "start": 773, "end": 794, "text": "(Nesson et al., 2006;", "ref_id": "BIBREF21" }, { "start": 795, "end": 820, "text": "DeNeefe and Knight, 2009)", "ref_id": "BIBREF4" }, { "start": 853, "end": 878, "text": "(Yamada and Knight, 2001;", "ref_id": "BIBREF24" }, { "start": 879, "end": 899, "text": "Graehl et al., 2008)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The classical approach to binarization, as known from the Chomsky normal form transformation for context-free grammars (CFGs), proceeds rule by rule. It replaces each rule of rank greater than 2 by an equivalent collection of rules of rank 2. All CFGs can be binarized in this way, which is why their recognition problem is cubic. In the case of linear context-free rewriting systems (LCFRSs, (Weir, 1988) ) the rule-by-rule technique also applies to every grammar, as long as an increased fanout it permitted (Rambow and Satta, 1999) .", "cite_spans": [ { "start": 393, "end": 405, "text": "(Weir, 1988)", "ref_id": "BIBREF23" }, { "start": 510, "end": 534, "text": "(Rambow and Satta, 1999)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There are also grammar formalisms for which the rule-by-rule technique is not complete. In the case of SCFGs, not every grammar has an equivalent representation of rank 2 in the first place (Aho and Ullman, 1969) . Even when such a representation exists, it is not always possible to compute it rule by rule. Nevertheless, the rule-by-rule binarization algorithm of Huang et al. (2009) is very useful in practice.", "cite_spans": [ { "start": 190, "end": 212, "text": "(Aho and Ullman, 1969)", "ref_id": "BIBREF0" }, { "start": 366, "end": 385, "text": "Huang et al. (2009)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we offer a generic approach for transferring the rule-by-rule binarization technique to new grammar formalisms. At the core of our approach is a binarization algorithm that can be adapted to a new formalism by changing a parameter at runtime. Thus it only needs to be implemented once, and can then be reused for a variety of formalisms. More specifically, our algorithm requires the user to (i) encode the grammar formalism as a subclass of interpreted regular tree grammars (IRTGs, (Koller and Kuhlmann, 2011) ) and (ii) supply a collection of b-rules, which represent equivalence of grammars syntactically. Our algorithm then replaces, in a given grammar, each rule of rank greater than 2 by an equivalent collection of rules of rank 2, if such a collection is licensed by the b-rules. We define completeness of b-rules in a way that ensures that if any equivalent collection of rules of rank 2 exists, the algorithm finds one. As a consequence, the algorithm binarizes every grammar that can be binarized rule by rule. 
Step (i) is possible for all the grammar formalisms mentioned above. We show Step (ii) for SCFGs and tree-to-string transducers.", "cite_spans": [ { "start": 499, "end": 526, "text": "(Koller and Kuhlmann, 2011)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We will use SCFGs as our running example throughout the paper. We will also apply the algo-rithm to tree-to-string transducers (Graehl et al., 2008; Galley et al., 2004) , which describe relations between strings in one language and parse trees of another, which means that existing methods for binarizing SCFGs and LCFRSs cannot be directly applied to these systems. To our knowledge, our binarization algorithm is the first to binarize such transducers. We illustrate the effectiveness of our system by binarizing a large treeto-string transducer for English-German SMT.", "cite_spans": [ { "start": 127, "end": 148, "text": "(Graehl et al., 2008;", "ref_id": "BIBREF12" }, { "start": 149, "end": 169, "text": "Galley et al., 2004)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Plan of the paper. We start by defining IRTGs in Section 2. In Section 3, we define the general outline of our approach to rule-by-rule binarization for IRTGs, and then extend this to an efficient binarization algorithm based on b-rules in Section 4. In Section 5 we show how to use the algorithm to perform rule-by-rule binarization of SCFGs and tree-to-string transducers, and relate the results to existing work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Grammar formalisms employed in parsing and SMT, such as those mentioned in the introduction, differ in the the derived objects-e.g., strings, trees, and graphs-and the operations involved in the derivation-e.g., concatenation, substitution, and adjoining. Interpreted regular tree grammars (IRTGs) permit a uniform treatment of many of these formalisms. To this end, IRTGs combine two ideas, which we explain here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpreted regular tree grammars", "sec_num": "2" }, { "text": "Algebras IRTGs represent the objects and operations symbolically using terms; the object in question is obtained by interpreting each symbol in the term as a function. As an example, Table 1 shows terms for a string and a tree, together with the denoted object. In the string case, we describe complex strings as concatenation (con 2 ) of elementary symbols (e.g., a, b); in the tree case, we alternate the construction of a sequence of trees (con 2 ) with the construction of a single tree by placing a symbol (e.g., \u03b1, \u03b2, \u03c3) on top of a (possibly empty) sequence of trees. Whenever a term contains variables, it does not denote an object, but rather a function. In the parlance of universalalgebra theory, we are employing initial-algebra semantics (Goguen et al., 1977) .", "cite_spans": [ { "start": 751, "end": 772, "text": "(Goguen et al., 1977)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 183, "end": 190, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Interpreted regular tree grammars", "sec_num": "2" }, { "text": "An alphabet is a nonempty finite set. Throughout this paper, let X = {x 1 , x 2 , . . . } be a set, whose elements we call variables. We let X k denote the set {x 1 , . . . , x k } for every k \u2265 0. 
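To make the initial-algebra view above concrete, here is a minimal illustrative sketch (our own hypothetical Python encoding, not part of the paper or its implementation) of terms with variables and their evaluation in the string algebra of Table 1; the names Term, string_ops and evaluate are assumptions of the sketch.

```python
# Minimal sketch (assumed encoding, not the paper's implementation): terms over a
# signature with variables x1, x2, ..., and evaluation by interpreting each symbol
# as an operation of the algebra (initial-algebra semantics).

from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass(frozen=True)
class Term:
    label: str                          # a symbol of the signature, or a variable "x1", "x2", ...
    children: Tuple["Term", ...] = ()

def is_var(t: Term) -> bool:
    return not t.children and t.label.startswith("x")

# String algebra of Table 1 over Gamma = {a, b, c}: constants denote themselves,
# con_2 denotes binary concatenation.
string_ops: Dict[str, Callable[..., str]] = {
    "a": lambda: "a", "b": lambda: "b", "c": lambda: "c",
    "con2": lambda s, t: s + t,
}

def evaluate(t: Term, ops: Dict[str, Callable[..., str]], env: Dict[str, str]) -> str:
    """Evaluate a term; a term with variables denotes a function of those variables."""
    if is_var(t):
        return env[t.label]
    return ops[t.label](*(evaluate(c, ops, env) for c in t.children))

x1, x2 = Term("x1"), Term("x2")
print(evaluate(Term("con2", (Term("a"), Term("b"))), string_ops, {}))   # ab
t = Term("con2", (Term("con2", (x2, Term("a"))), x1))                   # con_2(con_2(x2, a), x1)
print(evaluate(t, string_ops, {"x1": "b", "x2": "c"}))                  # cab
```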
Let \u03a3 be an alphabet and V \u2286 X. We write T \u03a3 (V ) for the set of all terms over \u03a3 with variables V , i.e., the smallest set T such that (i) V \u2286 T and (ii) for every \u03c3 \u2208 \u03a3, k \u2265 0, and t 1 , . . . , t k \u2208 T , we have \u03c3(t 1 , . . . , t k ) \u2208 T . Alternatively, we view T \u03a3 (V ) as the set of all (rooted, labeled, ordered, unranked) trees over \u03a3 and V , and draw them as usual. By T \u03a3 we abbreviate T \u03a3 (\u2205). The set C \u03a3 (V ) of contexts over \u03a3 and V is the set of all trees over \u03a3 and V in which each variable in V occurs exactly once.", "cite_spans": [ { "start": 491, "end": 527, "text": "(rooted, labeled, ordered, unranked)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Interpreted regular tree grammars", "sec_num": "2" }, { "text": "A signature is an alphabet \u03a3 where each symbol is equipped with an arity. We write \u03a3| k for the subset of all k-ary symbols of \u03a3, and \u03c3| k to denote \u03c3 \u2208 \u03a3| k . We denote the signature by \u03a3 as well. A signature is binary if the arities do not exceed 2. Whenever we use T \u03a3 (V ) with a signature \u03a3, we assume that the trees are ranked, i.e., each node labeled by \u03c3 \u2208 \u03a3| k has exactly k children.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpreted regular tree grammars", "sec_num": "2" }, { "text": "Let \u2206 be a signature. A \u2206-algebra A consists of a nonempty set A called the domain and, for each symbol f \u2208 \u2206 with rank k, a total function f A : A k \u2192 A, the operation associated with f . We can evaluate any term t in T \u2206 (X k ) in A, to obtain a k-ary operation t A over the domain. In particular, terms in T \u2206 evaluate to elements of A. For instance, in the string algebra shown in Table 1, the term con 2 (a, b) evaluates to ab, and the term con 2 (con 2 (x 2 , a), x 1 ) evaluates to a binary operation f such that, e.g., f (b, c) = cab.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpreted regular tree grammars", "sec_num": "2" }, { "text": "Bimorphisms IRTGs separate the finite control (state behavior) of a derivation from its derived object (in its term representation; generational behavior); the former is captured by a regular tree language, while the latter is obtained by applying a tree homomorphism. This idea goes back to the tree bimorphisms of Arnold and Dauchet (1976) .", "cite_spans": [ { "start": 316, "end": 341, "text": "Arnold and Dauchet (1976)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Interpreted regular tree grammars", "sec_num": "2" }, { "text": "Let \u03a3 be a signature. A regular tree grammar (RTG) G over \u03a3 is a triple (Q, q 0 , R) where Q is a finite set (of states), q 0 \u2208 Q, and R is a finite set of rules of the form q \u2192 \u03b1(q 1 , . . . , q k ), where q \u2208 Q, \u03b1 \u2208 \u03a3| k and q, q 1 , . . . , q k \u2208 Q. We call \u03b1 the terminal symbol and k the rank of the rule. Rules of rank greater than two are called suprabinary. For every q \u2208 Q we define the language L q (G) derived from q as the set", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpreted regular tree grammars", "sec_num": "2" }, { "text": "{\u03b1(t 1 , . . . , t k ) | q \u2192 \u03b1(q 1 , . . . 
, q k ) \u2208 R, t j \u2208 L q j (G)}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpreted regular tree grammars", "sec_num": "2" }, { "text": "If q = q 0 , we drop the superscript and write L(G) for the tree language of G. In the literature, there is a definition of RTG which also permits more than one terminal symbol per rule, strings over \u0393 trees over \u0393 example term and denoted object or none. This does not increase the generative capacity (Brainerd, 1969) . A (linear, nondeleting) tree homomorphism is a mapping h : T \u03a3 (X) \u2192 T \u2206 (X) that satisfies the following condition: there is a mapping g :", "cite_spans": [ { "start": 303, "end": 319, "text": "(Brainerd, 1969)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Interpreted regular tree grammars", "sec_num": "2" }, { "text": "con 2 a b \u2192 ab \u03c3 con 2 \u03b1 con 0 \u03b2 con 0 \u2192 \u03c3 \u03b1 \u03b2 domain \u0393 * T * \u0393 (set of sequences of trees) signature \u2206 {a| 0 | a \u2208 \u0393} \u222a {\u03b3| 1 | \u03b3 \u2208 \u0393} \u222a {con k | k | 0 \u2264 k \u2264 K, k = 1} {con k | k | 0 \u2264 k \u2264 K, k = 1} operations a : () \u2192 a \u03b3 : x 1 \u2192 \u03b3(x 1 ) con k : (x 1 , . . . , x k ) \u2192 x 1 \u2022 \u2022 \u2022 x k con k : (x 1 , . . . , x k ) \u2192 x 1 \u2022 \u2022 \u2022 x k", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpreted regular tree grammars", "sec_num": "2" }, { "text": "\u03a3 \u2192 T \u2206 (X) such that (i) g(\u03c3) \u2208 C \u2206 (X k ) for every \u03c3 \u2208 \u03a3| k , (ii) h(\u03c3(t 1 , . . . , t k ))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpreted regular tree grammars", "sec_num": "2" }, { "text": "is the tree obtained from g(\u03c3) by replacing the occurrence of x j by h(t j ), and (iii) h(x j ) = x j . This extends the usual definition of linear and nondeleting homomorphisms (G\u00e9cseg and Steinby, 1997) to trees with variables. We abuse notation and write h(\u03c3) for g(\u03c3) for every \u03c3 \u2208 \u03a3.", "cite_spans": [ { "start": 178, "end": 204, "text": "(G\u00e9cseg and Steinby, 1997)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Interpreted regular tree grammars", "sec_num": "2" }, { "text": "Let n \u2265 1 and \u2206 1 , . . . , \u2206 n be signatures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpreted regular tree grammars", "sec_num": "2" }, { "text": "A (generalized) bimorphism over (\u2206 1 , . . . , \u2206 n ) is a tuple B = (G, h 1 , . . . , h n ) where G is an RTG over some signature \u03a3 and h i is a tree homo- morphism from T \u03a3 (X) into T \u2206 i (X). The lan- guage L(B) induced by B is the tree relation {(h 1 (t), . . . , h n (t)) | t \u2208 L(G)}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpreted regular tree grammars", "sec_num": "2" }, { "text": "An IRTG is a bimorphism whose derived trees are viewed as terms over algebras; see Fig. 1 .", "cite_spans": [], "ref_spans": [ { "start": 83, "end": 89, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Interpreted regular tree grammars", "sec_num": "2" }, { "text": "Formally, an IRTG G over (\u2206 1 , . . . , \u2206 n ) is a tuple (B, A 1 , . . . , A n ) such that B is a bimor- phism over (\u2206 1 , . . . , \u2206 n ) and A i is a \u2206 i -algebra. The language L(G) induced by G is the relation {(t A 1 1 , . . . , t An n ) | (t 1 , . . . 
, t n ) \u2208 L(B)}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpreted regular tree grammars", "sec_num": "2" }, { "text": "We call the trees in L(G) derivation trees and the terms in L(B) semantic terms. We say that two IRTGs G and G are equivalent if L(G) = L(G ). IRTGs were first defined in (Koller and Kuhlmann, 2011) .", "cite_spans": [ { "start": 171, "end": 198, "text": "(Koller and Kuhlmann, 2011)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Interpreted regular tree grammars", "sec_num": "2" }, { "text": "For example, Fig. 2 is an IRTG that encodes a synchronous context-free grammar (SCFG). It contains a bimorphism B = (G, h 1 , h 2 ) consisting of an RTG G with four rules and homomor- phisms h 1 and h 2 which map derivation trees to trees over the signature of the string algebra in Table 1. By evaluating these trees in the algebra, the symbols con 3 and con 4 are interpreted as concatenation, and we see that the first rule encodes the SCFG rule A \u2192 BCD, DaBC . Figure 3 shows a derivation tree with its two homomorphic images, which evaluate to the strings bcd and dabc. IRTGs can be tailored to the expressive capacity of specific grammar formalisms by selecting suitable algebras. The string algebra in Table 1 yields context-free languages, more complex string al-", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 19, "text": "Fig. 2", "ref_id": "FIGREF0" }, { "start": 465, "end": 473, "text": "Figure 3", "ref_id": null }, { "start": 709, "end": 716, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Interpreted regular tree grammars", "sec_num": "2" }, { "text": "L(G) T \u2206 1 \u2022 \u2022 \u2022 T \u2206n A 1 \u2022 \u2022 \u2022 A n h 1 h n (.) A 1 (.) An \u2286 T \u03a3 bimorphism B = (G, h1, h2) IRTG G = (B, A1, A2) derivation trees semantic terms derived objects Figure 1: IRTG, bimorphism overview. A \u2192 \u03b1(B, C, D) B \u2192 \u03b1 1 , C \u2192 \u03b1 2 , D \u2192 \u03b1 3 con 3 x 1 x 2 x 3 h1 \u2190\u2212 \u03b1 h2 \u2212\u2192 con 4 x 3 a x 1 x 2 b h1 \u2190\u2212 \u03b1 1 h2 \u2212\u2192 b c h1 \u2190\u2212 \u03b1 2 h2 \u2212\u2192 c d h1 \u2190\u2212 \u03b1 3 h2 \u2212\u2192 d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpreted regular tree grammars", "sec_num": "2" }, { "text": "con 3 b c d h 1 \u2190\u2212 \u03b1 \u03b1 1 \u03b1 2 \u03b1 3 h 2 \u2212\u2192 con 4 d a b c", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpreted regular tree grammars", "sec_num": "2" }, { "text": "Figure 3: Derivation tree and semantic terms. gebras yield tree-adjoining languages (Koller and Kuhlmann, 2012) , and algebras over other domains can yield languages of trees, graphs, or other objects. 
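To illustrate how an IRTG separates derivation from interpretation, the following is a small sketch (an assumed Python encoding of the grammar in Fig. 2, not the paper's system): the derivation tree of Fig. 3 is mapped by the two homomorphisms to semantic terms, which the string algebra evaluates to the pair bcd / dabc. The tuple encoding and the helper names hom and eval_string are assumptions of the sketch.

```python
# Sketch (assumed encoding, not the paper's implementation) of the SCFG of Fig. 2 as a
# bimorphism: derivation trees over {alpha, alpha1, alpha2, alpha3} are mapped by two tree
# homomorphisms h1, h2 to semantic terms, which the string algebra then evaluates.
# Terms are nested tuples; ("x", j) stands for the variable x_j.

H1 = {  # h1(alpha) = con3(x1, x2, x3), h1(alpha_i) = b / c / d
    "alpha": ("con3", ("x", 1), ("x", 2), ("x", 3)),
    "alpha1": ("b",), "alpha2": ("c",), "alpha3": ("d",),
}
H2 = {  # h2(alpha) = con4(x3, a, x1, x2)
    "alpha": ("con4", ("x", 3), ("a",), ("x", 1), ("x", 2)),
    "alpha1": ("b",), "alpha2": ("c",), "alpha3": ("d",),
}

def hom(h, deriv):
    """Apply a tree homomorphism to a derivation tree (symbol, child derivations ...)."""
    sym, children = deriv[0], deriv[1:]
    def subst(t):
        if t[0] == "x":                                  # x_j is replaced by the image of child j
            return hom(h, children[t[1] - 1])
        return (t[0],) + tuple(subst(s) for s in t[1:])
    return subst(h[sym])

def eval_string(t):
    """String algebra of Table 1: constants denote themselves, con_k is concatenation."""
    if t[0].startswith("con"):
        return "".join(eval_string(s) for s in t[1:])
    return t[0]

deriv = ("alpha", ("alpha1",), ("alpha2",), ("alpha3",))   # the derivation tree of Fig. 3
print(eval_string(hom(H1, deriv)), eval_string(hom(H2, deriv)))    # bcd dabc
```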
Furthermore, IRTGs with n = 1 describe languages that are subsets of the algebra's domain, n = 2 yields synchronous languages or tree transductions, and so on.", "cite_spans": [ { "start": 84, "end": 111, "text": "(Koller and Kuhlmann, 2012)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Interpreted regular tree grammars", "sec_num": "2" }, { "text": "A \u2192 \u03b1 (A , D) A \u2192 \u03b1 (B, C) con 2 x 1 x 2 h 1 \u2190\u2212 \u03b1 h 2 \u2212\u2192 con 2 con 2 x 2 a x 1 con 2 x 1 x 2 h 1 \u2190\u2212 \u03b1 h 2 \u2212\u2192 con 2 x 1 x 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpreted regular tree grammars", "sec_num": "2" }, { "text": "We will now show how to apply the rule-by-rule binarization technique to IRTGs. We start in this section by defining the binarization of a rule in an IRTG, and characterizing it in terms of binarization terms and variable trees. We derive the actual binarization algorithm from this in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "IRTG binarization", "sec_num": "3" }, { "text": "For the remainder of this paper, let G = (B, A 1 , . . . , A n ) be an IRTG over (\u2206 1 , . . . , \u2206 n ) with B = (G, h 1 , . . . , h n ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "IRTG binarization", "sec_num": "3" }, { "text": "We start with an example to give an intuition of our approach. Consider the first rule in Fig. 2 , which has rank three. This rule derives (in one step) the fragment \u03b1(x 1 , x 2 , x 3 ) of the derivation tree in Fig. 3 , which is mapped to the semantic terms h 1 (\u03b1) and h 2 (\u03b1) shown in Fig. 2 . Now consider the rules in Fig. 4 . These rules can be used to derive (in two steps) the derivation tree fragment \u03be in Fig. 5e . Note that the terms h 1 (\u03be) and h 1 (\u03b1) are equivalent in that they denote the same function over the string algebra, and so are the terms h 2 (\u03be) and h 2 (\u03b1). Thus, replacing the \u03b1-rule by the rules in Fig. 4 does not change the language of the IRTG. However, since the new rules are binary, (a)", "cite_spans": [], "ref_spans": [ { "start": 90, "end": 96, "text": "Fig. 2", "ref_id": "FIGREF0" }, { "start": 212, "end": 218, "text": "Fig. 3", "ref_id": null }, { "start": 288, "end": 294, "text": "Fig. 2", "ref_id": "FIGREF0" }, { "start": 323, "end": 329, "text": "Fig. 4", "ref_id": "FIGREF1" }, { "start": 415, "end": 422, "text": "Fig. 5e", "ref_id": null }, { "start": 628, "end": 634, "text": "Fig. 
4", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "An introductory example", "sec_num": "3.1" }, { "text": "con 3 x 1 x 2 x 3 con 4 x 3 a x 1 x 2 (b) con 2 x 1 con 2 x 2 x 3 con 2 con 2 x 1 x 2 x 3 t 1 : con 2 con 2 x 3 a con 2 x 1 x 2 t 2 : con 2 con 2 x3 con 2 a x1 x2 (c) (d) con 2 x 1 x 2 x 1 con 2 x 1 x 2 x 1 x 2 con 2 con 2 x 2 a x 1 x 1 con 2 x 1 x 2 x 1 x 2 (e) h1 \u2190\u2212 \u03b1 h2 \u2212\u2192 {x 1 , x 2 , x 3 } {x 1 } {x 2 , x 3 } {x 2 } {x 3 } {x 1 , x 2 , x 3 } {x 1 , x 2 } {x 1 } {x 2 } {x 3 } \u03c4 : {x 1 , x 2 , x 3 } {x 1 , x 3 } {x 1 } {x 3 } {x 2 } con 2 con 2 x 1 x 2 x 3 t 1 : h \u2032 1 \u2190\u2212 \u03b1 \u2032 \u03b1 \u2032\u2032 x 1 x 2 x 3 \u03be : h \u2032 2 \u2212\u2192 con 2 con 2 x 3 a con 2 x 1 x 2 t 2 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An introductory example", "sec_num": "3.1" }, { "text": "Figure 5: Outline of the binarization algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An introductory example", "sec_num": "3.1" }, { "text": "parsing and translation will be cheaper. Now we want to construct the binary rules systematically. In the example, we proceed as follows (cf. Fig. 5 ). For each of the terms h 1 (\u03b1) and h 2 (\u03b1) (Fig. 5a ), we consider all terms that satisfy two properties ( Fig. 5b ): (i) they are equivalent to h 1 (\u03b1) and h 2 (\u03b1), respectively, and (ii) at each node at most two subtrees contain variables. As Fig. 5 suggests, there may be many different terms of this kind. For each of these terms, we analyze the bracketing of variables, obtaining what we call a variable tree (Fig. 5c ). Now we pick terms t 1 and t 2 corresponding to h 1 (\u03b1) and h 2 (\u03b1), respectively, such that (iii) they have the same variable tree, say \u03c4 . We construct a tree \u03be from \u03c4 by a simple relabeling, and we read off the tree homomorphisms h 1 and h 2 from a decomposition we perform on t 1 and t 2 , respectively; see Fig. 5 , dotted arrows, and compare the boxes in Fig. 5d with the homomorphisms in Fig. 4 . Now the rules in Fig. 4 are easily extracted from \u03be.", "cite_spans": [], "ref_spans": [ { "start": 142, "end": 148, "text": "Fig. 5", "ref_id": null }, { "start": 194, "end": 202, "text": "(Fig. 5a", "ref_id": null }, { "start": 258, "end": 265, "text": "Fig. 5b", "ref_id": null }, { "start": 396, "end": 402, "text": "Fig. 5", "ref_id": null }, { "start": 565, "end": 573, "text": "(Fig. 5c", "ref_id": null }, { "start": 888, "end": 894, "text": "Fig. 5", "ref_id": null }, { "start": 937, "end": 944, "text": "Fig. 5d", "ref_id": null }, { "start": 971, "end": 977, "text": "Fig. 4", "ref_id": "FIGREF1" }, { "start": 997, "end": 1003, "text": "Fig. 4", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "An introductory example", "sec_num": "3.1" }, { "text": "These rules are equivalent to r because of (i); they are binary because \u03be is binary, which in turn holds because of (ii); finally, the decompositions of t 1 and t 2 are compatible with \u03be because of (iii). We call terms t 1 and t 2 binarization terms if they satisfy (i)-(iii). We will see below that we can con-struct binary rules equivalent to r from any given sequence of binarization terms t 1 , t 2 , and that binarization terms exist whenever equivalent binary rules exist. 
The majority of this paper revolves around the question of finding binarization terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An introductory example", "sec_num": "3.1" }, { "text": "Rule-by-rule binarization of IRTGs follows the intuition laid out in this example closely: it means processing each suprabinary rule, attempting to replace it with an equivalent collection of binary rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An introductory example", "sec_num": "3.1" }, { "text": "We will now make this intuition precise. To this end, we assume that r = q \u2192 \u03b1(q 1 , . . . , q k ) is a suprabinary rule of G. As we have seen, binarizing r boils down to constructing:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization terms", "sec_num": "3.2" }, { "text": "\u2022 a tree \u03be over some binary signature \u03a3 and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization terms", "sec_num": "3.2" }, { "text": "\u2022 tree homomorphisms h 1 , . . . , h n of type h i : T \u03a3 (X) \u2192 T \u2206 i (X)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization terms", "sec_num": "3.2" }, { "text": ", such that h i (\u03be) and h i (\u03b1) are equivalent, i.e., they denote the same function over A i . We call such a tuple (\u03be, h 1 , . . . , h n ) a binarization of the rule r.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization terms", "sec_num": "3.2" }, { "text": "Note that a binarization of r need not exist. The problem of rule-by-rule binarization consists in computing a binarization of each suprabinary rule of a grammar. If such a binarization does not exist, the problem does not have a solution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization terms", "sec_num": "3.2" }, { "text": "In order to define variable trees, we assume a mapping seq that maps each finite set U of pairwise disjoint variable sets to a sequence over U which contains each element exactly once. Let t \u2208 C \u2206 (X k ). The variable set of t is the set of all variables that occur in t. The set S(t) of subtree variables of t consists of the nonempty variable sets of all subtrees of t. We represent S(t) as a tree v(t), which we call variable tree as follows. Any two elements of S(t) are either comparable (with respect to the subset relation) or disjoint. We extend this ordering to a tree structure by ordering disjoint elements via seq. We let", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization terms", "sec_num": "3.2" }, { "text": "v(L) = {v(t) | t \u2208 L} for every L \u2286 C \u2206 (X k ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization terms", "sec_num": "3.2" }, { "text": "In the example of Fig. 5 , t 1 and t 2 have the same set of subtree variables; it is {{x 1 }, {x 2 }, {x 3 }, {x 1 , x 2 }, {x 1 , x 2 , x 3 }}. If we assume that seq orders sets of variables according to the least variable index, we arrive at the variable tree in the center of Fig. 5 .", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 24, "text": "Fig. 5", "ref_id": null }, { "start": 279, "end": 285, "text": "Fig. 5", "ref_id": null } ], "eq_spans": [], "section": "Binarization terms", "sec_num": "3.2" }, { "text": "Now let t 1 \u2208 T \u2206 1 (X k ), . . . 
, t n \u2208 T \u2206n (X k )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization terms", "sec_num": "3.2" }, { "text": ". We call the tuple t 1 , . . . , t n binarization terms of r if the following properties hold: (i) h i (\u03b1) and t i are equivalent; (ii) at each node the tree t i contains at most two subtrees with variables; and (iii) the terms t 1 , . . . , t n have the same variable tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization terms", "sec_num": "3.2" }, { "text": "Assume for now that we have found binarization terms t 1 , . . . , t n . We show how to construct a binarization (\u03be, h 1 , . . . , h n ) of r with t i = h i (\u03be).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization terms", "sec_num": "3.2" }, { "text": "First, we construct \u03be. Since t 1 , . . . , t n are binarization terms, they have the same variable tree, say, \u03c4 . We obtain \u03be from \u03c4 by replacing every label of the form {x j } with x j , and every other label with a fresh symbol. Because of condition (ii) in in the definition of binarization terms, \u03be is binary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization terms", "sec_num": "3.2" }, { "text": "In order to construct h i (\u03c3) for each symbol \u03c3 in \u03be, we transform t i into a tree t i with labels from C \u2206 i (X) and the same structure as \u03be. Then we read off h i (\u03c3) from the node of t i that corresponds to the \u03c3-labeled node of \u03be. The transformation proceeds as illustrated in Fig. 6 : first, we apply the maximal decomposition operation d ; it replaces every label f \u2208 \u2206 i | k by the tree f (x 1 , . . . , x k ), represented as a box. After that, we keep applying the merge operation m as often as possible; it merges two boxes that are in a parent-child relation, given that one of them has at most one child. Thus the number of variables in any box can only decrease. Finally, the reorder operation o orders the children of each box according to the seq of their variable sets. These operations do not change the variable tree; one can use this to show that t i has the same structure as \u03be.", "cite_spans": [], "ref_spans": [ { "start": 280, "end": 286, "text": "Fig. 6", "ref_id": null } ], "eq_spans": [], "section": "Binarization terms", "sec_num": "3.2" }, { "text": "Thus, if we can find binarization terms, we can construct a binarization of r. Conversely, for any given binarization (\u03be, h 1 , . . . , h n ) the semantic terms h 1 (\u03be), . . . , h n (\u03be) are binarization terms. 
This proves the following lemma.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization terms", "sec_num": "3.2" }, { "text": "Lemma 1 There is a binarization of r if and only if there are binarization terms of r.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization terms", "sec_num": "3.2" }, { "text": "It remains to show how we can find binarization terms of r, if there are any.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding binarization terms", "sec_num": "3.3" }, { "text": "Let", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding binarization terms", "sec_num": "3.3" }, { "text": "b i : T \u2206 i (X k ) \u2192 P(T \u2206 i (X k ))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding binarization terms", "sec_num": "3.3" }, { "text": "the mapping with b i (t) = {t \u2208 T \u2206 i (X k ) | t and t are equivalent, and at each node t has at most two children with variables}. Figure 5b shows some elements of b 1 (h 1 (\u03b1)) and b 2 (h 2 (\u03b1)) for our example. Terms t 1 , . . . , t n are binarization terms precisely when t i \u2208 b i (h i (\u03b1)) and t 1 , . . . , t n have the same variable tree. Thus we can characterize binarization terms as follows.", "cite_spans": [], "ref_spans": [ { "start": 132, "end": 141, "text": "Figure 5b", "ref_id": null } ], "eq_spans": [], "section": "Finding binarization terms", "sec_num": "3.3" }, { "text": "i v(b i (h i (\u03b1))) = \u2205. con 2 con 2 x 3 a con 2 x 1 x 2 d con 2 x 1 x 2 con 2 x 1 x 2 x 3 a con 2 x 1 x 2 x 1 x 2 m con 2 x 1 x 2 con 2 x 1 a x 3 con 2 x 1 x 2 x 1 x 2 m con 2 con 2 x 1 a x 2 x 3 con 2 x 1 x 2 x 1 x 2 o con 2 con 2 x 2 a x 1 con 2 x 1 x 2 x 1 x 2 x 3 Figure 6: Transforming t 2 into t 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 2 There are binarization terms if and only if", "sec_num": null }, { "text": "This result suggests the following procedure for obtaining binarization terms. First, determine whether the intersection in Lemma 2 is empty. If it is, then there is no binarization of r. Otherwise, select a variable tree \u03c4 from this set. We know that there are trees t 1 , . . . , t n such that t i \u2208 b i (h i (\u03b1)) and v(t i ) = \u03c4 . We can therefore select arbitrary concrete trees t i \u2208 b i (h i (\u03b1)) \u2229 v \u22121 (\u03c4 ). The terms t 1 , . . . , t n are then binarization terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 2 There are binarization terms if and only if", "sec_num": null }, { "text": "In this section we develop our binarization algorithm. Its key task is finding binarization terms t 1 , . . . , t n . This task involves deciding term equivalence, as t i must be equivalent to h i (\u03b1). In general, equivalence is undecidable, so the task cannot be solved. We avoid deciding equivalence by requiring the user to specify an explicit approximation of b i , which we call a b-rule. This parameter gives rise to a restricted version of the ruleby-rule binarization problem, which is efficiently computable while remaining practically relevant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "Let \u2206 be a signature. 
A binarization rule (brule) over \u2206 is a mapping b : \u2206 \u2192 P(T \u2206 (X)) where for every f \u2208 \u2206| k we have that b(f ) \u2286 C \u2206 (X k ), at each node of a tree in b(f ) only two children contain variables, and b(f ) is a regular tree language. We extend b to T \u2206 (X) by setting b(x j ) = {x j } and b (f (t 1 , . . . , t k ) ", "cite_spans": [], "ref_spans": [ { "start": 311, "end": 334, "text": "(f (t 1 , . . . , t k )", "ref_id": null } ], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": ") = {t[x j /t j | 1 \u2264 j \u2264 k] | t \u2208 b(f ), t j \u2208 b(t j )}, where [x j /t j ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "denotes substitution of x j by t j . Given an algebra A over \u2206, a b-rule b over \u2206 is called a b-rule over A if, for every t \u2208 T \u2206 (X k ) and t \u2208 b(t), t and t are equivalent in A. Such a b-rule encodes equivalence in A, and it does so in an explicit and compact way: because b(f ) is a regular tree language, a b-rule can be specified by a finite collection of RTGs, one for each symbol f \u2208 \u2206. We will look at examples (for the string and tree algebras shown earlier) in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "From now on, we assume that b 1 , . . . , b n are b-rules over A 1 , . . . , A n , respectively. A binarization (\u03be, h 1 , . . . , h n ) of r is a binarization of r with respect to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "b 1 , . . . , b n if h i (\u03be) \u2208 b i (h i (\u03b1)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "Likewise, binarization terms t 1 , . . . , t n are binarization terms with respect to b 1 , . . . , b n if t i \u2208 b i (h i (\u03b1)). Lemmas 1 and 2 carry over to the restricted notions. The problem of rule-byrule binarization with respect to b 1 , . . . , b n consists in computing a binarization with respect to b 1 , . . . , b n for each suprabinary rule.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "By definition, every solution to this restricted problem is also a solution to the general problem. The converse need not be true. However, we can guarantee that the restricted problem has at least one solution whenever the general problem has one, by requiring", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "v(b i (h i (\u03b1)) = v(b(h i (\u03b1)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "Then the intersection in Lemma 2 is empty in the restricted case if and only if it is empty in the general case. We call the b-rules b 1 , . . . , b 1 complete on G if the equation holds for every \u03b1 \u2208 \u03a3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "Now we show how to effectively compute binarization terms with respect to b 1 , . . . , b n , along the lines of Section 3.3. 
More specifically, we construct an RTG for each of the sets (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "i) b i (h i (\u03b1)), (ii) b i = v(b i (h i (\u03b1))), (iii) i b i , and (iv) b i = b i (h i (\u03b1))\u2229v \u22121 (\u03c4 ) (given \u03c4 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "Then we can select \u03c4 from (iii) and t i from (iv) using a standard algorithm, such as the Viterbi algorithm or Knuth's algorithm (Knuth, 1977; Nederhof, 2003; Huang and Chiang, 2005) . The effectiveness of our procedure stems from the fact that we only manipulate RTGs and never enumerate languages.", "cite_spans": [ { "start": 129, "end": 142, "text": "(Knuth, 1977;", "ref_id": "BIBREF16" }, { "start": 143, "end": 158, "text": "Nederhof, 2003;", "ref_id": "BIBREF20" }, { "start": 159, "end": 182, "text": "Huang and Chiang, 2005)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "The construction for (i) is recursive, following the definition of b i . The base case is a language {x j }, for which the RTG is easy. For the recursive case, we use the fact that regular tree languages are closed under substitution (G\u00e9cseg and Steinby, 1997, Prop. 7. 3). Thus we obtain an RTG", "cite_spans": [ { "start": 234, "end": 269, "text": "(G\u00e9cseg and Steinby, 1997, Prop. 7.", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "G i with L(G i ) = b i (h i (\u03b1)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "For (ii) and (iv), we need the following auxiliary construction. Let G i = (P, p 0 , R). We define the mapping var i : P \u2192 P(X k ) such that for every p \u2208 P , every t \u2208 L p (G i ) contains exactly the variables in var i (p). We construct it as follows. We initialize var i (p) to \"unknown\" for every p. For every rule p \u2192 x j , we set var i (p) = {x j }. For every rule p \u2192 \u03c3(p 1 , . . . , p k ) such that var i (p j ) is known, we set var i (p) = j var i (p j ). This is iterated; it can be shown that var i (p) is never assigned two different values for the same p. Finally, we set all remaining unknown entries to \u2205. For (ii), we construct an RTG G i with L(G i ) = b i as follows. We let", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "G i = ({ var i (p) | p \u2208 P }, var i (p 0 ), R ) where R consists of the rules {x j } \u2192 {x j } if p \u2192 x i \u2208 R , var i (p) \u2192 var i (p)( U 1 , . . . , U l ) if p \u2192 \u03c3(p 1 , . . . , p k ) \u2208 R, V = {var i (p j ) | 1 \u2264 j \u2264 k} \\ {\u2205}, |V | \u2265 2, seq(V ) = (U 1 , . . . , U l ) .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "For (iii), we use the standard product construction (G\u00e9cseg and Steinby, 1997, Prop. 7.1) .", "cite_spans": [ { "start": 52, "end": 89, "text": "(G\u00e9cseg and Steinby, 1997, Prop. 7.1)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "For (iv), we construct an RTG G i such that L(G i ) = b i as follows. 
We let", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "G i = (P, p 0 , R ), where R consists of the rules p \u2192 \u03c3(p 1 , . . . , p k ) if p \u2192 \u03c3(p 1 , . . . , p k ) \u2208 R, V = {var i (p j ) | 1 \u2264 j \u2264 k} \\ {\u2205}, if |V | \u2265 2, then (var i (p), seq(V )) is a fork in \u03c4 . By a fork (u, u 1 \u2022 \u2022 \u2022 u k )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "in \u03c4 , we mean that there is a node labeled u with k children labeled u 1 up to u k . At this point we have all the ingredients for our binarization algorithm, shown in Algorithm 1. It operates directly on a bimorphism, because all the relevant information about the algebras is captured by the b-rules. The following theorem documents the behavior of the algorithm. In short, it solves the problem of rule-by-rule binarization with respect to b-rules b 1 , . . . , b n .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "Theorem 3 Let G = (B, A 1 , . . . , A n ) be an IRTG, and let b 1 , . . . , b n be b-rules over A 1 , . . . , A n , respectively.", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 29, "text": "(B, A 1 , .", "ref_id": null } ], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "Algorithm 1 terminates. Let B be the bimorphism computed by Algorithm 1 on B and b 1 , . . . , b n . Then G = (B , A 1 , . . . , A n ) is equivalent to G, and G is of rank 2 if and only Input: bimorphism B = (G, h 1 , . . . , h n ) ,", "cite_spans": [], "ref_spans": [ { "start": 110, "end": 134, "text": "(B , A 1 , . . . , A n )", "ref_id": null }, { "start": 208, "end": 231, "text": "(G, h 1 , . . . , h n )", "ref_id": null } ], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "b-rules b 1 , . . . , b n over \u2206 1 , . . . , \u2206 n Output: bimorphism B 1: B \u2190 (G| \u22642 , h 1 , . . . , h n ) 2: for rule r : q \u2192 \u03b1(q 1 , . . . , q k ) of G| >2 do 3:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "for i = 1, . . . , n do 4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "compute RTG G i for b i (h i (\u03b1)) 5: compute RTG G i for v(b i (h i (\u03b1))) 6: compute RTG G v for i L(G i ) 7: if L(G v ) = \u2205 then 8:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "add r to B 9:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "else 10: select t \u2208 L(G v )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "11: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "for i = 1, . . . ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "b i = b i (h i (\u03b1)) \u2229 v \u22121 (t ) 14: select t i \u2208 L(G i ) 15:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": "construct binarization for t 1 , . . . 
, t n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effective IRTG binarization", "sec_num": "4" }, { "text": ":", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "16", "sec_num": null }, { "text": "add appropriate rules to B", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "16", "sec_num": null }, { "text": "Algorithm 1: Complete binarization algorithm, where G| \u22642 and G| >2 is G restricted to binary and suprabinary rules, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "16", "sec_num": null }, { "text": "if every suprabinary rule of G has a binarization with respect to b 1 , . . . , b n .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "16", "sec_num": null }, { "text": "The runtime of Algorithm 1 is dominated by the intersection construction in line 6, which is O(m 1 \u2022 . . . \u2022 m n ) per rule, where m i is the size of G i . The quantity m i is linear in the size of the terms on the right-hand side of h i , and in the number of rules in the b-rule b i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "16", "sec_num": null }, { "text": "Algorithm 1 implements rule-by-rule binarization with respect to given b-rules. If a rule of the given IRTG does not have a binarization with respect to these b-rules, it is simply carried over to the new grammar, which then has a rank higher than 2. The number of remaining suprabinary rules depends on the b-rules (except for rules that have no binarization at all). The user can thus engineer the b-rules according to their current needs, trading off completeness, runtime, and engineering effort.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applications", "sec_num": "5" }, { "text": "By contrast, earlier binarization algorithms for formalisms such as SCFG and LCFRS simply attempt to find an equivalent grammar of rank 2; there is no analogue of our b-rules. The problem these algorithms solve corresponds to the general rule-by-rule binarization problem from Section 3. Figure 7 : A rule of a tree-to-string transducer.", "cite_spans": [], "ref_spans": [ { "start": 288, "end": 296, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "Applications", "sec_num": "5" }, { "text": "We show that under certain conditions, our algorithm can be used to solve this problem as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applications", "sec_num": "5" }, { "text": "In the following two subsections, we illustrate this for SCFGs and tree-to-string transducers, respectively. In the final subsection, we discuss how to extend this approach to other grammar formalisms as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applications", "sec_num": "5" }, { "text": "We have used SCFGs as the running example in this paper. SCFGs are IRTGs with two interpretations into the string algebra of Table 1 , as illustrated by the example in Fig. 2 . In order to make our algorithm ready to use, it remains to specify a b-rule for the string algeba. We use the following b-rule for both b 1 and b 2 . Each symbol a \u2208 \u2206 i | 0 is mapped to the language {a}. Each symbol con k , k \u2265 2, is mapped to the language induced by the following RTG with states of the form [j, j ] (where 0 \u2264 j < j \u2264 k) and final state [0, k]:", "cite_spans": [], "ref_spans": [ { "start": 125, "end": 132, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 168, "end": 174, "text": "Fig. 
2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Synchronous context-free grammars", "sec_num": "5.1" }, { "text": "[j \u2212 1, j] \u2192 x j (1 \u2264 j \u2264 k) [j, j ] \u2192 con 2 ([j, j ], [j , j ]) (0 \u2264 j < j < j \u2264 k)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synchronous context-free grammars", "sec_num": "5.1" }, { "text": "This language expresses all possible ways in which con k can be written in terms of con 2 . Our definition of rule-by-rule binarization with respect to b 1 and b 2 coincides with that of Huang et al. (2009) : any rule can be binarized by both algorithms or neither. For instance, for the SCFG rule A \u2192 BCDE, CEBD , the sets v(b 1 (h 1 (\u03b1))) and v(b 2 (h 2 (\u03b1))) are disjoint, thus no binarization exists. Two strings of length N can be parsed with a binary IRTG that represents an SCFG in time O(N 6 ).", "cite_spans": [ { "start": 187, "end": 206, "text": "Huang et al. (2009)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Synchronous context-free grammars", "sec_num": "5.1" }, { "text": "Some approaches to SMT go beyond string-tostring translation models such as SCFG by exploiting known syntactic structures in the source or target language. This perspective on translation naturally leads to the use of tree-to-string transducers NP \u2192 \u03b1(NNP, JJ, NN) NP con 3 NP con 3 DT the Figure 8 : An IRTG rule encoding the rule in Fig. 7 . (Yamada and Knight, 2001; Galley et al., 2004; Huang et al., 2006; Graehl et al., 2008) . Figure 7 shows an example of a tree-to-string rule. It might be used to translate \"the Commission's strategic plan\" into \"das langfristige Programm der Kommission\". Our algorithm can binarize tree-to-string transducers; to our knowledge, it is the first algorithm to do so. We model the tree-to-string transducer as an IRTG G = ((G, h 1 , h 2 ), A 1 , A 2 ), where A 2 is the string algebra, but this time A 1 is the tree algebra shown in Table 1 . This algebra has operations con k to concatenate sequences of trees and unary \u03b3 that maps any sequence (t 1 , . . . , t l ) of trees to the tree \u03b3(t 1 , . . . , t l ), viewed as a sequence of length 1. Note that we exclude the operation con 1 because it is the identity and thus unnecessary. Thus the rule in Fig. 7 translates to the IRTG rule shown in Fig. 8 .", "cite_spans": [ { "start": 344, "end": 369, "text": "(Yamada and Knight, 2001;", "ref_id": "BIBREF24" }, { "start": 370, "end": 390, "text": "Galley et al., 2004;", "ref_id": "BIBREF6" }, { "start": 391, "end": 410, "text": "Huang et al., 2006;", "ref_id": "BIBREF14" }, { "start": 411, "end": 431, "text": "Graehl et al., 2008)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 290, "end": 298, "text": "Figure 8", "ref_id": null }, { "start": 335, "end": 341, "text": "Fig. 7", "ref_id": null }, { "start": 434, "end": 442, "text": "Figure 7", "ref_id": null }, { "start": 873, "end": 880, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 1192, "end": 1198, "text": "Fig. 7", "ref_id": null }, { "start": 1236, "end": 1242, "text": "Fig. 8", "ref_id": null } ], "eq_spans": [], "section": "Tree-to-string transducers", "sec_num": "5.2" }, { "text": "con 0 x 1 POS 's con 0 x 2 x 3 h1 \u2190\u2212 \u03b1 h2 \u2212\u2192 con 5 das x 2 x 3 der x 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree-to-string transducers", "sec_num": "5.2" }, { "text": "For the string algebra, we reuse the b-rule from Section 5.1; we call it b 2 here. 
For the tree algebra, we use the following b-rule b 1 . It maps con 0 to {con 0 } and each unary symbol \u03b3 to {\u03b3(x 1 )}. Each symbol con k , k \u2265 2, is treated as in the string case. Using these b-rules, we can binarize the rule in Fig. 8 and obtain the rules in Fig. 9 . Parsing of a binary IRTG that represents a tree-to-string transducer is O(N 3 \u2022 M ) for a string of length N and a tree with M nodes.", "cite_spans": [], "ref_spans": [ { "start": 313, "end": 319, "text": "Fig. 8", "ref_id": null }, { "start": 344, "end": 350, "text": "Fig. 9", "ref_id": null } ], "eq_spans": [], "section": "Tree-to-string transducers", "sec_num": "5.2" }, { "text": "We have implemented our binarization algorithm and the b-rules for the string and the tree algebra. In order to test our implementation, we extracted a tree-to-string transducer from about a million parallel sentences of English-German Europarl data, using the GHKM rule extractor (Galley, 2010). Then we binarized the transducer. The results are shown in Fig. 10 . Of the 2.15 million rules in the extracted transducer, 460,000 were suprabinary, and 67 % of these could be binarized. Binarization took 4.4 minutes on a single core of an Intel Core i5 2520M processor.", "cite_spans": [], "ref_spans": [ { "start": 356, "end": 363, "text": "Fig. 10", "ref_id": null } ], "eq_spans": [], "section": "Tree-to-string transducers", "sec_num": "5.2" }, { "text": "NP \u2192 \u03b1 (NNP, A ) A \u2192 \u03b1 (JJ, NN) NP con 2 NP con 2 DT the con 0 con 2 x 1 POS 's con 0 x 2 h 1 \u2190\u2212 \u03b1 h 2 \u2212\u2192 con 2 con 2 das x 2 con 2 der x 1 con 2 x 1 x 2 h 1 \u2190\u2212 \u03b1 h 2 \u2212\u2192 con 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree-to-string transducers", "sec_num": "5.2" }, { "text": "x 1 x 2 Figure 9 : Binarization of the rule in Fig. 8 . Figure 10: Rules of a transducer extracted from Europarl (ext) vs. its binarization (bin).", "cite_spans": [], "ref_spans": [ { "start": 8, "end": 16, "text": "Figure 9", "ref_id": null }, { "start": 47, "end": 53, "text": "Fig. 8", "ref_id": null } ], "eq_spans": [], "section": "Tree-to-string transducers", "sec_num": "5.2" }, { "text": "Our binarization algorithm can be used to solve the general rule-by-rule binarization problem for a specific grammar formalism, provided that one can find appropriate b-rules. More precisely, we need to devise a class C of IRTGs over the same sequence A 1 , . . . , A n of algebras that encodes the grammar formalism, together with brules b 1 , . . . , b n over A 1 , . . . , A n that are complete on every grammar in C, as defined in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General approach", "sec_num": "5.3" }, { "text": "We have already seen the b-rules for SCFGs and tree-to-string transducers in the preceding subsections; now we have a closer look at the class C for SCFGs. We used the class of all IRTGs with two string algebras and in which h i (\u03b1) contains at most one occurrence of a symbol con k for every \u03b1 \u2208 \u03a3. On such a grammar the b-rules are complete. Note that this would not be the case if we allowed several occurrences of con k , as in con 2 (con 2 (x 1 , x 2 ), x 3 ). This term is equivalent to itself and to con 2 (x 1 , con 2 (x 2 , x 3 )), but the b-rules only cover the former. Thus they miss one variable tree. 
For the term con 3 (x 1 , x 2 , x 3 ), however, the b-rules cover both variable trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General approach", "sec_num": "5.3" }, { "text": "Generally speaking, given C and b-rules b 1 , . . . , b n that are complete on every IRTG in C, Algorithm 1 solves the general rule-by-rule binarization problem on C. We can adapt Theorem 3 by requiring that G must be in C, and replacing each of the two occurrences of \"binarization with respect to b 1 , . . . , b n \" by simply \"binarization\". If C is such that every grammar from a given grammar formalism can be encoded as an IRTG in C, this solves the general rule-by-rule binarization problem of that grammar formalism.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General approach", "sec_num": "5.3" }, { "text": "We have presented an algorithm for binarizing IRTGs rule by rule, with respect to b-rules that the user specifies for each algebra. This improves the complexity of parsing and translation with any monolingual or synchronous grammar that can be represented as an IRTG. A novel algorithm for binarizing tree-to-string transducers falls out as a special case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "In this paper, we have taken the perspective that the binarized IRTG uses the same algebras as the original IRTG. Our algorithm extends to grammars of arbitrary fanout (such as synchronous tree-adjoining grammar (Koller and Kuhlmann, 2012) ), but unlike LCFRS-based approaches to binarization, it will not increase the fanout to ensure binarizability. In the future, we will explore IRTG binarization with fanout increase. This could be done by binarizing into an IRTG with a more complicated algebra (e.g., of string tuples). We might compute binarizations that are optimal with respect to some measure (e.g., fanout (Gomez-Rodriguez et al., 2009) or parsing complexity (Gildea, 2010) ) by keeping track of this measure in the b-rule and taking intersections of weighted tree automata.", "cite_spans": [ { "start": 212, "end": 239, "text": "(Koller and Kuhlmann, 2012)", "ref_id": "BIBREF18" }, { "start": 618, "end": 648, "text": "(Gomez-Rodriguez et al., 2009)", "ref_id": "BIBREF11" }, { "start": 671, "end": 685, "text": "(Gildea, 2010)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" } ], "back_matter": [ { "text": "We thank the anonymous referees for their insightful remarks, and Sarah Hemmen for implementing an early version of the algorithm. Matthias B\u00fcchse was financially supported by DFG VO 1011/6-1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Syntax directed translations and the pushdown assembler", "authors": [ { "first": "Alfred", "middle": [ "V" ], "last": "Aho", "suffix": "" }, { "first": "Jeffrey", "middle": [ "D" ], "last": "Ullman", "suffix": "" } ], "year": 1969, "venue": "Journal of Computer and System Sciences", "volume": "3", "issue": "", "pages": "37--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alfred V. Aho and Jeffrey D. Ullman. 1969. Syntax directed translations and the pushdown assembler. 
Journal of Computer and System Sciences, 3:37-56.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Bitransduction de for\u00eats", "authors": [ { "first": "Andr\u00e9", "middle": [], "last": "Arnold", "suffix": "" }, { "first": "Max", "middle": [], "last": "Dauchet", "suffix": "" } ], "year": 1976, "venue": "Proc. 3rd Int. Coll. Automata, Languages and Programming", "volume": "", "issue": "", "pages": "74--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andr\u00e9 Arnold and Max Dauchet. 1976. Bi- transduction de for\u00eats. In Proc. 3rd Int. Coll. Au- tomata, Languages and Programming, pages 74-86. Edinburgh University Press.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Tree generating regular systems", "authors": [ { "first": "Walter", "middle": [ "S" ], "last": "Brainerd", "suffix": "" } ], "year": 1969, "venue": "Information and Control", "volume": "14", "issue": "2", "pages": "217--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Walter S. Brainerd. 1969. Tree generating regular sys- tems. Information and Control, 14(2):217-231.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Hierarchical phrase-based translation", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "33", "issue": "2", "pages": "201--228", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2007. Hierarchical phrase-based trans- lation. Computational Linguistics, 33(2):201-228.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Synchronous tree-adjoining machine translation", "authors": [ { "first": "Steve", "middle": [], "last": "Deneefe", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2009, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "727--736", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steve DeNeefe and Kevin Knight. 2009. Synchronous tree-adjoining machine translation. In Proceedings of EMNLP, pages 727-736.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Learning non-isomorphic tree mappings for machine translation", "authors": [ { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st ACL", "volume": "", "issue": "", "pages": "205--208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Eisner. 2003. Learning non-isomorphic tree mappings for machine translation. In Proceedings of the 41st ACL, pages 205-208.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "What's in a translation rule", "authors": [ { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Hopkins", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of HLT/NAACL", "volume": "", "issue": "", "pages": "273--280", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What's in a translation rule? In Proceedings of HLT/NAACL, pages 273-280.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "GHKM rule extractor", "authors": [ { "first": "Michael", "middle": [], "last": "Galley", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Galley. 2010. 
GHKM rule extractor. http: //www-nlp.stanford.edu/\u02dcmgalley/ software/stanford-ghkm-latest.tar. gz, retrieved on March 28, 2012.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Tree languages", "authors": [ { "first": "Ferenc", "middle": [], "last": "G\u00e9cseg", "suffix": "" }, { "first": "Magnus", "middle": [], "last": "Steinby", "suffix": "" } ], "year": 1997, "venue": "Handbook of Formal Languages", "volume": "3", "issue": "", "pages": "1--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ferenc G\u00e9cseg and Magnus Steinby. 1997. Tree lan- guages. In G. Rozenberg and A. Salomaa, editors, Handbook of Formal Languages, volume 3, chap- ter 1, pages 1-68. Springer-Verlag.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Optimal parsing strategies for linear context-free rewriting systems", "authors": [ { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2010, "venue": "Proceedings of NAACL HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Gildea. 2010. Optimal parsing strategies for linear context-free rewriting systems. In Proceed- ings of NAACL HLT.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Initial algebra semantics and continuous algebras", "authors": [ { "first": "Joseph", "middle": [ "A" ], "last": "Goguen", "suffix": "" }, { "first": "Jim", "middle": [ "W" ], "last": "Thatcher", "suffix": "" }, { "first": "Eric", "middle": [ "G" ], "last": "Wagner", "suffix": "" }, { "first": "Jesse", "middle": [ "B" ], "last": "Wright", "suffix": "" } ], "year": 1977, "venue": "Journal of the ACM", "volume": "24", "issue": "", "pages": "68--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph A. Goguen, Jim W. Thatcher, Eric G. Wagner, and Jesse B. Wright. 1977. Initial algebra seman- tics and continuous algebras. Journal of the ACM, 24:68-95.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Optimal reduction of rule length in linear context-free rewriting systems", "authors": [ { "first": "Carlos", "middle": [], "last": "Gomez-Rodriguez", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Kuhlmann", "suffix": "" }, { "first": "Giorgio", "middle": [], "last": "Satta", "suffix": "" }, { "first": "David", "middle": [], "last": "Weir", "suffix": "" } ], "year": 2009, "venue": "Proceedings of NAACL HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlos Gomez-Rodriguez, Marco Kuhlmann, Giorgio Satta, and David Weir. 2009. Optimal reduction of rule length in linear context-free rewriting systems. In Proceedings of NAACL HLT.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Training tree transducers", "authors": [ { "first": "Jonathan", "middle": [], "last": "Graehl", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "3", "pages": "391--427", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Graehl, Kevin Knight, and Jonathan May. 2008. Training tree transducers. 
Computational Linguistics, 34(3):391-427.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Better k-best parsing", "authors": [ { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 9th IWPT", "volume": "", "issue": "", "pages": "53--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Huang and David Chiang. 2005. Better k-best parsing. In Proceedings of the 9th IWPT, pages 53- 64.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Statistical syntax-directed translation with extended domain of locality", "authors": [ { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Aravind", "middle": [], "last": "Joshi", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 7th AMTA", "volume": "", "issue": "", "pages": "66--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Huang, Kevin Knight, and Aravind Joshi. 2006. Statistical syntax-directed translation with extended domain of locality. In Proceedings of the 7th AMTA, pages 66-73.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Binarization of synchronous context-free grammars", "authors": [ { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2009, "venue": "Computational Linguistics", "volume": "35", "issue": "4", "pages": "559--595", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Huang, Hao Zhang, Daniel Gildea, and Kevin Knight. 2009. Binarization of synchronous context-free grammars. Computational Linguistics, 35(4):559-595.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A generalization of Dijkstra's algorithm", "authors": [ { "first": "Donald", "middle": [ "E" ], "last": "Knuth", "suffix": "" } ], "year": 1977, "venue": "Information Processing Letters", "volume": "6", "issue": "1", "pages": "1--5", "other_ids": {}, "num": null, "urls": [], "raw_text": "Donald E. Knuth. 1977. A generalization of Dijkstra's algorithm. Information Processing Letters, 6(1):1- 5.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A generalized view on parsing and translation", "authors": [ { "first": "Alexander", "middle": [], "last": "Koller", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Kuhlmann", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 12th IWPT", "volume": "", "issue": "", "pages": "2--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Koller and Marco Kuhlmann. 2011. A gen- eralized view on parsing and translation. In Pro- ceedings of the 12th IWPT, pages 2-13.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Decomposing TAG algorithms using simple algebraizations", "authors": [ { "first": "Alexander", "middle": [], "last": "Koller", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Kuhlmann", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 11th TAG+ Workshop", "volume": "", "issue": "", "pages": "135--143", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Koller and Marco Kuhlmann. 2012. De- composing TAG algorithms using simple alge- braizations. 
In Proceedings of the 11th TAG+ Work- shop, pages 135-143.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Syntax directed transduction. Foundations of Computer Science", "authors": [ { "first": "M", "middle": [], "last": "Philip", "suffix": "" }, { "first": "Richard", "middle": [ "E" ], "last": "Lewis", "suffix": "" }, { "first": "", "middle": [], "last": "Stearns", "suffix": "" } ], "year": 1966, "venue": "IEEE Annual Symposium on", "volume": "0", "issue": "", "pages": "21--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip M. Lewis and Richard E. Stearns. 1966. Syn- tax directed transduction. Foundations of Computer Science, IEEE Annual Symposium on, 0:21-35.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Weighted deductive parsing and Knuth's algorithm", "authors": [ { "first": "-", "middle": [], "last": "Mark", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "135--143", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark-Jan Nederhof. 2003. Weighted deductive pars- ing and Knuth's algorithm. Computational Linguis- tics, 29(1):135-143.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Induction of probabilistic synchronous tree-insertion grammars for machine translation", "authors": [ { "first": "Rebecca", "middle": [], "last": "Nesson", "suffix": "" }, { "first": "Stuart", "middle": [ "M" ], "last": "Shieber", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 7th AMTA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rebecca Nesson, Stuart M. Shieber, and Alexander Rush. 2006. Induction of probabilistic synchronous tree-insertion grammars for machine translation. In Proceedings of the 7th AMTA.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Independent parallelism in finite copying parallel rewriting systems", "authors": [ { "first": "Owen", "middle": [], "last": "Rambow", "suffix": "" }, { "first": "Giorgio", "middle": [], "last": "Satta", "suffix": "" } ], "year": 1999, "venue": "Theoretical Computer Science", "volume": "223", "issue": "1-2", "pages": "87--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Owen Rambow and Giorgio Satta. 1999. Independent parallelism in finite copying parallel rewriting sys- tems. Theoretical Computer Science, 223(1-2):87- 120.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Characterizing Mildly Context-Sensitive Grammar Formalisms", "authors": [ { "first": "David", "middle": [ "J" ], "last": "Weir", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David J. Weir. 1988. Characterizing Mildly Context- Sensitive Grammar Formalisms. Ph.D. thesis, Uni- versity of Pennsylvania.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A syntaxbased statistical translation model", "authors": [ { "first": "Kenji", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 39th ACL", "volume": "", "issue": "", "pages": "523--530", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenji Yamada and Kevin Knight. 2001. A syntax- based statistical translation model. 
In Proceedings of the 39th ACL, pages 523-530.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "An IRTG encoding an SCFG.", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "Binary rules corresponding to the \u03b1-rule inFig. 2.", "type_str": "figure", "num": null, "uris": null }, "FIGREF2": { "text": "JJ x 3 :NN \u2212\u2192 das x 2 x 3 der x 1", "type_str": "figure", "num": null, "uris": null }, "TABREF0": { "type_str": "table", "html": null, "text": "Algebras for strings and trees, given an alphabet \u0393 and a maximum arity K \u2208 N.", "num": null, "content": "" } } } }