{ "paper_id": "U11-1005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:09:52.436223Z" }, "title": "Formalizing Semantic Parsing with Tree Transducers", "authors": [ { "first": "Keeley", "middle": [], "last": "Bevan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Macquarie University Sydney", "location": { "postCode": "2109", "region": "NSW", "country": "Australia" } }, "email": "" }, { "first": "Mark", "middle": [], "last": "Jones", "suffix": "", "affiliation": { "laboratory": "", "institution": "Macquarie University Sydney", "location": { "postCode": "2109", "region": "NSW", "country": "Australia" } }, "email": "mark.johnson@mq.edu.au" }, { "first": "", "middle": [], "last": "Johnson", "suffix": "", "affiliation": { "laboratory": "", "institution": "Macquarie University Sydney", "location": { "postCode": "2109", "region": "NSW", "country": "Australia" } }, "email": "" }, { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh Edinburgh", "location": { "postCode": "EH8 9AB", "country": "UK" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper introduces tree transducers as a unifying theory for semantic parsing models based on tree transformations. Many existing models use tree transformations, but implement specialized training and smoothing methods, which makes it difficult to modify or extend the models. By connecting to the rich literature on tree automata, we show how semantic parsing models can be developed using completely general estimation methods. We demonstrate the approach by reframing and extending one state-of-the-art model as a tree automaton. Using a variant of the inside-outside algorithm with variational Bayesian estimation, our generative model achieves higher raw accuracy than existing generative and discriminative approaches on a standard data set.", "pdf_parse": { "paper_id": "U11-1005", "_pdf_hash": "", "abstract": [ { "text": "This paper introduces tree transducers as a unifying theory for semantic parsing models based on tree transformations. Many existing models use tree transformations, but implement specialized training and smoothing methods, which makes it difficult to modify or extend the models. By connecting to the rich literature on tree automata, we show how semantic parsing models can be developed using completely general estimation methods. We demonstrate the approach by reframing and extending one state-of-the-art model as a tree automaton. Using a variant of the inside-outside algorithm with variational Bayesian estimation, our generative model achieves higher raw accuracy than existing generative and discriminative approaches on a standard data set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Automatically interpreting language is an important challenge for computational linguistics. Semantic parsing addresses the specific task of learning to map natural language sentences to formal representations of their meaning, a problem that arises in developing natural language interfaces, for example. 
Given a set of (sentence, meaning representation) pairs like the example below, we want to to learn a map that generalizes to previously unseen sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Meaning: answer(capital 1(stateid(texas)))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence: what is the capital of texas ?", "sec_num": "1." }, { "text": "Researchers have formalized the learning problem in various ways, with approaches including string classifiers (Kate and Mooney, 2006) , synchronous grammar (Wong and Mooney, 2006) , combinatory categorial grammar (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2010) , and PCFG-based approaches (Lu et al., 2008; Borschinger et al., 2011) . Each approach has required its own custom algorithms, which has made model development and innovation slow. Nevertheless, there are many similarities between the approaches, which all exploit parallels between the structure of the meaning representation and that of the natural language. The meaning representation, as a context-free formal language, has an obvious tree structure. Trees are also widely used to describe natural language structure.", "cite_spans": [ { "start": 111, "end": 134, "text": "(Kate and Mooney, 2006)", "ref_id": "BIBREF6" }, { "start": 157, "end": 180, "text": "(Wong and Mooney, 2006)", "ref_id": "BIBREF16" }, { "start": 214, "end": 245, "text": "(Zettlemoyer and Collins, 2005;", "ref_id": "BIBREF17" }, { "start": 246, "end": 271, "text": "Kwiatkowski et al., 2010)", "ref_id": "BIBREF9" }, { "start": 300, "end": 317, "text": "(Lu et al., 2008;", "ref_id": "BIBREF11" }, { "start": 318, "end": 343, "text": "Borschinger et al., 2011)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence: what is the capital of texas ?", "sec_num": "1." }, { "text": "Consequently, the semantic parsing problem can be generally defined as learning a mapping between trees, one of which may be latent. This mapping can be expressed as a tree transducer, a formalism from automata theory that maps input trees to output trees or strings. Tree transducers have well understood properties and algorithms, and a rich literature, making them a particularly appealing model class.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence: what is the capital of texas ?", "sec_num": "1." }, { "text": "Although some previous approaches strongly resemble tree transducers, to our knowledge, we are the first to explicitly formulate the problem in this way. We argue that connecting semantic parsing to the tree automata literature will free researchers from devising custom solutions and allow them to focus on studying and improving their models and developing more general learning algorithms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence: what is the capital of texas ?", "sec_num": "1." }, { "text": "To demonstrate the effectiveness of the approach, we choose one state-of-the-art model, the hybrid tree (Lu et al., 2008) , translate it into the tree transducer Figure 1 : An extended left hand side, root-to-frontier, linear, non-deleting, tree-to-tree transducer (a) and an example derivation (b). Numbered arrows in the derivation indicate which rules apply during that step. 
Rule [1] is the only rule with an extended left hand side.", "cite_spans": [ { "start": 104, "end": 121, "text": "(Lu et al., 2008)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 162, "end": 170, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Sentence: what is the capital of texas ?", "sec_num": "1." }, { "text": "framework, and add a small extension, made easy by the framework. We also update a standard tree transducer training algorithm to incorporate a Variational Bayes approximation. The result is the first purely generative model to achieve state-of-the-art results on a standard data set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence: what is the capital of texas ?", "sec_num": "1." }, { "text": "Tree transducers (Rounds, 1970; Thatcher, 1970) are generalizations of finite state machines that take trees as inputs and either output a string or another tree. Mirroring the branching nature of its input, the tree transducer may simultaneously transition to any number of successor states, assigning a separate state to process each sub-tree. Although they were originally conceived of by Rounds (1970) as a way to formalize tree transformations in linguistic theory, they have since received far more interest in theoretical computer science. Recently, however, they have also been used for syntax-based statistical machine translation (Graehl et al., 2008; Knight and Greahl, 2005) . Figure 1 presents an example of a tree-to-tree transducer. It is defined using tree transformation rules, where the left hand side identifies a state of the transducer and a fragment of the input tree, and the right hand side describes a fragment of the output tree. Variables x i stand for entire sub-trees. There are many classes of transducer, each with its own selection of algorithms (Knight and Greahl, 2005) . In this paper we restrict consideration primarily to the extended left hand side, root-to-frontier, linear, nondeleting tree transducers (Maletti et al., 2009) , and we particularly make use of tree-to-string transducers.", "cite_spans": [ { "start": 17, "end": 31, "text": "(Rounds, 1970;", "ref_id": "BIBREF14" }, { "start": 32, "end": 47, "text": "Thatcher, 1970)", "ref_id": "BIBREF15" }, { "start": 392, "end": 405, "text": "Rounds (1970)", "ref_id": "BIBREF14" }, { "start": 640, "end": 661, "text": "(Graehl et al., 2008;", "ref_id": "BIBREF5" }, { "start": 662, "end": 686, "text": "Knight and Greahl, 2005)", "ref_id": "BIBREF7" }, { "start": 1078, "end": 1103, "text": "(Knight and Greahl, 2005)", "ref_id": "BIBREF7" }, { "start": 1243, "end": 1265, "text": "(Maletti et al., 2009)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 689, "end": 697, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Extended, root-to-frontier, linear, non-deleting tree transducers", "sec_num": "2" }, { "text": "Formally, an extended left hand side, rootto-frontier, tree-to-tree transducer is a 5-tuple (Q, \u03a3, \u2206, q start , R). Q is a finite set of states, \u03a3 and \u2206 are the input and output tree alphabets, q start is the start state, and R is the set of rules. We denote a pair of symbols, a and b by a.b, and the cross product of two sets A and B by A.B. Let X be the set of variables {x 0 , x 1 , ...}. Finally, let T \u03a3 (A) be the set of trees with non-terminals from alphabet \u03a3 and leaf symbols from alphabet A. 
Then, each rule r \u2208 R is of the form", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extended, root-to-frontier, linear, non-deleting tree transducers", "sec_num": "2" }, { "text": "[q.t \u2192 u].v, where q \u2208 Q, t \u2208 T \u03a3 (X), u \u2208 T \u2206 (Q.X) such that every x \u2208 X", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extended, root-to-frontier, linear, non-deleting tree transducers", "sec_num": "2" }, { "text": "in u also occurs in t, and v \u2208 \u211c \u22650 is the weight of the rule.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extended, root-to-frontier, linear, non-deleting tree transducers", "sec_num": "2" }, { "text": "We say q.t is the left hand side of the rule and u is the right hand side. The transducer is linear iff no variable appears more than once on the right hand side. It is non-deleting iff all variables on the left hand side also occur on the right hand side. Iff every tree t on the left hand side is of the form \u03c3(x 0 , ...x n ), where \u03c3 \u2208 \u03a3 (i.e., it is a tree of depth \u2264 1), then the transducer is simply root-to-frontier, otherwise we say it has an extended left hand side with the added power to look a bounded depth into the tree at each step. Finally, for a tree-to-string transducer, \u2206 is an alphabet, and the right hand sides of the rules consist of finite tuples of elements taken from \u2206 \u222a Q.X.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extended, root-to-frontier, linear, non-deleting tree transducers", "sec_num": "2" }, { "text": "A weighted tree transducer may define a probability distribution, either a joint distribution over input and output pairs or a conditional distribution of the output given the input. Here, we will use joint distributions, which can be defined by ensuring that the weights of all rules with the same state on the lefthand side sum to one. In this case, it can be helpful to view the transducer as simultaneously generating both the input and output, rather than the usual view of reading inputs and writing outputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extended, root-to-frontier, linear, non-deleting tree transducers", "sec_num": "2" }, { "text": "The goal of semantic parsing is to assign formal meanings to natural language (NL) sentences, requiring a formal meaning language. Some systems use lambda expressions; others use variable free logical languages or functional languages (such as that of example 1 in the introduction). Here we deal with meaning representations (MRs) of the latter form where the bracketing makes the tree structure obvious 1 We refer to functions and predicates in the MR as either symbols or entities. Since MRs are trees, the language can be defined by a Regular Tree Grammar (a kind of CFG that generates trees). We refer to this grammar as the meaning representation grammar or MR grammar. Figure 3 shows a fragment of such a grammar and an MR parse. The parse is just the MR with each symbol labeled with its grammar rule. 
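To make the tree view concrete, the following is a minimal sketch in Python of the example MR from the introduction, answer(capital_1(stateid(texas))), together with a toy MR grammar fragment; the rule names and non-terminal labels are invented for illustration only and are not the actual GeoQuery grammar or the implementation used here.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        symbol: str                                    # MR function, predicate, or constant
        children: List["Node"] = field(default_factory=list)
        rule: str = ""                                 # label added by the MR parse, e.g. "R_capital"

    # answer(capital_1(stateid(texas))), with each node labelled by the rule deriving it
    mr = Node("answer",
              [Node("capital_1",
                    [Node("stateid",
                          [Node("texas", [], "R_texas")],
                          "R_stateid")],
                    "R_capital")],
              "R_answer")

    # A toy regular tree grammar: rule name -> (non-terminal, symbol, argument non-terminals).
    mr_grammar = {
        "R_answer":  ("QUERY",     "answer",    ["CITY"]),
        "R_capital": ("CITY",      "capital_1", ["STATE"]),
        "R_stateid": ("STATE",     "stateid",   ["STATENAME"]),
        "R_texas":   ("STATENAME", "texas",     []),
    }
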
As in most systems, the MR grammar is one of our inputs.", "cite_spans": [ { "start": 405, "end": 406, "text": "1", "ref_id": null } ], "ref_spans": [ { "start": 676, "end": 684, "text": "Figure 3", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Semantic parsing and meaning representation languages", "sec_num": "3" }, { "text": "The idea of the hybrid tree model (Lu et al., 2008) is to start with the MR and apply a series of transformations to create a kind of parse tree for the NL. There are two types of transformation. The first determines word order by simultaneously choosing where to attach words (but not the particular words) and whether or not to swap the order of siblings (Figure 2a) . Once the order is determined, word generating transformations are then applied to insert specific words in the determined locations ( Figure 2b ). The hybrid tree includes parameters for the MR as well as the transformations in Figure 2 that relate words to meaning representations. The probability of each symbol in the MR is conditioned on the MR grammar rules that derive its parent symbol. Defining symbol probabilities in terms of their parents' grammar rules (as opposed to parent symbols as in a standard PCFG) distinguishes between functions and predicates with the same name but different semantics (Wong and Mooney, 2006) .", "cite_spans": [ { "start": 34, "end": 51, "text": "(Lu et al., 2008)", "ref_id": "BIBREF11" }, { "start": 979, "end": 1002, "text": "(Wong and Mooney, 2006)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 357, "end": 368, "text": "(Figure 2a)", "ref_id": null }, { "start": 505, "end": 514, "text": "Figure 2b", "ref_id": null }, { "start": 599, "end": 607, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "The hybrid tree model", "sec_num": "4" }, { "text": "To formally define the probability of the MR, let paths be the set of paths from the root to every node in the MR where paths are represented using a variant of Gorn's notation (Gorn, 1962) 2 . Let args i be the set of indices of the children of the node at path i; and R i be the grammar rule that derives the symbol at i according to the MR parse. Then, the following equation defines P (MR).", "cite_spans": [ { "start": 177, "end": 189, "text": "(Gorn, 1962)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "The hybrid tree model", "sec_num": "4" }, { "text": "P(\mathrm{MR}) = P(R_\epsilon) \prod_{i \in \mathit{paths}} \prod_{j \in \mathit{args}_i} P(R_{i \cdot j} \mid j, R_i) \quad (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The hybrid tree model", "sec_num": "4" }, { "text": "In other words, each node in the tree is generated according to the probability of the MR rule that derives it conditioned on (1) the MR rule R i that derives its parent symbol and (2) its position j beneath that parent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The hybrid tree model", "sec_num": "4" }, { "text": "The hybrid tree model then re-orders and extends this basic skeleton to include the NL. 
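Before the NL is added, equation (1) can be read as a simple recursion over the rule-labelled MR tree: the root rule is drawn first, and each child's rule is then drawn conditioned on its parent's rule and its argument position. A sketch of that recursion, reusing the Node encoding from the Section 3 sketch and assuming hypothetical probability tables (an illustration, not the authors' implementation):

    def p_mr(root, p_root, p_child):
        """P(MR) as in equation (1), for a rule-labelled MR tree.
        p_root[rule]            -> P(R_eps), probability of the root's rule
        p_child[(R_i, j, R_ij)] -> P(R_{i.j} | j, R_i)
        """
        def product_below(node):
            prob = 1.0
            for j, child in enumerate(node.children):
                prob *= p_child[(node.rule, j, child.rule)]
                prob *= product_below(child)
            return prob
        return p_root[root.rule] * product_below(root)

    # e.g. p_mr(mr, {"R_answer": 1.0},
    #           {("R_answer", 0, "R_capital"): 0.2,
    #            ("R_capital", 0, "R_stateid"): 0.5,
    #            ("R_stateid", 0, "R_texas"): 0.1})   # = 1.0 * 0.2 * 0.5 * 0.1
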
The probability of this hybrid tree can be formally defined as follows if we let pat i be the word order pattern used to generate the children of the node at path i, and words i be the indices of the words attached under the node at i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The hybrid tree model", "sec_num": "4" }, { "text": "P (NL-MR hybrid) = P (R \u01eb ) i\u2208paths P (pat i |R i ) \u2022 j\u2208args i P (R i\u2022j |j, R i ) k\u2208words i P (w i\u2022k |R i ) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The hybrid tree model", "sec_num": "4" }, { "text": "Note that P (pat|R) and P (w|R) correspond respectively to the weights on the word order and word generation transformations. In fact, equation 2 is a joint probability over not only the NL and MR pair but also the actual set of transformations chosen to produce the particular hybrid tree relating them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The hybrid tree model", "sec_num": "4" }, { "text": "We now define a tree transducer that simultaneously generates an MR tree and NL string according to the joint probability defined by equation 2. We create separate states for each of the two transformation types (order states for word order selection and word states for word generation). In order to model the properties of the MR grammar (necessary for modeling equation 1), we create one additional state type for selecting MR children (arg states) and embed the MR grammar rules into the states so that each state is identified with exactly one grammar rule. Transitions between transducer states then simulate the action of the MR grammar as it generates a new MR tree. Notationally, we employ subscripts to indicate each state's basic type (arg, order, or word) and superscripts to indicate the associated MR grammar Figure 2 : The two transformation types of the hybrid tree model and an example of their application. (a) Word order transformations simultaneously permute arguments and add W symbols where words should be attached. The dotted lines indicate that W symbols may or may not be attached in each of the possible locations, and siblings may or may not be swapped. Each possible configuration of sibling orderings and W attachments corresponds to a single transformation. Thus there are 4 different transformations for the case where A has one child, and 16 for when it has 2. In the case where A has no children, word attachment is not optional. (b) Word generation replaces each W symbol with actual words. (c) The series of transformations from example MR cityid(portland,me) to produce a parse for the Japanese equivalent of 'portland, maine'.", "cite_spans": [], "ref_spans": [ { "start": 823, "end": 831, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Reframing the hybrid tree as a tree transducer", "sec_num": "5" }, { "text": "rule, so that, for instance, state q R order is an order state associated with MRL grammar rule R. Figure 3 presents a graphical representation of the basic state transitions of the transducer, where the states for each grammar rule are clustered inside dotted lines beneath its associated grammar rule label. The transducer begins in an arg state and proceeds as follows. First, the arg state selects the next child by transitioning to an order state corresponding to the MR rule that generates the appropriate child. 
The order state then chooses the appropriate word order pattern and transitions to the word and arg states associated with that same grammar rule 3 . The word states proceed to generate words one at a time in a loop and finally terminate the string. Then the arg state begins the cycle over again by transitioning to the order of the next child in the MR tree. Table 1 lists the actual transducer rule types. Rule probabilities are conditioned on the state on the left hand side. Thus, since states identify both their function and the grammar rule of the current MR node, rule weights correspond directly to the terms in equation 2: P (R i\u2022j |j, R i ), P (pat|R), and P (w|R).", "cite_spans": [], "ref_spans": [ { "start": 99, "end": 107, "text": "Figure 3", "ref_id": "FIGREF0" }, { "start": 880, "end": 887, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Reframing the hybrid tree as a tree transducer", "sec_num": "5" }, { "text": "P (R i\u2022j |j, R i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source tree language model:", "sec_num": "5.1" }, { "text": "Rule type 1 in Table 1 begins the process by transitioning from start state q start to q R order , where the grammar rule R ranges over those rules with the start symbol S on the left hand side. Choosing exactly which q R order to transition to corresponds to the decision of choosing the root symbol of the MR tree (the symbol generated by R), and these transducer rules define the P (R \u01eb ) term in equation 1, i.e., the probability of the grammar rule corresponding to the root symbol of the MR tree.", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 22, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Source tree language model:", "sec_num": "5.1" }, { "text": "For each pair of MR grammar rules R p and R c , we add a transducer rule of the form of rule type 2 that transitions from the states associated with R p to those for R c if R c generates a valid child of the symbol generated by R p . Thus, the choice of state transition here corresponds to choosing the child of the last generated symbol of the input tree. State q R p arg,i selects the i th argument of the current function in the MR without generating anything in the input tree. With rules described in the next section, state q R c order then writes the symbol to the input tree specified by MR grammar rule R c . Since the state on the left encodes the rule of the parent and the argument number, and the state on the right the child rule, the weights for transducer rules of type 2 define P (R i\u2022j |j, R i ) in equation 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source tree language model:", "sec_num": "5.1" }, { "text": "Word ordering decisions are made with the aid of preprocessing step that adds W symbols to the input tree wherever words can be attached. These symbols are just a convenience: it is easier to design rules where every output structure has a counterpart in the input. The symbols are removed later in a postprocessing step (also using a tree transducer). 
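For concreteness, here is a small sketch of that W-insertion preprocessing, again reusing the Node encoding from the Section 3 sketch (illustrative only, not the actual preprocessing transducer): every n-argument node receives n + 1 placeholder W children, one per possible attachment position, which reproduces inputs such as cityid(W, portland(W), W, me(W), W) in the example below.

    def insert_w(node):
        """Return a copy of the MR tree with a W placeholder before, between,
        and after the original children (constants end up with a single W)."""
        new_children = [Node("W")]
        for child in node.children:
            new_children.append(insert_w(child))
            new_children.append(Node("W"))
        return Node(node.symbol, new_children, node.rule)
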
Attachment decisions are then made by deciding which of these W symbols to replace with the empty string (no attachment) or a string of words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Order decisions: P (pat|R)", "sec_num": "5.2" }, { "text": "We add transducer rules of the form of rule type 3 in Table 1 for each MR grammar rule R f , to define the selection of one of the word order patterns of the hybrid tree. These rules simply enumerate the conjunction of all the possible word attachment patterns and argument order decisions. Binary sequence i indicates the word attachment portion of the hybrid tree pattern, where each bit is either 1 indicating an attachment, or 0 for a decision not to attach. For an n argument function, there are n + 1 such choices, requiring an n + 1 bit sequence, where i k is the decision for the k th position. Argument order is indicated by j, a permutation of the numbers 0, 1, ...n \u2212 1, and j k is the k th number in the permutation, indicating which argument appears at position k. State q R f words,1 generates the words for f , state q R f words,0 replaces the symbol W with the empty string, and the states q R f arg,k select the grammar rule with which to generate the k th child. When there is only a single child W , no decisions about argument order or child attachment are needed; rule type 4 always generates words for these constants. ", "cite_spans": [], "ref_spans": [ { "start": 54, "end": 61, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Order decisions: P (pat|R)", "sec_num": "5.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "q start .x 0 \u2192 q R order .x 0 (1) q R p arg,i .x 0 \u2192 q R c order .x 0 (2) q R f order .f (w 0 , x 0 , w 1 , x 1 , w 2 , ...x n\u22121 , w n ) \u2192 q R f words,i 0 .w 0 q R f arg,j 0 .x j 0 q R f words,i 1 .w 1 q R f arg,j 1 .x j 1 q R f words,i 2 .w 2 ... 
q R f arg,j n\u22121 .x j n\u22121 q R f words,in .w n (3) q R f order .f (w 0 ) \u2192 q R f words,1 .w 0 (4) q R words,1 .x 0 \u2192 word k q R words,1 .x 0 (5) q R words,1 .x 0 \u2192 word k q R words,0 .x 0 (6) q R words,0 .W \u2192 \u01eb", "eq_num": "(7)" } ], "section": "Order decisions: P (pat|R)", "sec_num": "5.2" }, { "text": "R i\u2022j |j, R i ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Order decisions: P (pat|R)", "sec_num": "5.2" }, { "text": "(3)-(4) define P (pat|R i ), and (5)-(7) define P (w|R i ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Order decisions: P (pat|R)", "sec_num": "5.2" }, { "text": "The following input tree and output string pair is illustrates an intermediate computation produced by interleaving these two kinds of ordering rules with the argument selection rules of the previous section, and applying them to the example in Figure 2 :", "cite_spans": [], "ref_spans": [ { "start": 245, "end": 253, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Order decisions: P (pat|R)", "sec_num": "5.2" }, { "text": "q R cityid order .cityid(W, portland(W ), W, me(W ), W ) * \u21d2 q R cityid words,0 .W q R me words,1 .W q R cityid words,1 .W q R portland", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Order decisions: P (pat|R)", "sec_num": "5.2" }, { "text": "words,1 .W q R cityid words,0 .W The weights on these rules define the conditional probability P (pat|R), where pat is one of the patterns of the word transformations illustrated in Figure 2.", "cite_spans": [], "ref_spans": [ { "start": 182, "end": 188, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Order decisions: P (pat|R)", "sec_num": "5.2" }, { "text": "Rule types 5 and 6 in Table 1 define the conditional probability of a word word k given an MR grammar rule, and rule type 7 terminates generation by generating W in the input and \u01eb in the output. Using the same example as in the previous section, this yields 5 W symbols in the input tree and the string 'meen no porutorando' in the output.", "cite_spans": [], "ref_spans": [ { "start": 22, "end": 29, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Word generation: P (w|R)", "sec_num": "5.3" }, { "text": "q R cityid words,0 .W * \u21d2 \u01eb q R me words,1 .W * \u21d2 'meen' \u01eb q R cityid words,1 .W * \u21d2 'no' \u01eb q R portland words,1 .W * \u21d2 'porutorando' \u01eb q R cityid words,0 .W * \u21d2 \u01eb", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word generation: P (w|R)", "sec_num": "5.3" }, { "text": "The transducer applies the rules from the three classes of transformation in Table 1 to ultimately produce an MR-NL pair. The probability of this derivation is essentially the same quantity as that of the hybrid tree of the original model (shown in equation 2).", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 84, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Derivation weights and the joint probability distribution", "sec_num": "5.4" }, { "text": "Reordering siblings allows the hybrid tree to capture a large number of word orders, but it is still constrained by the hierarchy of the tree. This constraint reduces the search space but also prevents the model from learning some word orders. Figure 4 To address this problem, we modify the transducer to allow it to rotate parents with their children in addition to re-ordering siblings. 
This change is easy within the transducer framework but would be difficult in the original implementation, requiring a complete reworking of the training and decoding algorithms. In the original transducer, rules oper- ate on tree fragments of depth \u2264 1. We implement the change using extended left-hand-side transducers, which can operate on larger fragments as long as the depth is bounded (Maletti et al., 2009) . In particular, we introduce rules like the following:", "cite_spans": [ { "start": 782, "end": 804, "text": "(Maletti et al., 2009)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 244, "end": 252, "text": "Figure 4", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "An extension: head-switching", "sec_num": "6" }, { "text": "q R p order .p(w p 0 , c(w c 0 , x c 0 , w c 1 ), w p 1 ) \u2192 q R c words,i 0 .w c 0 q R p words,i 1 .w p 0 q R c arg,0 .x c 0 q R p words,i 2 .w p 1 q R c words,i 3 .w c 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An extension: head-switching", "sec_num": "6" }, { "text": "This rule begins the word generation process simultaneously for both the parent and child, reordering the words to simulate the new nesting structure, and then proceeds to choose the child function's argument. We add similar rules for the various cases where the child and parent have multiple arguments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An extension: head-switching", "sec_num": "6" }, { "text": "Tree transducer derivations are themselves trees, allowing for the computation of inside and outside probabilities much as for the derivation trees of PCFGs. EM can then be applied in much the same way as for PCFGs, substituting the tree-tostring derivation algorithm for standard PCFG parsing (Graehl et al., 2008) . Note that while EM maximizes the likelihood of the training data, items not observed during training receive zero probability, limiting the ability of models to generalize to new data sets. Furthermore, many items that are actually present in the training data are only seen a very few times, which can lead to a poor estimate of their distribution in the target data set. Bayesian estimation techniques such as Variational Bayes (VB) address these problems by allowing us to place a prior probability over the parameters, which particularly influence parameter estimates for sparse items and, depending on the choice of prior, may also assign some non-zero probability to unseen items.", "cite_spans": [ { "start": 294, "end": 315, "text": "(Graehl et al., 2008)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Varitional Bayes parameter estimation", "sec_num": "7" }, { "text": "We give a high-level outline of how a Dirichlet prior can be incorporated into tree transducer training using Variational Bayes, drawing heavily on the essential similarity of inside-outside for PCFGs and training for tree transducers. We direct the reader to Kurihara and Sato (2006) for the details of PCFG training using VB, and to Graehl et al. (2008) for the full treatment of the basic EM algorithm for tree transducers, on which our VB training algorithm is closely based. See Bishop (2006) for a general introduction to VB and Beal (2003) for a derivation of VB as applied to Dirichlet-multinomials.", "cite_spans": [ { "start": 260, "end": 284, "text": "Kurihara and Sato (2006)", "ref_id": "BIBREF8" }, { "start": 335, "end": 355, "text": "Graehl et al. 
(2008)", "ref_id": "BIBREF5" }, { "start": 484, "end": 497, "text": "Bishop (2006)", "ref_id": "BIBREF1" }, { "start": 535, "end": 546, "text": "Beal (2003)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Varitional Bayes parameter estimation", "sec_num": "7" }, { "text": "The objective of training is to find an estimate for the weights \u03b8 of the transducer rules given some symmetric Dirichlet prior with hyperparameter \u03b1 and observed pairs of natural language sentences W and meaning representation trees Y .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Varitional Bayes parameter estimation", "sec_num": "7" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(\u03b8|\u03b1, W, Y ) = p(W, Y, \u03b8|\u03b1) p(W, Y |\u03b1)", "eq_num": "(8)" } ], "section": "Varitional Bayes parameter estimation", "sec_num": "7" }, { "text": "The tree transducer defines the probability p(W, X, Y |\u03b8), where X is a vector of derivations such that x i \u2208 X is the derivation from MRL tree y i \u2208 Y to NL string w i \u2208 W . We put a symmetric Dirichlet prior over \u03b8 so that the probability p(\u03b8|\u03b1) follows directly from the definition of the Dirichlet distribution. Thus, computing the denominator of equation 8 involves integrating out \u03b8 and X.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Varitional Bayes parameter estimation", "sec_num": "7" }, { "text": "p(W, Y |\u03b1) = p(W, X, Y |\u03b8)p(\u03b8|\u03b1)dXd\u03b8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Varitional Bayes parameter estimation", "sec_num": "7" }, { "text": "However, this integral is intractable, so instead, following from Variational Bayes, we make an approximation q(X, \u03b8) for the posterior probability p(X, \u03b8|W, Y, \u03b1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Varitional Bayes parameter estimation", "sec_num": "7" }, { "text": "log p(W, Y |\u03b1) = log p(W, X, Y, \u03b8|\u03b1)dXd\u03b8 = log q(X, \u03b8) p(W, X, Y, \u03b8|\u03b1) q(X, \u03b8) dXd\u03b8 \u2265 q(X, \u03b8) log p(W, X, Y, \u03b8|\u03b1) q(X, \u03b8) dXd\u03b8 = F", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Varitional Bayes parameter estimation", "sec_num": "7" }, { "text": "We can minimize the KL divergence between q(X, \u03b8) and p(W, Y |\u03b1) by maximizing the lower bound F, called the variational free energy. Since F is a function of q, this amounts to maximizing q.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Varitional Bayes parameter estimation", "sec_num": "7" }, { "text": "Following from Kurihara and Sato (2006) 's treatment of PCFGs, we employ the mean field approximation that assumes the posterior is well approximated by a factorized function q(X, \u03b8) = q 1 (X)q 2 (\u03b8), which treats the derivations X and the rule weights \u03b8 as independent. This allows us to maximizing q by alternately updating parameters for q 1 with q 2 fixed, and then updating parameters for q 2 with q 1 fixed, essentially in the same manner that E and M steps alternate in EM. 
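Concretely, the only change this makes to the training loop is in the M-step, where the usual normalized-count update is replaced by a digamma-based weight (the quantity tau given below). The following is a minimal sketch of that update in Python with SciPy, assuming the standard mean-field result for a symmetric Dirichlet-multinomial; it is an illustration, not the actual modification made to Tiburon.

    import numpy as np
    from scipy.special import digamma

    def vb_m_step(expected_counts, alpha):
        """VB replacement for the M-step ratio, for one left-hand-side state s.
        expected_counts: E-step counts c_{s,k} for the rules with state s.
        Returns tau_{s,k} = exp(psi(c_{s,k} + alpha) - psi(sum_k'(c_{s,k'} + alpha))),
        used in place of the plain ratio c_{s,k} / sum_k' c_{s,k'}.
        """
        c = np.asarray(expected_counts, dtype=float)
        return np.exp(digamma(c + alpha) - digamma(np.sum(c + alpha)))
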
The mathematical derivation of the modified inside-outside algorithm then follow directly from Kurihara and Sato (2006) .", "cite_spans": [ { "start": 15, "end": 39, "text": "Kurihara and Sato (2006)", "ref_id": "BIBREF8" }, { "start": 576, "end": 600, "text": "Kurihara and Sato (2006)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Varitional Bayes parameter estimation", "sec_num": "7" }, { "text": "In practice, VB requires only a slight modification to the basic EM algorithm, and we refer the reader to Graehl et al. (2008) for the details of EM for tree transducers. As in inside-outside for PCFGs, the E-step involves computing estimated rule counts, weighted using inside and outside probabilities. The M-step resolves to calculating the vector parameters of the multinomial distributions over transducer rules using these count estimates. That is, if \u03b8 s is a multinomial parameter vector for transducer rules with state s on the left hand side, \u03b8 s,k is its k th component (i.e., the weight of the k th rule with s on the left hand side), and c s,k is the corresponding expected count, we have the following equation for straight EM.", "cite_spans": [ { "start": 106, "end": 126, "text": "Graehl et al. (2008)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Varitional Bayes parameter estimation", "sec_num": "7" }, { "text": "\u03b8 s,k = c s,k", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Varitional Bayes parameter estimation", "sec_num": "7" }, { "text": "k \u2032 c s,k \u2032 Incorporating a Dirichlet prior with parameter \u03b1 using our VB approximation simply requires replac-ing this ratio with the following alternative quantity \u03c4 , where \u03a8 is the digamma function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Varitional Bayes parameter estimation", "sec_num": "7" }, { "text": "\u03c4 s,k = exp \u03a8(c s,k + \u03b1) \u2212 \u03a8 k \u2032 c s,k \u2032 + \u03b1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Varitional Bayes parameter estimation", "sec_num": "7" }, { "text": "For each step of EM, the updated \u03c4 vectors from the previous M-step are then used to compute the expected counts c during the current E-step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Varitional Bayes parameter estimation", "sec_num": "7" }, { "text": "We use Tiburon (May and Knight, 2006) , a tree transducer toolkit, to train our transducer using 40 iterations of its inside-outside-like EM training procedure, and modify it slightly to include the mean field VB approximation for a symmetric Dirichlet prior over the multinomial parameters as just described.", "cite_spans": [ { "start": 15, "end": 37, "text": "(May and Knight, 2006)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental setup", "sec_num": "8" }, { "text": "Decoding is handled the same by Tiburon for both training procedures, producing the MR input tree with the tree transducer derivation that maximizes the probability over derivations of equation 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental setup", "sec_num": "8" }, { "text": "In keeping with the original hybrid tree, we run 100 iterations of IBM alignment model 1 (Brown et al., 1993) to initialize the word distribution parameters. Also in keeping with Lu et al. 
(2008) , we use the standard noun phrase list from the given language to help initialize the word distributions for their counterparts in the meaning representation language.", "cite_spans": [ { "start": 89, "end": 109, "text": "(Brown et al., 1993)", "ref_id": "BIBREF3" }, { "start": 179, "end": 195, "text": "Lu et al. (2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental setup", "sec_num": "8" }, { "text": "To evaluate our models, we use the the GeoQuery corpus, a standard benchmark data set. The corpus contains English sentences (questions about U.S. geography) paired with an MR in a database query language, 250 of which were translated into Japanese (among other languages) yielding two training sets using the same MRs. For testing we run 10-fold cross validation, using the standard train and test splits of Wong and Mooney (2006) , and microaverage our performance metrics across folds.", "cite_spans": [ { "start": 409, "end": 431, "text": "Wong and Mooney (2006)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "9" }, { "text": "We measure performance using precision, recall, and f-score (the harmonic mean of precision and recall) as standardly defined in the semantic parsing literature. Recall is simply the raw accuracy: the percentage of correct parses found out of all test sentences (where a parse is considered correct if it retrieves the same results from the GeoQuery database as the gold MR). Precision is the percentage of correct parses out of all sentences for which we find any parse at all. Table 2 compares our models' performance to previously published results. We list two versions of our model: the direct adaptation of the hybrid tree and the transducer with parent-child swapping rules. We train each version with both standard EM and the VB approximation (hyperparameter 0.1). The other state-of-the-art systems shown are: 1) two versions of the original hybrid tree (Lu et al., 2008) : Lu-uni, which uses a unigram distribution over words, and is therefore the most similar to our transducer implementation, and Lu-dis, the best-performing version, which uses a mixture of unigram and bigram model with discriminative re-ranking; 2) WASP (Wong and Mooney, 2006) , which uses a synchronous grammar approach; and 3) UBL-s (Kwiatkowski et al., 2010) , the model with the highest published raw accuracy (recall).", "cite_spans": [ { "start": 863, "end": 880, "text": "(Lu et al., 2008)", "ref_id": "BIBREF11" }, { "start": 1135, "end": 1158, "text": "(Wong and Mooney, 2006)", "ref_id": "BIBREF16" }, { "start": 1217, "end": 1243, "text": "(Kwiatkowski et al., 2010)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 479, "end": 486, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "9" }, { "text": "The transducers are competitive with the state-ofthe-art, especially when using VB. VB smooths the parameter estimates, so there are no parse failures in the test set due to unseen words or functions; precision, recall, and f-score all reduce to raw accuracy. The basic transducer with VB has higher accuracy (recall) than all other models except for UBLs, which does better on Japanese. The head-switch transducer is better still, with the highest recall on both languages. 
Although the improvement over the basic transducer is small, we anticipate that using the transducer framework will allow us to easily explore many other possible extensions that could increase performance further.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "9" }, { "text": "As expected, the basic EM-trained transducer gets numbers that are similar, though not identical, to Luuni. The main reason for the discrepancy is that Lu et al. (2008) use custom smoothing methods for the source tree language model and word probabilities. While these could be emulated in a transducer, we instead use a more general approach, VB, with better pay-off. Lu-uni was the simplest model presented by Lu et al. (2008) , yet applying VB to our transducer implementation yields a fully generative model whose performance rivals their bestperforming system that uses discriminative reranking.", "cite_spans": [ { "start": 101, "end": 107, "text": "Luuni.", "ref_id": null }, { "start": 152, "end": 168, "text": "Lu et al. (2008)", "ref_id": "BIBREF11" }, { "start": 412, "end": 428, "text": "Lu et al. (2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "9" }, { "text": "In this paper, we have shown how to formulate semantic parsing as tree transduction. This formulation is more general than previous approaches and allows us to exploit the rich literature on transducers, including theoretical results as well as standard algorithms and toolkits. We focused here on extended left hand side, root-to-frontier, linear, nondeleting, tree-to-string transducers (Maletti et al., 2009) , using them to reformulate and extend an existing model (Lu et al., 2008) . Although we tried only one simple extension, our purely generative model already outperforms all previous models on raw accuracy, with comparable f-score. Since the transducer framework makes modifications easy, we anticipate further gains in future, especially if we add a discriminative reranking step as in Lu et al. (2008) . We also hope to investigate other transducer classes. Finally, we note that working with a general framework encourages the development of algorithms that are widely applicable, even if developed for a particular application. The VB training algorithm presented here is just one example of such a contribution.", "cite_spans": [ { "start": 389, "end": 411, "text": "(Maletti et al., 2009)", "ref_id": "BIBREF12" }, { "start": 469, "end": 486, "text": "(Lu et al., 2008)", "ref_id": "BIBREF11" }, { "start": 799, "end": 815, "text": "Lu et al. (2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "10" }, { "text": "Bevan Jones, Mark Johnson and Sharon Goldwater. 2011. Formalizing Semantic Parsing with Tree Transducers. In Proceedings of Australasian Language Technology Association Workshop, pages 19\u221228", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "With a pre-parsing step, it may also be possible to represent lambda expressions with trees (seeLiang et al. 
(2011)).2 I.e., paths are represented by strings where the empty string \u01eb is the path to the root, and if i is a path and j is the index of a child of the node at i, i \u2022 j is the path to that child.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that only arg states are permitted to transition to states for different grammar rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Wei Lu and Jon May for generously providing source code and support for the hybrid tree parser and Tiburon, respectively. Also, this work was supported under the Australian Research Council's Discovery Projects funding scheme (project number DP110102506).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Variational Algorithms for Approximate Bayesian Inference", "authors": [ { "first": "Matthew", "middle": [ "J" ], "last": "Beal", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew J. Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, Gatsby Computational Neuroscience unit, University College London, 2003.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Pattern Recognition and Machine Learning", "authors": [ { "first": "Christopher", "middle": [ "M" ], "last": "Bishop", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher M. Bishop. Pattern Recognition and Ma- chine Learning. Springer, 2006.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Reducing grounded learning tasks to grammatical inference", "authors": [ { "first": "Benjamin", "middle": [], "last": "Borschinger", "suffix": "" }, { "first": "K", "middle": [], "last": "Bevan", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Jones", "suffix": "" }, { "first": "", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2011, "venue": "Proc. of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Borschinger, Bevan K. Jones, and Mark John- son. Reducing grounded learning tasks to grammati- cal inference. In Proc. of the Conference on Empirical Methods in Natural Language Processing, 2011.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The mathematics of statistical machine translation: Parameter estimation", "authors": [ { "first": "F", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Stephen", "middle": [ "A Della" ], "last": "Brown", "suffix": "" }, { "first": "Vincent", "middle": [ "J" ], "last": "Pietra", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Della Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. The mathematics of statis- tical machine translation: Parameter estimation. 
Com- putational Linguistics, 19:263-311, 1993.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Processors for infinite codes of shannon-fano type", "authors": [ { "first": "Saul", "middle": [], "last": "Gorn", "suffix": "" } ], "year": 1962, "venue": "Symp. Math. Theory of Automata", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saul Gorn. Processors for infinite codes of shannon-fano type. In Symp. Math. Theory of Automata, 1962.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Training tree transducers", "authors": [ { "first": "Jonathon", "middle": [], "last": "Graehl", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Jon", "middle": [], "last": "", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathon Graehl, Kevin Knight, and Jon May. Training tree transducers. Computational Linguistics, 34, 2008.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Using stringkernels for learning semantic parsers", "authors": [ { "first": "J", "middle": [], "last": "Rohit", "suffix": "" }, { "first": "Raymond", "middle": [ "J" ], "last": "Kate", "suffix": "" }, { "first": "", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 2006, "venue": "Proc. of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the ACL", "volume": "", "issue": "", "pages": "913--920", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rohit J. Kate and Raymond J. Mooney. Using string- kernels for learning semantic parsers. In Proc. of the 21st International Conference on Computational Lin- guistics and the 44th annual meeting of the ACL, pages 913-920, 2006.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "An overview of probabilistic tree transducers for natural language processing", "authors": [ { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Jonathon", "middle": [], "last": "Greahl", "suffix": "" } ], "year": 2005, "venue": "Proc. of the 6th International Conference on Intelligent Text Processing and Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Knight and Jonathon Greahl. An overview of prob- abilistic tree transducers for natural language process- ing. In Proc. of the 6th International Conference on Intelligent Text Processing and Computational Linguis- tics, 2005.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Variational bayesian grammar induction for natural language", "authors": [ { "first": "Kenichi", "middle": [], "last": "Kurihara", "suffix": "" }, { "first": "Taisuke", "middle": [], "last": "Sato", "suffix": "" } ], "year": 2006, "venue": "Proc. of the 8th International Colloquium on Grammatical Inference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenichi Kurihara and Taisuke Sato. Variational bayesian grammar induction for natural language. In Proc. 
of the 8th International Colloquium on Grammatical In- ference, 2006.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Inducing probabilistic ccg grammars from logical form with higher-order unification", "authors": [ { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2010, "venue": "Proc. of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. Inducing probabilistic ccg gram- mars from logical form with higher-order unification. In Proc. of the Conference on Empirical Methods in Natu- ral Language Processing, 2010.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning dependency-based compositional semantics", "authors": [ { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2011, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Percy Liang, Michael I. Jordan, and Dan Klein. Learning dependency-based compositional semantics. In Associ- ation for Computational Linguistics (ACL), 2011.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A generative model for parsing natural language to meaning representations", "authors": [ { "first": "Wei", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Wee", "middle": [], "last": "Hwee Tou Ng", "suffix": "" }, { "first": "Luke", "middle": [ "S" ], "last": "Sun Lee", "suffix": "" }, { "first": "", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2008, "venue": "Proc. of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Lu, Hwee Tou Ng, Wee Sun Lee, and Luke S. Zettle- moyer. A generative model for parsing natural language to meaning representations. In Proc. of the Conference on Empirical Methods in Natural Language Processing, 2008.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The power of extended top-down tree transducers", "authors": [ { "first": "Andreas", "middle": [], "last": "Maletti", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Graehl", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Hopkins", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2009, "venue": "SIAM J. Comput", "volume": "39", "issue": "", "pages": "410--430", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Maletti, Jonathan Graehl, Mark Hopkins, and Kevin Knight. The power of extended top-down tree transducers. SIAM J. Comput., 39:410-430, June 2009.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Tiburon: A weighted tree automata toolkit", "authors": [ { "first": "Jon", "middle": [], "last": "May", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2006, "venue": "Proc. 
of International Conference on Implementation and Application of Automata", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jon May and Kevin Knight. Tiburon: A weighted tree automata toolkit. In Proc. of International Conference on Implementation and Application of Automata, 2006.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Mappings and grammars on trees", "authors": [ { "first": "W", "middle": [ "C" ], "last": "Rounds", "suffix": "" } ], "year": 1970, "venue": "Mathematical Systems Theory", "volume": "4", "issue": "", "pages": "257--287", "other_ids": {}, "num": null, "urls": [], "raw_text": "W.C. Rounds. Mappings and grammars on trees. Mathe- matical Systems Theory 4, pages 257-287, 1970.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Generalized sequential machine maps", "authors": [ { "first": "J", "middle": [ "W" ], "last": "Thatcher", "suffix": "" } ], "year": 1970, "venue": "J. Comput. System Sci", "volume": "4", "issue": "", "pages": "339--367", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.W. Thatcher. Generalized sequential machine maps. J. Comput. System Sci. 4, pages 339-367, 1970.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Learning for semantic parsing with statistical machine translation", "authors": [ { "first": "Yuk", "middle": [ "Wah" ], "last": "Wong", "suffix": "" }, { "first": "Raymond", "middle": [ "J" ], "last": "Mooney", "suffix": "" } ], "year": 2006, "venue": "Proc. of Human Language Technology Conference / North American Chapter of the Association for Computational Linguistics Annual Meeting", "volume": "", "issue": "", "pages": "439--446", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuk Wah Wong and Raymond J. Mooney. Learning for semantic parsing with statistical machine translation. In Proc. of Human Language Technology Conference / North American Chapter of the Association for Com- putational Linguistics Annual Meeting, pages 439-446, New York City, NY, 2006.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars", "authors": [ { "first": "", "middle": [ "S" ], "last": "Luke", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2005, "venue": "Proc. of the 21st Conference on Uncertainty in Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luke. S. Zettlemoyer and Michael Collins. Learning to map sentences to logical form: Structured classifica- tion with probabilistic categorial grammars. In Proc. of the 21st Conference on Uncertainty in Artificial In- telligence, 2005.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "State transitions selecting appropriate grammar rules for generating an MR. Solid arcs indicate a state transition was taken; dotted lines are alternatives. States are divided up into disjoint sets and associated with a specific MR rule. Transitioning between state sets implicitly chooses an MR rule. The rules lined up with the MR tree to the left constitute an MR parse. The bottom right shows the grammar fragment corresponding to this portion of the transducer.", "type_str": "figure" }, "FIGREF1": { "num": null, "uris": null, "text": "An example from Japanese illustrating head-switching. 
The tree on the left attempts (and fails) to generate the target sentence from the gold meaning representation. Swapping the highest and place nodes allows the correct MR-NL mapping.", "type_str": "figure" }, "TABREF0": { "num": null, "html": null, "type_str": "table", "content": "