{ "paper_id": "P19-1010", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:30:01.373413Z" }, "title": "Generating Logical Forms from Graph Representations of Text and Entities", "authors": [ { "first": "Peter", "middle": [], "last": "Shaw", "suffix": "", "affiliation": {}, "email": "petershaw@google.com" }, { "first": "Philip", "middle": [], "last": "Massey", "suffix": "", "affiliation": {}, "email": "pmassey@google.com" }, { "first": "Angelica", "middle": [], "last": "Chen", "suffix": "", "affiliation": {}, "email": "angelicachen@google.com" }, { "first": "Francesco", "middle": [], "last": "Piccinno", "suffix": "", "affiliation": {}, "email": "piccinno@google.com" }, { "first": "Yasemin", "middle": [], "last": "Altun", "suffix": "", "affiliation": {}, "email": "altun@google.com" }, { "first": "", "middle": [], "last": "Google", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Google", "middle": [], "last": "Research", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Structured information about entities is critical for many semantic parsing tasks. We present an approach that uses a Graph Neural Network (GNN) architecture to incorporate information about relevant entities and their relations during parsing. Combined with a decoder copy mechanism, this approach provides a conceptually simple mechanism to generate logical forms with entities. We demonstrate that this approach is competitive with the stateof-the-art across several tasks without pretraining, and outperforms existing approaches when combined with BERT pre-training.", "pdf_parse": { "paper_id": "P19-1010", "_pdf_hash": "", "abstract": [ { "text": "Structured information about entities is critical for many semantic parsing tasks. We present an approach that uses a Graph Neural Network (GNN) architecture to incorporate information about relevant entities and their relations during parsing. Combined with a decoder copy mechanism, this approach provides a conceptually simple mechanism to generate logical forms with entities. We demonstrate that this approach is competitive with the stateof-the-art across several tasks without pretraining, and outperforms existing approaches when combined with BERT pre-training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Semantic parsing maps natural language utterances into structured meaning representations. The representation languages vary between tasks, but typically provide a precise, machine interpretable logical form suitable for applications such as question answering (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2007; Berant et al., 2013) . 
The logical forms typically consist of two types of symbols: a vocabulary of operators and domain-specific predicates or functions, and entities grounded to some knowledge base or domain.", "cite_spans": [ { "start": 261, "end": 285, "text": "(Zelle and Mooney, 1996;", "ref_id": "BIBREF50" }, { "start": 286, "end": 316, "text": "Zettlemoyer and Collins, 2007;", "ref_id": "BIBREF52" }, { "start": 317, "end": 337, "text": "Berant et al., 2013)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent approaches to semantic parsing have cast it as a sequence-to-sequence task (Dong and Lapata, 2016; Jia and Liang, 2016; Ling et al., 2016) , employing methods similar to those developed for neural machine translation (Bahdanau et al., 2014) , with strong results. However, special consideration is typically given to handling of entities. This is important to improve generalization and computational efficiency, as most tasks require handling entities unseen during training, and the set of unique entities can be large.", "cite_spans": [ { "start": 82, "end": 105, "text": "(Dong and Lapata, 2016;", "ref_id": "BIBREF10" }, { "start": 106, "end": 126, "text": "Jia and Liang, 2016;", "ref_id": "BIBREF18" }, { "start": 127, "end": 145, "text": "Ling et al., 2016)", "ref_id": "BIBREF27" }, { "start": 224, "end": 247, "text": "(Bahdanau et al., 2014)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Some recent approaches have replaced surface forms of entities in the utterance with placehold-ers (Dong and Lapata, 2016) . This requires a preprocessing step to completely disambiguate entities and replace their spans in the utterance. Additionally, for some tasks it may be beneficial to leverage relations between entities, multiple entity candidates per span, or entity candidates without a corresponding span in the utterance, while generating logical forms.", "cite_spans": [ { "start": 99, "end": 122, "text": "(Dong and Lapata, 2016)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Other approaches identify only types and surface forms of entities while constructing the logical form (Jia and Liang, 2016) , using a separate post-processing step to generate the final logical form with grounded entities. This ignores potentially useful knowledge about relevant entities.", "cite_spans": [ { "start": 103, "end": 124, "text": "(Jia and Liang, 2016)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Meanwhile, there has been considerable recent interest in Graph Neural Networks (GNNs) (Scarselli et al., 2009; Li et al., 2016; Kipf and Welling, 2017; Gilmer et al., 2017; Veli\u010dkovi\u0107 et al., 2018) for effectively learning representations for graph structures. 
We propose a GNN architecture based on extending the self-attention mechanism of the Transformer (Vaswani et al., 2017) to make use of relations between input elements.", "cite_spans": [ { "start": 87, "end": 111, "text": "(Scarselli et al., 2009;", "ref_id": "BIBREF31" }, { "start": 112, "end": 128, "text": "Li et al., 2016;", "ref_id": "BIBREF10" }, { "start": 129, "end": 152, "text": "Kipf and Welling, 2017;", "ref_id": "BIBREF21" }, { "start": 153, "end": 173, "text": "Gilmer et al., 2017;", "ref_id": "BIBREF12" }, { "start": 174, "end": 198, "text": "Veli\u010dkovi\u0107 et al., 2018)", "ref_id": "BIBREF36" }, { "start": 359, "end": 381, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We present an application of this GNN architecture to semantic parsing, conditioning on a graph representation of the given natural language utterance and potentially relevant entities. This approach is capable of handling ambiguous and potentially conflicting entity candidates jointly with a natural language utterance, relaxing the need for completely disambiguating a set of linked entities before parsing. This graph formulation also enables us to incorporate knowledge about the relations between entities where available. Combined with a copy mechanism while decoding, this approach also provides a conceptually simple method for generating logical forms with grounded entities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We demonstrate the capability of the pro-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "x : which states does the mississippi run through ? y : answer ( state ( traverse 1( riverid ( mississippi ) ) ) )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GEO", "sec_num": null }, { "text": "x : in denver what kind of ground transportation is there from the airport to downtown y : ( _lambda $0 e ( _and ( _ground_transport $0 ) ( _to_city $0 denver : ci ) ( _from_airport $0 den : ap ) ) ) SPIDER x : how many games has each stadium held ? y : SELECT T1 . id , count ( * ) FROM stadium AS T1 JOIN game AS T2 ON T1 . id = T2 . stadium id GROUP BY T1 . id Table 1 : Example input utterances, x, and meaning representations, y, with entities underlined.", "cite_spans": [], "ref_spans": [ { "start": 364, "end": 371, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "ATIS", "sec_num": null }, { "text": "posed architecture by achieving competitive results across 3 semantic parsing tasks. Further improvements are possible by incorporating a pretrained BERT (Devlin et al., 2018) encoder within the architecture.", "cite_spans": [ { "start": 154, "end": 175, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "ATIS", "sec_num": null }, { "text": "Our goal is to learn a model for semantic parsing from pairs of natural language utterances and structured meaning representations. Let the natural language utterance be represented as a sequence x = (x 1 , . . . , x |x| ) of |x| tokens, and the meaning representation be represented as a sequence y = (y 1 , . . . , y |y| ) of |y| elements. 
The goal is to estimate p(y | x), the conditional probability of the meaning representation y given utterance x, which is augmented by a set of potentially relevant entities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Formulation", "sec_num": "2" }, { "text": "Each token x i \u2208 V in is from a vocabulary of input tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Utterance", "sec_num": null }, { "text": "Entity Candidates Given the input utterance x, we retrieve a set, e = {e 1 , . . . , e |e| }, of potentially relevant entity candidates, with e \u2286 V e , where V e is in the set of all entities for a given domain. We assume the availability of an entity candidate generator for each task to generate e given x, with details given in \u00a7 5.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Utterance", "sec_num": null }, { "text": "For each entity candidate, e \u2208 V e , we require a set of task-specific attributes containing one or more elements from V a . These attributes can be NER types or other characteristics of the entity, such as \"city\" or \"river\" for some of the entities listed in Table 1 . Whereas V e can be quite large for open domains, or even infinite if it includes sets such as the natural numbers, V a is typically much smaller. Therefore, we can effectively learn representations for entities given their set of attributes, from our set of example pairs.", "cite_spans": [], "ref_spans": [ { "start": 260, "end": 267, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Input Utterance", "sec_num": null }, { "text": "Edge Labels In addition to x and e for a particular example, we also consider the (|x|+|e|) 2 pairwise relations between all tokens and entity candidates, represented as edge labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Utterance", "sec_num": null }, { "text": "The edge label between tokens x i and x j corresponds to the relative sequential position, j \u2212 i, of the tokens, clipped to within some range.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Utterance", "sec_num": null }, { "text": "The edge label between token x i and entity e j , and vice versa, corresponds to whether x i is within the span of the entity candidate e j , or not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Utterance", "sec_num": null }, { "text": "The edge label between entities e i and e j captures the relationship between the entities. These edge labels can have domain-specific interpretations, such as relations in a knowledge base, or any other type of entity interaction features. For tasks where this information is not available or useful, a single generic label between entity candidates can be used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Utterance", "sec_num": null }, { "text": "Output We consider the logical form, y, to be a linear sequence (Vinyals et al., 2015b) . We tokenize based on the syntax of each domain. Our formulation allows each element of y to be either an element of the output vocabulary, V out , or an entity copied from the set of entity candidates e. Therefore, y i \u2208 V out \u222a V e . 
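To make this formulation concrete, the sketch below assembles the three families of edge labels described above for a toy GEO-style input; it is illustrative only, and all identifiers, the span convention, and the helper structure are our assumptions rather than the authors' code.

```python
# Illustrative construction of the model inputs: tokens x, entity candidates e
# with attributes, and the (|x|+|e|)^2 pairwise edge labels. All names here
# are hypothetical.
def build_edge_labels(tokens, entities, max_relative_position=8):
    """entities: list of dicts like
    {"id": "riverid(mississippi)", "attributes": ["river"], "span": (4, 5)}."""
    n, m = len(tokens), len(entities)
    labels = {}
    # Token-token edges: clipped relative sequential position j - i.
    for i in range(n):
        for j in range(n):
            rel = max(-max_relative_position, min(max_relative_position, j - i))
            labels[(i, j)] = f"rel_{rel}"
    # Token-entity edges (both directions): whether the token lies in the span.
    for i in range(n):
        for k, ent in enumerate(entities):
            inside = ent["span"] is not None and ent["span"][0] <= i < ent["span"][1]
            labels[(i, n + k)] = labels[(n + k, i)] = "in_span" if inside else "no_span"
    # Entity-entity edges: a domain-specific relation, or one generic label.
    for a in range(m):
        for b in range(m):
            labels[(n + a, n + b)] = "entity_generic"
    return labels
```

For the GEO example in Table 1, tokens would be the whitespace-split question and the single entity candidate would carry the attribute "river" with a span over "mississippi".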
Some experiments in \u00a75.2 also allow elements of y to be tokens \u2208 V in from x that are copied from the input.", "cite_spans": [ { "start": 64, "end": 87, "text": "(Vinyals et al., 2015b)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Input Utterance", "sec_num": null }, { "text": "Our model architecture is based on the Transformer (Vaswani et al., 2017) , with the selfattention sub-layer extended to incorporate relations between input elements, and the decoder extended with a copy mechanism. We use an example from SPIDER to illustrate the model inputs: tokens from the given utterance, x, a set of potentially relevant entities, e, and their relations. We selected two edge label types to highlight: edges denoting that an entity spans a token, and edges between entities that, for SPIDER, indicate a foreign key relationship between columns, or an ownership relationship between columns and tables.", "cite_spans": [ { "start": 51, "end": 73, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "3" }, { "text": "We extend the Transformer's self-attention mechanism to form a Graph Neural Network (GNN) sublayer that incorporates a fully connected, directed graph with edge labels. The sub-layer maps an ordered sequence of node representations, u = (u 1 , . . . , u |u| ), to a new sequence of node representations, u = (u 1 , . . . , u |u| ), where each node is represented \u2208 R d . We use r ij to denote the edge label corresponding to u i and u j .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GNN Sub-layer", "sec_num": "3.1" }, { "text": "We implement this sub-layer in terms of a function f (m, l) over a node representation m \u2208 R d and an edge label l that computes a vector representation in R d . We use n heads parallel attention heads, with d = d/n heads . For each head k, the new representation for the node u i is computed by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GNN Sub-layer", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "u k i = |u| j=1 \u03b1 ij f (u j , r ij ),", "eq_num": "(1)" } ], "section": "GNN Sub-layer", "sec_num": "3.1" }, { "text": "where each coefficient \u03b1 ij is a softmax over the scaled dot products s ij ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GNN Sub-layer", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s ij = (W q u i ) f (u j , r ij ) \u221a d ,", "eq_num": "(2)" } ], "section": "GNN Sub-layer", "sec_num": "3.1" }, { "text": "and W q is a learned matrix. 
Finally, we concatenate representations from each head,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GNN Sub-layer", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "u i = W h u 1 i | \u2022 \u2022 \u2022 | u n heads i ,", "eq_num": "(3)" } ], "section": "GNN Sub-layer", "sec_num": "3.1" }, { "text": "where W h is another learned matrix and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GNN Sub-layer", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "[ \u2022 \u2022 \u2022 ] denotes concatenation. If we implement f as, f (m, l) = W r m,", "eq_num": "(4)" } ], "section": "GNN Sub-layer", "sec_num": "3.1" }, { "text": "where W r \u2208 R d \u00d7d is a learned matrix, then the sub-layer would be effectively identical to self-attention as initially proposed in the Transformer (Vaswani et al., 2017) . We focus on two alternative formulations of f that represent edge labels as learned matrices and learned vectors.", "cite_spans": [ { "start": 149, "end": 171, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "GNN Sub-layer", "sec_num": "3.1" }, { "text": "Edge Matrices The first formulation represents edge labels as linear transformations, a common parameterization for GNNs (Li et al., 2016) ,", "cite_spans": [ { "start": 121, "end": 138, "text": "(Li et al., 2016)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "GNN Sub-layer", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f (m, l) = W l m,", "eq_num": "(5)" } ], "section": "GNN Sub-layer", "sec_num": "3.1" }, { "text": "where W l \u2208 R d \u00d7d is a learned embedding matrix per edge label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GNN Sub-layer", "sec_num": "3.1" }, { "text": "Edge Vectors The second formulation represents edge labels as additive vectors using the same formulation as Shaw et al. (2018) ,", "cite_spans": [ { "start": 109, "end": 127, "text": "Shaw et al. (2018)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "GNN Sub-layer", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f (m, l) = W r m + w l ,", "eq_num": "(6)" } ], "section": "GNN Sub-layer", "sec_num": "3.1" }, { "text": "where W r \u2208 R d \u00d7d is a learned matrix shared by all edge labels, and w l \u2208 R d is a learned embedding vector per edge label l.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GNN Sub-layer", "sec_num": "3.1" }, { "text": "Input Representations Before the initial encoder layer, tokens are mapped to initial representations using either a learned embedding Figure 2 : Our model architecture is based on the Transformer (Vaswani et al., 2017) , with two modifications. First, the self-attention sub-layer has been extended to be a GNN that incorporates edge representations. In the encoder, the GNN sub-layer is conditioned on tokens, entities, and their relations. Second, the decoder has been extended to include a copy mechanism (Vinyals et al., 2015a) . 
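As a concrete illustration of the sub-layer defined in § 3.1, the following minimal NumPy sketch computes a single attention head under the edge-vector formulation (Eqs. 1, 2, and 6); shapes and names are assumed for illustration and this is not the authors' implementation.

```python
# Minimal sketch of one GNN sub-layer head with f(m, l) = W_r m + w_l.
import numpy as np

def gnn_attention_head(U, R, W_q, W_r, edge_vecs):
    """U: [n, d] node states; R: [n, n] integer edge-label ids;
    W_q, W_r: [d, d_head] learned matrices; edge_vecs: [num_labels, d_head]."""
    Q = U @ W_q                           # queries W_q u_i, shape [n, d_head]
    F = U @ W_r                           # shared transform W_r u_j, [n, d_head]
    # f(u_j, r_ij) = W_r u_j + w_{r_ij}, materialized for every (i, j) pair.
    FR = F[None, :, :] + edge_vecs[R]     # [n, n, d_head]
    scores = np.einsum('id,ijd->ij', Q, FR) / np.sqrt(Q.shape[-1])   # Eq. 2
    alpha = np.exp(scores - scores.max(axis=-1, keepdims=True))
    alpha /= alpha.sum(axis=-1, keepdims=True)        # softmax over j
    return np.einsum('ij,ijd->id', alpha, FR)         # Eq. 1
```

The per-head outputs would then be concatenated and projected by W_h as in Eq. 3; the edge-matrix variant of Eq. 5 instead replaces the shared W_r and per-label vectors with one learned matrix per edge label.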
We can optionally incorporate a pre-trained model such as BERT to generate contextual token representations.", "cite_spans": [ { "start": 196, "end": 218, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF35" }, { "start": 508, "end": 531, "text": "(Vinyals et al., 2015a)", "ref_id": "BIBREF37" } ], "ref_spans": [ { "start": 134, "end": 142, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Encoder", "sec_num": "3.2" }, { "text": "V a . We also concatenate an embedding representing the node type, token or entity, to each input representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder", "sec_num": "3.2" }, { "text": "We assume some arbitrary ordering for entity candidates, generating a combined sequence of initial node representations for tokens and entities. We have edge labels between every pair of nodes as described in \u00a7 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder", "sec_num": "3.2" }, { "text": "Encoder Layers Our encoder layers are essentially identical to the Transformer, except with the proposed extension to self-attention to incorporate edge labels. Therefore, each encoder layer consists of two sub-layers. The first is the GNN sub-layer, which yields new sets of token and entity representations. The second sub-layer is an element-wise feed-forward network. Each sublayer is followed by a residual connection and layer normalization (Ba et al., 2016) . We stack N enc encoder layers, yielding a final set of token representations, w x (Nenc) , and entity representations, w e (Nenc) .", "cite_spans": [ { "start": 447, "end": 464, "text": "(Ba et al., 2016)", "ref_id": null }, { "start": 549, "end": 555, "text": "(Nenc)", "ref_id": null }, { "start": 590, "end": 596, "text": "(Nenc)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Encoder", "sec_num": "3.2" }, { "text": "The decoder auto-regressively generates output symbols, y 1 , . . . , y |y| . It is similarly based on the Transformer (Vaswani et al., 2017) , with the self-attention sub-layer replaced by the GNN sublayer. Decoder edge labels are based only on the relative timesteps of the previous outputs. The encoder-decoder attention layer considers both encoder outputs w x (Nenc) and w e (Nenc) , jointly normalizing attention weights over tokens and entity candidates. We stack N dec decoder layers to produce an output vector representation at each output step, z j \u2208 R dz , for j \u2208 {1, . . . , |y|}.", "cite_spans": [ { "start": 119, "end": 141, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF35" }, { "start": 365, "end": 371, "text": "(Nenc)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.3" }, { "text": "We allow the decoder to copy tokens or entity candidates from the input, effectively combining a Pointer Network (Vinyals et al., 2015a ) with a standard softmax output layer for selecting symbols from an output vocabulary (Gu et al., 2016; Gulcehre et al., 2016; Jia and Liang, 2016) . We define a latent action at each output step, a j for j \u2208 {1, . . . , |y|}, using similar notation as Jia et al. (2016) . 
We normalize action probabilities with a softmax over all possible actions.", "cite_spans": [ { "start": 113, "end": 135, "text": "(Vinyals et al., 2015a", "ref_id": "BIBREF37" }, { "start": 223, "end": 240, "text": "(Gu et al., 2016;", "ref_id": "BIBREF14" }, { "start": 241, "end": 263, "text": "Gulcehre et al., 2016;", "ref_id": "BIBREF15" }, { "start": 264, "end": 284, "text": "Jia and Liang, 2016)", "ref_id": "BIBREF18" }, { "start": 390, "end": 407, "text": "Jia et al. (2016)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.3" }, { "text": "We can generate a symbol, denoted", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Symbols", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Generate[i], P (a j =Generate[i] | x, y 1:j\u22121 ) \u221d exp(z j w out i ),", "eq_num": "(7)" } ], "section": "Generating Symbols", "sec_num": null }, { "text": "where w out i is a learned embedding vector for the element \u2208 V out with index i. If a j = Generate[i], then y j will be the element \u2208 V out with index i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Symbols", "sec_num": null }, { "text": "Copying Entities We can also copy an entity candidate, denoted", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Symbols", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "CopyEntity[i], P (a j = CopyEntity[i] | x, y 1:j\u22121 ) \u221d exp((z j W e ) w (Nenc) e i ),", "eq_num": "(8)" } ], "section": "Generating Symbols", "sec_num": null }, { "text": "where W e is a learned matrix, and i \u2208 {1, . . . , |e|}. If a j = CopyEntity[i], then y j = e i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Symbols", "sec_num": null }, { "text": "Various approaches to learning semantic parsers from pairs of utterances and logical forms have been developed over the years (Tang and Mooney, 2000; Zettlemoyer and Collins, 2007; Kwiatkowski et al., 2011; Andreas et al., 2013) . More recently, encoder-decoder architectures have been applied with strong results (Dong and Lapata, 2016; Jia and Liang, 2016) . Even for tasks with relatively small domains of entities, such as GEO and ATIS, it has been shown that some special consideration of entities within an encoder-decoder architecture is important to improve generalization. 
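Referring back to the decoder actions of § 3.3, the sketch below shows how the Generate and CopyEntity scores of Eqs. 7 and 8 can be normalized in a single softmax; tensor names and shapes are our assumptions rather than the authors' code.

```python
# Illustrative joint output distribution over Generate and CopyEntity actions.
import numpy as np

def action_distribution(z_j, W_out, W_e, entity_states):
    """z_j: [d_z] decoder state; W_out: [|V_out|, d_z] output symbol embeddings;
    W_e: [d_z, d]; entity_states: [|e|, d] final encoder entity representations."""
    gen_scores = W_out @ z_j                     # Eq. 7: one score per output symbol
    copy_scores = entity_states @ (W_e.T @ z_j)  # Eq. 8: one score per entity candidate
    logits = np.concatenate([gen_scores, copy_scores])
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()                   # single softmax over all actions
```

When token copying (§ 5.2) is also enabled, the CopyToken scores of Eq. 9 would simply be appended to the logits before normalization.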
This has included extending decoders with copy mechanisms (Jia and Liang, 2016) and/or identifying entities in the input as a pre-processing step (Dong and Lapata, 2016) .", "cite_spans": [ { "start": 126, "end": 149, "text": "(Tang and Mooney, 2000;", "ref_id": "BIBREF34" }, { "start": 150, "end": 180, "text": "Zettlemoyer and Collins, 2007;", "ref_id": "BIBREF52" }, { "start": 181, "end": 206, "text": "Kwiatkowski et al., 2011;", "ref_id": "BIBREF24" }, { "start": 207, "end": 228, "text": "Andreas et al., 2013)", "ref_id": "BIBREF1" }, { "start": 314, "end": 337, "text": "(Dong and Lapata, 2016;", "ref_id": "BIBREF10" }, { "start": 338, "end": 358, "text": "Jia and Liang, 2016)", "ref_id": "BIBREF18" }, { "start": 640, "end": 661, "text": "(Jia and Liang, 2016)", "ref_id": "BIBREF18" }, { "start": 728, "end": 751, "text": "(Dong and Lapata, 2016)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "Other work has considered open domain tasks, such as WEBQUESTIONSSP (Yih et al., 2016) . Recent approaches have typically relied on a separate entity linking model, such as S-MART (Yang and Chang, 2015) , to provide a single disambiguated set of entities to consider. In principle, a learned entity linker could also serve as an entity candidate generator within our framework, although we do not explore such tasks in this work.", "cite_spans": [ { "start": 68, "end": 86, "text": "(Yih et al., 2016)", "ref_id": "BIBREF45" }, { "start": 180, "end": 202, "text": "(Yang and Chang, 2015)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "Considerable recent work has focused on constrained decoding of various forms within an encoder-decoder architecture to leverage the known structure of the logical forms. This has led to approaches that leverage this structure during decoding, such as using tree decoders (Dong and Lapata, 2016; Alvarez-Melis and Jaakkola, 2017) or other mechanisms (Dong and Lapata, 2018; Goldman et al., 2017) . Other approaches use grammar rules to constrain decoding (Xiao et al., 2016; Yin and Neubig, 2017; Krishnamurthy et al., 2017; Yu et al., 2018b) . We leave investigation of such decoder constraints to future work.", "cite_spans": [ { "start": 272, "end": 295, "text": "(Dong and Lapata, 2016;", "ref_id": "BIBREF10" }, { "start": 296, "end": 329, "text": "Alvarez-Melis and Jaakkola, 2017)", "ref_id": "BIBREF0" }, { "start": 350, "end": 373, "text": "(Dong and Lapata, 2018;", "ref_id": "BIBREF11" }, { "start": 374, "end": 395, "text": "Goldman et al., 2017)", "ref_id": "BIBREF13" }, { "start": 455, "end": 474, "text": "(Xiao et al., 2016;", "ref_id": "BIBREF41" }, { "start": 475, "end": 496, "text": "Yin and Neubig, 2017;", "ref_id": "BIBREF46" }, { "start": 497, "end": 524, "text": "Krishnamurthy et al., 2017;", "ref_id": "BIBREF22" }, { "start": 525, "end": 542, "text": "Yu et al., 2018b)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "Many formulations of Graph Neural Networks (GNNs) that propagate information over local neighborhoods have recently been proposed (Li et al., 2016; Kipf and Welling, 2017; Gilmer et al., 2017; Veli\u010dkovi\u0107 et al., 2018) . Recent work has often focused on large graphs (Hamilton et al., 2017) and effectively propagating information over multiple graph steps (Xu et al., 2018) . 
The graphs we consider are relatively small and are fullyconnected, avoiding some of the challenges posed by learning representations for large, sparsely con-nected graphs.", "cite_spans": [ { "start": 130, "end": 147, "text": "(Li et al., 2016;", "ref_id": "BIBREF10" }, { "start": 148, "end": 171, "text": "Kipf and Welling, 2017;", "ref_id": "BIBREF21" }, { "start": 172, "end": 192, "text": "Gilmer et al., 2017;", "ref_id": "BIBREF12" }, { "start": 193, "end": 217, "text": "Veli\u010dkovi\u0107 et al., 2018)", "ref_id": "BIBREF36" }, { "start": 266, "end": 289, "text": "(Hamilton et al., 2017)", "ref_id": "BIBREF16" }, { "start": 356, "end": 373, "text": "(Xu et al., 2018)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "Other recent work related to ours has considered GNNs for natural language tasks, such as combining structured and unstructured data for question answering (Sun et al., 2018) , or for representing dependencies in tasks such as AMR parsing and machine translation (Beck et al., 2018; Bastings et al., 2017) . The approach of Krishnamurthy et al. (2017) similarly considers ambiguous entity mentions jointly with query tokens for semantic parsing, although does not directly consider a GNN.", "cite_spans": [ { "start": 156, "end": 174, "text": "(Sun et al., 2018)", "ref_id": "BIBREF33" }, { "start": 263, "end": 282, "text": "(Beck et al., 2018;", "ref_id": "BIBREF6" }, { "start": 283, "end": 305, "text": "Bastings et al., 2017)", "ref_id": "BIBREF4" }, { "start": 324, "end": 351, "text": "Krishnamurthy et al. (2017)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "Previous work has interpreted the Transformer's self-attention mechanism as a GNN (Veli\u010dkovi\u0107 et al., 2018; Battaglia et al., 2018) , and extended it to consider relative positions as edge representations (Shaw et al., 2018) . Previous work has also similarly represented edge labels as vectors, as opposed to matrices, in order to avoid over-parameterizing the model .", "cite_spans": [ { "start": 82, "end": 107, "text": "(Veli\u010dkovi\u0107 et al., 2018;", "ref_id": "BIBREF36" }, { "start": 108, "end": 131, "text": "Battaglia et al., 2018)", "ref_id": "BIBREF5" }, { "start": 205, "end": 224, "text": "(Shaw et al., 2018)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "We consider three semantic parsing datasets, with examples given in Table 1. GEO The GeoQuery dataset consists of natural language questions about US geography along with corresponding logical forms (Zelle and Mooney, 1996) . We follow the convention of Zettlemoyer and Collins (2005) and use 600 training examples and 280 test examples. We use logical forms based on Functional Query Language (FunQL) (Kate et al., 2005) .", "cite_spans": [ { "start": 199, "end": 223, "text": "(Zelle and Mooney, 1996)", "ref_id": "BIBREF50" }, { "start": 402, "end": 421, "text": "(Kate et al., 2005)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 68, "end": 76, "text": "Table 1.", "ref_id": null } ], "eq_spans": [], "section": "Semantic Parsing Datasets", "sec_num": "5.1" }, { "text": "ATIS The Air Travel Information System (ATIS) dataset consists of natural language queries about travel planning (Dahl et al., 1994) . 
We follow Zettlemoyer and Collins (2007) and use 4473 training examples, 448 test examples, and represent the logical forms as lambda expressions.", "cite_spans": [ { "start": 113, "end": 132, "text": "(Dahl et al., 1994)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Parsing Datasets", "sec_num": "5.1" }, { "text": "SPIDER This is a large-scale text-to-SQL dataset that consists of 10,181 questions and 5,693 unique complex SQL queries across 200 database tables spanning 138 domains (Yu et al., 2018c) . We use the standard training set of 8,659 training example and development set of 1,034 examples, split across different tables.", "cite_spans": [ { "start": 168, "end": 186, "text": "(Yu et al., 2018c)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Parsing Datasets", "sec_num": "5.1" }, { "text": "Model Configuration We configured hyperparameters based on performance on the validation set for each task, if provided, otherwise crossvalidated on the training set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "For the encoder and decoder, we selected the number of layers from {1, 2, 3, 4} and embedding and hidden dimensions from {64, 128, 256}, setting the feed forward layer hidden dimensions 4\u00d7 higher.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "We employed dropout at training time with P dropout selected from {0.1, 0.2, 0.3, 0.4, 0.5, 0.6}. We used 8 attention heads for each task. We used a clipping distance of 8 for relative position representations (Shaw et al., 2018) .", "cite_spans": [ { "start": 210, "end": 229, "text": "(Shaw et al., 2018)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "We used the Adam optimizer (Kingma and Ba, 2015) with \u03b2 1 = 0.9, \u03b2 2 = 0.98, and = 10 \u22129 , and tuned the learning rate for each task. We used the same warmup and decay strategy for learning rate as Vaswani et al. (2017) , selecting a number of warmup steps up to a maximum of 3000. Early stopping was used to determine the total training steps for each task. We used the final checkpoint for evaluation. We batched training examples together, and selected batch size from {32, 64, 128, 256, 512}. During training we used masked self-attention (Vaswani et al., 2017) to enable parallel decoding of output sequences. For evaluation, we used greedy search.", "cite_spans": [ { "start": 27, "end": 48, "text": "(Kingma and Ba, 2015)", "ref_id": "BIBREF20" }, { "start": 198, "end": 219, "text": "Vaswani et al. (2017)", "ref_id": "BIBREF35" }, { "start": 543, "end": 565, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "We used a simple strategy of splitting each input utterance on spaces to generate a sequence of tokens. We mapped any token that didn't occur at least 2 times in the training dataset to a special outof-vocabulary token. 
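A minimal sketch of this vocabulary construction follows; helper names are illustrative rather than the authors' code.

```python
# Whitespace tokenization with rare tokens (count < 2) mapped to an OOV symbol.
from collections import Counter

def build_vocab(train_utterances, min_count=2):
    counts = Counter(tok for utt in train_utterances for tok in utt.split())
    vocab = {"<oov>": 0}
    for tok, c in counts.items():
        if c >= min_count:
            vocab[tok] = len(vocab)
    return vocab

def tokenize(utterance, vocab):
    return [vocab.get(tok, vocab["<oov>"]) for tok in utterance.split()]
```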
For experiments that used BERT, we instead used the same wordpiece (Wu et al., 2016) tokenization as used for pre-training.", "cite_spans": [ { "start": 287, "end": 304, "text": "(Wu et al., 2016)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "BERT For some of our experiments, we evaluated incorporating a pre-trained BERT (Devlin et al., 2018) encoder by effectively using the output of the BERT encoder in place of a learned token embedding table. We then continue to use graph encoder and decoder layers with randomly initialized parameters in addition to BERT, so there are many parameters that are not pre-trained. The additional encoder layers are still necessary to condition on entities and relations.", "cite_spans": [ { "start": 80, "end": 101, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "We achieved best results by freezing the pretrained parameters for an initial number of steps, and then jointly fine-tuning all parameters, similar to existing approaches for gradual unfreezing (Howard and Ruder, 2018) . When unfreezing the pre-trained parameters, we restart the learning rate schedule. We found this to perform better than keeping pre-trained parameters either entirely frozen or entirely unfrozen during fine-tuning.", "cite_spans": [ { "start": 194, "end": 218, "text": "(Howard and Ruder, 2018)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "We used BERT LARGE (Devlin et al., 2018) , which has 24 layers. For fine tuning we used the same Adam optimizer with weight decay and learning rate decay as used for BERT pre-training. We reduced batch sizes to accommodate the significantly larger model size, and tuned learning rate, warm up steps, and number of frozen steps for pre-trained parameters.", "cite_spans": [ { "start": 19, "end": 40, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "Entity Candidate Generator We use an entity candidate generator that, given x, can retrieve a set of potentially relevant entities, e, for the given domain. Although all generators share a common interface, their implementation varies across tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "For GEO and ATIS we use a lexicon of entity aliases in the dataset and attempt to match with ngrams in the query. Each entity has a single attribute corresponding to the entity's type. We used binary valued relations between entity candidates based on whether entity candidate spans overlap, but experiments did not show significant improvements from incorporating these relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "For SPIDER, we generalize our notion of entities to include tables and table columns. We include all relevant tables and columns as entity candidates, but make use of Levenshtein distance between query ngrams and table and column names to determine edges between tokens and entity candidates. We use attributes based on the types and names of tables and columns. 
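A sketch of how such token-entity span edges might be derived is shown below; the 0.75 alignment threshold follows § A.1, while the function and variable names are our own and not the authors' implementation.

```python
# Align table/column names to utterance n-grams via normalized Levenshtein
# distance to produce token-entity span edges (illustrative sketch).
def normalized_levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1] / max(len(a), len(b), 1)

def span_edges(tokens, entity_names, threshold=0.75):
    """Return (token_index, entity_index) pairs for each entity's best alignment."""
    edges = set()
    for e_idx, name in enumerate(entity_names):
        best = None
        for n in (1, 2):                                   # unigrams and bigrams
            for i in range(len(tokens) - n + 1):
                ngram = " ".join(tokens[i:i + n]).lower()
                score = 1.0 - normalized_levenshtein(ngram, name.lower())
                if score > threshold and (best is None or score > best[0]):
                    best = (score, tuple(range(i, i + n)))
        if best is not None:
            edges.update((t, e_idx) for t in best[1])
    return edges
```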
Edges between entity candidates capture relations between columns and the table they belong to, and foreign key relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "For GEO, ATIS, and SPIDER, this leads to 19.5%, 32.7%, and 74.6% of examples containing at least one span associated with multiple entity candidates, respectively, indicating some entity ambiguity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "Further details on how entity candidate generators were constructed are provided in \u00a7 A.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "We pre-processed output sequences to identify entity argument values, and replaced those elements with references to entity candidates in the input. In cases where our entity candidate generator did not retrieve an entity that was used as an argument, we dropped the example from the training data set or considered it incorrect Method GEO ATIS Kwiatkowski et al. (2013) 89.0 - 87.9 -Wang et al. 2014-91.3 Zhao and Huang (2015) 88.9 84.2 Jia and Liang (2016) 89 Xu et al. (2017) 10.9 Yu et al. (2018a) 8.0 Yu et al. (2018b) 24.8 \u2212 data augmentation 18.9", "cite_spans": [ { "start": 345, "end": 370, "text": "Kwiatkowski et al. (2013)", "ref_id": "BIBREF23" }, { "start": 406, "end": 427, "text": "Zhao and Huang (2015)", "ref_id": "BIBREF53" }, { "start": 438, "end": 458, "text": "Jia and Liang (2016)", "ref_id": "BIBREF18" }, { "start": 462, "end": 478, "text": "Xu et al. (2017)", "ref_id": "BIBREF43" }, { "start": 484, "end": 501, "text": "Yu et al. (2018a)", "ref_id": "BIBREF47" }, { "start": 506, "end": 523, "text": "Yu et al. (2018b)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Output Sequences", "sec_num": null }, { "text": "Ours GNN w/ edge matrices 29.3 GNN w/ edge vectors 32.1 GNN w/ edge vectors + BERT 23.5 Table 2 : We report accuracies on GEO, ATIS, and SPIDER for various implementations of our GNN sub-layer. For GEO and ATIS, we use \u2020 to denote neural approaches that disambiguate and replace entities in the utterance as a pre-processing step. For SPIDER, the evaluation set consists of examples for databases unseen during training.", "cite_spans": [], "ref_spans": [ { "start": 88, "end": 95, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Output Sequences", "sec_num": null }, { "text": "if in the test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Output Sequences", "sec_num": null }, { "text": "Evaluation To evaluate accuracy, we use exact match accuracy relative to gold logical forms. For GEO we directly compare output symbols. For ATIS, we compare normalized logical forms using canonical variable naming and sorting for unordered arguments (Jia and Liang, 2016) . For SPI-DER we use the provided evaluation script, which decomposes each SQL query and conducts set comparison within each clause without values. 
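For reference, exact-match accuracy amounts to the following trivial sketch, where `normalize` stands in for the task-specific canonicalization described above (identity for GEO, variable renaming and argument sorting for ATIS); it is illustrative and not the evaluation scripts themselves.

```python
# Exact-match accuracy with a pluggable normalization step.
def exact_match_accuracy(predictions, golds, normalize=lambda y: y):
    matches = sum(normalize(p) == normalize(g) for p, g in zip(predictions, golds))
    return matches / len(golds)
```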
All accuracies are reported on the test set, except for SPIDER where we report and compare accuracies on the development set.", "cite_spans": [ { "start": 251, "end": 272, "text": "(Jia and Liang, 2016)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Output Sequences", "sec_num": null }, { "text": "Copying Tokens To better understand the effect of conditioning on entities and their relations, we also conducted experiments that considered an alternative method for selecting and disambiguating entities similar to Jia et al. (2016) . In this approach we use our model's copy mechanism to copy tokens corresponding to the surface forms of entity arguments, rather than copying entities directly.", "cite_spans": [ { "start": 217, "end": 234, "text": "Jia et al. (2016)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Output Sequences", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (a j = CopyToken[i] | x, y 1:j\u22121 ) \u221d exp((z j W x ) w (Nenc) x i ),", "eq_num": "(9)" } ], "section": "Output Sequences", "sec_num": null }, { "text": "where W x is a learned matrix, and where i \u2208 {1, . . . , |x|} refers to the index of token", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Output Sequences", "sec_num": null }, { "text": "x i \u2208 V in . If a j = CopyToken[i]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Output Sequences", "sec_num": null }, { "text": ", then y j = x i . This allows us to ablate entity information in the input while still generating logical forms. When copying tokens, the decoder determines the type of the entity using an additional output symbol. For GEO, the actual entity can then be identified as a post-processing step, as a type and surface form is sufficient. For other tasks this could require a more complicated post-processing step to disambiguate entities given a surface form and type.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Output Sequences", "sec_num": null }, { "text": "Method GEO", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Output Sequences", "sec_num": null }, { "text": "GNN w/ edge vectors + BERT 92.5 GNN w/ edge vectors 89.3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Copying Entities", "sec_num": null }, { "text": "GNN w/ edge vectors 87.9 \u2212 entity candidates, e 84.3 BERT 89.6 Table 3 : Experimental results for copying tokens instead of entities when decoding, with and without conditioning on the set of entity candidates, e.", "cite_spans": [], "ref_spans": [ { "start": 63, "end": 70, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Copying Tokens", "sec_num": null }, { "text": "Accuracies on GEO, ATIS, and SPIDER are shown in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 49, "end": 56, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5.3" }, { "text": "GEO and ATIS Without pre-training, and despite adding a bit of entity ambiguity, we achieve similar results to other recent approaches that disambiguate and replace entities in the utterance as a pre-processing step during both training and evaluating Lapata, 2016, 2018) . When incorporating BERT, we increase absolute accuracies over Dong and Lapata (2018) on GEO and ATIS by 3.2% and 2.0%, respectively. 
Notably, they also present techniques and results that leverage constrained decoding, which our approach would also likely further benefit from. For GEO, we find that when ablating all entity information in our model and copying tokens instead of entities, we achieve similar results as Jia and Liang (2016) when also ablating their data augmentation method, as shown in Table 3 . This is expected, since when ablating entities completely, our architecture essentially reduces to the same sequence-to-sequence task setup. These results demonstrate the impact of conditioning on the entity candidates, as it improves performance even on the token copying setup. It appears that leveraging BERT can partly compensate for not conditioning on entity candidates, but combining BERT with our GNN approach and copying entities achieves 2.9% higher accuracy than using only a BERT encoder and copying tokens.", "cite_spans": [ { "start": 252, "end": 271, "text": "Lapata, 2016, 2018)", "ref_id": null }, { "start": 336, "end": 358, "text": "Dong and Lapata (2018)", "ref_id": "BIBREF11" }, { "start": 694, "end": 714, "text": "Jia and Liang (2016)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 778, "end": 785, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5.3" }, { "text": "For ATIS, our results are outperformed by Wang et al. (2014) by 1.6%. Their approach uses hand-engineered templates to build a CCG lexicon. Some of these templates attempt to handle the specific types of ungrammatical utterances in the ATIS task.", "cite_spans": [ { "start": 42, "end": 60, "text": "Wang et al. (2014)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5.3" }, { "text": "SPIDER For SPIDER, a relatively new dataset, there is less prior work. Competitive approaches have been specific to the text-to-SQL task (Xu et al., 2017; Yu et al., 2018a,b) , incorporating taskspecific methods to condition on table and column information, and incorporating SQL-specific structure when decoding. Our approach improves absolute accuracy by +7.3% relative to Yu et al. (2018b) without using any pre-trained language representations, or constrained decoding. Our approach could also likely benefit from some of the other aspects of Yu et al. (2018b) such as more structured decoding, data augmentation, and using pre-trained representations (they use GloVe (Pennington et al., 2014)) for tokens, columns, and tables.", "cite_spans": [ { "start": 137, "end": 154, "text": "(Xu et al., 2017;", "ref_id": "BIBREF43" }, { "start": 155, "end": 174, "text": "Yu et al., 2018a,b)", "ref_id": null }, { "start": 375, "end": 392, "text": "Yu et al. (2018b)", "ref_id": "BIBREF48" }, { "start": 547, "end": 564, "text": "Yu et al. (2018b)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5.3" }, { "text": "Our results were surprisingly worse when attempting to incorporate BERT. Of course, successfully incorporating pre-trained representations is not always straightforward. In general, we found using BERT within our architecture to be sensitive to learning rates and learning rate schedules. Notably, the evaluation setup for SPIDER is very different than training, as examples are for tables unseen during training. Models may not generalize well to unseen tables and columns. 
It's likely that successfully incorporating BERT for SPIDER would require careful tuning of hyperparameters specifically for the database split configuration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5.3" }, { "text": "Entity Spans and Relations Ablating span relations between entities and tokens for GEO and ATIS is shown in Table 4 . The impact is more significant for ATIS, which contains many queries with multiple entities of the same type, such as nonstop flights seattle to boston where disambiguating the origin and destination entities requires knowledge of which tokens they are associated with, given that we represent entities based only on their types for these tasks. We leave for future work consideration of edges between entity candidates that incorporate relevant domain knowledge for these tasks. For SPIDER, results ablating relations between entities and tokens, and relations between entities, are shown in Table 5 . This demonstrates the importance of entity relations, as they include useful information for disambiguating entities such as which columns belong to which tables, and which columns have foreign key relations.", "cite_spans": [], "ref_spans": [ { "start": 108, "end": 115, "text": "Table 4", "ref_id": "TABREF3" }, { "start": 711, "end": 718, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5.3" }, { "text": "GNN w/ edge vectors 32.1 \u2212 entity span edges 27.8 \u2212 entity relation edges 26.3 Table 5 : Results for ablating information about relations between entity candidates and tokens for SPIDER.", "cite_spans": [], "ref_spans": [ { "start": 79, "end": 86, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Edge Ablations SPIDER", "sec_num": null }, { "text": "Edge Representations Using additive edge vectors outperforms using learned edge matrix transformations for implementing f , across all tasks. While the vector formulation is less expressive, it also introduces far fewer parameters per edge type, which can be an important consideration given that our graph contains many similar edge labels, such as those representing similar relative positions between tokens. We leave further exploration of more expressive edge representations to future work. Another direction to explore is a heterogeneous formulation of the GNN sub-layer, that employs different formulations for different subsets of nodes, e.g. for tokens and entities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Edge Ablations SPIDER", "sec_num": null }, { "text": "We have presented an architecture for semantic parsing that uses a Graph Neural Network (GNN) to condition on a graph of tokens, entities, and their relations. Experimental results have demonstrated that this approach can achieve competitive results across a diverse set of tasks, while also providing a conceptually simple way to incorporate entities and their relations during parsing. 
For future direction, we are interested in exploring constrained decoding, better incorporating pre-trained language representations within our architecture, conditioning on additional relations between entities, and different GNN formulations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "More broadly, we have presented a flexible approach for conditioning on available knowledge in the form of entities and their relations, and demonstrated its effectiveness for semantic parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "In this section we provide details of how we constructed entity candidate generators for each task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Entity Candidate Generator Details", "sec_num": null }, { "text": "GEO The annotator was constructed from the geobase database, which provides a list of geographical facts. For each entry in the database, we extracted the name as the entity alias and the type (e.g., \"state\", \"city\") as its attribute. Since not all cities used in the GEO query set are listed as explicit entries, we also used cities in the state entries. Finally, geobase has no entries around countries, so we added relevant aliases for \"USA\" with a special \"country\" attribute. There was 1 example where an entity in the logical form did not appear in the input, leading to the example being dropped from the training set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Entity Candidate Generator Details", "sec_num": null }, { "text": "In lieu of task-specific edge relations, we used binary edge labels between entities that captured which annotations span the same tokens. However, experiments demonstrated that these edges did not significantly affect performance. We leave consideration of other types of entity relations for these tasks to future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Entity Candidate Generator Details", "sec_num": null }, { "text": "ATIS We constructed a lexicon mapping natural language entity aliases in the dataset (e.g., \"newark international\", \"5pm\") to unique entity identifiers (e.g. \"ewr:ap\", \"1700:ti\"). For ATIS, this lexicon required some manual construction. Each entity identifier has a two-letter suffix (e.g., \"ap\", \"ti\") that maps it to a single attribute (e.g., \"airport\", \"time\"). We allowed overlapping entity mentions when the entities referred to different entity identifiers. For instance, in the query span \"atlanta airport\", we include both the city of Atlanta and the Atlanta airport.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Entity Candidate Generator Details", "sec_num": null }, { "text": "Notably there were 9 examples where one of the entities used as an argument in the logical form did not have a corresponding mention in the input utterance. From manual inspection, many of the dropped examples appear to have incorrectly annotated logical forms. 
These examples were dropped from training set or marked as incorrect if they appeared in the test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Entity Candidate Generator Details", "sec_num": null }, { "text": "We use the same binary edge labels between entities as for GEO.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Entity Candidate Generator Details", "sec_num": null }, { "text": "SPIDER For SPIDER we generalize our notion of entities to consider tables and columns as entities. We attempt to determine spans for each ta-ble and column by computing normalized Levenshtein distance between table and column names and unigrams or bigrams in the utterance. The best alignment having a score > 0.75 is selected, and we use these generated alignments to populate the edges between tokens and entity candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Entity Candidate Generator Details", "sec_num": null }, { "text": "We generate a set of attributes for the table based on unigrams in the table name, and an attribute to identify the entity as a table. Likewise, for columns, we generate a set of attributes based on unigrams in the column name as well as an attribute to identify the value type of the column. We also include attributes indicating whether an alignment was found between the entity and the input text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Entity Candidate Generator Details", "sec_num": null }, { "text": "We include 3 task-specific edge label types between entity candidates to denote bi-directional relations between column entities and the table entity they belong to, and to denote the presence of a foreign key relationship between columns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Entity Candidate Generator Details", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Karl Pichotta, Zuyao Li, Tom Kwiatkowski, and Dipanjan Das for helpful discussions. Thanks also to Ming-Wei Chang and Kristina Toutanova for their comments, and to all who provided feedback in draft reading sessions. Finally, we are grateful to the anonymous reviewers for their useful feedback.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Tree structured decoding with doubly recurrent neural networks", "authors": [ { "first": "D", "middle": [], "last": "Alvarez-Melis", "suffix": "" }, { "first": "T", "middle": [], "last": "Jaakkola", "suffix": "" } ], "year": 2017, "venue": "International Conference on Learning Representations (ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Alvarez-Melis and T. Jaakkola. 2017. Tree struc- tured decoding with doubly recurrent neural net- works. 
In International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Semantic parsing as machine translation", "authors": [ { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Vlachos", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "47--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Andreas, Andreas Vlachos, and Stephen Clark. 2013. Semantic parsing as machine translation. In Proceedings of the 51st Annual Meeting of the As- sociation for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 47-52.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.0473" ] }, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Graph convolutional encoders for syntax-aware neural machine translation", "authors": [ { "first": "Joost", "middle": [], "last": "Bastings", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" }, { "first": "Wilker", "middle": [], "last": "Aziz", "suffix": "" }, { "first": "Diego", "middle": [], "last": "Marcheggiani", "suffix": "" }, { "first": "Khalil", "middle": [], "last": "Simaan", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1957--1967", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Simaan. 2017. Graph convolutional encoders for syntax-aware neural ma- chine translation. 
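To illustrate the SPIDER span alignment described in Appendix A.1, the sketch below computes a normalized Levenshtein similarity between a table or column name and every unigram or bigram in the utterance, keeping the best alignment scoring above 0.75. The specific normalization (one minus edit distance divided by the longer string's length) and the helper names are assumptions for illustration; the paper only states that a normalized Levenshtein distance and a 0.75 threshold are used.

```python
from typing import List, Optional, Tuple

def levenshtein(a: str, b: str) -> int:
    """Standard edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Assumed normalization: 1.0 for identical strings, lower for more dissimilar ones."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

def best_alignment(name: str, tokens: List[str],
                   threshold: float = 0.75) -> Optional[Tuple[int, int, float]]:
    """Return the best-scoring unigram or bigram span (start, end, score) above the threshold."""
    best = None
    for n in (1, 2):
        for start in range(len(tokens) - n + 1):
            span_text = " ".join(tokens[start:start + n]).lower()
            score = similarity(name.lower(), span_text)
            if score > threshold and (best is None or score > best[2]):
                best = (start, start + n, score)
    return best

# Example: the table name "singer" aligns to the token "singers" with similarity ~0.86.
print(best_alignment("singer", "how many singers do we have".split()))
```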
In Proceedings of the 2017 Con- ference on Empirical Methods in Natural Language Processing, pages 1957-1967.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Relational inductive biases, deep learning, and graph networks", "authors": [ { "first": "W", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Jessica", "middle": [ "B" ], "last": "Battaglia", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Hamrick", "suffix": "" }, { "first": "Alvaro", "middle": [], "last": "Bapst", "suffix": "" }, { "first": "Vinicius", "middle": [], "last": "Sanchez-Gonzalez", "suffix": "" }, { "first": "Mateusz", "middle": [], "last": "Zambaldi", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Malinowski", "suffix": "" }, { "first": "David", "middle": [], "last": "Tacchetti", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Raposo", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Santoro", "suffix": "" }, { "first": "", "middle": [], "last": "Faulkner", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1806.01261" ] }, "num": null, "urls": [], "raw_text": "Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Ma- teusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. 2018. Rela- tional inductive biases, deep learning, and graph net- works. arXiv preprint arXiv:1806.01261.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Graph-to-sequence learning using gated graph neural networks", "authors": [ { "first": "Daniel", "middle": [], "last": "Beck", "suffix": "" }, { "first": "Gholamreza", "middle": [], "last": "Haffari", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "273--283", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Beck, Gholamreza Haffari, and Trevor Cohn. 2018. Graph-to-sequence learning using gated graph neural networks. In Proceedings of the 56th Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 273-283.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Semantic parsing on freebase from question-answer pairs", "authors": [ { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Chou", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Frostig", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1533--1544", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. 
In Proceedings of the 2013 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1533-1544.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Expanding the scope of the atis task: The atis-3 corpus", "authors": [ { "first": "A", "middle": [], "last": "Deborah", "suffix": "" }, { "first": "Madeleine", "middle": [], "last": "Dahl", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Bates", "suffix": "" }, { "first": "William", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Kate", "middle": [], "last": "Fisher", "suffix": "" }, { "first": "David", "middle": [], "last": "Hunicke-Smith", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Pallett", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Pao", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Rudnicky", "suffix": "" }, { "first": "", "middle": [], "last": "Shriberg", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the workshop on Human Language Technology", "volume": "", "issue": "", "pages": "43--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deborah A Dahl, Madeleine Bates, Michael Brown, William Fisher, Kate Hunicke-Smith, David Pallett, Christine Pao, Alexander Rudnicky, and Elizabeth Shriberg. 1994. Expanding the scope of the atis task: The atis-3 corpus. In Proceedings of the workshop on Human Language Technology, pages 43-48. As- sociation for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Language to logical form with neural attention", "authors": [ { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "33--43", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Dong and Mirella Lapata. 2016. Language to logi- cal form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), vol- ume 1, pages 33-43.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Coarse-to-fine decoding for neural semantic parsing", "authors": [ { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2018, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Dong and Mirella Lapata. 2018. Coarse-to-fine de- coding for neural semantic parsing. 
In ACL.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Neural message passing for quantum chemistry", "authors": [ { "first": "Justin", "middle": [], "last": "Gilmer", "suffix": "" }, { "first": "S", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "", "middle": [], "last": "Schoenholz", "suffix": "" }, { "first": "F", "middle": [], "last": "Patrick", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Riley", "suffix": "" }, { "first": "George", "middle": [ "E" ], "last": "Vinyals", "suffix": "" }, { "first": "", "middle": [], "last": "Dahl", "suffix": "" } ], "year": 2017, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "1263--1272", "other_ids": {}, "num": null, "urls": [], "raw_text": "Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. 2017. Neural message passing for quantum chemistry. In Inter- national Conference on Machine Learning, pages 1263-1272.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Weaklysupervised semantic parsing with abstract examples", "authors": [ { "first": "Omer", "middle": [], "last": "Goldman", "suffix": "" }, { "first": "Veronica", "middle": [], "last": "Latcinnik", "suffix": "" }, { "first": "Udi", "middle": [], "last": "Naveh", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Globerson", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1711.05240" ] }, "num": null, "urls": [], "raw_text": "Omer Goldman, Veronica Latcinnik, Udi Naveh, Amir Globerson, and Jonathan Berant. 2017. Weakly- supervised semantic parsing with abstract examples. arXiv preprint arXiv:1711.05240.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Incorporating copying mechanism in sequence-to-sequence learning", "authors": [ { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Zhengdong", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" }, { "first": "O", "middle": [ "K" ], "last": "Victor", "suffix": "" }, { "first": "", "middle": [], "last": "Li", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1631--1640", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), volume 1, pages 1631-1640.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Pointing the unknown words", "authors": [ { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Sungjin", "middle": [], "last": "Ahn", "suffix": "" }, { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "140--149", "other_ids": {}, "num": null, "urls": [], "raw_text": "Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. 
In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 140-149.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Inductive representation learning on large graphs", "authors": [ { "first": "Will", "middle": [], "last": "Hamilton", "suffix": "" }, { "first": "Zhitao", "middle": [], "last": "Ying", "suffix": "" }, { "first": "Jure", "middle": [], "last": "Leskovec", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "1024--1034", "other_ids": {}, "num": null, "urls": [], "raw_text": "Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In Advances in Neural Information Processing Sys- tems, pages 1024-1034.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Universal language model fine-tuning for text classification", "authors": [ { "first": "Jeremy", "middle": [], "last": "Howard", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1801.06146" ] }, "num": null, "urls": [], "raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Data recombination for neural semantic parsing", "authors": [ { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "12--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), vol- ume 1, pages 12-22.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Learning to transform natural to formal languages", "authors": [ { "first": "J", "middle": [], "last": "Rohit", "suffix": "" }, { "first": "Yuk", "middle": [ "Wah" ], "last": "Kate", "suffix": "" }, { "first": "Raymond J", "middle": [], "last": "Wong", "suffix": "" }, { "first": "", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the National Conference on Artificial Intelligence", "volume": "20", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rohit J Kate, Yuk Wah Wong, and Raymond J Mooney. 2005. Learning to transform natural to formal lan- guages. In Proceedings of the National Conference on Artificial Intelligence, volume 20, page 1062. Menlo Park, CA; Cambridge, MA; London; AAAI Press; MIT Press; 1999.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "International Conference on Learning Representations (ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. 
In International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Semisupervised classification with graph convolutional networks", "authors": [ { "first": "N", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Max", "middle": [], "last": "Kipf", "suffix": "" }, { "first": "", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2017, "venue": "International Conference on Learning Representations (ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas N Kipf and Max Welling. 2017. Semi- supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Neural semantic parsing with type constraints for semi-structured tables", "authors": [ { "first": "Jayant", "middle": [], "last": "Krishnamurthy", "suffix": "" }, { "first": "Pradeep", "middle": [], "last": "Dasigi", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1516--1526", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gard- ner. 2017. Neural semantic parsing with type con- straints for semi-structured tables. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 1516-1526.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Scaling semantic parsers with on-the-fly ontology matching", "authors": [ { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 conference on empirical methods in natural language processing", "volume": "", "issue": "", "pages": "1545--1556", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1545-1556.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Lexical generalization in ccg grammar induction for semantic parsing", "authors": [ { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the conference on empirical methods in natural language processing", "volume": "", "issue": "", "pages": "1512--1523", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwa- ter, and Mark Steedman. 2011. Lexical generaliza- tion in ccg grammar induction for semantic parsing. In Proceedings of the conference on empirical meth- ods in natural language processing, pages 1512- 1523. 
Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Gated graph sequence neural networks", "authors": [ { "first": "Yujia", "middle": [], "last": "Li", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Tarlow", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Brockschmidt", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zemel", "suffix": "" } ], "year": 2016, "venue": "International Conference on Learning Representations (ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. 2016. Gated graph sequence neural networks. In International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Learning dependency-based compositional semantics", "authors": [ { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Michael I Jordan", "suffix": "" }, { "first": "", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2013, "venue": "Computational Linguistics", "volume": "39", "issue": "2", "pages": "389--446", "other_ids": {}, "num": null, "urls": [], "raw_text": "Percy Liang, Michael I Jordan, and Dan Klein. 2013. Learning dependency-based compositional seman- tics. Computational Linguistics, 39(2):389-446.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Latent predictor networks for code generation", "authors": [ { "first": "Wang", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "Karl", "middle": [ "Moritz" ], "last": "Hermann", "suffix": "" }, { "first": "Tom\u00e1\u0161", "middle": [], "last": "Ko\u010disk\u1ef3", "suffix": "" }, { "first": "Fumin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Senior", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "599--609", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang Ling, Phil Blunsom, Edward Grefenstette, Karl Moritz Hermann, Tom\u00e1\u0161 Ko\u010disk\u1ef3, Fumin Wang, and Andrew Senior. 2016. Latent predictor networks for code generation. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), vol- ume 1, pages 599-609.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Encoding sentences with graph convolutional networks for semantic role labeling", "authors": [ { "first": "Diego", "middle": [], "last": "Marcheggiani", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1506--1515", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for se- mantic role labeling. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1506-1515.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Abstract syntax networks for code generation and semantic parsing", "authors": [ { "first": "Maxim", "middle": [], "last": "Rabinovich", "suffix": "" }, { "first": "Mitchell", "middle": [], "last": "Stern", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1139--1149", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maxim Rabinovich, Mitchell Stern, and Dan Klein. 2017. Abstract syntax networks for code genera- tion and semantic parsing. In Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), vol- ume 1, pages 1139-1149.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "The graph neural network model", "authors": [ { "first": "Franco", "middle": [], "last": "Scarselli", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Gori", "suffix": "" }, { "first": "Ah", "middle": [], "last": "Chung Tsoi", "suffix": "" }, { "first": "Markus", "middle": [], "last": "Hagenbuchner", "suffix": "" }, { "first": "Gabriele", "middle": [], "last": "Monfardini", "suffix": "" } ], "year": 2009, "venue": "IEEE Transactions on Neural Networks", "volume": "20", "issue": "1", "pages": "61--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. 2009. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Self-attention with relative position representations", "authors": [ { "first": "Peter", "middle": [], "last": "Shaw", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "464--468", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position represen- tations. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 2 (Short Papers), volume 2, pages 464-468.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Open domain question answering using early fusion of knowledge bases and text", "authors": [ { "first": "Haitian", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Bhuwan", "middle": [], "last": "Dhingra", "suffix": "" }, { "first": "Manzil", "middle": [], "last": "Zaheer", "suffix": "" }, { "first": "Kathryn", "middle": [], "last": "Mazaitis", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "William", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4231--4242", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William Co- hen. 2018. Open domain question answering using early fusion of knowledge bases and text. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 4231- 4242.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Automated construction of database interfaces: Integrating statistical and relational learning for semantic parsing", "authors": [ { "first": "R", "middle": [], "last": "Lappoon", "suffix": "" }, { "first": "Raymond J", "middle": [], "last": "Tang", "suffix": "" }, { "first": "", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 2000 Joint SIG-DAT conference on Empirical methods in natural language processing and very large corpora: held in conjunction with the 38th Annual Meeting of the Association for Computational Linguistics", "volume": "13", "issue": "", "pages": "133--141", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lappoon R Tang and Raymond J Mooney. 2000. Au- tomated construction of database interfaces: Inte- grating statistical and relational learning for seman- tic parsing. In Proceedings of the 2000 Joint SIG- DAT conference on Empirical methods in natural language processing and very large corpora: held in conjunction with the 38th Annual Meeting of the As- sociation for Computational Linguistics-Volume 13, pages 133-141. Association for Computational Lin- guistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
In Advances in Neural Information Pro- cessing Systems, pages 5998-6008.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Graph attention networks", "authors": [ { "first": "Petar", "middle": [], "last": "Veli\u010dkovi\u0107", "suffix": "" }, { "first": "Guillem", "middle": [], "last": "Cucurull", "suffix": "" }, { "first": "Arantxa", "middle": [], "last": "Casanova", "suffix": "" }, { "first": "Adriana", "middle": [], "last": "Romero", "suffix": "" }, { "first": "Pietro", "middle": [], "last": "Li\u00f2", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations (ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li\u00f2, and Yoshua Bengio. 2018. Graph attention networks. In International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Pointer networks", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Meire", "middle": [], "last": "Fortunato", "suffix": "" }, { "first": "Navdeep", "middle": [], "last": "Jaitly", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "2692--2700", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015a. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692-2700.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Grammar as a foreign language", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Terry", "middle": [], "last": "Koo", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "2773--2781", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oriol Vinyals, \u0141ukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015b. Gram- mar as a foreign language. In Advances in Neural Information Processing Systems, pages 2773-2781.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Morpho-syntactic lexical generalization for ccg semantic parsing", "authors": [ { "first": "Adrienne", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1284--1295", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adrienne Wang, Tom Kwiatkowski, and Luke Zettle- moyer. 2014. Morpho-syntactic lexical generaliza- tion for ccg semantic parsing. 
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1284-1295.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Google's neural machine translation system: Bridging the gap between human and machine translation", "authors": [ { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Le", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Norouzi", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Qin", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Gao", "suffix": "" }, { "first": "", "middle": [], "last": "Macherey", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1609.08144" ] }, "num": null, "urls": [], "raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural ma- chine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Sequence-based structured prediction for semantic parsing", "authors": [ { "first": "Chunyang", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Dymetman", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Gardent", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1341--1350", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chunyang Xiao, Marc Dymetman, and Claire Gardent. 2016. Sequence-based structured prediction for se- mantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), volume 1, pages 1341-1350.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Representation learning on graphs with jumping knowledge networks", "authors": [ { "first": "Keyulu", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Chengtao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yonglong", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Tomohiro", "middle": [], "last": "Sonobe", "suffix": "" }, { "first": "Ken-Ichi", "middle": [], "last": "Kawarabayashi", "suffix": "" }, { "first": "Stefanie", "middle": [], "last": "Jegelka", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keyulu Xu, Chengtao Li, Yonglong Tian, Tomo- hiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. 2018. Representation learning on graphs with jumping knowledge networks. 
In International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Sqlnet: Generating structured queries from natural language without reinforcement learning", "authors": [ { "first": "Xiaojun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Chang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Dawn", "middle": [], "last": "Song", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1711.04436" ] }, "num": null, "urls": [], "raw_text": "Xiaojun Xu, Chang Liu, and Dawn Song. 2017. Sqlnet: Generating structured queries from natural language without reinforcement learning. arXiv preprint arXiv:1711.04436.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "S-mart: Novel tree-based structured learning algorithms applied to tweet entity linking", "authors": [ { "first": "Yi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "504--513", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Yang and Ming-Wei Chang. 2015. S-mart: Novel tree-based structured learning algorithms applied to tweet entity linking. In Proceedings of the 53rd An- nual Meeting of the Association for Computational Linguistics and the 7th International Joint Confer- ence on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 504-513.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "The value of semantic parse labeling for knowledge base question answering", "authors": [ { "first": "Matthew", "middle": [], "last": "Wen-Tau Yih", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Richardson", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Meek", "suffix": "" }, { "first": "Jina", "middle": [], "last": "Chang", "suffix": "" }, { "first": "", "middle": [], "last": "Suh", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "201--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wen-tau Yih, Matthew Richardson, Chris Meek, Ming- Wei Chang, and Jina Suh. 2016. The value of se- mantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 201-206.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "A syntactic neural model for general-purpose code generation", "authors": [ { "first": "Pengcheng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "440--450", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 440-450.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Typesql: Knowledgebased type-aware neural text-to-sql generation", "authors": [ { "first": "Tao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Zifan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zilin", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "588--594", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao Yu, Zifan Li, Zilin Zhang, Rui Zhang, and Dragomir Radev. 2018a. Typesql: Knowledge- based type-aware neural text-to-sql generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 588-594.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Syntaxsqlnet: Syntax tree networks for complex and cross-domaintext-to-sql task", "authors": [ { "first": "Tao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Michihiro", "middle": [], "last": "Yasunaga", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Dongxu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zifan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.05237" ] }, "num": null, "urls": [], "raw_text": "Tao Yu, Michihiro Yasunaga, Kai Yang, Rui Zhang, Dongxu Wang, Zifan Li, and Dragomir Radev. 2018b. Syntaxsqlnet: Syntax tree networks for complex and cross-domaintext-to-sql task. arXiv preprint arXiv:1810.05237.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task", "authors": [ { "first": "Tao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Michihiro", "middle": [], "last": "Yasunaga", "suffix": "" }, { "first": "Dongxu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zifan", "middle": [], "last": "Li", "suffix": "" }, { "first": "James", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Irene", "middle": [], "last": "Li", "suffix": "" }, { "first": "Qingning", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Shanelle", "middle": [], "last": "Roman", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1809.08887" ] }, "num": null, "urls": [], "raw_text": "Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingn- ing Yao, Shanelle Roman, et al. 2018c. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task. 
arXiv preprint arXiv:1809.08887.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Learning to parse database queries using inductive logic programming", "authors": [ { "first": "M", "middle": [], "last": "John", "suffix": "" }, { "first": "Raymond J", "middle": [], "last": "Zelle", "suffix": "" }, { "first": "", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the thirteenth national conference on Artificial intelligence", "volume": "2", "issue": "", "pages": "1050--1055", "other_ids": {}, "num": null, "urls": [], "raw_text": "John M Zelle and Raymond J Mooney. 1996. Learn- ing to parse database queries using inductive logic programming. In Proceedings of the thirteenth na- tional conference on Artificial intelligence-Volume 2, pages 1050-1055. AAAI Press.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Learning to map sentences to logical form: structured classification with probabilistic categorial grammars", "authors": [ { "first": "S", "middle": [], "last": "Luke", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence", "volume": "", "issue": "", "pages": "658--666", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luke S Zettlemoyer and Michael Collins. 2005. Learn- ing to map sentences to logical form: structured classification with probabilistic categorial gram- mars. In Proceedings of the Twenty-First Confer- ence on Uncertainty in Artificial Intelligence, pages 658-666. AUAI Press.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Online learning of relaxed ccg grammars for parsing to logical form", "authors": [ { "first": "S", "middle": [], "last": "Luke", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2007, "venue": "EMNLP-CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luke S Zettlemoyer and Michael Collins. 2007. On- line learning of relaxed ccg grammars for parsing to logical form. EMNLP-CoNLL 2007, page 678.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Type-driven incremental semantic parsing with polymorphism", "authors": [ { "first": "Kai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1416--1421", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kai Zhao and Liang Huang. 2015. Type-driven in- cremental semantic parsing with polymorphism. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1416-1421.", "links": null } }, "ref_entries": { "FIGREF1": { "num": null, "uris": null, "text": "Figure 1: We use an example from SPIDER to illustrate the model inputs: tokens from the given utterance, x, a set of potentially relevant entities, e, and their relations. 
We highlight two edge label types: edges denoting that an entity spans a token, and edges between entities that, for SPIDER, indicate a foreign key relationship between columns or an ownership relationship between columns and tables.", "type_str": "figure" }, "TABREF3": { "num": null, "html": null, "content": "
Edge Ablations | GEO | ATIS
GNN w/ edge vectors | 89.3 | 87.1
\u2212 entity span edges | 88.6 | 34.2
", "text": "Results for ablating information about entity candidate spans for GEO and ATIS.", "type_str": "table" } } } }