{ "paper_id": "P18-1037", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:37:20.136822Z" }, "title": "AMR Parsing as Graph Prediction with Latent Alignment", "authors": [ { "first": "Chunchuan", "middle": [], "last": "Lyu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": {} }, "email": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "meaning representations (AMRs) are broad-coverage sentence-level semantic representations. AMRs represent sentences as rooted labeled directed acyclic graphs. AMR parsing is challenging partly due to the lack of annotated alignments between nodes in the graphs and words in the corresponding sentences. We introduce a neural parser which treats alignments as latent variables within a joint probabilistic model of concepts, relations and alignments. As exact inference requires marginalizing over alignments and is infeasible, we use the variational autoencoding framework and a continuous relaxation of the discrete alignments. We show that joint modeling is preferable to using a pipeline of align and parse. The parser achieves the best reported results on the standard benchmark (74.4% on LDC2016E25).", "pdf_parse": { "paper_id": "P18-1037", "_pdf_hash": "", "abstract": [ { "text": "meaning representations (AMRs) are broad-coverage sentence-level semantic representations. AMRs represent sentences as rooted labeled directed acyclic graphs. AMR parsing is challenging partly due to the lack of annotated alignments between nodes in the graphs and words in the corresponding sentences. We introduce a neural parser which treats alignments as latent variables within a joint probabilistic model of concepts, relations and alignments. As exact inference requires marginalizing over alignments and is infeasible, we use the variational autoencoding framework and a continuous relaxation of the discrete alignments. We show that joint modeling is preferable to using a pipeline of align and parse. The parser achieves the best reported results on the standard benchmark (74.4% on LDC2016E25).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Abstract meaning representations (AMRs) (Banarescu et al., 2013) are broad-coverage sentencelevel semantic representations. AMR encodes, among others, information about semantic relations, named entities, co-reference, negation and modality. The semantic representations can be regarded as rooted labeled directed acyclic graphs (see Figure 1 ). As AMR abstracts away from details of surface realization, it is potentially beneficial in many semantic related NLP tasks, including text summarization (Liu et al., 2015; Dohare and Karnick, 2017) , machine translation (Jones et al., 2012) and question answering (Mitra and Baral, 2016) . Figure 1: An example of AMR, the dashed lines denote latent alignments, obligate-01 is the root. 
Numbers indicate depth-first traversal order.", "cite_spans": [ { "start": 40, "end": 64, "text": "(Banarescu et al., 2013)", "ref_id": "BIBREF3" }, { "start": 499, "end": 517, "text": "(Liu et al., 2015;", "ref_id": "BIBREF21" }, { "start": 518, "end": 543, "text": "Dohare and Karnick, 2017)", "ref_id": null }, { "start": 566, "end": 586, "text": "(Jones et al., 2012)", "ref_id": "BIBREF15" }, { "start": 610, "end": 633, "text": "(Mitra and Baral, 2016)", "ref_id": "BIBREF29" } ], "ref_spans": [ { "start": 334, "end": 342, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "AMR parsing has recently received a lot of attention (e.g., (Flanigan et al., 2014; Artzi et al., 2015; Konstas et al., 2017) ). One distinctive aspect of AMR annotation is the lack of explicit alignments between nodes in the graph (concepts) and words in the sentences. Though this arguably simplified the annotation process (Banarescu et al., 2013) , it is not straightforward to produce an effective parser without relying on an alignment. Most AMR parsers (Damonte et al., 2017; Flanigan et al., 2016; Werling et al., 2015; Foland and Martin, 2017) use a pipeline where the aligner training stage precedes training a parser. The aligners are not directly informed by the AMR parsing objective and may produce alignments suboptimal for this task.", "cite_spans": [ { "start": 60, "end": 83, "text": "(Flanigan et al., 2014;", "ref_id": "BIBREF12" }, { "start": 84, "end": 103, "text": "Artzi et al., 2015;", "ref_id": "BIBREF1" }, { "start": 104, "end": 125, "text": "Konstas et al., 2017)", "ref_id": "BIBREF19" }, { "start": 326, "end": 350, "text": "(Banarescu et al., 2013)", "ref_id": "BIBREF3" }, { "start": 460, "end": 482, "text": "(Damonte et al., 2017;", "ref_id": "BIBREF7" }, { "start": 483, "end": 505, "text": "Flanigan et al., 2016;", "ref_id": "BIBREF11" }, { "start": 506, "end": 527, "text": "Werling et al., 2015;", "ref_id": "BIBREF40" }, { "start": 528, "end": 552, "text": "Foland and Martin, 2017)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "The boys must not go", "sec_num": null }, { "text": "In this work, we demonstrate that the alignments can be treated as latent variables in a joint probabilistic model and induced in such a way as to be beneficial for AMR parsing. Intuitively, in our probabilistic model, every node in a graph is assumed to be aligned to a word in a sentence: each concept is predicted based on the corresponding RNN state. Similarly, graph edges (i.e. relations) are predicted based on representations of concepts and aligned words (see Figure 2 ). As alignments are latent, exact inference requires marginalizing over latent alignments, which is in-feasible. Instead we use variational inference, specifically the variational autoencoding framework of Kingma and Welling (2014) . Using discrete latent variables in deep learning has proven to be challenging (Mnih and Gregor, 2014; Bornschein and Bengio, 2015) . We use a continuous relaxation of the alignment problem, relying on the recently introduced Gumbel-Sinkhorn construction (Mena et al., 2018) . 
This yields a computationally-efficient approximate method for estimating our joint probabilistic model of concepts, relations and alignments.", "cite_spans": [ { "start": 685, "end": 710, "text": "Kingma and Welling (2014)", "ref_id": "BIBREF17" }, { "start": 791, "end": 814, "text": "(Mnih and Gregor, 2014;", "ref_id": "BIBREF30" }, { "start": 815, "end": 843, "text": "Bornschein and Bengio, 2015)", "ref_id": "BIBREF4" }, { "start": 967, "end": 986, "text": "(Mena et al., 2018)", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 469, "end": 477, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "The boys must not go", "sec_num": null }, { "text": "We assume injective alignments from concepts to words: every node in the graph is aligned to a single word in the sentence and every word is aligned to at most one node in the graph. This is necessary for two reasons. First, it lets us treat concept identification as sequence tagging at test time. For every word we would simply predict the corresponding concept or predict NULL to signify that no concept should be generated at this position. Secondly, Gumbel-Sinkhorn can only work under this assumption. This constraint, though often appropriate, is problematic for certain AMR constructions (e.g., named entities). In order to deal with these cases, we re-categorized AMR concepts. Similar recategorization strategies have been used in previous work (Foland and Martin, 2017; Peng et al., 2017) .", "cite_spans": [ { "start": 755, "end": 780, "text": "(Foland and Martin, 2017;", "ref_id": "BIBREF13" }, { "start": 781, "end": 799, "text": "Peng et al., 2017)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "The boys must not go", "sec_num": null }, { "text": "The resulting parser achieves 74.4% Smatch score on the standard test set when using LDC2016E25 training set, 1 an improvement of 3.4% over the previous best result (van Noord and Bos, 2017) . We also demonstrate that inducing alignments within the joint model is indeed beneficial. When, instead of inducing alignments, we follow the standard approach and produce them on preprocessing, the performance drops by 0.9% Smatch. Our main contributions can be summarized as follows:", "cite_spans": [ { "start": 165, "end": 190, "text": "(van Noord and Bos, 2017)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "The boys must not go", "sec_num": null }, { "text": "\u2022 we introduce a joint probabilistic model for alignment, concept and relation identification;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The boys must not go", "sec_num": null }, { "text": "\u2022 we demonstrate that a continuous relaxation can be used to effectively estimate the model;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The boys must not go", "sec_num": null }, { "text": "\u2022 the model achieves the best reported results. 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The boys must not go", "sec_num": null }, { "text": "1 The standard deviation across multiple training runs was 0.16%. 2 The code can be accessed from https://github. com/ChunchuanLv/AMR_AS_GRAPH_PREDICTION", "cite_spans": [ { "start": 66, "end": 67, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The boys must not go", "sec_num": null }, { "text": "In this section we describe our probabilistic model and the estimation technique. 
In section 3, we describe preprocessing and post-processing (including concept re-categorization, sense disambiguation, wikification and root selection).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Model", "sec_num": "2" }, { "text": "We will use the following notation throughout the paper. We refer to words in the sentences as w = (w 1 , . . . , w n ), where n is sentence length, w k \u2208 V for k \u2208 {1 . . . , n}. The concepts (i.e. labeled nodes) are c = (c 1 , . . . , c m ), where m is the number of concepts and c i \u2208 C for i \u2208 {1 . . . , m}. For example, in Figure 1 , c = (obligate, go, boy, -). 3 Note that senses are predicted at post-processing, as discussed in Section 3.2 (i.e. go is labeled as go-02).", "cite_spans": [], "ref_spans": [ { "start": 329, "end": 337, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Notation and setting", "sec_num": "2.1" }, { "text": "A relation between 'predicate concept' i and 'argument concept' j is denoted by r ij \u2208 R; it is set to NULL if j is not an argument of i. In our example, r 2,3 = ARG0 and r 1,3 = NULL. We will use R to denote all relations in the graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notation and setting", "sec_num": "2.1" }, { "text": "To represent alignments, we will use a = {a 1 , . . . , a m }, where a i \u2208 {1, . . . , n} returns the index of a word aligned to concept i. In our example, a 1 = 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notation and setting", "sec_num": "2.1" }, { "text": "All three model components rely on bidirectional LSTM encoders (Schuster and Paliwal, 1997) . We denote states of BiLSTM (i.e. concatenation of forward and backward LSTM states) as h k \u2208 R d (k \u2208 {1, . . . , n}). The sentence encoder takes pre-trained fixed word embeddings, randomly initialized lemma embeddings, part-ofspeech and named-entity tag embeddings.", "cite_spans": [ { "start": 63, "end": 91, "text": "(Schuster and Paliwal, 1997)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Notation and setting", "sec_num": "2.1" }, { "text": "We believe that using discrete alignments, rather than attention-based models (Bahdanau et al., 2015) is crucial for AMR parsing. AMR banks are a lot smaller than parallel corpora used in machine translation (MT) and hence it is important to inject a useful inductive bias. We constrain our alignments from concepts to words to be injective. First, it encodes the observation that concepts are mostly triggered by single words (especially, after re-categorization, Section 3.1). Second, it implies that each word corresponds to at most one concept (if any). This encourages competition: alignments are mutually-repulsive. In our example, obligate is not lexically similar to the word must and may be hard to align. However, given that other concepts are easy to predict, alignment candidates other than must and the will be immediately ruled out. We believe that these are the key reasons for why attention-based neural models do not achieve competitive results on AMR (Konstas et al., 2017) and why state-of-the-art models rely on aligners. 
Our goal is to combine best of two worlds: to use alignments (as in state-of-the-art AMR methods) and to induce them while optimizing for the end goal (similarly to the attention component of encoder-decoder models).", "cite_spans": [ { "start": 78, "end": 101, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF2" }, { "start": 969, "end": 991, "text": "(Konstas et al., 2017)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Method overview", "sec_num": "2.2" }, { "text": "Our model consists of three parts: (1) the concept identification model P \u03b8 (c|a, w); (2) the relation identification model P \u03c6 (R|a, w, c) and (3) the alignment model Q \u03c8 (a|c, R, w). 4 Formally, (1) and (2) together with the uniform prior over alignments P (a) form the generative model of AMR graphs. In contrast, the alignment model Q \u03c8 (a|c, R, w), as will be explained below, is approximating the intractable posterior P \u03b8,\u03c6 (a|c, R, w) within that probabilistic model. In other words, we assume the following model for generating the AMR graph:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method overview", "sec_num": "2.2" }, { "text": "P \u03b8,\u03c6 (c, R|w) = a P (a)P \u03b8 (c|a, w)P \u03c6 (R|a, w, c) = a P (a) m i=1 P (c i |h a i ) m i,j=1 P (r ij |h a i ,c i ,h a j ,c j )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method overview", "sec_num": "2.2" }, { "text": "4 \u03b8, \u03c6 and \u03c8 denote all parameters of the models. AMR concepts are assumed to be generated conditional independently relying on the BiLSTM states and surface forms of the aligned words. Similarly, relations are predicted based only on AMR concept embeddings and LSTM states corresponding to words aligned to the involved concepts. Their combined representations are fed into a bi-affine classifier (Dozat and Manning, 2017) ", "cite_spans": [ { "start": 398, "end": 423, "text": "(Dozat and Manning, 2017)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Method overview", "sec_num": "2.2" }, { "text": "(see Fig- ure 2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method overview", "sec_num": "2.2" }, { "text": "The expression involves intractable marginalization over all valid alignments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method overview", "sec_num": "2.2" }, { "text": "As standard in variational autoencoders, VAEs (Kingma and Welling, 2014), we lower-bound the loglikelihood as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method overview", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "log P \u03b8,\u03c6 (c, R|w) \u2265 E Q [log P \u03b8 (c|a, w)P \u03c6 (R|a, w, c)] \u2212 D KL (Q \u03c8 (a|c, R, w)||P (a)),", "eq_num": "(1)" } ], "section": "Method overview", "sec_num": "2.2" }, { "text": "where Q \u03c8 (a|c, R, w) is the variational posterior (aka the inference network), E Q [. . .] refers to the expectation under Q \u03c8 (a|c, R, w) and D KL is the Kullback-Liebler divergence. In VAEs, the lower bound is maximized both with respect to model parameters (\u03b8 and \u03c6 in our case) and the parameters of the inference network (\u03c8). Unfortunately, gradient-based optimization with discrete latent variables is challenging. 
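As a concrete illustration of what has to be marginalized, the following is a minimal PyTorch-style sketch of the joint log-likelihood log P(c, R | a, w) for a single fixed alignment a; the scoring functions and tensor shapes are illustrative placeholders, not the parser's actual architecture (which is described in Sections 2.3 and 2.4).

```python
import torch

def joint_log_prob(h, a, concept_logits_fn, relation_logits_fn, c, R):
    """h: BiLSTM states [n, d]; a: LongTensor [m], word index aligned to each concept;
    c: gold concept ids [m]; R: gold relation ids [m, m] (NULL relation included).
    concept_logits_fn / relation_logits_fn are hypothetical scoring modules."""
    h_a = h[a]                                                       # states of aligned words, [m, d]
    # sum_i log P(c_i | h_{a_i})
    c_logp = torch.log_softmax(concept_logits_fn(h_a), dim=-1)      # [m, |C|]
    concept_term = c_logp.gather(-1, c.unsqueeze(-1)).sum()
    # sum_{i,j} log P(r_ij | h_{a_i}, c_i, h_{a_j}, c_j)
    r_logp = torch.log_softmax(relation_logits_fn(h_a, c), dim=-1)  # [m, m, |R|]
    relation_term = r_logp.gather(-1, R.unsqueeze(-1)).sum()
    return concept_term + relation_term                              # log P(c, R | a, w)
```

The objective (1) requires the expectation of this quantity under Q_psi(a|c, R, w), which cannot be computed by enumerating alignments.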
We use a continuous relaxation of our optimization problem, where realvalued vectors\u00e2 i \u2208 R n (for every concept i) approximate discrete alignment variables a i . This relaxation results in low-variance estimates of the gradient using the parameterization trick (Kingma and Welling, 2014), and ensures fast and stable training. We will describe the model components and the relaxed inference procedure in detail in sections 2.6 and 2.7. Though the estimation procedure requires the use of the relaxation, the learned parser is straightforward to use. Given our assumptions about the alignments, we can independently choose for each word w k (k = 1, . . . , m) the most probably concept according to P \u03b8 (c|h k ). If the highest scoring option is NULL, no concept is introduced. The relations could then be predicted relying on P \u03c6 (R|a, w, c). This would have led to generating inconsistent AMR graphs, so instead we search for the highest scoring valid graph (see Section 3.2). Note that the alignment model Q \u03c8 is not used at test time and only necessary to train accurate concept and relation identification models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method overview", "sec_num": "2.2" }, { "text": "The concept identification model chooses a concept c (i.e. a labeled node) conditioned on the aligned word k or decides that no concept should be introduced (i.e. returns NULL). Though it can be modeled with a softmax classifier, it would not be effective in handling rare or unseen words. First, we split the decision into estimating the probability of concept category \u03c4 (c) \u2208 T (e.g. 'number', 'frame') and estimating the probability of the specific concept within the chosen category. Second, based on a lemmatizer and training data 5 we prepare one candidate concept e k for each word k in vocabulary (e.g., it would propose want if the word is wants). Similar to Luong et al. (2015) , our model can then either copy the candidate e k or rely on the softmax over potential concepts of category \u03c4 . Formally, the concept prediction model is defined as", "cite_spans": [ { "start": 669, "end": 688, "text": "Luong et al. (2015)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Concept identification model", "sec_num": "2.3" }, { "text": "P \u03b8 (c|h k , w k ) = P (\u03c4 (c)|h k , w k )\u00d7 [[e k = c]] \u00d7 exp(v T copy h k ) + exp(v T c h k ) Z(h k , \u03b8) ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept identification model", "sec_num": "2.3" }, { "text": "where the first multiplicative term is a softmax classifier over categories (including . .] ] denotes the indicator function and equals 1 if its argument is true and 0, otherwise; Z(h, \u03b8) is the partition function ensuring that the scores sum to 1.", "cite_spans": [ { "start": 87, "end": 91, "text": ". 
.]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Concept identification model", "sec_num": "2.3" }, { "text": "NULL); v copy , v c \u2208 R d (for c \u2208 C) are model parameters; [[.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept identification model", "sec_num": "2.3" }, { "text": "We use the following arc-factored relation identification model:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation identification model", "sec_num": "2.4" }, { "text": "P \u03c6 (R|a, w, c) = m i,j=1 P (r ij |h a i ,c i ,h a j ,c j ) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation identification model", "sec_num": "2.4" }, { "text": "Each term is modeled in exactly the same way:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation identification model", "sec_num": "2.4" }, { "text": "1. for both endpoints, embedding of the concept c is concatenated with the RNN state h;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation identification model", "sec_num": "2.4" }, { "text": "2. they are linearly projected to a lower dimension separately through", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation identification model", "sec_num": "2.4" }, { "text": "M h (h a i \u2022 c i ) \u2208 R d f and M d (h a j \u2022 c j ) \u2208 R d f , where \u2022 denotes concatenation;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation identification model", "sec_num": "2.4" }, { "text": "3. a log-linear model with bilinear scores", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation identification model", "sec_num": "2.4" }, { "text": "M h (h a i \u2022 c i ) T C r M d (h a j \u2022 c j ), C r \u2208 R d f \u00d7d f", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation identification model", "sec_num": "2.4" }, { "text": "is used to compute the probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation identification model", "sec_num": "2.4" }, { "text": "5 See supplementary materials.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation identification model", "sec_num": "2.4" }, { "text": "In the above discussion, we assumed that BiL-STM encodes a sentence once and the BiLSTM states are then used to predict concepts and relations. In semantic role labeling, the task closely related to the relation identification stage of AMR parsing, a slight modification of this approach was shown more effective (Zhou and Xu, 2015; . In that previous work, the sentence was encoded by a BiLSTM once per each predicate (i.e. verb) and the encoding was in turn used to identify arguments of that predicate. The only difference across the re-encoding passes was a binary flag used as input to the BiL-STM encoder at each word position. The flag was set to 1 for the word corresponding to the predicate and to 0 for all other words. In that way, BiLSTM was encoding the sentence specifically for predicting arguments of a given predicate. Inspired by this approach, when predicting label r ij for j \u2208 {1, . . . m}, we input binary flags p 1 , . . . p n to the BiLSTM encoder which are set to 1 for the word indexed by a i (p a i = 1) and to 0 for other words (p j = 0, for j = a i ). This also means that BiLSTM encoders for predicting relations and concepts end up being distinct. We use this multi-pass approach in our experiments. 
6", "cite_spans": [ { "start": 313, "end": 332, "text": "(Zhou and Xu, 2015;", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Relation identification model", "sec_num": "2.4" }, { "text": "Recall that the alignment model is only used at training, and hence it can rely both on input (states h 1 , . . . , h n ) and on the list of concepts c 1 , . . . , c m .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment model", "sec_num": "2.5" }, { "text": "Formally, we add (m\u2212n) NULL concepts to the list. 7 Aligning a word to any NULL, would correspond to saying that the word is not aligned to any 'real' concept. Note that each one-to-one alignment (i.e. permutation) between n such concepts and n words implies a valid injective alignment of n words to m 'real' concepts. This reduction to permutations will come handy when we turn to the Gumbel-Sinkhorn relaxation in the next section. Given this reduction, from now on, we will assume that m = n.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment model", "sec_num": "2.5" }, { "text": "As with sentences, we use a BiLSTM model to encode concepts c, where g i \u2208 R dg , i \u2208 {1, . . . , n}. We use a globally-normalized align-ment model:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment model", "sec_num": "2.5" }, { "text": "Q \u03c8 (a|c, R, w) = exp( n i=1 \u03d5(g i , h a i )) Z \u03c8 (c, w) ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment model", "sec_num": "2.5" }, { "text": "where Z \u03c8 (c, w) is the intractable partition function and the terms \u03d5(g i , h a i ) score each alignment link according to a bilinear form", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment model", "sec_num": "2.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03d5(g i , h a i ) = g T i Bh a i ,", "eq_num": "(3)" } ], "section": "Alignment model", "sec_num": "2.5" }, { "text": "where B \u2208 R dg\u00d7d is a parameter matrix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment model", "sec_num": "2.5" }, { "text": "Recall that our learning objective (1) involves expectation under the alignment model. The partition function of the alignment model Z \u03c8 (c, w) is intractable, and it is tricky even to draw samples from the distribution. Luckily, the recently proposed relaxation (Mena et al., 2018) lets us circumvent this issue. First, note that exact samples from a categorical distribution can be obtained using the perturb-and-max technique (Papandreou and Yuille, 2011) . 
For our alignment model, it would correspond to adding independent noise to the score for every possible alignment and choosing the highest scoring one:", "cite_spans": [ { "start": 263, "end": 282, "text": "(Mena et al., 2018)", "ref_id": "BIBREF28" }, { "start": 429, "end": 458, "text": "(Papandreou and Yuille, 2011)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Estimating model with Gumbel-Sinkhorn", "sec_num": "2.6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a = argmax a\u2208P n i=1 \u03d5(g i , h a i ) + a ,", "eq_num": "(4)" } ], "section": "Estimating model with Gumbel-Sinkhorn", "sec_num": "2.6" }, { "text": "where P is the set of all permutations of n elements, a is a noise drawn independently for each a from the fixed Gumbel distribution (G(0, 1)). Unfortunately, this is also intractable, as there are n! permutations. Instead, in perturband-max an approximate schema is used where noise is assumed factorizable. In other words, first noisy scores are computed as\u03c6( 1) and an approximate sample is obtained by a = argmax a n i=1\u03c6 (g i , h a i ), Such sampling procedure is still intractable in our case and also non-differentiable. The main contribution of Mena et al. (2018) is approximating this argmax with a simple differentiable computation\u00e2 = S t (\u03a6, \u03a3) which yields an approximate (i.e. relaxed) permutation. We use \u03a6 and \u03a3 to denote the n \u00d7 n matrices of alignment scores \u03d5(g i , h k ) and noise variables ik , respectively. Instead of returning index a i for every concept i, it would return a (peaky) distribution over word\u015d a i . The peakiness is controlled by the temperature parameter t of Gumbel-Sinkhorn which balances smoothness ('differentiability') vs. bias of the estimator. For further details and the derivation, we refer the reader to the original paper (Mena et al., 2018) .", "cite_spans": [ { "start": 553, "end": 571, "text": "Mena et al. (2018)", "ref_id": "BIBREF28" }, { "start": 1172, "end": 1191, "text": "(Mena et al., 2018)", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 362, "end": 364, "text": "1)", "ref_id": null } ], "eq_spans": [], "section": "Estimating model with Gumbel-Sinkhorn", "sec_num": "2.6" }, { "text": "g i , h a i ) = \u03d5(g i , h a i ) + i,a i , where i,a i \u223c G(0,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimating model with Gumbel-Sinkhorn", "sec_num": "2.6" }, { "text": "Note that \u03a6 is a function of the alignment model Q \u03c8 , so we will write \u03a6 \u03c8 in what follows. The variational bound (1) can now be approximated as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimating model with Gumbel-Sinkhorn", "sec_num": "2.6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E \u03a3\u223cG(0,1) [log P \u03b8 (c|S t (\u03a6 \u03c8 , \u03a3), w) + log P \u03c6 (R|S t (\u03a6 \u03c8 , \u03a3), w, c)] \u2212 D KL ( \u03a6 \u03c8 + \u03a3 t || \u03a3 t 0 )", "eq_num": "(5)" } ], "section": "Estimating model with Gumbel-Sinkhorn", "sec_num": "2.6" }, { "text": "Following Mena et al. (2018) , the original KL term from equation 1is approximated by the KL term between two n \u00d7 n matrices of i.i.d. Gumbel distributions with different temperature and mean. The parameter t 0 is the 'prior temperature'. 
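For readers unfamiliar with the construction, the following is a hedged sketch of the Gumbel-Sinkhorn operator S_t(Phi, Sigma); the number of normalization iterations and the temperature are illustrative choices, not the values used in our experiments.

```python
import torch

def gumbel_sinkhorn(phi, t=1.0, n_iters=20, eps=1e-20):
    """phi: [n, n] matrix of alignment scores phi(g_i, h_k) = g_i^T B h_k.
    Returns a relaxed permutation (approximately doubly-stochastic matrix)."""
    gumbel = -torch.log(-torch.log(torch.rand_like(phi) + eps) + eps)  # Gumbel(0, 1) noise
    log_alpha = (phi + gumbel) / t
    # Sinkhorn normalization: alternately normalize rows and columns in log space
    for _ in range(n_iters):
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=1, keepdim=True)
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=0, keepdim=True)
    return log_alpha.exp()   # row i is the peaky distribution over word positions for concept i
```

The rows of this matrix replace the hard alignments a_i when computing the relaxed objective (5).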
Using the Gumbel-Sinkhorn construction unfortunately does not guarantee that i\u00e2 ij = 1. To encourage this equality to hold, and equivalently to discourage overlapping alignments, we add another regularizer to the objective 5:", "cite_spans": [ { "start": 10, "end": 28, "text": "Mena et al. (2018)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Estimating model with Gumbel-Sinkhorn", "sec_num": "2.6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2126(\u00e2, \u03bb) = \u03bb j max( i (\u00e2 ij ) \u2212 1, 0).", "eq_num": "(6)" } ], "section": "Estimating model with Gumbel-Sinkhorn", "sec_num": "2.6" }, { "text": "Our final objective is fully differentiable with respect to all parameters (i.e. \u03b8, \u03c6 and \u03c8) and has low variance as sampling is performed from the fixed non-parameterized distribution, as in standard VAEs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimating model with Gumbel-Sinkhorn", "sec_num": "2.6" }, { "text": "One remaining question is how to use the soft input\u00e2 = S t (\u03a6 \u03c8 , \u03a3) in the concept and relation identification models in equation 5. In other words, we need to define how we compute P \u03b8 (c|S t (\u03a6 \u03c8 , \u03a3), w) and P \u03c6 (R|S t (\u03a6 \u03c8 , \u03a3), w, c). The standard technique would be to pass to the models expectations under the relaxed variables n k=1\u00e2 ik h k , instead of the vectors h a i (Maddison et al., 2017; Jang et al., 2017) . This is what we do for the relation identification model. We use this approach also to relax the one-hot encoding of the predicate position (p, see Section 2.4).", "cite_spans": [ { "start": 381, "end": 404, "text": "(Maddison et al., 2017;", "ref_id": "BIBREF24" }, { "start": 405, "end": 423, "text": "Jang et al., 2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Relaxing concept and relation identification", "sec_num": "2.7" }, { "text": "However, the concept prediction model log P \u03b8 (c|S t (\u03a6 \u03c8 , \u03a3), w) relies on the pointing mechanism, i.e. directly exploits the words w rather than relies only on biLSTM states h k . So instead we treat\u00e2 i as a prior in a hierarchical model:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relaxing concept and relation identification", "sec_num": "2.7" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "logP \u03b8 (c i |\u00e2 i , w) \u2248 log n k=1\u00e2 ik P \u03b8 (c i |a i = k, w)", "eq_num": "(7)" } ], "section": "Relaxing concept and relation identification", "sec_num": "2.7" }, { "text": "As we will show in our experiments, a softer version of the loss is even more effective:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relaxing concept and relation identification", "sec_num": "2.7" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "logP \u03b8 (c i |\u00e2 i , w) \u2248 log n k=1 (\u00e2 ik P \u03b8 (c i |a i = k, w)) \u03b1 ,", "eq_num": "(8)" } ], "section": "Relaxing concept and relation identification", "sec_num": "2.7" }, { "text": "where we set the parameter \u03b1 = 0.5. We believe that using this loss encourages the model to more actively explore the alignment space. 
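The difference between the two relaxations can be written in a few lines; this is a hedged sketch in which the per-word concept log-probabilities are assumed to be given.

```python
import torch

def relaxed_concept_loss(a_hat_i, logp_c_given_k, alpha=0.5, eps=1e-12):
    """a_hat_i: [n] row of S_t(Phi, Sigma) for concept i (relaxed alignment over words);
    logp_c_given_k: [n] values of log P(c_i | a_i = k, w) for every word position k."""
    p = a_hat_i * logp_c_given_k.exp()              # a_hat_ik * P(c_i | a_i = k, w)
    hierarchical = torch.log(p.sum() + eps)         # equation (7)
    soft = torch.log((p ** alpha).sum() + eps)      # equation (8), used in our experiments
    return hierarchical, soft
```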
Geometrically, the loss surface shaped as a ball in the 0.5norm space would push the model away from the corners, thus encouraging exploration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relaxing concept and relation identification", "sec_num": "2.7" }, { "text": "3 Pre-and post-pocessing", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relaxing concept and relation identification", "sec_num": "2.7" }, { "text": "AMR parsers often rely on a pre-processing stage, where specific subgraphs of AMR are grouped together and assigned to a single node with a new compound category (e.g., Werling et al. (2015) ; Foland and Martin (2017); Peng et al. (2017)); this transformation is reversed at the post-processing stage. Our approach is very similar to the Factored Concept Label system of , with one important difference that we unpack our concepts before the relation identification stage, so the relations are predicted between original concepts (all nodes in each group share the same alignment distributions to the RNN states). Intuitively, the goal is to ensure that concepts rarely lexically triggered (e.g., thing in Figure 3 ) get grouped together with lexically triggered nodes.", "cite_spans": [ { "start": 169, "end": 190, "text": "Werling et al. (2015)", "ref_id": "BIBREF40" } ], "ref_spans": [ { "start": 706, "end": 714, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Re-Categorization", "sec_num": "3.1" }, { "text": "Such 'primary' concepts get encoded in the category of the concept (the set of categories is \u03c4 , see also section 2.3). In Figure 3 , the re-categorized concept thing(opinion) is produced from thing and opine-01. We use concept as the dummy category type. There are 8 templates in our system which extract re-categorizations for fixed phrases (e.g. thing(opinion)), and a deterministic system for grouping lexically flexible, but structurally stable sub-graphs (e.g., named entities, have-rel-role-91 and have-org-role-91 concepts).", "cite_spans": [], "ref_spans": [ { "start": 123, "end": 131, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Re-Categorization", "sec_num": "3.1" }, { "text": "Details of the re-categorization procedure and other pre-processing are provided in appendix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Re-Categorization", "sec_num": "3.1" }, { "text": "For post-processing, we handle sensedisambiguation, wikification and ensure legitimacy of the produced AMR graph. For sense disambiguation we pick the most frequent sense for that particular concept ('-01', if unseen). For wikification we again look-up in the training set and default to \"-\". There is certainly room for improvement in both stages. Our probability model predicts edges conditional independently and thus cannot guarantee the connectivity of AMR graph, also there are additional constraints which are useful to impose. We enforce three constraints: (1) specific concepts can have only one neighbor (e.g., 'number' and 'string'; see appendix for details); (2) each predicate concept can have at most one argument for each relation r \u2208 R;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-processing", "sec_num": "3.2" }, { "text": "(3) the graph should be connected. Constraint (1) is addressed by keeping only the highest scoring neighbor. In order to satisfy the last two constraints we use a simple greedy procedure. 
First, for each edge, we pick-up the highest scoring relation and edge (possibly NULL). If the constraint (2) is violated, we simply keep the highest scoring edge among the duplicates and drop the rest. If the graph is not connected (i.e. constraint (3) is violated), we greedily choose edges linking the connected components until the graph gets connected (MSCG in Flanigan et al. (2014) ).", "cite_spans": [ { "start": 545, "end": 576, "text": "(MSCG in Flanigan et al. (2014)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Post-processing", "sec_num": "3.2" }, { "text": "Finally, we need to select a root node. Similarly to relation identification, for each candidate concept c i , we concatenate its embedding with the corresponding LSTM state (h a i ) and use these scores in a softmax classifier over all the concepts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-processing", "sec_num": "3.2" }, { "text": "Data Smatch JAMR (Flanigan et al., 2016) R1 67.0 AMREager (Damonte et al., 2017) R1 64.0 CAMR (Wang et al., 2016) R1 66.5 SEQ2SEQ + 20M (Konstas et al., 2017) R1", "cite_spans": [ { "start": 17, "end": 40, "text": "(Flanigan et al., 2016)", "ref_id": "BIBREF11" }, { "start": 58, "end": 80, "text": "(Damonte et al., 2017)", "ref_id": "BIBREF7" }, { "start": 94, "end": 113, "text": "(Wang et al., 2016)", "ref_id": "BIBREF38" }, { "start": 136, "end": 158, "text": "(Konstas et al., 2017)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "62.1 Mul-BiLSTM (Foland and Martin, 2017) R1 70.7 Ours R1 73.7 Neural-Pointer (Buys and Blunsom, 2017) R2 61.9 ChSeq (van Noord and Bos, 2017) R2 64.0 ChSeq + 100K (van Noord and Bos, 2017) R2 71.0 Ours R2 74.4 \u00b1 0.16 4 Experiments and Discussion", "cite_spans": [ { "start": 16, "end": 41, "text": "(Foland and Martin, 2017)", "ref_id": "BIBREF13" }, { "start": 78, "end": 102, "text": "(Buys and Blunsom, 2017)", "ref_id": "BIBREF6" }, { "start": 117, "end": 142, "text": "(van Noord and Bos, 2017)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "We primarily focus on the most recent LDC2016E25 (R2) dataset, which consists of 36521, 1368 and 1371 sentences in training, development and testing sets, respectively. The earlier LDC2015E86 (R1) dataset has been used by much of the previous work. It contains 16833 training sentences, and same sentences for development and testing as R2. 8 We used the development set to perform model selection and hyperparameter tuning. The hyperparameters, as well as information about embeddings and pre-processing, are presented in the supplementary materials.", "cite_spans": [ { "start": 341, "end": 342, "text": "8", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data and setting", "sec_num": "4.1" }, { "text": "We used Adam (Kingma and Ba, 2014) to optimize the loss (5) and to train the root classifier. Our best model is trained fully jointly, and we do early stopping on the development set scores. Training takes approximately 6 hours on a single GeForce GTX 1080 Ti with Intel Xeon CPU E5-2620 v4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data and setting", "sec_num": "4.1" }, { "text": "We start by comparing our parser to previous work (see Table 1 ). Our model substantially outperforms all the previous models on both datasets. 
Specifically, it achieves 74.4% Smatch score on LDC2016E25 (R2), which is an improvement of 3.4% over character seq2seq model relying on silver data (van Noord and Bos, 2017). For LDC2015E86 (R1), we obtain 73.7% Smatch score, which is an improvement of 3.0% over 8 Annotation in R2 has also been slightly revised. Foland and Martin (2017) . In order to disentangle individual phenomena, we use the AMR-evaluation tools (Damonte et al., 2017) and compare to systems which reported these scores (Table 2) . We obtain the highest scores on most subtasks. The exception is negation detection. However, this is not too surprising as many negations are encoded with morphology, and character models, unlike our word-level model, are able to capture predictive morphological features (e.g., detect prefixes such as \"un-\" or \"im-\").", "cite_spans": [ { "start": 408, "end": 409, "text": "8", "ref_id": null }, { "start": 459, "end": 483, "text": "Foland and Martin (2017)", "ref_id": "BIBREF13" }, { "start": 564, "end": 586, "text": "(Damonte et al., 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 55, "end": 62, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 638, "end": 647, "text": "(Table 2)", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experiments and discussion", "sec_num": "4.2" }, { "text": "Now, we turn to ablation tests (see Table 3 ). First, we would like to see if our latent alignment framework is beneficial. In order to test this, we create a baseline version of our system ('prealign') which relies on the JAMR aligner (Flani- Figure 4 : When modeling concepts alone, the posterior probability of the correct (green) and wrong (red) alignment links will be the same.", "cite_spans": [], "ref_spans": [ { "start": 36, "end": 43, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 244, "end": 252, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Experiments and discussion", "sec_num": "4.2" }, { "text": "Concepts SRL Smatch 2 stages 85.6 68.9 73.6 2 stages, tune align 85.6 69.2 73.9 Full model 85.9 69.8 74.4 Table 4 : Ablation studies: effect of joint modeling (all on R2). Scores on ablations are averaged over 2 runs. The first two models load the same concept and alignment model before the second stage.", "cite_spans": [], "ref_spans": [ { "start": 106, "end": 113, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Ablation", "sec_num": null }, { "text": "gan et al., 2014), rather than induces alignments as latent variables. Recall that in our model we used training data and a lemmatizer to produce candidates for the concept prediction model (see Section 2.3, the copy function). In order to have a fair comparison, if a concept is not aligned after JAMR, we try to use our copy function to align it. If an alignment is not found, we make the alignment uniform across the unaligned words. In preliminary experiments, we considered alternatives versions (e.g., dropping concepts unaligned by JAMR or dropping concepts unaligned after both JAMR and the matching heuristic), but the chosen strategy was the most effective. These scores of pre-align are superior to the results from Foland and Martin (2017) which also relies on JAMR alignments and uses BiLSTM encoders. There are many potential reasons for this difference in performance. 
For example, their relation identification model is different (e.g., single pass, no bi-affine modeling), they used much smaller networks than us, they use plain JAMR rather than a combination of JAMR and our copy function, they use a different recategorization system. These results confirm that we started with a strong basic model, and that our variational alignment framework provided further gains in performance. Now we would like to confirm that joint training of alignments with both concepts and relations is beneficial. In other words, we would like to see if alignments need to be induced in such a way as to benefit the relation identification task. For this ablation we break the full joint training into two stages. We start by jointly training the alignment model and the concept identification model. When these are trained, we optimizing the relation model but keep the concept identification model and alignment models fixed ('2 stages' in see Table 4) . When compared to our joint model ('full model'), we observe a substantial drop in Smatch score (-0.8%). In another version ('2 stages, tune align') we also use two stages but we fine-tune the alignment model on the second stage. This approach appears slightly more accurate but still -0.5% below the full model. In both cases, the drop is more substantial for relations ('SRL'). In order to see why relations are potentially useful in learning alignments, consider Figure 4 . The example contains duplicate concepts long. The concept prediction model factorizes over concepts and does not care which way these duplicates are aligned: correctly (green edges) or not (red edges). Formally, the true posterior under the conceptonly model in '2 stages' assigns exactly the same probability to both configurations, and the alignment model Q \u03c8 will be forced to mimic it (even though it relies on an LSTM model of the graph).", "cite_spans": [ { "start": 727, "end": 751, "text": "Foland and Martin (2017)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 1846, "end": 1854, "text": "Table 4)", "ref_id": null }, { "start": 2322, "end": 2330, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Ablation", "sec_num": null }, { "text": "The spurious ambiguity will have a detrimental effect on the relation identification stage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation", "sec_num": null }, { "text": "It is interesting to see the contribution of other modeling decisions we made when modeling and relaxing alignments. First, instead of using Gumbel-Sinkhorn, which encourages mutuallyrepulsive alignments, we now use a factorized alignment model. Note that this model ('No Sinkhorn' in Table 5 ) still relies on (relaxed) discrete alignments (using Gumbel softmax) but does not constrain the alignments to be injective. A substantial drop in performance indicates that the prior knowledge about the nature of alignments appears beneficial. Second, we remove the additional regularizer for Gumbel-Sinkhorn approximation (equation (6)). The performance drop in Smatch score ('No Sinkhorn reg') is only moderate. Finally, we show that using the simple hierarchical relaxation (equation (7)) rather than our softer version of the loss (equation (8)) results in a substantial drop in performance ('No soft loss', -0.7% Smatch). 
We hypothesize that the softer relaxation favors exploration of alignments and helps to discover better configurations.", "cite_spans": [], "ref_spans": [ { "start": 285, "end": 292, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Ablation", "sec_num": null }, { "text": "Alignment performance has been previously identified as a potential bottleneck affecting AMR parsing (Damonte et al., 2017; Foland and Martin, 2017) . Some recent work has focused on building aligners specifically for training their parsers (Werling et al., 2015; . However, those aligners are trained independently of concept and relation identification and only used at pre-processing.", "cite_spans": [ { "start": 101, "end": 123, "text": "(Damonte et al., 2017;", "ref_id": "BIBREF7" }, { "start": 124, "end": 148, "text": "Foland and Martin, 2017)", "ref_id": "BIBREF13" }, { "start": 241, "end": 263, "text": "(Werling et al., 2015;", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Additional Related Work", "sec_num": "5" }, { "text": "Treating alignment as discrete variables has been successful in some sequence transduction tasks with neural models (Yu et al., 2017 (Yu et al., , 2016 . Our work is similar in that we also train discrete alignments jointly but the tasks, the inference framework and the decoders are very different.", "cite_spans": [ { "start": 116, "end": 132, "text": "(Yu et al., 2017", "ref_id": "BIBREF41" }, { "start": 133, "end": 151, "text": "(Yu et al., , 2016", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Additional Related Work", "sec_num": "5" }, { "text": "The discrete alignment modeling framework has been developed in the context of traditional (i.e. non-neural) statistical machine translation (Brown et al., 1993) . Such translation models have also been successfully applied to semantic parsing tasks (e.g., (Andreas et al., 2013) ), where they rivaled specialized semantic parsers from that period. However, they are considerably less accurate than current state-of-the-art parsers applied to the same datasets (e.g., (Dong and Lapata, 2016) ).", "cite_spans": [ { "start": 141, "end": 161, "text": "(Brown et al., 1993)", "ref_id": "BIBREF5" }, { "start": 257, "end": 279, "text": "(Andreas et al., 2013)", "ref_id": "BIBREF0" }, { "start": 468, "end": 491, "text": "(Dong and Lapata, 2016)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Additional Related Work", "sec_num": "5" }, { "text": "For AMR parsing, another way to avoid using pre-trained aligners is to use seq2seq models (Konstas et al., 2017; van Noord and Bos, 2017) . In particular, van Noord and Bos (2017) used character level seq2seq model and achieved the previous state-of-the-art result. However, their model is very data demanding as they needed to train it on additional 100K sentences parsed by other parsers. This may be due to two reasons. First, seq2seq models are often not as strong on smaller datasets. Second, recurrent decoders may struggle with predicting the linearized AMRs, as many statistical dependencies are highly non-local.", "cite_spans": [ { "start": 90, "end": 112, "text": "(Konstas et al., 2017;", "ref_id": "BIBREF19" }, { "start": 113, "end": 137, "text": "van Noord and Bos, 2017)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Additional Related Work", "sec_num": "5" }, { "text": "We introduced a neural AMR parser trained by jointly modeling alignments, concepts and relations. 
We make such joint modeling computationally feasible by using the variational autoencoding framework and continuous relaxations. The parser achieves state-of-the-art results and ablation tests show that joint modeling is indeed beneficial.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "We believe that the proposed approach may be extended to other parsing tasks where alignments are latent (e.g., parsing to logical form (Liang, 2016) ). Another promising direction is integrating character seq2seq to substitute the copy function. This should also improve the handling of negation and rare words. Though our parsing model does not use any linearization of the graph, we relied on LSTMs and somewhat arbitrary linearization (depth-first traversal) to encode the AMR graph in our alignment model. A better alternative would be to use graph convolutional networks Kipf and Welling, 2017) : neighborhoods in the graph are likely to be more informative for predicting alignments than the neighborhoods in the graph traversal.", "cite_spans": [ { "start": 136, "end": 149, "text": "(Liang, 2016)", "ref_id": "BIBREF20" }, { "start": 577, "end": 600, "text": "Kipf and Welling, 2017)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "The probabilistic model is invariant to the ordering of concepts, though the order affects the inference algorithm (see Section 2.5). We use depth-first traversal of the graph to generate the ordering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Using the vanilla one-pass model from equation (2) results in 1.4% drop in Smatch score.7 After re-categorization (Section 3.1), m \u2265 n holds for most cases. For exceptions, we append NULL to the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank Marco Damonte, Shay Cohen, Diego Marcheggiani and Wilker Aziz for helpful discussions as well as anonymous reviewers for their suggestions. The project was supported by the European Research Council (ERC StG BroadSem 678254) and the Dutch National Science Foundation (NWO VIDI 639.022.518).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Semantic parsing as machine translation", "authors": [ { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Vlachos", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "47--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Andreas, Andreas Vlachos, and Stephen Clark. 2013. Semantic parsing as machine translation. 
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 47-52.
Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG semantic parsing with AMR. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1699-1710.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations.
Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for Sembanking.
Jörg Bornschein and Yoshua Bengio. 2015. Reweighted wake-sleep. International Conference on Learning Representations.
Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311.
Jan Buys and Phil Blunsom. 2017. Oxford at SemEval-2017 Task 9: Neural AMR parsing with pointer-augmented attention. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 914-919. Association for Computational Linguistics.
Marco Damonte, Shay B. Cohen, and Giorgio Satta. 2017. An Incremental Parser for Abstract Meaning Representation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 536-546.
Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 33-43.
Timothy Dozat and Christopher D. Manning. 2017. Deep Biaffine Attention for Neural Dependency Parsing. International Conference on Learning Representations.
Jeffrey Flanigan, Chris Dyer, Noah A. Smith, and Jaime Carbonell. 2016. CMU at SemEval-2016 Task 8: Graph-based AMR Parsing with Infinite Ramp Loss. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1202-1206. Association for Computational Linguistics.
Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A Discriminative Graph-Based Parser for the Abstract Meaning Representation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426-1436, Baltimore, Maryland. Association for Computational Linguistics.
William Foland and James H. Martin. 2017. Abstract Meaning Representation Parsing using LSTM Recurrent Neural Networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 463-472, Vancouver, Canada. Association for Computational Linguistics.
Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with Gumbel-Softmax. International Conference on Learning Representations.
Bevan K. Jones, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, and Kevin Knight. 2012. Semantics-Based Machine Translation with Hyperedge Replacement Grammars. In COLING.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. International Conference on Learning Representations.
Diederik P. Kingma and Max Welling. 2014. Auto-encoding variational Bayes. International Conference on Learning Representations.
Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. International Conference on Learning Representations.
Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural AMR: Sequence-to-Sequence Models for Parsing and Generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 146-157, Vancouver, Canada. Association for Computational Linguistics.
Percy Liang. 2016. Learning executable semantic parsers for natural language understanding. Communications of the ACM, 59(9):68-76.
Fei Liu, Jeffrey Flanigan, Sam Thomson, Norman M. Sadeh, and Noah A. Smith. 2015. Toward Abstractive Summarization Using Semantic Representations. In HLT-NAACL.
Edward Loper and Steven Bird. 2002. NLTK: The Natural Language Toolkit. In Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics - Volume 1, ETMTNLP '02, pages 63-70, Stroudsburg, PA, USA. Association for Computational Linguistics.
Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 11-19, Beijing, China. Association for Computational Linguistics.
Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. 2017. The concrete distribution: A continuous relaxation of discrete random variables. International Conference on Learning Representations.
Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55-60.
Diego Marcheggiani, Anton Frolov, and Ivan Titov. 2017. A Simple and Accurate Syntax-Agnostic Neural Model for Dependency-based Semantic Role Labeling. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 411-420, Vancouver, Canada. Association for Computational Linguistics.
Diego Marcheggiani and Ivan Titov. 2017. Encoding Sentences with Graph Convolutional Networks for Semantic Role Labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1507-1516, Copenhagen, Denmark. Association for Computational Linguistics.
Gonzalo Mena, David Belanger, Scott Linderman, and Jasper Snoek. 2018. Learning Latent Permutations with Gumbel-Sinkhorn Networks. International Conference on Learning Representations.
Arindam Mitra and Chitta Baral. 2016. Addressing a question answering challenge by combining statistical methods with inductive rule learning and reasoning. In 30th AAAI Conference on Artificial Intelligence, AAAI 2016. AAAI Press.
Andriy Mnih and Karol Gregor. 2014. Neural variational inference and learning in belief networks. In Proceedings of the International Conference on Machine Learning.
Rik van Noord and Johan Bos. 2017. Neural Semantic Parsing by Character-based Translation: Experiments with Abstract Meaning Representations. Computational Linguistics in the Netherlands Journal, 7:93-108.
George Papandreou and Alan L. Yuille. 2011. Perturb-and-map random fields: Using discrete optimization to learn and sample from energy models. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 193-200. IEEE.
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch.
Xiaochang Peng, Chuan Wang, Daniel Gildea, and Nianwen Xue. 2017. Addressing the Data Sparsity Issue in Neural AMR Parsing. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 366-375. Association for Computational Linguistics.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
Nima Pourdamghani, Yang Gao, Ulf Hermjakob, and Kevin Knight. 2014. Aligning English strings with abstract meaning representation graphs. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 425-429.
Chuan Wang, Sameer Pradhan, Xiaoman Pan, Heng Ji, and Nianwen Xue. 2016. CAMR at SemEval-2016 Task 8: An Extended Transition-based AMR Parser. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1173-1178, San Diego, California. Association for Computational Linguistics.
Chuan Wang and Nianwen Xue. 2017. Getting the Most out of AMR Parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1257-1268.
Keenon Werling, Gabor Angeli, and Christopher D. Manning. 2015. Robust Subgraph Generation Improves Abstract Meaning Representation Parsing. In ACL.
Lei Yu, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Tomas Kocisky. 2017. The Neural Noisy Channel. In International Conference on Learning Representations.
Lei Yu, Jan Buys, and Phil Blunsom. 2016. Online Segment to Segment Neural Transduction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1307-1316. Association for Computational Linguistics.
Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1127-1137.

Figure: Relation identification: predicting a relation between boy and go-02 relying on the two concepts and corresponding RNN states.

Figure: An example of re-categorized AMR. The AMR graph is at the top, the re-categorized concepts are in the middle, and the sentence is at the bottom.
Table: Smatch scores on the test set. R2 is the LDC2016E25 dataset and R1 is the LDC2015E86 dataset. Statistics on R2 are over 8 runs.
Table: F1 scores on individual phenomena. A'17 is AMREager, C'16 is CAMR, J'16 is JAMR, Ch'17 is ChSeq+100K. Ours are marked with standard deviation.

Table: F1 scores on subtasks. Scores on ablations are averaged over 2 runs. The left-side results (R1) are from LDC2015E86 and the right-side results (R2) are from LDC2016E25.

Metric        R1 Pre-Align   R1 Align   R2 Pre-Align   R2 Align (mean)
Smatch        72.8           73.7       73.5           74.4
Unlabeled     75.3           76.3       76.1           77.1
No WSD        73.8           74.7       74.6           75.5
Reentrancy    50.2           50.6       52.6           52.3
Concepts      85.4           85.5       85.5           85.9
NER           85.3           84.8       85.3           86.0
Wiki          66.8           75.6       67.8           75.7
Negations     56.0           57.2       56.6           58.4
SRL           68.8           68.9       70.2           69.8

the previous best model, the multi-BiLSTM parser of Foland and Martin (2017).
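The scores in the tables above are Smatch-style F1 values computed over graph triples. As a rough, illustrative sketch only (not the evaluation tooling behind these tables), the snippet below shows how precision, recall, and F1 fall out of matched-triple counts once a node mapping between the predicted and gold graphs has been fixed; the function name and the toy triples are made up for this example, and full Smatch additionally searches over variable mappings (e.g., by hill climbing).

```python
def triple_f1(predicted, gold):
    """F1 over (source, relation, target) triples, assuming a node
    mapping between predicted and gold graphs has already been fixed.
    Illustrative sketch only: full Smatch also searches over mappings."""
    matched = len(predicted & gold)
    precision = matched / len(predicted) if predicted else 0.0
    recall = matched / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# Toy example: two graphs agreeing on 2 of 3 triples -> F1 ~ 0.667.
pred = {("g", "instance", "go-02"), ("b", "instance", "boy"), ("g", "ARG0", "b")}
gold = {("g", "instance", "go-02"), ("b", "instance", "boy"), ("g", "ARG1", "b")}
print(round(triple_f1(pred, gold), 3))
```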