{ "paper_id": "N18-1041", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:53:39.902342Z" }, "title": "Abstract Meaning Representation for Paraphrase Detection", "authors": [ { "first": "Fuad", "middle": [], "last": "Issa", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": { "addrLine": "10 Crichton Street", "postCode": "EH8 9AB", "settlement": "Edinburgh", "country": "UK" } }, "email": "issa.fuad@gmail.com" }, { "first": "Marco", "middle": [], "last": "Damonte", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": { "addrLine": "10 Crichton Street", "postCode": "EH8 9AB", "settlement": "Edinburgh", "country": "UK" } }, "email": "m.damonte@sms.ed.ac.uk" }, { "first": "Shay", "middle": [ "B" ], "last": "Cohen", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": { "addrLine": "10 Crichton Street", "postCode": "EH8 9AB", "settlement": "Edinburgh", "country": "UK" } }, "email": "scohen@inf.ed.ac.uk" }, { "first": "Xiaohui", "middle": [], "last": "Yan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Huawei Technologies", "location": { "postCode": "95050", "settlement": "San Jose", "region": "CA", "country": "USA" } }, "email": "yanxiaohui2@huawei.com" }, { "first": "Yi", "middle": [], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Huawei Technologies", "location": { "postCode": "95050", "settlement": "San Jose", "region": "CA", "country": "USA" } }, "email": "yi.chang@huawei.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Meaning Representation (AMR) parsing aims at abstracting away from the syntactic realization of a sentence, and denoting only its meaning in a canonical form. As such, it is ideal for paraphrase detection, a problem in which one is required to specify whether two sentences have the same meaning. 
We show that na\u00efve use of AMR in paraphrase detection is not necessarily useful, and turn to describe a technique based on latent semantic analysis in combination with AMR parsing that significantly advances state-of-the-art results in paraphrase detection for the Microsoft Research Paraphrase Corpus. Our best results in the transductive setting are 86.6% for accuracy and 90.0% for F 1 measure.", "pdf_parse": { "paper_id": "N18-1041", "_pdf_hash": "", "abstract": [ { "text": "Meaning Representation (AMR) parsing aims at abstracting away from the syntactic realization of a sentence, and denoting only its meaning in a canonical form. As such, it is ideal for paraphrase detection, a problem in which one is required to specify whether two sentences have the same meaning. We show that na\u00efve use of AMR in paraphrase detection is not necessarily useful, and turn to describe a technique based on latent semantic analysis in combination with AMR parsing that significantly advances state-of-the-art results in paraphrase detection for the Microsoft Research Paraphrase Corpus. Our best results in the transductive setting are 86.6% for accuracy and 90.0% for F 1 measure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Abstract Meaning Representation (AMR) parsing focuses on the conversion of natural language sentences into AMR graphs, aimed at abstracting away from the surface realizations of the sentences while preserving their meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We make a first step towards showing that AMR can be used in practice for a task that requires identifying the canonicalization of language: paraphrase detection. In a \"perfect world\" using AMR to test for paraphrasing relation of two sentences should be simple. 
It would require finding the AMR parse of each of the two sentences, and then checking whether the parses are identical. Since AMR is aimed at abstracting away from the surface form which is used to express meaning, two sentences should be paraphrases if and only if they have identical AMRs. For instance, the three sentences: 1. He described her as a curmudgeon, 2. His description of her: curmudgeon, 3. She was a curmudgeon, according to his description.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[Figure 1 graph: describe-01 with ARG0 he, ARG1 she, ARG2 curmudgeon]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: AMR graph for \"He described her as a curmudgeon\", \"His description of her: curmudgeon\" and \"She was a curmudgeon, according to his description\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "should result in the same AMR graph as shown in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 48, "end": 56, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, in practice, things are different. First, there is no known AMR parser that truly distils only the meaning in the text. For example, predicates with interchangeable meanings are mapped to different AMR concepts, and there are errors introduced by the machine learning techniques used to learn the parsers from data. 
Finally, even human annotations do not yield perfect AMRs, as the inter-annotator agreement reported in the literature for AMR is around 80% (Banarescu et al., 2013) .", "cite_spans": [ { "start": 479, "end": 503, "text": "(Banarescu et al., 2013)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Second, meaning is often contextual, and it is not always possible to determine the corresponding AMR parse just by looking at a given sentence. Entity mentions denote different entities in different contexts, and similarly predicates and nouns are ambiguous and depend on context. As such, one cannot expect to use AMR in the transparent way mentioned above to identify paraphrase relations. However, we demonstrate in this paper that AMR can be used in a \"softer\" way to detect such relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Evaluation of AMR parsers is traditionally performed using the Smatch score (Cai and Knight, 2013) . However, Damonte et al. (2017) argue that additional ad-hoc metrics can be useful for advancing AMR research. Paraphrase detection can be seen as a further benchmark for AMR parsers, highlighting their ability to abstract away from syntax and represent the core concepts expressed in the sentence. In order to advance research in AMR and its applications, it is important to have metrics that reflect the impact of AMR graphs on subsequent tasks. In this work we therefore use two different AMR parsers, comparing them throughout all experiments.", "cite_spans": [ { "start": 76, "end": 98, "text": "(Cai and Knight, 2013)", "ref_id": "BIBREF8" }, { "start": 110, "end": 131, "text": "Damonte et al. (2017)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "AMRs are rooted, edge-labeled, node-labeled, directed graphs. 
They are biased towards the English language and rely on PropBank (Kingsbury and Palmer, 2002) for the definition of the main events in the sentence. Nodes in an AMR graph represent events and concepts, while edges represent the relationships between them. Banarescu et al. (2013) state that AMRs are aimed at canonicalizing multiple ways of expressing the same idea, which could be of great assistance in solving the problem of paraphrase detection. However, this goal is not entirely achieved in practice, and it will take a long time for AMR parsers to mature and achieve such canonicalization. At the moment, for example, even a simple pair of sentences such as \"the boy desires the cake\" and \"the boy wants the cake\" would not receive the same canonical form from state-of-the-art AMR parsers.", "cite_spans": [ { "start": 128, "end": 156, "text": "(Kingsbury and Palmer, 2002)", "ref_id": "BIBREF21" }, { "start": 319, "end": 342, "text": "Banarescu et al. (2013)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "While some researchers (Fodor, 1975) have doubted the practical possibility of canonicalizing language or finding identical paraphrases in English or otherwise, much work in NLP has been devoted to the problem of paraphrase identification (Mitchell and Lapata, 2010; Baroni and Lenci, 2010; Socher et al., 2011; Guo and Diab, 2012; Ji and Eisenstein, 2013) and, more weakly, to finding entailment between sentences and phrases (Dagan et al., 2006; Bos and Markert, 2005; Harabagiu and Hickl, 2006; Lewis and Steedman, 2013) . 
In this work, we use the AMRs parsed for given sentences as a means to extract useful information and train paraphrase detection classifiers on top of them.", "cite_spans": [ { "start": 23, "end": 36, "text": "(Fodor, 1975)", "ref_id": "BIBREF15" }, { "start": 239, "end": 266, "text": "(Mitchell and Lapata, 2010;", "ref_id": "BIBREF30" }, { "start": 267, "end": 290, "text": "Baroni and Lenci, 2010;", "ref_id": "BIBREF3" }, { "start": 291, "end": 311, "text": "Socher et al., 2011;", "ref_id": "BIBREF36" }, { "start": 312, "end": 331, "text": "Guo and Diab, 2012;", "ref_id": "BIBREF18" }, { "start": 332, "end": 356, "text": "Ji and Eisenstein, 2013)", "ref_id": "BIBREF20" }, { "start": 423, "end": 443, "text": "(Dagan et al., 2006;", "ref_id": "BIBREF9" }, { "start": 444, "end": 466, "text": "Bos and Markert, 2005;", "ref_id": "BIBREF7" }, { "start": 467, "end": 493, "text": "Harabagiu and Hickl, 2006;", "ref_id": "BIBREF19" }, { "start": 494, "end": 519, "text": "Lewis and Steedman, 2013)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Our work falls under the category of distributional methods for paraphrase detection (Turney and Pantel, 2010; Mihalcea et al., 2006; Mitchell and Lapata, 2010; Guo and Diab, 2012; Ji and Eisenstein, 2013) , such as latent semantic analysis (LSA, Landauer et al., 1998) . 
The main principle behind this approach is to detect semantic similarity through distributional representations for a given sentence and its potential paraphrase, where these representations are compared against each other according to some similarity metric or used as features with a discriminative classification method (Mihalcea et al., 2006; Guo and Diab, 2012; Ji and Eisenstein, 2013) .", "cite_spans": [ { "start": 85, "end": 110, "text": "(Turney and Pantel, 2010;", "ref_id": "BIBREF37" }, { "start": 111, "end": 133, "text": "Mihalcea et al., 2006;", "ref_id": "BIBREF29" }, { "start": 134, "end": 160, "text": "Mitchell and Lapata, 2010;", "ref_id": "BIBREF30" }, { "start": 161, "end": 180, "text": "Guo and Diab, 2012;", "ref_id": "BIBREF18" }, { "start": 181, "end": 205, "text": "Ji and Eisenstein, 2013)", "ref_id": "BIBREF20" }, { "start": 244, "end": 272, "text": "(LSA, Landauer et al., 1998)", "ref_id": null }, { "start": 598, "end": 621, "text": "(Mihalcea et al., 2006;", "ref_id": "BIBREF29" }, { "start": 622, "end": 641, "text": "Guo and Diab, 2012;", "ref_id": "BIBREF18" }, { "start": 642, "end": 666, "text": "Ji and Eisenstein, 2013)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Latent Semantic Analysis", "sec_num": "2.1" }, { "text": "LSA is indeed one of the main tools in obtaining such distributional representations for the problem of paraphrase detection. Most often, TF-IDF weighting has been used for building the sentence-term matrix, but Ji and Eisenstein (2013) have shown that a significant improvement can be achieved in detecting similarity if one re-weights the sentence-term matrix differently. Indeed, this is one of our main contributions: we build on previous work on LSA for paraphrase detection and propose a technique to re-weight a sentenceconcept matrix based on the AMR graphs for the given sentences. 
More details on the use of LSA for paraphrase detection appear in Section 4.", "cite_spans": [ { "start": 212, "end": 236, "text": "Ji and Eisenstein (2013)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Latent Semantic Analysis", "sec_num": "2.1" }, { "text": "AMR parsing is the task of converting natural language sentences into AMR graphs, which are Directed Acyclic Graphs (DAGs) in all cases except a few rare controversial cases. This task embeds several common NLP problems together, such as named entity recognition, sentential-level coreference resolution, semantic role labeling and wordsense disambiguation. Several parsers for AMR have been recently developed (Flanigan et al., 2014; Wang et al., 2015; Peng et al., 2015; Pust et al., 2015; Goodman et al., 2016; Rao et al., 2015; Vanderwende et al., 2015; Artzi et al., 2015; Barzdins and Gosko, 2016a; Zhou et al., 2016; Damonte et al., 2017; Barzdins and Gosko, 2016b; Konstas et al., 2017) . Shared tasks were also organized in order to push forward the state-of-the-art (May, 2016; May and Priyadarshi, 2017) .", "cite_spans": [ { "start": 411, "end": 434, "text": "(Flanigan et al., 2014;", "ref_id": "BIBREF14" }, { "start": 435, "end": 453, "text": "Wang et al., 2015;", "ref_id": "BIBREF39" }, { "start": 454, "end": 472, "text": "Peng et al., 2015;", "ref_id": "BIBREF33" }, { "start": 473, "end": 491, "text": "Pust et al., 2015;", "ref_id": "BIBREF34" }, { "start": 492, "end": 513, "text": "Goodman et al., 2016;", "ref_id": "BIBREF17" }, { "start": 514, "end": 531, "text": "Rao et al., 2015;", "ref_id": "BIBREF35" }, { "start": 532, "end": 557, "text": "Vanderwende et al., 2015;", "ref_id": "BIBREF38" }, { "start": 558, "end": 577, "text": "Artzi et al., 2015;", "ref_id": "BIBREF1" }, { "start": 578, "end": 604, "text": "Barzdins and Gosko, 2016a;", "ref_id": "BIBREF4" }, { "start": 605, "end": 623, "text": "Zhou et al., 2016;", "ref_id": "BIBREF41" }, { "start": 624, "end": 645, "text": 
"Damonte et al., 2017;", "ref_id": "BIBREF11" }, { "start": 646, "end": 672, "text": "Barzdins and Gosko, 2016b;", "ref_id": "BIBREF5" }, { "start": 673, "end": 694, "text": "Konstas et al., 2017)", "ref_id": "BIBREF22" }, { "start": 776, "end": 787, "text": "(May, 2016;", "ref_id": "BIBREF27" }, { "start": 788, "end": 814, "text": "May and Priyadarshi, 2017)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "AMR Parsing", "sec_num": "2.2" }, { "text": "Meaning representations are usually evaluated based on their compositionality (construction of a representation based on parts of the text in a consistent way), verifiability (ability to check whether a meaning representation is true in a given model of the world), unambiguity (ability to full disambiguate text into the representation in a way that does not leave any ambiguity lingering), inference (the existence of a calculus that can be used to infer whether one meaning representation is logically implied by others) and canonicalization (the ability to map several surface forms, such as paraphrases, into a single unique meaning representation). In this paper, we evaluate AMR on its ability to canonicalize language through its assistance in deciding whether two sentences are paraphrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "AMR Parsing", "sec_num": "2.2" }, { "text": "We note that this test is masked by the accuracy of the AMR parsers we use, which indeed do not give always fully correct predictions. These errors in our paraphrase detection due to the accuracy of the AMR parser are different than those which originate in an inherent difficulty of representing paraphrases using AMR because of the limitations of the formalism and the annotation guidelines that AMR follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "AMR Parsing", "sec_num": "2.2" }, { "text": "We experiment with two AMR parsers for which a public version is available. 
The first is JAMR (Flanigan et al., 2014) , which is a graph-based approach to AMR parsing. It works by performing two steps on the input sentence: concept identification and relation identification. The former discovers the concept fragments corresponding to spans of words in the sentence, while the latter finds the optimal spanning connected subgraph over the concepts identified in the first step. The concept identification step has quadratic complexity and the relation identification step is O(|V|² log |V|), with |V| being the number of nodes in the AMR graph.", "cite_spans": [ { "start": 94, "end": 117, "text": "(Flanigan et al., 2014)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "AMR Parsing", "sec_num": "2.2" }, { "text": "The second is AMREager (Damonte et al., 2017) , which is a transition-based parser that works by scanning the string left-to-right and building the graph as the scan proceeds. This transition-based system is akin to the dependency parsing transition system ArcEager of Nivre (2004) , only without the constraints that ensure that the resulting structure is a tree. In addition, there are operations that make the system create additional non-projective structures by checking after each transition step whether siblings should be connected together with an edge. The complexity of AMREager is linear in the length of the sentence. AMREager was extended to other languages (Damonte and Cohen, 2018) , and we leave it for future work to test the utility of AMR for paraphrase detection in these languages.", "cite_spans": [ { "start": 23, "end": 45, "text": "(Damonte et al., 2017)", "ref_id": "BIBREF11" }, { "start": 269, "end": 281, "text": "Nivre (2004)", "ref_id": "BIBREF31" }, { "start": 663, "end": 688, "text": "(Damonte and Cohen, 2018)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "AMR Parsing", "sec_num": "2.2" }, { "text": "Let S be a set of sentences. 
We are given input data in the form of (x_1^{(i)}, x_2^{(i)}, b^{(i)}) for i ∈ [n], where n is the number of training examples, x_j^{(i)} ∈ S for j ∈ {1, 2}, and b^{(i)} ∈ {0, 1} is a binary indicator that tells whether x_1^{(i)} is a paraphrase of x_2^{(i)}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "The goal is to learn a classifier c : S × S → {0, 1} that tells for unseen instances whether the pair of sentences given as input are paraphrases of each other. We denote by [n] the set {1, . . . , n}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "The first step in our approach is the construction of lower-dimensional representations for the sentences in the training data. We use latent semantic analysis to get the sentence representations, which are then used to detect paraphrases using a classifier. More specifically, given a set of sentences S = {x_j^{(i)} : j ∈ {1, 2}, i ∈ [n]}, we build a sentence-term matrix T such that T_{kℓ} indicates the use of the ℓth word in the kth sentence in S. The number of rows is the number of sentences in the dataset and the number of columns is the vocabulary size. 
This follows previous work on the use of LSA for paraphrasing (Guo and Diab, 2012; Ji and Eisenstein, 2013) .", "cite_spans": [ { "start": 282, "end": 302, "text": "(Guo and Diab, 2012;", "ref_id": "BIBREF18" }, { "start": 303, "end": 327, "text": "Ji and Eisenstein, 2013)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Latent Semantic Analysis for Paraphrase Detection", "sec_num": "4" }, { "text": "As a baseline, we experiment with two ways of assigning the values to the matrix:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Semantic Analysis for Paraphrase Detection", "sec_num": "4" }, { "text": "• T_{kℓ} is the count of the ℓth word in the kth sentence: T_{kℓ} = count(ℓ, k)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Semantic Analysis for Paraphrase Detection", "sec_num": "4" }, { "text": "• T_{kℓ} is the term frequency-inverse document frequency (TF-IDF) for the kth sentence with respect to the ℓth word. TF-IDF is commonly used in Information Retrieval to score words in a document and combines the frequency of the words in a document with the rarity of the term across documents. 
With TF-IDF, in order to have a high score, a concept must appear in this sentence and not in many others.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Semantic Analysis for Paraphrase Detection", "sec_num": "4" }, { "text": "In that case, we define: T_{kℓ} = count(ℓ, k) × n / csent(ℓ), where count(ℓ, k) gives the count of the ℓth word in the kth sentence and csent(ℓ) is the number of sentences which contain the ℓth word: csent(ℓ) = |{k ∈ [|S|] : count(ℓ, k) > 0}|.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Semantic Analysis for Paraphrase Detection", "sec_num": "4" }, { "text": "The AMR-based systems of Section 5 build upon this by re-weighting T_{kℓ} with terms depending on the AMRs of the sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Semantic Analysis for Paraphrase Detection", "sec_num": "4" }, { "text": "For paraphrasing, previous work (Ji and Eisenstein, 2013) has also considered the transductive setting (Gammerman et al., 1998) , which we also use in our experiments. In the transductive setting, S also includes the sentences on which we expect to perform the final evaluation for the purpose of learning the latent representations. Note that, in this case, the labels b^{(i)} are not used in the process of constructing word representations. 
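As an illustration, the two baseline weighting schemes just described can be sketched as follows (a minimal sketch: the whitespace tokenizer, the helper name `sentence_term_matrix`, and the toy sentences are ours, and we use the raw n/csent ratio exactly as defined above rather than the more common log-scaled IDF):

```python
from collections import Counter

def sentence_term_matrix(sentences, tfidf=False):
    """Build the sentence-term matrix T of Section 4 (tokenizer and names are ours).

    T[k][l] is the count of word l in sentence k; with tfidf=True it is
    re-weighted by n / csent(l), where csent(l) is the number of sentences
    containing word l.
    """
    vocab = sorted({w for s in sentences for w in s.split()})
    index = {w: i for i, w in enumerate(vocab)}
    n = len(sentences)
    counts = [Counter(s.split()) for s in sentences]
    # csent(l): number of sentences in which word l occurs at least once
    csent = {w: sum(1 for c in counts if c[w] > 0) for w in vocab}
    T = [[0.0] * len(vocab) for _ in range(n)]
    for k, c in enumerate(counts):
        for w, cnt in c.items():
            T[k][index[w]] = cnt * n / csent[w] if tfidf else float(cnt)
    return T, vocab

# toy pair of sentences (ours, for illustration only)
T, vocab = sentence_term_matrix(
    ["the boy wants the cake", "the boy desires the cake"], tfidf=True)
```

Words shared by both sentences ("the", "boy", "cake") keep their raw counts (n/csent = 1), while the discriminative words ("wants", "desires") are up-weighted by a factor of 2.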
In the inductive setting, on the other hand, the sentences in the testing set are not included in training; instead, we project them onto the learned latent space using the LSA projection matrices to find their representations.", "cite_spans": [ { "start": 32, "end": 57, "text": "(Ji and Eisenstein, 2013)", "ref_id": "BIBREF20" }, { "start": 103, "end": 127, "text": "(Gammerman et al., 1998)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Latent Semantic Analysis for Paraphrase Detection", "sec_num": "4" }, { "text": "Once we have constructed the matrix T, we perform truncated singular value decomposition (SVD) on it, such that: T ≈ UΣV^⊤, where U ∈ R^{k×m}, V ∈ R^{ℓ×m}, and Σ ∈ R^{m×m} is a diagonal matrix of singular values (k being the number of sentences and ℓ the vocabulary size). The final sentence representations are the rows of the U matrix, which range over the sentences and have m dimensions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Semantic Analysis for Paraphrase Detection", "sec_num": "4" }, { "text": "The output of this process is a function f : S → R^m which attaches to each sentence a representation. 
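The truncated SVD step can be sketched with NumPy (a sketch under our own naming: `lsa_embed` and the choice of `numpy.linalg.svd` are assumptions, not the authors' code):

```python
import numpy as np

def lsa_embed(T, m):
    """Truncated SVD of the sentence-term matrix T.

    Following Section 4, the sentence representations are the rows of U,
    truncated to the top-m singular directions.
    """
    T = np.asarray(T, dtype=float)
    # full_matrices=False gives the thin SVD; singular values come sorted
    # in decreasing order, so the first m columns of U are the truncation.
    U, _, _ = np.linalg.svd(T, full_matrices=False)
    return U[:, :m]  # one m-dimensional row per sentence
```

Calling `lsa_embed(T, m)` on the sentence-term matrix yields the function f as a lookup table: row k is f applied to the kth sentence.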
The idea behind LSA is that this matrix decomposition will make semantically similar sentences appear close in the latent space, hence alleviating the problem of data sparsity and making it easier to detect when two sentences are paraphrases of each other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Semantic Analysis for Paraphrase Detection", "sec_num": "4" }, { "text": "Once we construct the sentence representations from the training data (either in the inductive or the transductive setting), we use the function f to map each pair of sentences from the training data (x_1^{(i)}, x_2^{(i)}) to the two vectors f(x_1^{(i)}) + f(x_2^{(i)}) and |f(x_1^{(i)}) − f(x_2^{(i)})| (where the absolute value is taken coordinate-wise), and then concatenate them into a feature vector φ(x_1^{(i)}, x_2^{(i)}), which is then used as input to a support vector machine (SVM) classifier (Ji and Eisenstein, 2013) .", "cite_spans": [ { "start": 78, "end": 102, "text": "(Ji and Eisenstein, 2013", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Latent Semantic Analysis for Paraphrase Detection", "sec_num": "4" }, { "text": "The main hypothesis tested in this work is that AMR can be useful in deciding whether two sentences are paraphrases of each other. 
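The pair-feature construction can be sketched as follows (the function name `pair_features` is ours; training the SVM itself is omitted):

```python
import numpy as np

def pair_features(f1, f2):
    """phi(x1, x2): concatenation of f(x1) + f(x2) and |f(x1) - f(x2)|,
    with the absolute value taken coordinate-wise."""
    f1 = np.asarray(f1, dtype=float)
    f2 = np.asarray(f2, dtype=float)
    return np.concatenate([f1 + f2, np.abs(f1 - f2)])
```

The resulting 2m-dimensional vector is what would be fed to the SVM; any standard implementation (e.g. scikit-learn's `LinearSVC`) could play that role. Note that both halves are symmetric in their arguments, so the feature vector does not depend on the order of the two sentences in the pair.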
We investigate two ways to use AMR information to better inform the classifier: similarity-based and LSA-based.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract Meaning Representation Features", "sec_num": "5" }, { "text": "An obvious way to use AMR information is to simply compute the similarity between the two graphs and use the score as an additional feature. As a score we use Smatch, which computes the overlap in terms of recall, precision and F-score between two unaligned graphs by finding the alignment between the graphs that maximizes the overlap. The alignment step is necessary because in AMR multiple nodes can have the same labels, and arbitrary variable names are used to distinguish between them. Smatch is the standard metric to evaluate the overlap between AMR graphs. The score returned by Smatch is used as a single additional feature for the SVM. The amount of overlap in the AMR nodes of the two graphs can be a good indicator of whether the sentences are paraphrases of each other. To test this hypothesis, we extract the unordered sets of AMR nodes and use the Jaccard similarity coefficient as a feature. This is directly related to the concept identification step of the AMR parsing process, which is concerned with generating and labeling the nodes of the AMR graph. Concept identification is arguably one of the most challenging parts of AMR parsing, as the mapping between word spans and AMR nodes is not trivial (Werling et al., 2015) . It is often considered the first stage in the AMR parsing pipeline, and it is therefore reasonable to attempt using its intermediate results. 
We choose Jaccard as a metric for bag-of-concepts overlap following previous work in paraphrase detection (Achananuparp et al., 2008; Berant and Liang, 2014) .", "cite_spans": [ { "start": 1217, "end": 1239, "text": "(Werling et al., 2015)", "ref_id": "BIBREF40" }, { "start": 1492, "end": 1519, "text": "(Achananuparp et al., 2008;", "ref_id": "BIBREF0" }, { "start": 1520, "end": 1544, "text": "Berant and Liang, 2014)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Graph Similarity and Bag of AMR Concepts", "sec_num": "5.1" }, { "text": "We note that while this approach of using AMR to detect paraphrases may sound plausible, it does not perform very well. As such, we compare and contrast this as an AMR baseline with the approach that makes use of PageRank with TF-IDF reweighting for LSA, as described next.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Similarity and Bag of AMR Concepts", "sec_num": "5.1" }, { "text": "The main idea is to re-weight the LSA sentence-term matrix T (Section 4) according to a probability distribution over the AMR nodes, which we accomplish by means of PageRank (Page et al., 1999) . The utility of re-weighting terms in the sentence-term matrix has been demonstrated previously (Turney and Pantel, 2010). PageRank is a method, originally developed for web pages, for ranking nodes in a graph according to their impact on other nodes. The algorithm works iteratively, adjusting at each iteration the score of each node based on the number and scores of the nearby nodes that are connected to it, until convergence. Prior to applying PageRank, we merge the two graphs by collapsing the concepts in the two graphs that have the same labels, similarly to Liu et al. (2015) , as shown in Figure 2 . We then compute the PageRank score for each node in the merged graph and multiply it by the corresponding frequency count of that concept in the sentence-term matrix. 
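The bag-of-concepts feature amounts to a Jaccard coefficient over node-label sets; a minimal sketch (the concept sets below are hand-written stand-ins for parser output, not actual parses):

```python
def jaccard(a, b):
    """Jaccard similarity between two unordered sets of AMR concept labels."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # convention (ours): two empty concept sets count as identical
    return len(a & b) / len(a | b)

# hand-written stand-ins for the node labels of two parses
score = jaccard({"describe-01", "he", "she", "curmudgeon"},
                {"describe-01", "he", "she", "grouch"})
```

Because only node labels are compared, no alignment between the graphs is needed, unlike with Smatch.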
The graph merging step is necessary in order to ensure that overlapping concepts obtain high PageRank scores. The PageRank step applied to the merged graph ensures that this importance propagates to nearby nodes. For a given graph G = (V, E), PageRank takes as input the list of edges between nodes:", "cite_spans": [ { "start": 173, "end": 192, "text": "(Page et al., 1999)", "ref_id": "BIBREF32" }, { "start": 754, "end": 771, "text": "Liu et al. (2015)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 786, "end": 794, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "PageRank and TF-IDF Reweighting for LSA", "sec_num": "5.2" }, { "text": "E = {(n_i, m_i) : i = 1, . . . , n}, with n = |E|,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank and TF-IDF Reweighting for LSA", "sec_num": "5.2" }, { "text": "and outputs a PageRank score for each node by solving the following equations with respect to PG(·):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank and TF-IDF Reweighting for LSA", "sec_num": "5.2" }, { "text": "PG(n) = Σ_{m ∈ I(n)} PG(m) / l(m),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank and TF-IDF Reweighting for LSA", "sec_num": "5.2" }, { "text": "where I(n) is the set of nodes with an edge into node n and l(m) is the number of edges coming out of m.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank and TF-IDF Reweighting for LSA", "sec_num": "5.2" }, { "text": "For each concept of the merged AMR graph, we compute T_{kℓ}, the weight for the LSA matrix introduced in Section 4, as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank and TF-IDF Reweighting for LSA", "sec_num": "5.2" }, { "text": "T_{kℓ} = PG(ℓ, k) × count(ℓ, k),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank and TF-IDF Reweighting for LSA", "sec_num": "5.2" }, { "text": "where PG(ℓ, k) is the PageRank of the ℓth concept for the kth 
sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank and TF-IDF Reweighting for LSA", "sec_num": "5.2" }, { "text": "As a baseline for the PageRank system, the TF-IDF re-weighting scheme, as described in Section 4, is also used to re-weight the AMR concepts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank and TF-IDF Reweighting for LSA", "sec_num": "5.2" }, { "text": "We now describe the experiments that we devised to discover whether AMR is useful for paraphrase detection. For AMR parsing, we used the JAMR 2 version published for SemEval 2016 (Flanigan et al., 2016) , which reports a 0.67 Smatch score on LDC2015E86, and the first and only available version of AMREager, 3 which obtains a 0.64 Smatch score on the same dataset. First, we discuss experiments where the AMRs are used as a means of extracting additional sparse features for an SVM classifier. Then we turn to LSA to construct a representation of the sentence based on the re-weighting of the AMR nodes achieved through either PageRank or TF-IDF. Results show that the latter, which builds on state-of-the-art systems for this task, is a much more promising approach. Finally, we analyze how performance changes as a function of the number of dimensions used in the truncated matrix.", "cite_spans": [ { "start": 179, "end": 202, "text": "(Flanigan et al., 2016)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "For evaluation, we use the Microsoft Research Paraphrase Corpus (Dolan et al., 2004) . We use 70% of the dataset as training data and 30% as a test set.
The total number of sentence pairs in the corpus is 5,801.", "cite_spans": [ { "start": 64, "end": 84, "text": "(Dolan et al., 2004)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "The bag-of-words (BOW) baseline consists of an SVM that takes into account a single feature: the Jaccard score between the BOW representations of the two sentences, i.e., one-hot vectors indicating whether each word in the vocabulary is used or not. The use of the single Jaccard feature means that for the linear kernel we just learn a threshold on the score. We note that the addition of the similarity-based features does not suffice to outperform the BOW baseline, as described in Table 1 . Unlike Smatch, the bag of concepts feature does not need to find a (possibly wrong) alignment between the two graphs Figure 2 : Visualization of the graph merging procedure for the sentences Yucaipa owned Dominick's before selling the chain to Safeway in 1998 for $2.5 billion. (above) and Yucaipa bought Dominick's in 1995 for $693 million and sold it to Safeway for $1.8 billion in 1998. (below). The "date-entity", "sell-01" and "1998" nodes in the two AMR graphs on the left are merged in the resulting graph on the right.", "cite_spans": [], "ref_spans": [ { "start": 487, "end": 494, "text": "Table 1", "ref_id": null }, { "start": 614, "end": 622, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Graph Similarity and Bag of AMR Concepts", "sec_num": "6.1" }, { "text": "because it considers the node labels only. Interestingly, the addition of the bag of concepts feature is beneficial only for AMREager. It is indeed worth noting the different behaviors of the two parsers: when using the Smatch score only, JAMR reports slightly higher numbers than AMREager.
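The single-feature baseline above reduces to computing one Jaccard score per sentence pair. A minimal sketch (the threshold below is hypothetical; in the paper it is learned by a linear-kernel SVM):

```python
def jaccard(tokens_a, tokens_b):
    # Jaccard overlap of the two bag-of-words (set) representations.
    a, b = set(tokens_a), set(tokens_b)
    return len(a & b) / len(a | b) if a | b else 0.0

s1 = "Yucaipa owned Dominick's before selling the chain to Safeway".split()
s2 = "Yucaipa bought Dominick's and sold it to Safeway".split()
score = jaccard(s1, s2)  # 4 shared tokens out of 13 distinct ones
# A linear-kernel SVM over this single feature just learns a threshold:
is_paraphrase = score >= 0.5  # 0.5 is an illustrative value, not a learned one
```

The same function gives the bag of concepts variant when passed AMR concept labels instead of surface words.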
However, when using the bag of concepts feature as well, AMREager is considerably better than JAMR, which is unexpected as the concept identification performance of the two parsers is reported to be identical (Damonte et al., 2017) . There is also some variability with the kernel used for the SVM classifier. The polynomial kernel does consistently better than the RBF and linear kernels. This means that a low-level interaction between the sentence representations does exist (when trying to determine whether they are paraphrases), but a higher-order interaction, such as that implied by the RBF kernel, does not need to be modeled.", "cite_spans": [ { "start": 497, "end": 519, "text": "(Damonte et al., 2017)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Graph Similarity and Bag of AMR Concepts", "sec_num": "6.1" }, { "text": "We now turn to experiments involving LSA as a means of representing the candidate paraphrases. In this set of experiments, the baseline consists of using TF-IDF to weight the bag of words in the sentence-term matrix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank and TF-IDF Reweighting for LSA", "sec_num": "6.2" }, { "text": "We first try to replace the bag of words with the bag of concepts from the AMR graphs, also re-weighted by TF-IDF. Then, we replace TF-IDF with PageRank, which is more appropriate for re-weighting graph structures. We report experiments for both the inductive and the transductive setting (Table 3) . Our first finding is that, regardless of the parser, AMR is very helpful in the transductive setting while it is harmful in the inductive setting. When using the bag of words, it is easy to project sentences of the test set into the latent space learned on the training set only. However, our experiments indicate that this is not as easy with the AMR concepts produced by the two parsers.
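The LSA pipeline these experiments build on can be sketched with NumPy: build the sentence-term count matrix, re-weight it (here by a smoothed IDF; the AMR variants swap in concept labels and PageRank weights), truncate the SVD, and compare sentences in the latent space. This is an illustrative sketch under those assumptions, not the authors' implementation; in the transductive setting the matrix would contain both training and test sentences.

```python
import numpy as np

def lsa_embed(docs, k=2):
    # docs: list of token lists (surface words, or AMR concept labels).
    vocab = sorted({t for d in docs for t in d})
    idx = {t: j for j, t in enumerate(vocab)}
    T = np.zeros((len(docs), len(vocab)))      # sentence-term count matrix
    for i, d in enumerate(docs):
        for t in d:
            T[i, idx[t]] += 1
    df = (T > 0).sum(axis=0)                   # document frequency per term
    T *= np.log((1 + len(docs)) / (1 + df))    # smoothed IDF re-weighting
    U, S, _ = np.linalg.svd(T, full_matrices=False)
    return U[:, :k] * S[:k]                    # rows: truncated sentence vectors

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy corpus: two pseudo-paraphrases plus one unrelated sentence.
docs = [["sell", "safeway", "1998"],
        ["buy", "sell", "safeway"],
        ["weather", "rain"]]
emb = lsa_embed(docs, k=2)
```

A pair of candidate paraphrases is then classified from the similarity of their rows in the truncated space, which is where the inductive/transductive distinction enters: it determines which sentences the SVD is computed over.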
On the other hand, when the latent space is learned using the test sentences as well, the abstractive power of AMRs is helpful for this task. In the inductive setting, PageRank fails to improve over the TF-IDF scheme and neither of them outperforms the BOW baseline. AMREager outperforms JAMR in this case. In the transductive case, the AMRs provided by JAMR are helpful with both TF-IDF and PageRank, while the graphs provided by AMREager give good results only for the PageRank scheme. The best result is achieved with JAMR, PageRank and a linear kernel for the SVM classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PageRank and TF-IDF Reweighting for LSA", "sec_num": "6.2" }, { "text": "We wanted to test in our experiments whether the same gains that are achieved with AMR parsing can also be achieved with just a syntactic parser. To test that, we parsed the paraphrase dataset with a dependency parser and reduced the syntactic parse trees to AMR graphs (meaning, we represented the dependency trees as graphs by representing each word as a node and labeled dependency relations as edges). Figure 3 gives an example of such a conversion. As can be seen, the AMR-like representation for the dependency trees retains words such as determiners ("the"). It also uses a different set of relations, as reflected by the edge labels that the dependency parser returns.", "cite_spans": [], "ref_spans": [ { "start": 406, "end": 414, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "PageRank and TF-IDF Reweighting for LSA", "sec_num": "6.2" }, { "text": "We chose to do this reduction instead of directly building a classifier that makes use of the dependency trees to ensure we are conducting a controlled experiment in which we precisely compare the use of syntax for paraphrase detection against the use of semantics.
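The reduction from dependency trees to AMR-like graphs amounts to re-packaging parser triples as labeled edges. A sketch, with a hypothetical parse fragment (Stanford-style relation names) standing in for real parser output:

```python
def dep_to_graph(dep_triples):
    # Each word becomes a node; each labelled dependency relation becomes a
    # directed, labelled edge, mirroring the edge format used for AMR graphs.
    nodes, edges = set(), []
    for head, rel, dep in dep_triples:
        nodes.update((head, dep))
        edges.append((head, dep, rel))
    return nodes, edges

# Hypothetical parse fragment for "sold the chain to Safeway".
triples = [("sold", "dobj", "chain"),
           ("chain", "det", "the"),
           ("sold", "nmod", "Safeway")]
nodes, edges = dep_to_graph(triples)
# Unlike true AMR, the graph keeps function words such as the determiner "the".
```

The resulting (nodes, edges) pair can then be fed to the same merging and PageRank code as the AMR graphs, which is what makes the syntax-vs-semantics comparison controlled.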
Once the syntactic trees are converted to AMR graphs, the same code is used to run the experiments as in the case of AMR parsing, with both the PageRank and TF-IDF reweighting settings. We used the dependency parser from Stanford CoreNLP (Manning et al., 2014) . The results are given in Table 3 , under "dep." As can be seen, these results lag behind the bag-of-words model in the inductive case and the AMR models in the transductive case. This could be attributed to AMR parsers better abstracting away from the surface form than dependency parsers.", "cite_spans": [ { "start": 498, "end": 520, "text": "(Manning et al., 2014)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 548, "end": 555, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "PageRank and TF-IDF Reweighting for LSA", "sec_num": "6.2" }, { "text": "System (acc. / F 1): Most common class, 66.5 / 79.9; Mitchell and Lapata (2010), 73.0 / 82.3; Baroni and Lenci (2010), 73.5 / 82.2; Socher et al. (2011), 76.8 / 83.6; Guo and Diab (2012), 71.5 / NR; Ji and Eisenstein (2013) (ind.), 80.0 / 85.4; Ji and Eisenstein (2013) (trans.), 80.4 / 86.0; Our paper (inductive), 68.7 / 80.9; Our paper (transductive), 86.6 / 90.0. Table 3 : LSA experiments in the inductive and transductive settings, with two different reweighting schemes: "PageRank" and "TF-IDF". "linear," "poly" and "rbf" denote the kernel for the SVM. "dep." denotes the use of syntactic parsing instead of semantic parsing.", "cite_spans": [ { "start": 37, "end": 63, "text": "Mitchell and Lapata (2010)", "ref_id": "BIBREF30" }, { "start": 74, "end": 97, "text": "Baroni and Lenci (2010)", "ref_id": "BIBREF3" }, { "start": 108, "end": 128, "text": "Socher et al. (2011)", "ref_id": "BIBREF36" }, { "start": 139, "end": 158, "text": "Guo and Diab (2012)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 320, "end": 327, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "System", "sec_num": null }, { "text": "Finally, we analyze how performance changes as a function of the number of columns of the truncated matrix U (Section 4).
More specifically, on the x axis of the plots we have m/l, where m is the number of columns in the truncated matrix and l is the number of words in the vocabulary. The plot shows that performance stays stable for inductive inference. With transductive inference, however, performance peaks when m is very close to the vocabulary size. This shows that, in order to achieve good results, it is not necessary to remove a large number of columns from the original sentence-term matrix. The plot gives further evidence that the inductive setting is not ideal for the AMR-based approach. For the TF-IDF reweighting, the systems that show a considerably different behavior are JAMR with linear and RBF kernels, where we see clear peaks in the transductive case. For PageRank, the AMREager systems with linear and RBF kernels also follow this trend. In general, the polynomial kernel is the one least affected by this variable. Table 2 shows that our best result for the transductive case, which we obtain with JAMR and PageRank, outperforms the current state of the art for paraphrase detection in the transductive setting. This is not true for the inductive case, confirming that the AMR-based LSA approach is better suited to the former setting.", "cite_spans": [], "ref_spans": [ { "start": 953, "end": 960, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "System", "sec_num": null }, { "text": "We described an approach to incorporate the output of an AMR parser into the detection of paraphrases. Our method works by merging the two graphs that need to be tested for a paraphrase relation, and then re-weighting a sentence-term matrix by the PageRank values of the nodes in the merged graph. We find that our method gives significant improvements over the state of the art in paraphrase detection in the transductive setting, showing that AMR is indeed helpful for this task.
We further show that the inductive setting is instead not ideal for this type of approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "We are encouraged by the results, and believe that paraphrase detection can also be used as a proxy test for the performance of an AMR parser: if an AMR parser is close to canonicalizing language, it should be of significant help in detecting", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "We note that while the NLP community has largely switched to the use of neural networks for classification problems, in our case support vector machines prove to be a simpler and more efficient solution. They also tend to generalize better than neural networks, as the number of features we use is not large.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "JAMR is available from https://github.com/jflanigan/jamr. 3 AMREager is available from http://cohort.inf.ed.ac.uk/amreager.html.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors would like to thank the three anonymous reviewers for their helpful comments. This research was supported by a grant from Bloomberg, a grant from Huawei Technologies and by the EU H2020 project SUMMA, under grant agreement 688139.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": " Table 1 : Baseline results for paraphrase detection with AMR and with bag-of-words (BOW). "linear," "poly" and "rbf" denote the kernel which is used with a support vector machine classifier. "Smatch" denotes the use of the additional graph similarity feature and "BOC" the use of the additional Jaccard score on the bag of concepts. Best result in each column is in bold. paraphrase relations.
In our experiments, the overall best result was achieved by JAMR. More generally, our results show that JAMR was more helpful in the transductive setting and in the first set of experiments when using the Smatch score only, while AMREager wins the comparison in the inductive case, as well as in the first set of experiments when using both the Smatch score and the bag of concepts score as additional features.", "cite_spans": [], "ref_spans": [ { "start": 1, "end": 8, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The evaluation of sentence similarity measures", "authors": [ { "first": "Palakorn", "middle": [], "last": "Achananuparp", "suffix": "" }, { "first": "Xiaohua", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Xiajiong", "middle": [], "last": "Shen", "suffix": "" } ], "year": 2008, "venue": "Proceedings of DaWaK", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Palakorn Achananuparp, Xiaohua Hu, and Xiajiong Shen. 2008. The evaluation of sentence similarity measures. In Proceedings of DaWaK.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Broad-coverage CCG semantic parsing with AMR", "authors": [ { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2015, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG semantic parsing with AMR.
In Proceedings of EMNLP.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Abstract meaning representation for sembanking", "authors": [ { "first": "Laura", "middle": [], "last": "Banarescu", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Bonial", "suffix": "" }, { "first": "Shu", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Madalina", "middle": [], "last": "Georgescu", "suffix": "" }, { "first": "Kira", "middle": [], "last": "Griffitt", "suffix": "" }, { "first": "Ulf", "middle": [], "last": "Hermjakob", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Linguistic Annotation Workshop.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Distributional memory: A general framework for corpus-based semantics", "authors": [ { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Lenci", "suffix": "" } ], "year": 2010, "venue": "Computational Linguistics", "volume": "36", "issue": "4", "pages": "673--721", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Baroni and Alessandro Lenci. 2010. Distributional memory: A general framework for corpus-based semantics.
Computational Linguistics 36(4):673-721.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "RIGA at SemEval-2016 task 8: Impact of smatch extensions and character-level neural translation on AMR parsing accuracy", "authors": [ { "first": "Guntis", "middle": [], "last": "Barzdins", "suffix": "" }, { "first": "Didzis", "middle": [], "last": "Gosko", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1604.01278" ] }, "num": null, "urls": [], "raw_text": "Guntis Barzdins and Didzis Gosko. 2016a. RIGA at SemEval-2016 task 8: Impact of smatch extensions and character-level neural translation on AMR parsing accuracy. arXiv preprint arXiv:1604.01278.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Riga: Impact of smatch extensions and character-level neural translation on AMR parsing accuracy", "authors": [ { "first": "Guntis", "middle": [], "last": "Barzdins", "suffix": "" }, { "first": "Didzis", "middle": [], "last": "Gosko", "suffix": "" } ], "year": 2016, "venue": "Proceedings of SemEval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guntis Barzdins and Didzis Gosko. 2016b. Riga: Impact of smatch extensions and character-level neural translation on AMR parsing accuracy. In Proceedings of SemEval.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Semantic parsing via paraphrasing", "authors": [ { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing.
In Proceedings of ACL.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Recognising textual entailment with logical inference", "authors": [ { "first": "Johan", "middle": [], "last": "Bos", "suffix": "" }, { "first": "Katja", "middle": [], "last": "Markert", "suffix": "" } ], "year": 2005, "venue": "Proceedings of HLT/EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johan Bos and Katja Markert. 2005. Recognising textual entailment with logical inference. In Proceedings of HLT/EMNLP.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Smatch: an evaluation metric for semantic feature structures", "authors": [ { "first": "Shu", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2013, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In Proceedings of ACL.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The pascal recognising textual entailment challenge", "authors": [ { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Glickman", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Magnini", "suffix": "" } ], "year": 2006, "venue": "Machine learning challenges. evaluating predictive uncertainty, visual object classification, and recognising tectual entailment", "volume": "", "issue": "", "pages": "177--190", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Machine learning challenges.
Evaluating predictive uncertainty, visual object classification, and recognising tectual entailment, Springer, pages 177-190.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Crosslingual abstract meaning representation parsing", "authors": [ { "first": "Marco", "middle": [], "last": "Damonte", "suffix": "" }, { "first": "Shay", "middle": [ "B" ], "last": "Cohen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Damonte and Shay B. Cohen. 2018. Cross-lingual abstract meaning representation parsing. In Proceedings of NAACL.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "An incremental parser for abstract meaning representation", "authors": [ { "first": "Marco", "middle": [], "last": "Damonte", "suffix": "" }, { "first": "Shay", "middle": [ "B" ], "last": "Cohen", "suffix": "" }, { "first": "Giorgio", "middle": [], "last": "Satta", "suffix": "" } ], "year": 2017, "venue": "Proceedings of EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Damonte, Shay B. Cohen, and Giorgio Satta. 2017. An incremental parser for abstract meaning representation. In Proceedings of EACL.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources", "authors": [ { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" } ], "year": 2004, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources.
In Proceedings of COLING.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "CMU at SemEval-2016 task 8: Graph-based AMR parsing with infinite ramp loss. Proceedings of SemEval", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Flanigan", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Flanigan, Chris Dyer, Noah A. Smith, and Jaime Carbonell. 2016. CMU at SemEval-2016 task 8: Graph-based AMR parsing with infinite ramp loss. In Proceedings of SemEval.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A discriminative graph-based parser for the abstract meaning representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Flanigan", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "Jaime", "middle": [ "G" ], "last": "Carbonell", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Flanigan, Sam Thomson, Jaime G. Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discriminative graph-based parser for the abstract meaning representation.
In Proceedings of ACL.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The language of thought", "authors": [ { "first": "Jerry", "middle": [ "A" ], "last": "Fodor", "suffix": "" } ], "year": 1975, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jerry A. Fodor. 1975. The language of thought, volume 5. Harvard University Press.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Learning by transduction", "authors": [ { "first": "Alexander", "middle": [], "last": "Gammerman", "suffix": "" }, { "first": "Volodya", "middle": [], "last": "Vovk", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Vapnik", "suffix": "" } ], "year": 1998, "venue": "Proceedings of AUAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Gammerman, Volodya Vovk, and Vladimir Vapnik. 1998. Learning by transduction. In Proceedings of AUAI.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Noise reduction and targeted exploration in imitation learning for abstract meaning representation parsing", "authors": [ { "first": "James", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Vlachos", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Naradowsky", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Goodman, Andreas Vlachos, and Jason Naradowsky. 2016. Noise reduction and targeted exploration in imitation learning for abstract meaning representation parsing.
In Proceedings of ACL.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Modeling sentences in the latent space", "authors": [ { "first": "Weiwei", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" } ], "year": 2012, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weiwei Guo and Mona Diab. 2012. Modeling sentences in the latent space. In Proceedings of ACL.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Methods for using textual entailment in open-domain question answering", "authors": [ { "first": "Sanda", "middle": [], "last": "Harabagiu", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Hickl", "suffix": "" } ], "year": 2006, "venue": "Proceedings of COLING/ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sanda Harabagiu and Andrew Hickl. 2006. Methods for using textual entailment in open-domain question answering. In Proceedings of COLING/ACL.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Discriminative improvements to distributional sentence similarity", "authors": [ { "first": "Yangfeng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" } ], "year": 2013, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yangfeng Ji and Jacob Eisenstein. 2013. Discriminative improvements to distributional sentence similarity.
In Proceedings of EMNLP.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "From treebank to propbank", "authors": [ { "first": "Paul", "middle": [], "last": "Kingsbury", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2002, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Kingsbury and Martha Palmer. 2002. From treebank to PropBank. In Proceedings of LREC.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Neural amr: Sequence-to-sequence models for parsing and generation", "authors": [ { "first": "Ioannis", "middle": [], "last": "Konstas", "suffix": "" }, { "first": "Srinivasan", "middle": [], "last": "Iyer", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Yatskar", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.08381" ] }, "num": null, "urls": [], "raw_text": "Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural AMR: Sequence-to-sequence models for parsing and generation. arXiv preprint arXiv:1704.08381.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "An introduction to latent semantic analysis", "authors": [ { "first": "Thomas", "middle": [ "K" ], "last": "Landauer", "suffix": "" }, { "first": "Peter", "middle": [ "W" ], "last": "Foltz", "suffix": "" }, { "first": "Darrell", "middle": [], "last": "Laham", "suffix": "" } ], "year": 1998, "venue": "Discourse Processes", "volume": "25", "issue": "", "pages": "259--284", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas K. Landauer, Peter W. Foltz, and Darrell Laham. 1998. An introduction to latent semantic analysis.
Discourse Processes 25:259-284.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Combining distributional and logical semantics", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2013, "venue": "Computational Linguistics", "volume": "1", "issue": "", "pages": "179--192", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Lewis and Mark Steedman. 2013. Combining distributional and logical semantics. Transactions of the Association for Computational Linguistics 1:179-192.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Toward abstractive summarization using semantic representations", "authors": [ { "first": "Fei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Flanigan", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "Norman", "middle": [], "last": "Sadeh", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fei Liu, Jeffrey Flanigan, Sam Thomson, Norman Sadeh, and Noah A. Smith. 2015. Toward abstractive summarization using semantic representations.
In Proceedings of NAACL.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "The Stanford CoreNLP natural language processing toolkit", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [ "J" ], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "McClosky", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of ACL.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Semeval-2016 task 8: Meaning representation parsing", "authors": [ { "first": "Jonathan", "middle": [], "last": "", "suffix": "" } ], "year": 2016, "venue": "Proceedings of SemEval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan May. 2016. SemEval-2016 task 8: Meaning representation parsing. In Proceedings of SemEval.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Semeval-2017 task 9: Abstract meaning representation parsing and generation", "authors": [ { "first": "Jonathan", "middle": [], "last": "May", "suffix": "" }, { "first": "Jay", "middle": [], "last": "Priyadarshi", "suffix": "" } ], "year": 2017, "venue": "Proceedings of SemEval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan May and Jay Priyadarshi. 2017. SemEval-2017 task 9: Abstract meaning representation parsing and generation.
In Proceedings of SemEval.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Corpus-based and knowledge-based measures of text semantic similarity", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Courtney", "middle": [], "last": "Corley", "suffix": "" }, { "first": "Carlo", "middle": [], "last": "Strapparava", "suffix": "" } ], "year": 2006, "venue": "Proceedings of AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea, Courtney Corley, Carlo Strapparava, et al. 2006. Corpus-based and knowledge-based measures of text semantic similarity. In Proceedings of AAAI.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Composition in distributional models of semantics", "authors": [ { "first": "Jeff", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2010, "venue": "Cognitive science", "volume": "34", "issue": "8", "pages": "1388--1429", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive science 34(8):1388-1429.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Incrementality in deterministic dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2004, "venue": "Workshop on Incremental Parsing: Bringing Engineering and Cognition Together", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre. 2004. Incrementality in deterministic dependency parsing.
Workshop on Incremental Parsing: Bringing Engineering and Cognition Together.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "The PageRank citation ranking: Bringing order to the web", "authors": [ { "first": "Lawrence", "middle": [], "last": "Page", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Brin", "suffix": "" }, { "first": "Rajeev", "middle": [], "last": "Motwani", "suffix": "" }, { "first": "Terry", "middle": [], "last": "Winograd", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "A synchronous hyperedge replacement grammar based approach for AMR parsing", "authors": [ { "first": "Xiaochang", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Linfeng", "middle": [], "last": "Song", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2015, "venue": "Proceedings of CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaochang Peng, Linfeng Song, and Daniel Gildea. 2015. A synchronous hyperedge replacement grammar based approach for AMR parsing.
Proceedings of CoNLL.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Using syntax-based machine translation to parse English into abstract meaning representation", "authors": [ { "first": "Michael", "middle": [], "last": "Pust", "suffix": "" }, { "first": "Ulf", "middle": [], "last": "Hermjakob", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "May", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1504.06665" ] }, "num": null, "urls": [], "raw_text": "Michael Pust, Ulf Hermjakob, Kevin Knight, Daniel Marcu, and Jonathan May. 2015. Using syntax-based machine translation to parse English into abstract meaning representation. arXiv preprint arXiv:1504.06665.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Parser for abstract meaning representation using learning to search", "authors": [ { "first": "Sudh", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Yogarshi", "middle": [], "last": "Vyas", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daume", "suffix": "III" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1510.07586" ] }, "num": null, "urls": [], "raw_text": "Sudh Rao, Yogarshi Vyas, Hal Daume III, and Philip Resnik. 2015. Parser for abstract meaning representation using learning to search.
arXiv:1510.07586.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Semi-supervised recursive autoencoders for predicting sentiment distributions", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Eric", "middle": [ "H" ], "last": "Huang", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2011, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of EMNLP.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "From frequency to meaning: Vector space models of semantics", "authors": [ { "first": "Peter", "middle": [ "D" ], "last": "Turney", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2010, "venue": "Journal of artificial intelligence research", "volume": "37", "issue": "", "pages": "141--188", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter D Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of artificial intelligence research 37:141-188.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "An AMR parser for English, French, German, Spanish and Japanese and a new AMR-annotated corpus", "authors": [ { "first": "Lucy", "middle": [], "last": "Vanderwende", "suffix": "" }, { "first": "Arul", "middle": [], "last": "Menezes", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Quirk", "suffix": "" } ], "year": 2015, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lucy Vanderwende, Arul Menezes, and Chris Quirk. 2015. An AMR parser for English, French, German, Spanish and Japanese and a new AMR-annotated corpus. Proceedings of NAACL-HLT.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Boosting transition-based AMR parsing with refined actions and auxiliary analyzers", "authors": [ { "first": "Chuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Pradhan", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015. Boosting transition-based AMR parsing with refined actions and auxiliary analyzers. Proceedings of ACL.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Robust subgraph generation improves abstract meaning representation parsing", "authors": [ { "first": "Keenon", "middle": [], "last": "Werling", "suffix": "" }, { "first": "Gabor", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1506.03139" ] }, "num": null, "urls": [], "raw_text": "Keenon Werling, Gabor Angeli, and Christopher Manning. 2015. Robust subgraph generation improves abstract meaning representation parsing.
arXiv preprint arXiv:1506.03139.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "AMR parsing with an incremental joint model", "authors": [ { "first": "Junsheng", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Feiyu", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Hans", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Weiguang", "middle": [], "last": "QU", "suffix": "" }, { "first": "Ran", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yanhui", "middle": [], "last": "Gu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junsheng Zhou, Feiyu Xu, Hans Uszkoreit, Weiguang QU, Ran Li, and Yanhui Gu. 2016. AMR parsing with an incremental joint model. In Proceedings of EMNLP.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "chased nsubj ( m / mouse det ( t1 / The ) ) dobj ( c / cat det ( t2 / the ) ) ) Figure 3: An example of a dependency tree (a) converted to an AMR graph (b)." }, "TABREF1": { "content": "
6.3 Dimensionality of the Truncated Matrix
Figure 4 shows how performance changes as a function of the number of dimensions used in the truncated matrix.
", "type_str": "table", "text": "Comparison of our results with previous work (\"NR\" stands for \"not reported\"). All work mentioned above was done in the inductive setting, except forJi and Eisenstein (2013), which, like us, was done in both settings.", "num": null, "html": null } } } }