|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:38:54.257947Z" |
|
}, |
|
"title": "Referenceless Parsing-Based Evaluation of AMR-to-English Generation", |
|
"authors": [ |
|
{ |
|
"first": "Emma", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Georgetown University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Schneider", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Georgetown University", |
|
"location": {} |
|
}, |
|
"email": "nathan.schneider@georgetown.edu" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Reference-based automatic evaluation metrics are notoriously limited for NLG due to their inability to fully capture the range of possible outputs. We examine a referenceless alternative: evaluating the adequacy of English sentences generated from Abstract Meaning Representation (AMR) graphs by parsing into AMR and comparing the parse directly to the input. We find that the errors introduced by automatic AMR parsing substantially limit the effectiveness of this approach, but a manual editing study indicates that as parsing improves, parsing-based evaluation has the potential to outperform most reference-based metrics.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Reference-based automatic evaluation metrics are notoriously limited for NLG due to their inability to fully capture the range of possible outputs. We examine a referenceless alternative: evaluating the adequacy of English sentences generated from Abstract Meaning Representation (AMR) graphs by parsing into AMR and comparing the parse directly to the input. We find that the errors introduced by automatic AMR parsing substantially limit the effectiveness of this approach, but a manual editing study indicates that as parsing improves, parsing-based evaluation has the potential to outperform most reference-based metrics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Natural language generation (NLG) is notoriously difficult to evaluate well due to its one-to-many nature: thanks to the infinite capacity of human language, any given meaning can be expressed in a potentially unlimited number of ways. Thus, listing the 'right answer(s)' and comparing a system's output against such a list, which is a possible evaluation method for many other tasks, is fundamentally limited for NLG. Nevertheless, automatic evaluation of NLG has traditionally been dominated by reference-based metrics like BLEU (Papineni et al., 2002) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 531, |
|
"end": 554, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In recent years, however, referenceless evaluation metrics have been gaining popularity in NLG and related fields. In this paper we examine the potential and limitations of one such approach: using semantic parsing to compare a generated sentence to a meaning representation from which it was generated, in order to measure semantic adequacy. We focus on generation of English text from Abstract Meaning Representation graphs (\"AMRs\"; Banarescu et al., 2013) . Figure 1 shows an example of an AMR, which represents the meaning of a sentence. AMR does not represent certain morphological and syntactic details such as tense, number, definiteness, and word order, so the graph shown could represent a number of alternate sentences, such as:", |
|
"cite_spans": [ |
|
{ |
|
"start": 435, |
|
"end": 458, |
|
"text": "Banarescu et al., 2013)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 461, |
|
"end": 469, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Oleh Belokolos, the Ukrainian diplomat in Kenya, states: \u2022 A Ukrainian diplomat in Kenya named Oleh Belokolos has made a statement. Ideally, then, a sentence generated from an AMR graph should be judged on how well it expresses the elements of meaning given in the graph, ignoring the details that are not included.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We examine the hypothesis that we can measure the semantic adequacy of a sentence generated from an AMR by performing the reverse operation-namely, parsing the generated sentence into AMR-and measuring the similarity of the parsed AMR graph to the original. In essence, this idea exploits complementarity of English-to-AMR parsers and AMR-to-English generators being evaluated. Assuming an accurate parse, we would ex-Var Description r Reference sentence a Gold AMR, created from r g Sentence automatically generated from a p AMR automatically parsed from g p \u2032", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Manually-corrected version of automatic parses p pect this to be a good measure of the adequacy of the generated sentence, since a sentence that accurately expresses the meaning in the original AMR should have the same AMR. We further formalize this approach in \u00a72. As discussed in \u00a73, this method has also been suggested by Opitz and Frank (2021) ; we contribute new analyses of its validity, in particular by measuring its correlation with human adequacy judgments collected by Manning et al. (2020) . We find that errors made by an automatic AMR parser substantially limit the quality of parsing-based evaluation as a proxy for human evaluation, resulting in a lower correlation with adequacy scores than many reference-based metrics ( \u00a75). To approximate an upper bound for the potential of this evaluation approach with improved parsing, we conduct an additional study using manually-corrected AMR parses; we find that this substantially improves the quality of the metric ( \u00a76).", |
|
"cite_spans": [ |
|
{ |
|
"start": 325, |
|
"end": 347, |
|
"text": "Opitz and Frank (2021)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 480, |
|
"end": 501, |
|
"text": "Manning et al. (2020)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "AMR-to-English generation is the task of taking an input AMR graph a and generating a sentence g expressing the meaning content of the AMR in English. Ideally, we would evaluate generation by comparing g directly to a to determine how well g expresses the meaning in a; this is what the human annotators whose judgments we use (see \u00a74) did. However, we don't know of an existing way to directly compare a sentence to an AMR graph; instead, our metrics tend to compare two items of the same type. Reference-based metrics compare the generated sentence g to r, the English sentence for which the AMR was created, and which is typically used as the sole reference in evaluation. We analyze the hypothesis that we can more accurately capture the details relevant to AMR by comparing AMRs to each other: specifically, comparing a to either p, an automatic parse of g, or to p \u2032 , a manually-corrected version of p. Our notation is summarized in table 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing-Based Evaluation", |
|
"sec_num": "2" |
|
}, |
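
{

"text": "As an illustration, this pipeline can be sketched in a few lines of Python; parse_to_amr and compute_smatch are hypothetical wrappers (standing in for one of the parsers in \u00a75.1 and for the Smatch script), not actual APIs:\n\ndef parsing_based_score(a, g, parse_to_amr, compute_smatch):\n    # p: AMR automatically parsed from the generated sentence g\n    p = parse_to_amr(g)\n    # compare the parse directly to the input AMR a; the reference r is never consulted\n    return compute_smatch(a, p)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Parsing-Based Evaluation",

"sec_num": "2"

},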
|
{ |
|
"text": "The evaluation method we analyze in this paper is closely related to the MF b metric suggested by Opitz and Frank (2021) , which combines a measure of meaning preservation, M, with a language model-based measure of grammatical form, F. Their meaning preservation metric, M, assigns a score to a generated sentence by parsing it into AMR and computing the parse's similarity to the gold AMR. They use the AMR parser by Cai and Lam (2020) and the S 2 match similarity metric; we experiment with these as well as other options for both parser and metric. While they perform a number of pilot experiments to test the robustness of MF b , such as its performance with different parsers, Opitz and Frank do not test the correlation of their metric with human judgments; thus, the work presented here adds to our understanding of the validity of this type of metric as a proxy for human evaluation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 120, |
|
"text": "Opitz and Frank (2021)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 418, |
|
"end": 436, |
|
"text": "Cai and Lam (2020)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "As a baseline, we also compare the results of parsing-based evaluation with several referencebased metrics, including those that have traditionally been used to evaluate AMR generation as well as newer metrics that have shown promising results for NLG.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We use human judgment data from Manning et al. (2020) , consisting of judgments on a total of 600 sentences: 100 human-authored reference sentences and their corresponding AMRs, and 500 sentences automatically generated from these AMRs by 5 different systems. When comparing to reference-based automatic metrics, we do not score the reference sentences themselves since, with only one reference per AMR, these would trivially receive perfect scores on such metrics.", |
|
"cite_spans": [ |
|
{ |
|
"start": 32, |
|
"end": 53, |
|
"text": "Manning et al. (2020)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Each sentence has a score on a scale of 0-100, for each of fluency and adequacy, averaged over two annotators. We compare automatic metrics to these judgments, and particularly to the adequacy judgments, since we are primarily interested in parsing-based evaluation as a proxy for adequacy evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Annotators additionally provided binary judgments on whether information was added or omitted, and whether the sentence was incomprehensible; we use the latter of these judgments in \u00a76 to determine which generated sentences to manually edit the parses of.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "This section describes experiments with variations on the automatic version of the parsing based metric; that is, the use of similarity metrics comparing the automatic parse p to the gold AMR a. We experiment with different AMR parsers ( \u00a75.1) and variations on the Smatch similarity metric ( \u00a75.2) and measure the correlation to human judgments of adequacy ( \u00a75.3).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment 1: Automatic Metrics", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We compare gold AMRs to AMR parses of the generated sentences. This includes using three different automatic English-to-AMR parsers, described below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsers", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "JAMR. The JAMR parser 1 (Flanigan et al., 2014 (Flanigan et al., , 2016 is an early AMR parser; we use it as a baseline to compare against the more recent, higher-accuracy parsers. The JAMR parser uses a semi-Markov model to identify concepts, followed by a graph variant on Maximum Spanning Tree algorithms to identify the relations between concepts. We used the 2016 version, which achieved a Smatch score of 67 on the LDC2015E86 dataset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 46, |
|
"text": "(Flanigan et al., 2014", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 47, |
|
"end": 71, |
|
"text": "(Flanigan et al., , 2016", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsers", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "LYU-TITOV. While most AMR parsers first train an aligner to align AMR nodes with words in a sentence prior to training the parser itself, Lyu and Titov (2018) 2 treat alignments as latent variables in a joint probabilistic model for identifying concepts, relations, and alignments. This parser achieved a Smatch score of 73.7 on LDC2015E86 and 74.4 on LDC2016E25, which at the time was state-of-the-art. 20203 was the state of the art in AMR parsing as of 2020, with a Smatch score of 80.2 on LDC2017T10. This transformerbased parser uses iterative inference to determine which part of the input sentence to parse and where to add it to the output graph, without requiring explicit alignments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsers", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Parser performance. We evaluate each parser's accuracy on our sample of 100 sentences by computing Smatch(a, p(r)), i.e., the similarity between the gold AMRs in the sample and their corresponding parsed references. We find that CAI-LAM performs the best with a Smatch score of 84.9, followed by 76.3 for LYU-TITOV and 71.1 for JAMR.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CAI-LAM. Cai and Lam", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/ChunchuanLv/AMR _ AS _ GRAPH _ PREDICTION 3 Code: https://github.com/jcyk/AMR-gs", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CAI-LAM. Cai and Lam", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The second piece needed for parsing-based evaluation is a way to quantify the similarity of an AMR parse to the original AMR.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarity Metrics", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The standard metric for comparing two AMRssuch as to evaluate the quality of an AMR parser or inter-annotator-agreement between human parsesis Smatch . The Smatch score compares triples between two AMR graphs, where each triple is an edge of the graph (a semantic relationship) combined with each of the nodes it connects. For a given pair of AMRs, the Smatch score is the maximum F1-score of triples which can be obtained with a one-to-one mapping of variables between the two graphs. 4 We also experimented with a small variation on the original Smatch. Smatch computes the similarity between two different AMR graphs based on inferred alignments between the two graphs' concepts. Since checking all possible mappings is computationally intractable, it starts with one 'smart' initialization, then retries with random initializations; the default is four random restarts. This means that Smatch scores are nondeterministic; when running twice on the same pair of AMRs, we sometimes got different scores. To mitigate this effect, we made two changes: First, we increased the number of restarts to 100 to increase the chance that the best mapping would be found, while still maintaining a reasonable runtime. Second, we seeded the random function in the Smatch script to make the results reproducible. In table 3, we refer to the default Smatch as 'Smatch 4 ', while the variation with a seed and 100 restarts is 'Smatch 100 +seed'.", |
|
"cite_spans": [ |
|
{ |
|
"start": 486, |
|
"end": 487, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarity Metrics", |
|
"sec_num": "5.2" |
|
}, |
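
{

"text": "For intuition, the triple-matching computation can be written as a brute-force toy that is exact only for very small graphs; the real smatch.py instead hill-climbs from random initializations, which is what makes the default nondeterministic:\n\nfrom itertools import permutations\n\ndef toy_smatch(triples_a, triples_b, vars_a, vars_b):\n    # each triple is (relation, source, target); the two graphs use disjoint variable names\n    if len(vars_a) > len(vars_b):  # F1 is symmetric, so always map the smaller variable set\n        triples_a, triples_b, vars_a, vars_b = triples_b, triples_a, vars_b, vars_a\n    best = 0\n    for image in permutations(sorted(vars_b), len(vars_a)):  # every one-to-one variable mapping\n        m = dict(zip(sorted(vars_a), image))\n        mapped = {(rel, m.get(s, s), m.get(t, t)) for rel, s, t in triples_a}\n        best = max(best, len(mapped & set(triples_b)))\n    if best == 0:\n        return 0.0\n    prec, rec = best / len(triples_a), best / len(triples_b)\n    return 2 * prec * rec / (prec + rec)\n\n# '(w / want-01 :ARG0 (b / boy))' vs. an isomorphic parse with different variable names\ngold = [('instance', 'w', 'want-01'), ('instance', 'b', 'boy'), ('ARG0', 'w', 'b')]\nparse = [('instance', 'x', 'want-01'), ('instance', 'y', 'boy'), ('ARG0', 'x', 'y')]\nprint(toy_smatch(gold, parse, {'w', 'b'}, {'x', 'y'}))  # 1.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Similarity Metrics",

"sec_num": "5.2"

},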
|
{ |
|
"text": "More recently, Opitz et al. (2020) analyzed both Smatch and an alternative metric, SemBleu (Song and Gildea, 2019) , and proposed a new variant of Smatch, S 2 match, which conforms to desirable principles better than either previous metric. In particular, S 2 match introduces the concept of embedding-based semantic gradable semantic similarity by allowing for soft matches between concepts. While the primary advantage of this variant is for tasks with more variation in wording, such as measuring the similarity of paraphrases, it could also be advantageous in our setting-for example, to penalize AMR generation systems that represent a concept with the wrong word less if it is a se- mantically related one, or to mitigate the effects of certain parser errors. Thus, we also experiment with computing the S 2 match similarity of parsed sentences to the original AMRs. 5", |
|
"cite_spans": [ |
|
{ |
|
"start": 91, |
|
"end": 114, |
|
"text": "(Song and Gildea, 2019)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarity Metrics", |
|
"sec_num": "5.2" |
|
}, |
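
{

"text": "A toy illustration of this soft-matching idea, using a tiny hand-specified embedding table rather than the Heidelberg-NLP implementation:\n\nimport numpy as np\n\n# hypothetical 2-d 'embeddings', just to show the mechanism\nEMB = {'shout-01': np.array([0.9, 0.1]), 'yell-01': np.array([0.85, 0.2]), 'boy': np.array([0.1, 0.9])}\n\ndef soft_concept_match(c1, c2, threshold=0.5):\n    # exact matches keep full credit; otherwise give partial credit for high cosine similarity\n    if c1 == c2:\n        return 1.0\n    v1, v2 = EMB[c1], EMB[c2]\n    cos = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))\n    return cos if cos >= threshold else 0.0\n\nprint(round(soft_concept_match('shout-01', 'yell-01'), 2))  # partial credit instead of 0\nprint(soft_concept_match('shout-01', 'boy'))  # 0.0: below the similarity threshold",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Similarity Metrics",

"sec_num": "5.2"

},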
|
{ |
|
"text": "The primary statistic of interest for this study is the sentence-level correlation between a proposed metric and human judgments, particularly those for adequacy. We measure this with Spearman's Rho correlation. Following Manning et al. 2020, we compare several popular reference-based metrics; table 2 reports the correlations for the 5 metrics they used: BLEU (Papineni et al., 2002) , METEOR (Banerjee and Lavie, 2005) , TER (Snover et al., 2006) , ChrF++ (Popovi\u0107, 2017) , and BERTScore (Zhang et al., 2020) . We add the results of one newer metric, BLEURT (Sellam et al., 2020) . Of these, BLEURT performs the best by this measure with a correlation of 0.69. BLEU, the most popular metric for this task, has a correlation of 0.52. Table 3 shows the correlation with adequacy for each variant of the parser-based metric, combining the three AMR parsers and three similarity metrics used. Notably, even the highest correlations here underperform those achieved by BLEU, METEOR, BERTScore, and BLEURT.", |
|
"cite_spans": [ |
|
{ |
|
"start": 362, |
|
"end": 385, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 395, |
|
"end": 421, |
|
"text": "(Banerjee and Lavie, 2005)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 428, |
|
"end": 449, |
|
"text": "(Snover et al., 2006)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 459, |
|
"end": 474, |
|
"text": "(Popovi\u0107, 2017)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 491, |
|
"end": 511, |
|
"text": "(Zhang et al., 2020)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 561, |
|
"end": 582, |
|
"text": "(Sellam et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 736, |
|
"end": 743, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.3" |
|
}, |
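
{

"text": "Concretely, the sentence-level correlation can be computed with SciPy; the scores below are toy values rather than our data:\n\nfrom scipy.stats import spearmanr\n\n# metric_scores[i] and adequacy[i] refer to the same generated sentence (toy values)\nmetric_scores = [0.41, 0.78, 0.55, 0.90, 0.30]  # e.g., Smatch(a, p) per sentence\nadequacy = [35, 80, 60, 100, 20]  # human adequacy judgments on a 0-100 scale\nrho, pvalue = spearmanr(metric_scores, adequacy)\nprint(round(rho, 2))  # 1.0 here, since the two rankings agree perfectly",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Results",

"sec_num": "5.3"

},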
|
{ |
|
"text": "As expected, the correlation increases with parser quality, indicating that parsers that have higher accuracy on human-authored sentences also do better with generated sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "For each parser, there is very little difference between the different similarity metrics. The similarity between Smatch 4 and Smatch 100 +seed is expected, since these are separated only by minor implementation differences. The lack of substantial improvement when using S 2 match is probably because it is rare for the generated sentences to contain concepts that are different but semantically similar to those in the gold AMR. 5 We calculate S 2 match using https://github.com/ Heidelberg-NLP/amr-metric-suite. Table 3 : Sentence-level correlations with human judgments for parsing-based metrics, with different choices of parser and similarity metric.", |
|
"cite_spans": [ |
|
{ |
|
"start": 431, |
|
"end": 432, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 515, |
|
"end": 522, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Since none of the similarity metrics are clearly stronger than the others based on correlations, we choose Smatch 100 +seed as the best for more conceptual reasons: it is more reproducible, and unlike S 2 match, does not rely on embeddings. The use of additional resources seems unjustified in this case if it does not improve performance, especially given concerns that using embeddings in an evaluation metric makes it less transparent and more arbitrary (results can vary depending on specific choice of language model) than a simpler method.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Thus, for the following experiments, we use the CAI-LAM parser combined with Smatch 100 +seed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Even a state-of-the-art AMR parser is of course not perfect, and may struggle more with parsing automatically-generated sentences than the humanauthored ones it is designed for. The potential for parser error is a major limitation of the proposed approach; evaluating the parse p against gold AMR a can only be a good measure of g's relationship to a if p is a sufficiently accurate parse of g. Thus, to get a better sense of the effect that parsing errors can have on this metric even when using a SOTA parser, and of a rough upper bound for how well the metric could work in the future as parsing improves, we also manually edited a sample of the parses p to create alternate parses, p \u2032 , which better reflect the meaning expressed in the generated sentences g, and use Smatch to compare p \u2032 to a.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment 2: Manually-Edited Parses", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Since the CAI-LAM parser is the strongest automatic parser, we used its parses as a starting point. For a given generated sentence g or reference sentence r, we compared the sentence to the automatic parse p(g) or p(r), and edited the parse to represent, as accurately as possible, the meaning expressed in the sentence. This sometimes included referring to the gold AMR a to ensure consistency between our annotations and the canonical representation of the same meanings. All edits were performed by the first author. However, this approach is limited by an assumption that the generated sentences have meanings in the same way that human-authored sentences do. In fact, many of the generated sentences in this dataset do not clearly and unambiguously express a particular meaning. Since it is essentially impossible to 'accurately' parse an incoherent sentence, we only edited the parses of sentences which were not marked as incomprehensible by either annotator in the human evaluation. Table 4 shows how many sentences from each system fit this criterion. Overall, we edited parses for 406 sentences, or 67.7% of the total sample of 600 sentences used in the human evaluation. Excluding references, we edited parses for 316 of the 500 automatically-generated sentences, or 63.2%. For the remaining sentences, we use the unedited automatic parse.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 991, |
|
"end": 998, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Even after filtering out those marked incomprehensible, we encountered many sentences that we found unclear or highly ambiguous; perhaps there were so many unclear sentences in the data that annotators reserved the annotation only for the most egregious cases. We did our best to interpret these sentences as well as we could, erring on the side of preserving the automatic parse's interpretation when it seemed as reasonable as an alternative. Nevertheless, this required some subjective judgment calls. An example of a difficult-to-annotate case is shown in table 5. The generated sentence, \"ukraine and ukraine in kenya stated -\", would probably never be produced by a human author, and it is difficult to assign a precise meaning to it. In this case, we decided to preserve the automatic parser's interpretation that it describes a statement being made by two entities: the country Ukraine, and a separate location, also known as Ukraine, that is in Kenya.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "6.1" |
|
}, |
|
|
{ |
|
"text": "Ukrainian diplomat in Kenya oleh belokolos statedg ukraine and ukraine in kenya statedp (c0 / state-01 :ARG0 (c1 / and :op1 (c2 / ukraine) :op2 (c3 / ukraine :location (c4 / country :name (c5 / name :op1 \"Kenya\") :wiki \"Kenya\")))) p \u2032 (c0 / state-01 :ARG0 (c1 / and :op1 (c2 / country :wiki \"Ukraine\" :name (n2 / name :op1 \"Ukraine\")) :op2 (c3 / location :name (n3 / name :op1 \"Ukraine\") :location (c4 / country :name (c5 / name :op1 \"Kenya\") :wiki \"Kenya\")))) Figure 2 : Scatterplot of number of capitalized words in the output compared to the reference for each system (jitter=0.5).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 461, |
|
"end": 469, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "As table 6 shows, the correlation of Smatch with adequacy improves substantially when using the edited parses, as opposed to the purely automatic ones. With edits, the correlation over all data increases to 0.66, better than most of the automatic metrics-the only exception is BLEURT, with a correlation of 0.69 (see table 2). It seemed possible that this improvement occurred simply because the edited sample of sentences, which generally received stronger human scores, largely had their Smatch scores improved by the editing process. Thus we also include the correlations on only the edited sample (INC0); the fact that the correlation improves within this sample demonstrates that editing does help distinguish better and worse sentences. Table 7 shows an example where editing helped substantially. The generated sentence fully expresses the information in the gold AMR, and received fluency and adequacy scores of 100-in fact, it differs from the reference only in capitalizationbut the automatic parse differs greatly from the gold AMR, resulting in a low Smatch score of 0.222. Parser errors in this case include a failure to recognize the two named entities in the sentence, as well as misidentifying the root concept as be-located-at-91 rather than organization. While the edited parse doesn't perfectly match the gold AMR, it corrects these major errors, resulting in a much higher Smatch score of 0.875. r The Institute for Science and International Security is a private research organization located in Washington. g the institute for science and international security is a private research organization located in washington . a (o / organization :mod (r / research-01) :ARG1-of (p / private-03) :domain (o2 / organization :wiki \"Institute _ for _ Science _ and _ International _ Security\" :name (n / name :op1 \"Institute\" :op2 \"for\" :op3 \"Science\" :op4 \"and\" :op5 \"International\" :op6 \"Security\")) :ARG1-of (l / locate-01", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 743, |
|
"end": 750, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": ":location (c / city :wiki \"Washington, _ D.C.\" :name (n2 / name :op1 \"Washington\")))) p (c0 / be-located-at-91 :ARG1 (c1 / institute :mod (c3 / organization :ARG1-of (c6 / private-03) :mod (c5 / research-01)) :topic (c4 / and :op1 (c7 / science) :op2 (c8 / security :mod (c9 / international)))) :ARG2 (c2 / washington)) p \u2032 (c0 / organization :domain (c4 / organization :name (c6 / name :op1 \"Institute\" :op2 \"for\" :op3 \"Science\" :op4 \"and\" :op5 \"International\" :op6 \"Security\") :wiki \"Institute _ for _ Science _ and _ International _ Security\") :location (c3 / city :name (c5 / name :op1 \"Washington\") :wiki \"Washington, _ D.C.\") :mod (c1 / private-03) :mod (c2 / research-01)) Table 7 : An example where parser error led to a low Smatch score on a high-adequacy sentence, which is improved substantially in the edited parse.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 680, |
|
"end": 687, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "A US-endorsed package of incentives to cease enriched uranium production g the us endorsed package of incentives to cease enriched uranium production . a (p / package :consist-of (t / thing :ARG0-of (i / incentivize-01 :ARG2 (c2 / cease-01 :ARG1 (p2 / produce-01 :ARG1 (u / uranium :ARG1-of (e2 / enrich-01)))))) :ARG1-of (e / endorse-01 :ARG0 (c / country :wiki \"United _ States\" :name (n / name :op1 \"US\")))) p (c0 / endorse-01 :ARG0 (c2 / we) :ARG1 (c1 / package-01 :ARG1 (c3 / incentivize-01 :ARG2 (c4 / cease-01 :ARG1 (c5 / produce-01 :ARG1 (c6 / uranium) :ARG1-of (c7 / enrich-01)))))) p \u2032 (c0 / endorse-01 :ARG0 (c2 / country :name (c5 / name :op1 \"US\") :wiki \"United _ States\") :ARG1 (c1 / package-01 :ARG1 (c3 / incentivize-01 :ARG2 (c4 / cease-01 :ARG1 (p1 / produce-01 :ARG1 (c6 / uranium :ARG1-of (c7 / enrich-01))))))) Table 8 : An example of a parser error due to lack of capitalization in the generated sentence. 'US', written as 'us' by the system, is treated as a form of the pronoun 'we' by the parser. Table 9 : Spearman's correlation of adequacy scores with Smatch scores based on unedited and edited parses. The two systems that produce capitalization are shown above the line; the three below output only lowercase.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 832, |
|
"end": 839, |
|
"text": "Table 8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1021, |
|
"end": 1028, |
|
"text": "Table 9", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "r", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A common parser error was failure to recognize named entities when they were not capitalized; examples of this are given in tables 7 and 8. As figure 2 shows, three of the systems never produce capitals in their output, while those of Konstas and Manning typically produce about as many capitals as are present in the reference. Thus, it seems likely that the systems that never produce capitals may be unfairly penalized by a parsing-based metric. Table 9 shows that when separating the data by system, there is no clear difference in the degree to which Smatch correlates with adequacy for systems that capitalize compared to those that do not. However, the difference between Smatch(a, p \u2032 ) and Smatch(a, p) is greater for the systems that do not produce capitals; that is, manual editing had a greater effect on the reliability of the parserbased metric on the systems which do not produce capitals than those that do.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 449, |
|
"end": 456, |
|
"text": "Table 9", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "It may be possible to overcome this particular limitation of the automatic parser by adding a preprocessing step that recognizes and capitalizes named entities, or by training the parser on more all-lowercase examples.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "7" |
|
}, |
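
{

"text": "One simple form such a step could take is a frequency-based truecaser learned from cased training text; this is only a sketch of one option, not something implemented by the systems or parsers studied here:\n\nfrom collections import Counter, defaultdict\n\ndef train_truecaser(cased_sentences):\n    # remember the most frequent surface form of each lowercased token\n    counts = defaultdict(Counter)\n    for sent in cased_sentences:\n        for tok in sent.split():\n            counts[tok.lower()][tok] += 1\n    return {low: forms.most_common(1)[0][0] for low, forms in counts.items()}\n\ndef truecase(sentence, table):\n    # restore the remembered casing before handing the sentence to the AMR parser\n    return ' '.join(table.get(tok, tok) for tok in sentence.split())\n\ntable = train_truecaser(['the US stated', 'a package endorsed by the US'])\nprint(truecase('the us endorsed package of incentives', table))  # 'the US endorsed package of incentives'",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Analysis",

"sec_num": "7"

},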
|
{ |
|
"text": "In this paper, we have explored the idea of evaluating AMR generation via AMR parsing and similarity metrics, using the human judgments of adequacy collected by Manning et al. (2020) to test the validity of possible variants of the parsingbased metric approach and compare them to existing reference-based metrics. We found that parser quality is a major factor affecting the performance of this evaluation approach: the better the AMR parser, the better the evaluation; however, even a state-of-the-art parser with an accuracy of 80+% on standard human-authored data has significant limitations for evaluating generated sentences, including a failure to recognize named entities in the absence of capitalization. We showed that when automatic AMR parses are manually edited to better reflect the meaning in generated sentences, this referenceless metric outperforms most popular automatic reference-based metrics, including BLEU and BERTScore (but not BLEURT).", |
|
"cite_spans": [ |
|
{ |
|
"start": 161, |
|
"end": 182, |
|
"text": "Manning et al. (2020)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "While the current reliance on manual editing for more reliable results may not be practical for evaluation, the results of this experiment indicate that fully-automatic parser-based metrics are likely to prove more reliable in the future as the state of the art in AMR parsing continues to improve, especially if newer AMR generation systems also more closely replicate human-authored data, such as by producing more human-like capitalization than the majority of systems tested here did.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Code: https://github.com/jflanigan/jamr 2 Code:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We compute Smatch using the smatch.py script found at https://github.com/jflanigan/jamr/tree/ Semeval-2016/scripts/smatch _ 2.0.2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Abstract Meaning Representation for sembanking", |
|
"authors": [ |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Banarescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Bonial", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shu", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Madalina", |
|
"middle": [], |
|
"last": "Georgescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kira", |
|
"middle": [], |
|
"last": "Griffitt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ulf", |
|
"middle": [], |
|
"last": "Hermjakob", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Schneider", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "178--186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In Proceedings of the 7th Linguis- tic Annotation Workshop and Interoperability with Discourse, pages 178-186, Sofia, Bulgaria. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments", |
|
"authors": [ |
|
{ |
|
"first": "Satanjeev", |
|
"middle": [], |
|
"last": "Banerjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Lavie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "65--72", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with im- proved correlation with human judgments. In Pro- ceedings of the ACL Workshop on Intrinsic and Ex- trinsic Evaluation Measures for Machine Transla- tion and/or Summarization, pages 65-72, Ann Ar- bor, Michigan. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "AMR parsing via graphsequence iterative inference", |
|
"authors": [ |
|
{ |
|
"first": "Deng", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wai", |
|
"middle": [], |
|
"last": "Lam", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1290--1301", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Deng Cai and Wai Lam. 2020. AMR parsing via graph- sequence iterative inference. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 1290-1301, Online. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Smatch: an evaluation metric for semantic feature structures", |
|
"authors": [ |
|
{ |
|
"first": "Shu", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "748--752", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shu Cai and Kevin Knight. 2013. Smatch: an evalua- tion metric for semantic feature structures. In Pro- ceedings of the 51st Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 748-752, Sofia, Bulgaria. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "CMU at SemEval-2016 task 8: Graph-based AMR parsing with infinite ramp loss", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Flanigan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaime", |
|
"middle": [], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1202--1206", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Flanigan, Chris Dyer, Noah A. Smith, and Jaime Carbonell. 2016. CMU at SemEval-2016 task 8: Graph-based AMR parsing with infinite ramp loss. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1202-1206, San Diego, California. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A discriminative graph-based parser for the Abstract Meaning Representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Flanigan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Thomson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaime", |
|
"middle": [], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1426--1436", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discrim- inative graph-based parser for the Abstract Mean- ing Representation. In Proceedings of the 52nd An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426- 1436, Baltimore, Maryland. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "AMR parsing as graph prediction with latent alignment", |
|
"authors": [ |
|
{ |
|
"first": "Chunchuan", |
|
"middle": [], |
|
"last": "Lyu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Titov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "397--407", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chunchuan Lyu and Ivan Titov. 2018. AMR parsing as graph prediction with latent alignment. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 397-407, Melbourne, Australia. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A human evaluation of AMR-to-English generation systems", |
|
"authors": [ |
|
{ |
|
"first": "Emma", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shira", |
|
"middle": [], |
|
"last": "Wein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Schneider", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 28th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4773--4786", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emma Manning, Shira Wein, and Nathan Schneider. 2020. A human evaluation of AMR-to-English gen- eration systems. In Proceedings of the 28th Inter- national Conference on Computational Linguistics, pages 4773-4786, Barcelona, Spain (Online). Inter- national Committee on Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Towards a decomposable metric for explainable evaluation of text generation from AMR", |
|
"authors": [ |
|
{ |
|
"first": "Juri", |
|
"middle": [], |
|
"last": "Opitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anette", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1504--1518", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Juri Opitz and Anette Frank. 2021. Towards a decom- posable metric for explainable evaluation of text gen- eration from AMR. In Proceedings of the 16th Con- ference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1504-1518, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "AMR similarity metrics from principles", |
|
"authors": [ |
|
{ |
|
"first": "Juri", |
|
"middle": [], |
|
"last": "Opitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Letitia", |
|
"middle": [], |
|
"last": "Parcalabescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anette", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "522--538", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Juri Opitz, Letitia Parcalabescu, and Anette Frank. 2020. AMR similarity metrics from principles. Transactions of the Association for Computational Linguistics, 8:522-538.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Bleu: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "chrF++: words helping character n-grams", |
|
"authors": [ |
|
{ |
|
"first": "Maja", |
|
"middle": [], |
|
"last": "Popovi\u0107", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Second Conference on Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "612--618", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maja Popovi\u0107. 2017. chrF++: words helping charac- ter n-grams. In Proceedings of the Second Con- ference on Machine Translation, pages 612-618, Copenhagen, Denmark. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "BLEURT: Learning robust metrics for text generation", |
|
"authors": [ |
|
{ |
|
"first": "Thibault", |
|
"middle": [], |
|
"last": "Sellam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ankur", |
|
"middle": [], |
|
"last": "Parikh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7881--7892", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 7881-7892, Online. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A Study of Translation Edit Rate with Targeted Human Annotation", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Snover", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bonnie", |
|
"middle": [], |
|
"last": "Dorr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Linnea", |
|
"middle": [], |
|
"last": "Micciulla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Makhoul", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 7th Conference of the Association for Machine Translation in the Americas", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "223--231", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A Study of Translation Edit Rate with Targeted Human Annota- tion. In Proceedings of the 7th Conference of the As- sociation for Machine Translation in the Americas, pages 223-231, Cambridge.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "SemBleu: A robust metric for AMR parsing evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Linfeng", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gildea", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4547--4552", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Linfeng Song and Daniel Gildea. 2019. SemBleu: A robust metric for AMR parsing evaluation. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4547- 4552, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "BERTScore: Evaluating Text Generation with BERT", |
|
"authors": [ |
|
{ |
|
"first": "Tianyi", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Varsha", |
|
"middle": [], |
|
"last": "Kishore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kilian", |
|
"middle": [ |
|
"Q" |
|
], |
|
"last": "Weinberger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Artzi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating Text Generation with BERT. In ICLR.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"text": "AMR graph for the sentence \"Ukrainian diplomat in Kenya oleh belokolos stated -\"", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"text": "Summary of notation used in this paper, with a description of each type of sentence and AMR used.", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"text": "Sentence-level correlations with human judgments for reference-based metrics.", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"text": "Number of sentences (out of 100) from each generation system that were not marked as incomprehensible by either annotator, and whose AMRs were manually edited.", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"text": "An example of a generated sentence with unclear meaning.", |
|
"num": null, |
|
"content": "<table><tr><td/><td colspan=\"2\">Full Sample INC0</td></tr><tr><td>Automatic Parses</td><td>0.49</td><td>0.35</td></tr><tr><td>Edited Parses</td><td>0.66</td><td>0.46</td></tr></table>", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF7": { |
|
"text": "Sentence-level Spearman's correlation of Smatch with human adequacy scores, when using edited parses vs. automatic ones. INC0 indicates the subset of AMRs that were edited.", |
|
"num": null, |
|
"content": "<table><tr><td>119</td></tr></table>", |
|
"html": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |