{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:38:25.552211Z" }, "title": "Truth or Error? Towards systematic analysis of factual errors in abstractive summaries", "authors": [ { "first": "Klaus-Michael", "middle": [], "last": "Lux", "suffix": "", "affiliation": { "laboratory": "", "institution": "Radboud University", "location": {} }, "email": "" }, { "first": "Maya", "middle": [], "last": "Sappelli", "suffix": "", "affiliation": { "laboratory": "", "institution": "HAN University of Applied Sciences", "location": {} }, "email": "maya.sappelli@han.nl" }, { "first": "Martha", "middle": [], "last": "Larson", "suffix": "", "affiliation": { "laboratory": "", "institution": "Radboud University", "location": {} }, "email": "m.larson@cs.ru.nl" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents a typology of errors produced by automatic summarization systems. The typology was created by manually analyzing the output of four recent neural summarization systems. Our work is motivated by the growing awareness of the need for better summary evaluation methods that go beyond conventional overlap-based metrics. Our typology is structured into two dimensions. First, the Mapping Dimension describes surface-level errors and provides insight into word-sequence transformation issues. Second, the Meaning Dimension describes issues related to interpretation and provides insight into breakdowns in truth, i.e., factual faithfulness to the original text. Comparative analysis revealed that two neural summarization systems leveraging pretrained models have an advantage in decreasing grammaticality errors, but not necessarily factual errors. We also discuss the importance of ensuring that summary length and abstractiveness do not interfere with evaluating summary quality.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "This paper presents a typology of errors produced by automatic summarization systems. The typology was created by manually analyzing the output of four recent neural summarization systems. Our work is motivated by the growing awareness of the need for better summary evaluation methods that go beyond conventional overlap-based metrics. Our typology is structured into two dimensions. First, the Mapping Dimension describes surface-level errors and provides insight into word-sequence transformation issues. Second, the Meaning Dimension describes issues related to interpretation and provides insight into breakdowns in truth, i.e., factual faithfulness to the original text. Comparative analysis revealed that two neural summarization systems leveraging pretrained models have an advantage in decreasing grammaticality errors, but not necessarily factual errors. We also discuss the importance of ensuring that summary length and abstractiveness do not interfere with evaluating summary quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "We are currently witnessing a sharp increase of research interest in neural abstractive text summarization. However, we have also seen growing concern that truth, as represented in the original document, becomes lost or twisted during the summarization process. The issue was raised recently by Kryscinski et al. (2019) , who point out that widely used automatic metrics, which rely mostly on word overlap, fail to reflect factual faithfulness of a summary to the original text. 
Until now, work on summarization has not provided systematic analysis of factual faithfulness. Instead, the trend has been for papers to provide a few examples or general descriptions of frequent errors. An example is Falke et al. (2019) , who state that \"[c]ommon mistakes are using wrong subjects or objects in a proposi-tion [...] , confusing numbers, reporting hypothetical facts as factual [...] or attributing quotes to the wrong person.\", but stop short of providing a more rigorous analysis. Recent work that breaks the trend is Durmus et al. (2020) , who propose an evaluation framework for faithfulness in abstractive summarization. The summaries used to develop the framework are annotated with different types of faithfulness errors. However, the annotation scheme does not incorporate linguistic concepts, e.g., does not differentiate between semantic and pragmatic faithfulness.", "cite_spans": [ { "start": 295, "end": 319, "text": "Kryscinski et al. (2019)", "ref_id": "BIBREF10" }, { "start": 697, "end": 716, "text": "Falke et al. (2019)", "ref_id": "BIBREF5" }, { "start": 807, "end": 812, "text": "[...]", "ref_id": null }, { "start": 874, "end": 879, "text": "[...]", "ref_id": null }, { "start": 1016, "end": 1036, "text": "Durmus et al. (2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The aim of our research is to go beyond existing characterizations and provide a comprehensive typology that can be used to understand errors that neural abstractive summarization systems produce, and how they affect the factual faithfulness of summaries. The contribution of this paper is an error typology that was created by analyzing the output of four abstractive summarization systems. The systems vary in their use of pre-training, their model architecture and in the integration of extractive tasks during training. We carry out a comparative analysis that demonstrates the ability of the typology to uncover interesting differences between systems that are not revealed by conventional overlap-based metrics in current use. This paper represents the main results of Lux (2020) , which contains additional examples and analysis. Further, annotations used for our analysis and more detailed statistics are publicly available 1 to support future research on faithfulness errors.", "cite_spans": [ { "start": 775, "end": 785, "text": "Lux (2020)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Over the years there have been several methods to evaluate summarization methods (Lloret et al., 2018; Ermakova et al., 2019) , each with their own strengths and challenges. In this section, we first cover the ROUGE score, which is the main target of the criticism of overlap-based summarization metrics, such as from Kryscinski et al. (2019) mentioned in Section 1. We then provide a discussion on the relatively limited amount of work that has dealt with factual errors in summaries. Finally, we introduce the automatic summarization systems that we use in our study.", "cite_spans": [ { "start": 81, "end": 102, "text": "(Lloret et al., 2018;", "ref_id": "BIBREF16" }, { "start": 103, "end": 125, "text": "Ermakova et al., 2019)", "ref_id": "BIBREF4" }, { "start": 318, "end": 342, "text": "Kryscinski et al. 
(2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "ROUGE is a set of metrics that measures textual overlap (Lin, 2004) . The ROUGE score is almost exclusively used as the optimization and evaluation metric in neural summarization methods, even though it has been recognized to be difficult to interpret and does not correlate well with human judgement (van der Lee et al., 2019) .", "cite_spans": [ { "start": 56, "end": 67, "text": "(Lin, 2004)", "ref_id": "BIBREF14" }, { "start": 301, "end": 327, "text": "(van der Lee et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "ROUGE", "sec_num": "2.1" }, { "text": "The major issue with the ROUGE score is its focus on textual overlap with a reference summary, which does not measure important aspects in summaries such as redundancy, relevance and informativeness (Peyrard, 2019a) . Moreover, there is no clear optimal variant of ROUGE, and the exact choice can have a large impact on how a (neural) summarizer behaves when it is used as a training objective (Peyrard, 2019b) . Sun et al. (2019) demonstrate another shortfall of ROUGEbased evaluation: Since the metric does not adjust for summary length, a comparison between systems can be misleading if one of them is inherently worse at the task, but better tuned to the summary length that increases ROUGE.", "cite_spans": [ { "start": 199, "end": 215, "text": "(Peyrard, 2019a)", "ref_id": "BIBREF18" }, { "start": 394, "end": 410, "text": "(Peyrard, 2019b)", "ref_id": "BIBREF19" }, { "start": 413, "end": 430, "text": "Sun et al. (2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "ROUGE", "sec_num": "2.1" }, { "text": "The shortcomings of ROUGE suggest that we should work towards metrics that are more focused on summary quality as perceived by readers. Unfortunately, quality is hard to measure, demonstated by an interactive summarization experiment by Gao et al. (2019) , in which the authors show that users find it easier to give preference feedback on summaries. Simple preference ordering, however, does not give insight in the actual cause of preference. An important factor of perceived quality can be the errors being made by the summarizer. Grammatical errors can have an effect on the perceived quality, credibility and informativeness of news articles when there are many (Appelman and Schmierbach, 2018) . Moreover with the rise in fake news and misinformation it seems important to have a better grip on factual errors that are a result of the summarization process.", "cite_spans": [ { "start": 237, "end": 254, "text": "Gao et al. (2019)", "ref_id": "BIBREF6" }, { "start": 667, "end": 699, "text": "(Appelman and Schmierbach, 2018)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "ROUGE", "sec_num": "2.1" }, { "text": "Recent abstractive systems have a tendency to generate summaries that are factually incorrect, meaning that they fail to be factually faithful to the documents that they summarize. An analysis by Cao et al. (2018) of a neural summarization system finds that up to 30% of generated summaries contain \"fabricated facts\". Similarly, the authors of Falke et al. (2019) evaluate three different state-of-the-art systems and find that between 8 and 26% of the generated summaries contain at least one factual error, even though ROUGE scores indicate good performance. Kry\u015bci\u0144ski et al. 
(2019) propose a weakly supervised method for verifying factual consistency between document and summary by training a binary model that predicts whether or not a sentence is consistent. For this purpose, they artificially generate a dataset with various types of errors, such as entity or number swapping, paraphrasing, pronoun swapping, sentence negation and noise injection. The authors claim the error patterns to be based on an error analysis of system output. However, it is not conclusively established that they constitute a good approximation of the actual errors that current summarization systems make.", "cite_spans": [ { "start": 196, "end": 213, "text": "Cao et al. (2018)", "ref_id": "BIBREF1" }, { "start": 345, "end": 364, "text": "Falke et al. (2019)", "ref_id": "BIBREF5" }, { "start": 562, "end": 586, "text": "Kry\u015bci\u0144ski et al. (2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Factual errors in summaries", "sec_num": "2.2" }, { "text": "Additionally, Goodrich et al. (2019) compare several models such as relation extraction, binary classification and end-to-end models (E2E) for estimating factual accuracy on a Wikipedia text summarization task. They show that their E2E model for factual correctness has the highest correlation with human judgements and suggest that the E2E models could benefit from a better labeling scheme.", "cite_spans": [ { "start": 14, "end": 36, "text": "Goodrich et al. (2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Factual errors in summaries", "sec_num": "2.2" }, { "text": "In contrast, Lebanoff et al. (2019) are interested in what happens when summarization systems fuse sentences from the source. They automatically extract fused summary sentences generated by five different systems and conduct a manual annotation of faithfulness and grammaticality using crowd sourcing. Reference summaries are annotated as well. Generally, they find that fused sentences are often unfaithful to the source, especially when there is a marked imbalance in the contribution of multiple sentences. Surprisingly, the reference summaries achieve lower than the expected 100% faithfulness and grammaticality, which may have been due to low inter-annotator agreement or to presentation bias, as suggested by the authors. Out of all five systems, See et al. (2017) and Chen and Bansal (2018) perform best, but are still more error-prone than reference summaries.", "cite_spans": [ { "start": 13, "end": 35, "text": "Lebanoff et al. (2019)", "ref_id": "BIBREF12" }, { "start": 753, "end": 770, "text": "See et al. (2017)", "ref_id": "BIBREF20" }, { "start": 775, "end": 797, "text": "Chen and Bansal (2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Factual errors in summaries", "sec_num": "2.2" }, { "text": "We find that previous research has not established a detailed typology of summarization errors. Most work instead relies on a binary distinction between correct and erroneous (Cao et al., 2018; Falke et al., 2019; Lebanoff et al., 2019) or faithfulness measured on a Likert scale (Goodrich et al., 2019) . However, not all errors are created equal. Some errors might be less severe than others. As mentioned in Section 1, Durmus et al. (2020) is an exceptional case that looks at different kinds of errors related to faithfulness. 
Our work goes further, since it recognizes linguistic differences between factual errors, providing a more detailed typology.", "cite_spans": [ { "start": 178, "end": 196, "text": "(Cao et al., 2018;", "ref_id": "BIBREF1" }, { "start": 197, "end": 216, "text": "Falke et al., 2019;", "ref_id": "BIBREF5" }, { "start": 217, "end": 239, "text": "Lebanoff et al., 2019)", "ref_id": "BIBREF12" }, { "start": 283, "end": 306, "text": "(Goodrich et al., 2019)", "ref_id": "BIBREF7" }, { "start": 425, "end": 445, "text": "Durmus et al. (2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Factual errors in summaries", "sec_num": "2.2" }, { "text": "Here, we describe the summarization systems that generate the summaries used to create the typology (Section 3) and to carry out our comparative analysis (Section 4). We include two older approaches trained entirely from scratch on the summarization task, namely a pointer-generator architecture (See et al., 2017) , henceforth referred to as PG and an RL-inspired rewriting paradigm (Chen and Bansal, 2018) , FAST-ABS-RL. Additionally, two approaches using pre-trained language models are included: The first is TRANSFORMER-LM, proposed by (Hoang et al., 2019) , a language-modeling approach leveraging GPT (a transformer-based model trained on roughly 7,000 books). The second is BERTSUM, an approach leveraging pre-trained BERT encoders (another transformer-based model trained on the books and the English Wikipedia), proposed by (Liu and Lapata, 2019) . All four models were trained on the same split of the nonanonymized version of the CNN/Daily Mail dataset. PG and TRANSFORMER-LM directly train on the abstractive task and do not involve extraction. In contrast, BERTSUM performs initial fine-tuning on an extractive task and FAST-ABS-RL even involves an extractive sub-step directly in the pipeline.", "cite_spans": [ { "start": 296, "end": 314, "text": "(See et al., 2017)", "ref_id": "BIBREF20" }, { "start": 384, "end": 407, "text": "(Chen and Bansal, 2018)", "ref_id": "BIBREF2" }, { "start": 541, "end": 561, "text": "(Hoang et al., 2019)", "ref_id": "BIBREF9" }, { "start": 834, "end": 856, "text": "(Liu and Lapata, 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Neural summarization systems", "sec_num": "2.3" }, { "text": "In this section, we describe our methods and present the typology that we created.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building the typology", "sec_num": "3" }, { "text": "We collected the output of four summarization systems varying in a number of design aspects in order to capture as much linguistic diversity of generated text as possible. All systems were trained on the CNN/Daily Mail dataset (CNN/DM), a large corpus of news articles with associated abstractive summaries (Hermann et al., 2015) , which has been widely used in the summarization literature. Generated summaries of test set articles as provided by the original authors were used. We conduct sentence-level annotation, allowing us to look at fine-grained differences.", "cite_spans": [ { "start": 307, "end": 329, "text": "(Hermann et al., 2015)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3.1" }, { "text": "Our typology was created in two steps. First, we carried out a card sort to establish an initial set of categories. 
For each of the four summarization systems, we randomly sampled 30 of its summaries, ensuring that each corresponded to a different article. Each summary was divided into sentences, and each sentence was printed on a separate card, with the respective article printed above. This yielded a total of 393 sentences. Six experts in the news domain working at a news company (including one of the paper authors) sorted the cards. Cards with similar errors were placed together in a pile. Then the experts iterated over the piles together, dividing and merging them until the sentences were grouped into a stable set of categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3.1" }, { "text": "Second, we carried out a review of the categories in order to ensure that the boundaries of the categories were clear and to connect the categories to linguistic concepts. The review was carried out by the authors of the paper, two of whom were working at the news company. This group differed from the card sort group in that they had training in linguistics. It was observed that some of the categories established in the card sort focused on the surface nature of the error, while others dealt more with the consequences of the error. This led us to establish a two-dimensional typology, described in the following section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3.1" }, { "text": "The resulting error typology distinguishes two dimensions of summary error. First, the Mapping Dimension describes the surface level, looking at how the summary system used words and phrases from article sentences to create the erroneous summary sentence. This dimension can help us to understand the cause of an error, potentially helping to establish how these errors can be avoided. It distinguishes the four categories in Table 1 . Second, the Meaning Dimension describes the effect of the error on whether the sentence can be understood and how the reader interprets it. This dimension distinguishes six categories, presented in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 426, "end": 433, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 634, "end": 641, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Typology of summarization errors", "sec_num": "3.2" }, { "text": "Copying words from an article sentence, but omitting necessary words or phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Omission", "sec_num": null }, { "text": "Copying words or phrases from multiple article sentences and combining them into an erroneous sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wrong combination", "sec_num": null }, { "text": "Introducing one or multiple new words or phrases that cause an error.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fabrication", "sec_num": null }, { "text": "Failing to adequately re-write sentences, e.g., by not replacing referential expressions with their original antecedents in the text. When the antecedents are not present in the preceding summary context, this causes an error. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lack of re-writing", "sec_num": null }, { "text": "A sentence that is syntactically unnatural and would not be uttered by a competent speaker. 
Syntactically malformed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Malformed Ungrammatical", "sec_num": null }, { "text": "A sentence that is semantically unnatural and would not be uttered by a competent speaker. Nonsensical due to semantic errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantically implausible", "sec_num": null }, { "text": "A sentence that is grammatically correct, but to which no meaning can be assigned, even after accommodation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "No meaning can be inferred", "sec_num": null }, { "text": "In the summary context, the semantic content assigned to a sentence is not entailed by the original article.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Misleading Meaning changed, not entailed", "sec_num": null }, { "text": "In the summary context, the semantic content assigned to a sentence is in contradiction to the article.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meaning changed, contradiction", "sec_num": null }, { "text": "In the summary context, the sentence gains a pragmatic meaning not present in the original article. Or, a pragmatic meaning present in the article is lost. This dimension provides insight into the interaction of linguistic concepts and factual correctness. Errors from the first three categories can be considered to be malformed sentences: They will cause readers to stumble and question the quality of the summary, but they do not have the potential to mislead. In contrast, the remaining three categories can be considered misleading: They could give rise to incorrect beliefs that would not have been produced by the article alone. Misleading errors can be equated with factual errors in traditional parlance. Examples of errors and the corresponding annotation can be found below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pragmatic meaning changed", "sec_num": null }, { "text": "To validate the typology, we computed the interannotator agreement of three annotators. We selected a random subset of 30 articles from the CNN/DM dataset. Three annotators (the authors) applied the typology to judge the summaries generated by all four systems for this subset of articles. The origin of the summaries was not specified and the summaries were presented in a random order for each article. Annotators could refer to the original article and no time restrictions were applied.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pragmatic meaning changed", "sec_num": null }, { "text": "Each sentence that contained an error was assigned both a Meaning and a Mapping category. For cases where there was no majority agreement, arbitration was used to reach agreement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pragmatic meaning changed", "sec_num": null }, { "text": "We analyzed the sentence-level inter-annotator agreement (Cohen's \u03ba) of each dimension separately. Both showed moderate agreement (Meaning Dimension: \u03ba = 0.44; Mapping Dimension: \u03ba = 0.46). Further analysis of the annotations revealed that most disagreement was not between different categories in the dimension, but rather caused by raters not agreeing whether a sentence contains an error at all. We reviewed all cases for which we disagreed on whether an error was present. There are two likely sources of lower than expected agreement. 
First, the annotation task is not trivial and requires close attention: A total of 14 misleading sentences were missed entirely by at least one annotator. Often, these sentences are perfectly plausible at the surface (cf. Example 1) and only a very close reading of both the article and the summary ensures they are identified. Similarly, there is often at least some judgment involved in deciding whether a given sentence is actually misleading. We found 20 examples, judged misleading by one annotator and acceptable by two others, that reflected different personal views on whether certain edits had faithfully retained the original meaning. Consider Example 2: It is plausible that prior knowledge the annotators might have (here, about the football team in question) causes them to accept the sentence as faithful, while annotators without this knowledge might disagree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pragmatic meaning changed", "sec_num": null }, { "text": "In this section, we carry out a comparative analysis of the four summarization systems using the error typology. This analysis highlights the usefulness of the typology for achieving insight into the nature of summary errors. We made a random selection of 170 articles, and one annotator annotated all four summaries for each article using the typology. These were combined with the previously annotated set of 30 articles. This yielded a total of 800 summaries with roughly 2600 annotated sentences. Sentence annotations were additionally aggregated to summary level: A summary is labeled as malformed if it contains at least one malformed sentence, but no misleading sentence. If it contains at least one misleading sentence, it is labeled as misleading.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of summarizers", "sec_num": "4" }, { "text": "Our comparative analysis focuses on the Meaning Dimension of the typology, starting with the sentence-level errors. Figure 1 presents the distribution of errors at the level of malformed and misleading errors. Exact sentence and summary level rates are presented in Table 3 . A larger table including the fine-grained categories is released with the annotations.", "cite_spans": [], "ref_spans": [ { "start": 116, "end": 124, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 266, "end": 273, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Meaning dimension errors", "sec_num": "4.1" }, { "text": "All systems produce both misleading and malformed errors, but the distribution is quite different. PG, which does not use pre-training, produces the fewest misleading sentences. Malformed sentences are much more common for PG and FAST-ABS-RL, which are trained from scratch, than for TRANSFORMER-LM and BERTSUM, which use pre-training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meaning dimension errors", "sec_num": "4.1" }, { "text": "Next, we look at summary-level errors. For three of the systems, around 40% of summaries contain at least one error of any kind, with FAST-ABS-RL faring worse at almost 75%. Between 1 in 10 and 1 in 3 of the summaries generated by our systems contain at least one misleading statement. Our observations are consistent with summary-level error estimates reported by Falke et al. (2019) . Their estimates for PG (8%) and FAST-ABS-RL (26%) are both somewhat lower than our rates, but the general trend is reflected. 
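To make the summary-level aggregation described above concrete, the following is a minimal illustrative sketch (not the authors' released code; the label strings and function names are assumptions made for illustration). It also shows why, if sentence-level errors were spread randomly, longer summaries would exhibit higher summary-level error rates, which is relevant to the length effect discussed next.

```python
# Illustrative sketch only; not the implementation used in the paper.
# Assumed per-sentence labels: 'correct', 'malformed', 'misleading'.

def summary_label(sentence_labels):
    # A summary is misleading if any sentence is misleading;
    # otherwise malformed if any sentence is malformed; otherwise correct.
    if 'misleading' in sentence_labels:
        return 'misleading'
    if 'malformed' in sentence_labels:
        return 'malformed'
    return 'correct'

def expected_summary_error_rate(sentence_error_rate, n_sentences):
    # Chance that a summary of n_sentences contains at least one erroneous
    # sentence, assuming errors occur independently at a fixed per-sentence rate.
    return 1.0 - (1.0 - sentence_error_rate) ** n_sentences

print(summary_label(['correct', 'malformed', 'misleading']))  # misleading
print(round(expected_summary_error_rate(0.10, 2), 2))         # 0.19
print(round(expected_summary_error_rate(0.10, 5), 2))         # 0.41
```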
For all systems, the observed summary-level error rate is closely aligned with what would be expected if errors were distributed randomly across summaries. This means that longer summaries, such as those produced by FAST-ABS-RL, will have a higher error rate independently of the sentence-level error rate. This observation underlines the importance of our choice to carry out error analysis at the sentence level.", "cite_spans": [ { "start": 364, "end": 383, "text": "Falke et al. (2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Meaning dimension errors", "sec_num": "4.1" }, { "text": "Next, we look into the interaction between the two error dimensions. Figure 2 illustrates the distribution of errors over summarization systems and the connection between the categories of the Meaning and Mapping dimensions. All systems suffer about equally from lack of re-writing and wrong combinations. However, the two pre-trained systems (TRANSFORMER-LM and BERTSUM) engage more frequently in fabrications and less frequently in omissions. FAST-ABS-RL suffers markedly from omissions. Figure 2 also reveals that there is a correlation between the Mapping and Meaning dimensions, but that essentially the dimensions are capturing two different aspects of summarization error. An important insight is that all four categories of Mapping error contribute to misleading errors, the more harmful type of Meaning error.", "cite_spans": [], "ref_spans": [ { "start": 69, "end": 77, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 490, "end": 498, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Interaction of mapping and meaning", "sec_num": "4.2" }, { "text": "PICTURED: Mother-of-three who 'dropped her son in a cheetah pit' as it's revealed she is a CHILDCARE WORKER", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Headline", "sec_num": null }, { "text": "On Monday, a spokesman for Kindercare, a nationally-acclaimed education, care and resource provider, confirmed Schwab has taken a leave of absence from her management role at one of the centers in Columbus, Ohio. Summary sentence Schwab is a nationally-acclaimed education, care and resource provider.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Article excerpt", "sec_num": null }, { "text": "Example 1: Wrong combination -Meaning changed, contradiction. Missed by two raters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Article excerpt", "sec_num": null }, { "text": "West Brom vs Leicester City: Team news, kick-off time, probable line-ups, odds and stats for the Premier League clash", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Headline", "sec_num": null }, { "text": "Boss Nigel Pearson has no further injury worries as his rock bottom side continue to fight for Barclays Premier League survival.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Article excerpt", "sec_num": null }, { "text": "Nigel Pearson has no further injury worries as his rock bottom side fight for survival.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary sentence", "sec_num": null }, { "text": "Example 2: Omission -Pragmatic meaning changed. Two aspects of pragmatic meaning, i.e. that the fight has already started and that it was not for existence, but to avoid relegation, were resolved using background knowledge by two raters, but caused one rater to flag the sentence. Table 3: Error rates for the Meaning Dimension, sentence-level (Sent.) 
and summary-level (Sum.). ", "cite_spans": [], "ref_spans": [ { "start": 281, "end": 288, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Summary sentence", "sec_num": null }, { "text": "We turn now to the connection between abstractiveness and error. Improving the abstractiveness of a summary involves increasing the amount of rewriting. It could thus be expected that systems that are more abstractive are also more error-prone, unless they are inherently more capable of correctly abstracting sentences. Durmus et al. (2020) found that more abstractive systems are generally more error-prone, but did not look into the interaction of sentence-level error rates and abstractiveness. For this reason, we carry out a sentence-level analysis. We calculate an abstractiveness score for each sentence in each summary as follows. For each sentence, we automatically select the closest document sentence in terms of word overlap. We then compute ROUGE-L. Normalizing by the length of the article sentence gives the precision of ROUGE-L and thus shows how much of the article sentence is retained. Similarly, normalizing by the summary length gives the recall of ROUGE-L, capturing how much of the summary originates from the closest document sentence. To get a combined metric, we compute ROUGE-L-F1, the harmonic mean of precision and recall for all ROUGE values. Sentences are then binned into two equal-size bins, yielding a threshold of 0.705. We consider sentences above the threshold to have high abstractiveness and those below to have low abstractiveness. Figure 3 displays the sentence-level error rates for high and low abstractiveness sentences, separately for all four systems. Across all systems, higher abstractiveness is associated with a higher error rate. BERTSUM has a slightly lower error rate for highly abstractive sentences than the other systems, which show similar error rates. For largely extractive sentences (low abstractiveness), PG, TRANSFORMER-LM and BERTSUM perform about equally well, while FAST-ABS-RL has a higher error rate. These findings support the observation that an absolute difference in sentence error rate between systems could be explained not by one system being inherently better, but just being less likely to write more abstractively and thus more error-prone. We also observed that sentences that score high in abstractiveness are more than twice as likely to be misleading and 50% more likely to be malformed than those that score low.", "cite_spans": [ { "start": 321, "end": 341, "text": "Durmus et al. (2020)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 1373, "end": 1381, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Differences in abstractiveness", "sec_num": "4.3" }, { "text": "The Justice Department's questionable battle against FedEx", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Headline", "sec_num": null }, { "text": "It turns out a corporation can indeed be prosecuted like a person. It's a practice the Supreme Court has approved of for over a century. Summary sentence It's a practice the Supreme Court has approved of for over a century.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Article excerpt", "sec_num": null }, { "text": "Example 3: Lack of re-writing -No meaning can be inferred. System: PG", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Article excerpt", "sec_num": null }, { "text": "Prince Charles leads tributes to '100-year-old teenager' Hayley Okines as hundreds gather for her funeral.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Headline", "sec_num": null }, { "text": "She suffered from the rare disease progeria which ages the body at eight times the normal rate. Summary sentence She suffered from rare disease progeria which ages the body at eight times.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Article excerpt", "sec_num": null }, { "text": "Example 4: Omission -Ungrammatical. System: FAST-ABS-RL", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Article excerpt", "sec_num": null }, { "text": "In this section, we tie together the main contributions and insights of this paper, and discuss the avenues that it opens for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and outlook", "sec_num": "5" }, { "text": "In this paper, we have presented a typology of errors produced by automatic summarization systems, created by analyzing the output of four recent neural systems. The typology describes summary errors along a Mapping Dimension and a Meaning Dimension, which are related, but are shown to capture different aspects of summary error. The Meaning Dimension is further divided into types of errors that describe malformed sentences and those that describe misleading sentences. The typology supports systematic analysis of abstractive summaries, and allows for focusing on the misleading sentences produced by automatic summarization systems. These errors are highly problematic because they impact the truth of a summary, i.e., its factual faithfulness to the original document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary and discussion", "sec_num": "5.1" }, { "text": "Our comparative analysis has revealed the importance of using well-designed summarization metrics. With the wrong metrics, summarization systems will appear to be successful if the length of the summary or its abstractiveness has been decreased. In order to avoid these effects, and to achieve truly improved summaries, more advanced evaluation methods must be developed. The typology of errors that we have proposed here provides the basis for such methods. Metrics can become independent of length and abstractiveness if they take into account sentence-level errors and if they treat different errors differently. In particular, we recommend that misleading errors should be more important in signalling failed summaries than malformed errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary and discussion", "sec_num": "5.1" }, { "text": "If we consider the practical implications of improved summary evaluation, our typology makes a contribution in three related, but distinct directions: First, it can support the training of human assessors who can monitor live summarization systems in order to ensure that they do not lead to the publication of misinformation, which can have dangerous consequences. Second, it would be possible to train machine learning systems to support these human judgements. 
Third, it would be possible to improve automatic summarization systems in a way that allows them to specifically avoid generating misleading sentences. The work we have presented here has set down a foundation for these directions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary and discussion", "sec_num": "5.1" }, { "text": "Andy Murray will jet straight from wedding with Kim Sears to run rule over prospective new assistant coach Jonas Bjorkman", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Headline", "sec_num": null }, { "text": "Mauresmo, who is to give birth some time in August, will be around eight months' pregnant during Wimbledon this summer. Summary sentence Mauresmo is eight months' pregnant with her first child.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Article excerpt", "sec_num": null }, { "text": "Example 5: Fabrication -Meaning changed, not entailed. System: T-LM", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Article excerpt", "sec_num": null }, { "text": "Amazon removes new game that mocks anorexia sufferers by allowing players to throw food and sweets at character to fatten her up", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Headline", "sec_num": null }, { "text": "If the player misses the girl, she starts to lose weight until she eventually dies. Gamers have to throw food at the girl who appears in one of nine holes before she disappears again.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Article excerpt", "sec_num": null }, { "text": "Gamers have to throw food at the girl who appears in one of nine holes before she dies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary sentence", "sec_num": null }, { "text": "Example 6: Wrong combination -Meaning changed, contradiction. System: BERTSUM", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary sentence", "sec_num": null }, { "text": "In terms of improving summarization systems, our typology has supported interesting insights: The four neural summarization systems that we studied differ considerably in their error patterns (cf. Table 3 and Figure 2 ). For example, we see that sentence-based rewriting such as in FAST-ABS-RL leads to omission errors, resulting in a higher risk of malformed sentences. More strikingly, the two pre-trained systems are somewhat more successful at avoiding malformed sentences, indicating that pre-training helps to improve grammaticality. This finding makes intuitive sense, as learning the statistical properties of a large corpus of text can be expected to boost the ability to generate grammatical text. However, misleading sentences and fabrication errors are more common for these pre-trained systems. Overall, we observe that if any one of these systems were to be used in a real-world scenario, readers could frequently end up confused, irritated or, worst of all, misled to hold incorrect beliefs. With our typology, these effects can be properly understood and quantified.", "cite_spans": [], "ref_spans": [ { "start": 210, "end": 218, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Summary sentence", "sec_num": null }, { "text": "The typology presented in this paper opens several avenues for future work. First, here we used summaries from only a single data set (CNN/DM) in a single domain (news). 
The typology should be validated on different data from different domains, which may allow more nuance to be added to the categories of the dimensions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "5.2" }, { "text": "Second, further research is necessary in order to determine whether it is possible to achieve higher levels of inter-annotator agreement. Recall that we saw a relatively low agreement among annotators as to whether a sentence contains an error at all. This is in line with observations made by Lebanoff et al. (2019) , who noted a relatively low inter-annotator agreement for binary faithfulness annotation. However, more investigation is needed.", "cite_spans": [ { "start": 294, "end": 316, "text": "Lebanoff et al. (2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "5.2" }, { "text": "We point out that the inter-annotator agreement has a possible dependency with the domain and data that is being analyzed. Specific linguistic properties of the CNN/DM dataset could have negatively affected agreement about the malformedness of sentences, namely telegraphic language style and the issue of reference summaries lacking relevant context. The lack of context issue is specific to the data set, which omits the article headline, even though summaries often rely on it for interpretability. This means that some reference summaries are hard to understand in isolation, and could potentially bias systems to imitate the style. Summary sentences that suffer from these issues are a likely source of annotator disagreement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "5.2" }, { "text": "We hope that researchers will build on and continue to refine the typology that we have presented here. For example, more detailed study of how human judgement interacts with malformed vs. misleading errors could lead to an improvement in the category descriptions or in the divisions between the categories. A refined typology would support standardization of the judgement protocols for automatically generated summaries, which would in turn help fight the adverse effects of factual errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "5.2" }, { "text": "https://tinyurl.com/truth-error-2020.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Acknowledgments: We thank FD Mediagroep for conducting the Smart Journalism project which allowed us to perform this research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Make no mistake? Exploring cognitive and perceptual effects of grammatical errors in news articles", "authors": [ { "first": "Alyssa", "middle": [], "last": "Appelman", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schmierbach", "suffix": "" } ], "year": 2018, "venue": "Journalism & Mass Communication Quarterly", "volume": "95", "issue": "4", "pages": "930--947", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alyssa Appelman and Mike Schmierbach. 2018. Make no mistake? Exploring cognitive and perceptual ef- fects of grammatical errors in news articles. 
Journal- ism & Mass Communication Quarterly, 95(4):930- 947.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Faithful to the original: Fact aware neural abstractive summarization", "authors": [ { "first": "Ziqiang", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Wenjie", "middle": [], "last": "Li", "suffix": "" }, { "first": "Sujian", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18)", "volume": "", "issue": "", "pages": "4784--4791", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2018. Faithful to the original: Fact aware neural abstrac- tive summarization. In Proceedings of the Thirty- Second AAAI Conference on Artificial Intelligence (AAAI-18), pages 4784-4791.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Fast abstractive summarization with reinforce-selected sentence rewriting", "authors": [ { "first": "Yen-Chun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "675--686", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yen-Chun Chen and Mohit Bansal. 2018. Fast abstrac- tive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 675-686.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization", "authors": [ { "first": "Esin", "middle": [], "last": "Durmus", "suffix": "" }, { "first": "He", "middle": [], "last": "He", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5055--5070", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.454" ] }, "num": null, "urls": [], "raw_text": "Esin Durmus, He He, and Mona Diab. 2020. FEQA: A question answering evaluation framework for faith- fulness assessment in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 5055- 5070.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A survey on evaluation of summarization methods. Information Processing & Management", "authors": [ { "first": "Liana", "middle": [], "last": "Ermakova", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Val\u00e8re Cossu", "suffix": "" }, { "first": "Josiane", "middle": [], "last": "Mothe", "suffix": "" } ], "year": 2019, "venue": "", "volume": "56", "issue": "", "pages": "1794--1814", "other_ids": { "DOI": [ "10.1016/j.ipm.2019.04.001" ] }, "num": null, "urls": [], "raw_text": "Liana Ermakova, Jean Val\u00e8re Cossu, and Josiane Mothe. 2019. A survey on evaluation of summariza- tion methods. 
Information Processing & Manage- ment, 56(5):1794-1814.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Ranking generated summaries by correctness: An interesting but challenging application for natural language inference", "authors": [ { "first": "Tobias", "middle": [], "last": "Falke", "suffix": "" }, { "first": "Leonardo", "middle": [ "F R" ], "last": "Ribeiro", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Prasetya Ajie Utama", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2214--2220", "other_ids": { "DOI": [ "10.18653/v1/P19-1213" ] }, "num": null, "urls": [], "raw_text": "Tobias Falke, Leonardo F. R. Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2019. Ranking generated summaries by correctness: An in- teresting but challenging application for natural lan- guage inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 2214-2220. Association for Compu- tational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Preference-based interactive multi-document summarisation", "authors": [ { "first": "Yang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "M", "middle": [], "last": "Christian", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Meyer", "suffix": "" }, { "first": "", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "Information Retrieval Journal", "volume": "", "issue": "", "pages": "1--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang Gao, Christian M Meyer, and Iryna Gurevych. 2019. Preference-based interactive multi-document summarisation. Information Retrieval Journal, pages 1-31.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Assessing the factual accuracy of generated text", "authors": [ { "first": "Ben", "middle": [], "last": "Goodrich", "suffix": "" }, { "first": "Vinay", "middle": [], "last": "Rao", "suffix": "" }, { "first": "J", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Liu", "suffix": "" }, { "first": "", "middle": [], "last": "Saleh", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining", "volume": "", "issue": "", "pages": "166--175", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben Goodrich, Vinay Rao, Peter J Liu, and Moham- mad Saleh. 2019. Assessing the factual accuracy of generated text. 
In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 166-175.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Teaching machines to read and comprehend", "authors": [ { "first": "Karl", "middle": [], "last": "Moritz Hermann", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Kocisky", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "Lasse", "middle": [], "last": "Espeholt", "suffix": "" }, { "first": "Will", "middle": [], "last": "Kay", "suffix": "" }, { "first": "Mustafa", "middle": [], "last": "Suleyman", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2015, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "1693--1701", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karl Moritz Hermann, Tomas Kocisky, Edward Grefen- stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in neural information processing systems, pages 1693-1701.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Efficient adaptation of pretrained transformers for abstractive summarization", "authors": [ { "first": "Andrew", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bosselut", "suffix": "" }, { "first": "Asli", "middle": [], "last": "Celikyilmaz", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.00138[cs].ArXiv:1906.00138" ] }, "num": null, "urls": [], "raw_text": "Andrew Hoang, Antoine Bosselut, Asli Celikyilmaz, and Yejin Choi. 2019. Efficient adaptation of pre- trained transformers for abstractive summarization. arXiv:1906.00138 [cs]. ArXiv: 1906.00138.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Neural text summarization: A critical evaluation", "authors": [ { "first": "Wojciech", "middle": [], "last": "Kryscinski", "suffix": "" }, { "first": "Nitish", "middle": [], "last": "Shirish Keskar", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Mc-Cann", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "540--551", "other_ids": { "DOI": [ "10.18653/v1/D19-1051" ] }, "num": null, "urls": [], "raw_text": "Wojciech Kryscinski, Nitish Shirish Keskar, Bryan Mc- Cann, Caiming Xiong, and Richard Socher. 2019. Neural text summarization: A critical evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 540- 551. 
Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Evaluating the factual consistency of abstractive text summarization", "authors": [ { "first": "Wojciech", "middle": [], "last": "Kry\u015bci\u0144ski", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Mccann", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.12840[cs].ArXiv:1910.12840" ] }, "num": null, "urls": [], "raw_text": "Wojciech Kry\u015bci\u0144ski, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Evaluating the fac- tual consistency of abstractive text summarization. arXiv:1910.12840 [cs]. ArXiv: 1910.12840.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Analyzing sentence fusion in abstractive summarization", "authors": [ { "first": "Logan", "middle": [], "last": "Lebanoff", "suffix": "" }, { "first": "John", "middle": [], "last": "Muchovej", "suffix": "" }, { "first": "Franck", "middle": [], "last": "Dernoncourt", "suffix": "" }, { "first": "Soon", "middle": [], "last": "Doo", "suffix": "" }, { "first": "Seokhwan", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Walter", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.00203[cs].ArXiv:1910.00203" ] }, "num": null, "urls": [], "raw_text": "Logan Lebanoff, John Muchovej, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, and Fei Liu. 2019. Analyzing sentence fusion in abstrac- tive summarization. arXiv:1910.00203 [cs]. ArXiv: 1910.00203.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Best practices for the human evaluation of automatically generated text", "authors": [ { "first": "Chris", "middle": [], "last": "Van Der Lee", "suffix": "" }, { "first": "Albert", "middle": [], "last": "Gatt", "suffix": "" }, { "first": "Sander", "middle": [], "last": "Emiel Van Miltenburg", "suffix": "" }, { "first": "Emiel", "middle": [], "last": "Wubben", "suffix": "" }, { "first": "", "middle": [], "last": "Krahmer", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 12th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "355--368", "other_ids": { "DOI": [ "10.18653/v1/W19-8643" ] }, "num": null, "urls": [], "raw_text": "Chris van der Lee, Albert Gatt, Emiel van Miltenburg, Sander Wubben, and Emiel Krahmer. 2019. Best practices for the human evaluation of automatically generated text. In Proceedings of the 12th Interna- tional Conference on Natural Language Generation, pages 355-368. Association for Computational Lin- guistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Rouge: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Workshop Text Summarization Branches Out", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Proceedings of the Workshop Text Summarization Branches Out, pages 74-81. 
Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Text summarization with pretrained encoders", "authors": [ { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3721--3731", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang Liu and Mirella Lapata. 2019. Text summariza- tion with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3721-3731.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The challenging task of summary evaluation: an overview. Language Resources and Evaluation", "authors": [ { "first": "Elena", "middle": [], "last": "Lloret", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Plaza", "suffix": "" }, { "first": "Ahmet", "middle": [], "last": "Aker", "suffix": "" } ], "year": 2018, "venue": "", "volume": "52", "issue": "", "pages": "101--148", "other_ids": { "DOI": [ "10.1007/s10579-017-9399-2" ] }, "num": null, "urls": [], "raw_text": "Elena Lloret, Laura Plaza, and Ahmet Aker. 2018. The challenging task of summary evaluation: an overview. Language Resources and Evaluation, 52(1):101-148.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "On the factual correctness and robustness of deep abstractive text summarization", "authors": [ { "first": "Klaus-Michael", "middle": [], "last": "Lux", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Klaus-Michael Lux. 2020. On the factual correct- ness and robustness of deep abstractive text summa- rization. Master's thesis, Radboud University, Ni- jmegen, August.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A simple theoretical model of importance for summarization", "authors": [ { "first": "Maxime", "middle": [], "last": "Peyrard", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1059--1073", "other_ids": { "DOI": [ "10.18653/v1/P19-1101" ] }, "num": null, "urls": [], "raw_text": "Maxime Peyrard. 2019a. A simple theoretical model of importance for summarization. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 1059-1073, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Studying summarization evaluation metrics in the appropriate scoring range", "authors": [ { "first": "Maxime", "middle": [], "last": "Peyrard", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5093--5100", "other_ids": { "DOI": [ "10.18653/v1/P19-1502" ] }, "num": null, "urls": [], "raw_text": "Maxime Peyrard. 2019b. Studying summarization evaluation metrics in the appropriate scoring range. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5093-5100, Florence, Italy. 
Association for Compu- tational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Get to the point: Summarization with pointergenerator networks", "authors": [ { "first": "Abigail", "middle": [], "last": "See", "suffix": "" }, { "first": "J", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Liu", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1073--1083", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073- 1083.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "How to compare summarizers without target length? Pitfalls, solutions and re-examination of the neural summarization literature", "authors": [ { "first": "Simeng", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Ori", "middle": [], "last": "Shapira", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Ani", "middle": [], "last": "Nenkova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation", "volume": "", "issue": "", "pages": "21--29", "other_ids": { "DOI": [ "10.18653/v1/W19-2303" ] }, "num": null, "urls": [], "raw_text": "Simeng Sun, Ori Shapira, Ido Dagan, and Ani Nenkova. 2019. How to compare summarizers without target length? Pitfalls, solutions and re-examination of the neural summarization literature. In Proceedings of the Workshop on Methods for Optimizing and Eval- uating Neural Language Generation, pages 21-29. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Sentence-level error type incidence rates by system, c.f.", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "Sankey diagram showing the interaction of summarization systems, Mapping Dimension errors and Meaning Dimension errors.", "type_str": "figure", "uris": null }, "FIGREF2": { "num": null, "text": "Binned ROUGE-F1 scores, average error rates in bins separately by system. 95 % CI obtained by bootstrap sampling.", "type_str": "figure", "uris": null }, "TABREF0": { "num": null, "type_str": "table", "html": null, "content": "
. 95 % CI obtained by bootstrap |
sampling. |