{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:29:10.666803Z" }, "title": "On User Interfaces for Large-Scale Document-Level Human Evaluation of Machine Translation Outputs", "authors": [ { "first": "Roman", "middle": [], "last": "Grundkiewicz", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft", "location": { "addrLine": "1 Microsoft Way", "postCode": "98121", "settlement": "Redmond", "region": "WA", "country": "USA" } }, "email": "" }, { "first": "Marcin", "middle": [], "last": "Junczys-Dowmunt", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft", "location": { "addrLine": "1 Microsoft Way", "postCode": "98121", "settlement": "Redmond", "region": "WA", "country": "USA" } }, "email": "" }, { "first": "Tom", "middle": [], "last": "Kocmi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft", "location": { "addrLine": "1 Microsoft Way", "postCode": "98121", "settlement": "Redmond", "region": "WA", "country": "USA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recent studies emphasize the need of document context in human evaluation of machine translations, but little research has been done on the impact of user interfaces on annotator productivity and the reliability of assessments. In this work, we compare human assessment data from the last two WMT evaluation campaigns collected via two different methods for document-level evaluation. Our analysis shows that a document-centric approach to evaluation where the annotator is presented with the entire document context on a screen leads to higher quality segment and document level assessments. It improves the correlation between segment and document scores and increases inter-annotator agreement for document scores but is considerably more time consuming for annotators.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Recent studies emphasize the need of document context in human evaluation of machine translations, but little research has been done on the impact of user interfaces on annotator productivity and the reliability of assessments. In this work, we compare human assessment data from the last two WMT evaluation campaigns collected via two different methods for document-level evaluation. Our analysis shows that a document-centric approach to evaluation where the annotator is presented with the entire document context on a screen leads to higher quality segment and document level assessments. It improves the correlation between segment and document scores and increases inter-annotator agreement for document scores but is considerably more time consuming for annotators.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Recently, several studies have suggested that document context is required for the reliable human evaluation of machine-translated documents Laubli et al., 2020) . With the improved performance of neural machine translation systems (NMT) over the past years, this is particularly important when assessing the potential for human parity or super-human performance of MT systems (L\u00e4ubli et al., 2018; Toral et al., 2018) . Following these recommendations, the WMT Conference on Machine Translation 1 has moved towards adopting and presenting document context in their human evaluation campaigns of 2019 and 2020 (Barrault et al., 2019 (Barrault et al., , 2020 . 
The WMT campaigns are the largest academic efforts on human evaluation of machine-translated news articles in the field, running yearly since 2007.", "cite_spans": [ { "start": 141, "end": 161, "text": "Laubli et al., 2020)", "ref_id": "BIBREF16" }, { "start": 377, "end": 398, "text": "(L\u00e4ubli et al., 2018;", "ref_id": "BIBREF17" }, { "start": 399, "end": 418, "text": "Toral et al., 2018)", "ref_id": null }, { "start": 610, "end": 632, "text": "(Barrault et al., 2019", "ref_id": "BIBREF1" }, { "start": 633, "end": 657, "text": "(Barrault et al., , 2020", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "At WMT19, the previous segment-level direct assessment evaluation (Bojar et al., 2017 (Bojar et al., , 2018 ) -1 http://www.statmt.org/wmt20/ where translated segments were presented to evaluators 2 in random order -was extended by introducing \"segment ratings with document context\" (Barrault et al., 2019) , and assessments of both, individual segments and entire documents, were collected. In this approach, segments from a single document translated by the same MT system were provided sequentially to evaluators in the order as they appear in the document, only one segment shown at a time ( Fig. 1a) , followed by the entire document comprised of already scored segments ( Fig. 1b) . WMT 2020 (Barrault et al., 2020) implemented a more document-centric approach, displaying the full translated document on a single screen (Fig. 1c) for most of the out-of-English language pairs.", "cite_spans": [ { "start": 66, "end": 85, "text": "(Bojar et al., 2017", "ref_id": "BIBREF2" }, { "start": 86, "end": 107, "text": "(Bojar et al., , 2018", "ref_id": "BIBREF3" }, { "start": 284, "end": 307, "text": "(Barrault et al., 2019)", "ref_id": "BIBREF1" }, { "start": 699, "end": 722, "text": "(Barrault et al., 2020)", "ref_id": null } ], "ref_spans": [ { "start": 597, "end": 605, "text": "Fig. 1a)", "ref_id": "FIGREF1" }, { "start": 679, "end": 687, "text": "Fig. 1b)", "ref_id": "FIGREF1" }, { "start": 828, "end": 837, "text": "(Fig. 1c)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While the change was primarily about the user interface (UI), we believe it can impact the quality of document-level evaluation to a large extent. Toral (2020) has noticed potential issues arising from the limited inter-sentential context in the WMT19 method, in which the evaluator does not have continuous access to all segments from the document. Unable to revisit previous sentences and never seeing subsequent sentences, the evaluator might forget or lack access to important details necessary to rate the current segment. On the other hand, displaying a long document on a screen can notably increase cognitive load, potentially lowering reliability of assessments over time (Gonzalez et al., 2011) , and increase annotation time and costs, especially at the scale of the WMT evaluation campaigns.", "cite_spans": [ { "start": 147, "end": 159, "text": "Toral (2020)", "ref_id": "BIBREF21" }, { "start": 681, "end": 704, "text": "(Gonzalez et al., 2011)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we compare human assessment scores collected during the last two WMT evaluation campaigns and analyze the impacts of the user interface changes between these campaigns. 
We also attempt to determine whether switching to the document-centric UI was an improvement to the human evaluation procedure and should be adopted in future editions of WMT for all language pairs. We examine if and to what extent human raters make use of the document context, estimate the reliability of document ratings collected through both interfaces, and study potential additional costs resulting from the document-centric evaluation at a large scale.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 Document context in human evaluation of MT outputs", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent research emphasized the importance of document context in human evaluation of machine translation, especially in terms of accessing potential human parity or super-human performance (L\u00e4ubli et al., 2018; Toral et al., 2018; Toral, 2020) . Several works have compiled sets of recommendations for document-level evaluation. For example, Laubli et al. (2020) recommend evaluation of documents instead of independent sentences as translators tend to judge machine translation more favourably if they cannot identify errors related to textual coherence and cohesion due to lack of context. have examined the necessary context span needed for evaluation across different domains, and for relatively short documents like news articles, the authors recommend presenting the whole document during the assessment of individual segments. Using document context has also been recommended by Toral (2020) who reported that this information was needed for evaluators to rank systems in a contrastive evaluation setting. Having the text available during the assessment of fluency or adequacy might be essential for some evaluators who spend more time reading than assessing (Castilho, 2020) .", "cite_spans": [ { "start": 189, "end": 210, "text": "(L\u00e4ubli et al., 2018;", "ref_id": "BIBREF17" }, { "start": 211, "end": 230, "text": "Toral et al., 2018;", "ref_id": null }, { "start": 231, "end": 243, "text": "Toral, 2020)", "ref_id": "BIBREF21" }, { "start": 342, "end": 362, "text": "Laubli et al. (2020)", "ref_id": "BIBREF16" }, { "start": 886, "end": 898, "text": "Toral (2020)", "ref_id": "BIBREF21" }, { "start": 1166, "end": 1182, "text": "(Castilho, 2020)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although the literature is consistent about the need of document context in human evaluation of MT, little research has been done on the impact of experimental design and user interfaces on annotator productivity and the reliability of assessments in this context. The existing research on experimental designs for machine translation evaluation focuses on contrasting direct assessments with pairwise rankings (Novikova et al., 2018; Sakaguchi and Van Durme, 2018) and not on the optimal presentation of the document-level information. However, even the simple UI design decision of aligning document translations on the sentence level impacts efficiency of some evaluators (Popovi\u0107, 2020) . 
With this work, we want to promote that direction of research.", "cite_spans": [ { "start": 411, "end": 434, "text": "(Novikova et al., 2018;", "ref_id": "BIBREF18" }, { "start": 435, "end": 465, "text": "Sakaguchi and Van Durme, 2018)", "ref_id": "BIBREF20" }, { "start": 675, "end": 690, "text": "(Popovi\u0107, 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3 Document-level human evaluation campaigns at WMT", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "During the WMT evaluation campaigns of 2019 and 2020, segment and document-level assessments of document translations were collected, but using different methods and thus user interfaces. Both were implemented in the Appraise evaluation framework (Federmann, 2018) as a source-based direct assessment task (Graham et al., 2013; Cettolo et al., 2017) , i.e. all segments and entire documents were judged on a continuous scale between 0 and 100 by bilingual annotators.", "cite_spans": [ { "start": 247, "end": 264, "text": "(Federmann, 2018)", "ref_id": "BIBREF8" }, { "start": 306, "end": 327, "text": "(Graham et al., 2013;", "ref_id": "BIBREF12" }, { "start": 328, "end": 349, "text": "Cettolo et al., 2017)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "At WMT19, the evaluation of a translated document consisted of two parts: first, an evaluator would rate all individual segments in a document translated by one MT system, one by one, in the order they appear in the document, followed by assigning a single score to the whole document. Evaluators would be presented with the translation of a single segment (a source sentence and its translation) per screen, or the translation of the entire document. Figures 1a and 1b depict segment-level and document-level portions of the interface, respectively. This method was a simple document-level extension of the purely segment-level evaluations hosted during the previous editions of the WMT evaluation campaigns and did not require significant changes to the UI. A consequence of this approach was limited inter-sentential context as discussed by Toral (2020) , since evaluators could not revisit the previously rated segments nor see subsequent ones. A rating decision could not be corrected in the light of the later-revealed context.", "cite_spans": [ { "start": 844, "end": 856, "text": "Toral (2020)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "The WMT19 interface", "sec_num": "3.1" }, { "text": "At WMT20, both segment-level and documentlevel evaluations were performed on one screen. An evaluator would be presented with a translation of the entire document produced by one MT system. The document and its translation would be placed on a single vertically scrollable screen in two columns with source sentences on the left and their machine-translated counterparts on the right, aligned at segment-level. Figure 1c depicts a screenshot of this interface.", "cite_spans": [], "ref_spans": [ { "start": 411, "end": 420, "text": "Figure 1c", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "The WMT20 interface", "sec_num": "3.2" }, { "text": "In the default scenario, the evaluator would be rating individual segments sequentially and, after rating all segments, on the same screen, the evaluator would rate the translation of the entire document at the bottom of the screen. 
Evaluators could, however, re-visit and update scores of previously rated segments at any time while still assessing the given document. They could also expand all sliders individually or in full, allowing them to take in all previously assigned scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The WMT20 interface", "sec_num": "3.2" }, { "text": "In our experiments, we utilize the human assessment data collected at the WMT19 and WMT20 evaluation campaigns. We limit the data to outof-English language pairs as the into-English evaluation at WMT20 was done using the WMT19 method of reference-based DA and assessed by crowd workers instead of translators and researchers. Each annotator account provided 200 segment-level scores, and a number of documentlevel scores depending on the length of documents in the annotator's sample. From our analysis, we exclude all documents that contain one or more quality control segments, which constitute about 12% of all segments. 3 We use similar amounts of assessments from both campaigns, as seen in Table 1 : WMT19 provided 208K segment and 13K document ratings, while 187K and 14K were collected for WMT20, respectively. We either compare data collected for Time for single segment score 00:16 \u00b1 00:06 00:24 \u00b1 00:13 +47.4 Time for single document score 00:12 \u00b1 00:09 00:06 \u00b1 00:04 -42.7 all eight languages in each campaign or only subsets from four languages that were present in both years, i.e. Czech, German, Russian, and Chinese, minimizing differentiation factors between the data. Note that the WMT19 and WMT20 assessment data concern disjoint sets of segments as different test sets and MT systems were evaluated in both campaigns. We are interested in general patterns in the data at a larger scale, so we do not perceive this as an issue, but are aware of the fact in our conclusions. In a more ideal situation, we would have been able to perform A/B testing of different interfaces at the same campaign, but this was not an available option during the actual campaigns.", "cite_spans": [ { "start": 624, "end": 625, "text": "3", "ref_id": null } ], "ref_spans": [ { "start": 696, "end": 703, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Human assessment data", "sec_num": "4" }, { "text": "We aim at comparing the WMT19 and WMT20 interfaces for segment and document-level human assessments of MT outputs by analyzing the data that has been collected using both methods. We analyze annotation times, compare correlations of document and averaged segment ratings, and examine the inter-annotator agreement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on WMT data", "sec_num": "5" }, { "text": "We analyze annotation times to examine if and to what extent document context is used by annotators if it is available to them during assessment of individual segments. In both interfaces, two timestamps were collected for each segment or document. In WMT19, timestamps were recorded when a new page opened and when an annotator submitted a score. In the WMT20 document-level interface timestamps were recorded when a segment was (automatically or manually) expanded and when a score was submitted. Note that in the WMT20 campaign, annotators see all segments during the assessment of the document and can read ahead even before the first timestamp is collected. 
This could make the collected annotation times for WMT20 slightly less reliable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation times", "sec_num": "5.1" }, { "text": "We report annotation time statistics only for evaluators who completed their task consisting of 200 segments (74% of evaluators at WMT19 and 84% at WMT20). Very quickly annotated items indicate users who potentially gamed the task and assigned random scores. Items that took an excessive amount of time were likely interrupted with unrelated activity or otherwise idle. In order to account for these situations, we remove data points with values smaller than the 10th percentile or larger than the 90th percentile. The results are shown in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 540, "end": 547, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Annotation times", "sec_num": "5.1" }, { "text": "Our observations are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation times", "sec_num": "5.1" }, { "text": "\u2022 Providing the full document context increases the total annotation time per task by 68% on average. This suggests that annotators do read the context and use it during assessments. Significantly increased annotation time raises the question about cost efficiency of the document-centric evaluations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation times", "sec_num": "5.1" }, { "text": "\u2022 The more context is available, the more time annotators spend on studying it: during WMT20, annotators spent 74% more time on documents with 20 or more segments than on documents of similar length during WMT19, whereas the per-document annotation time for shorter documents with 10 or fewer segments increased by only 37%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation times", "sec_num": "5.1" }, { "text": "\u2022 Comparing the average annotation times for segments from the beginning of the document with those farther into the documents, we can see that with the WMT20 interface annotators significantly increase the pace of annotation throughout the assessment of segments in a document. this is much less prominent for WMT19, which suggests that annotators do read the context ahead before making assessments (Castilho, 2020) and that they can memorize and make better use of the preceding context if it is available to them at all time.", "cite_spans": [ { "start": 401, "end": 417, "text": "(Castilho, 2020)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Annotation times", "sec_num": "5.1" }, { "text": "As described in Section 3, the new interface allowed annotators to revise any segment score in a document before submitting the document score. We found that annotators did not use this feature often, and only 1.9% segment-level scores were revised, which resulted in 9.0% documents with one or more revised scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation times", "sec_num": "5.1" }, { "text": "These observations suggest that annotators do make use of the available context and spend additional time studying it. 
Whether using that context results in more reliable quality assessments at segment and document level remains however unanswered.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation times", "sec_num": "5.1" }, { "text": "We measure the similarity between document-level scores and aggregated segment-level scores using different statistics, for example an average, from the same documents. We use the Pearson coefficient as the correlation measure (Freedman et al., 2007) . We hypothesize that an increased correlation may be contributed to an improved capability of the user interface for reliable assessment of document translations by annotators.", "cite_spans": [ { "start": 227, "end": 250, "text": "(Freedman et al., 2007)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Correlation of document and segment-level judgements", "sec_num": "5.2" }, { "text": "Our main results are presented in Table 3 and Figure 2 . We excluded all documents that contained one or more segments used for quality control (26% and 22% for WMT19 and WMT20, respectively) before computing the correlation statistics. We did not exclude scores from users who did not pass the quality control as this is not practiced by the WMT organizers when computing human rankings of MT systems for out-of-English languages. These users contributed only a small fraction of the data and excluding their scores does not meaningfully change the results. The scores were not standardized prior to computation.", "cite_spans": [], "ref_spans": [ { "start": 34, "end": 41, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 46, "end": 54, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Correlation of document and segment-level judgements", "sec_num": "5.2" }, { "text": "We observe the following effects of the WMT20 interface compared to the WMT19 interface:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlation of document and segment-level judgements", "sec_num": "5.2" }, { "text": "\u2022 We can see consistently higher correlations between document-level scores and all tested aggregations of segment-level scores for WMT20. This effect is even more prominent on the four common language pairs used in both campaigns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlation of document and segment-level judgements", "sec_num": "5.2" }, { "text": "\u2022 Document-level scores show the highest correlation with the averaged segment-level scores. The very high correlation of 0.92 indicates that the average of segment ratings from a document might be used as a reasonable approximation of the final document ratings in the document-centric evaluation. This might justify dropping the final document score from the assessment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlation of document and segment-level judgements", "sec_num": "5.2" }, { "text": "\u2022 The lowest segment score in documents correlates better with the overall document score than the highest segment score (Min. seg. vs Max. seg.). 
Intuitively, badly translated segments may impact the overall perception of the document quality more than higherquality segment translations, or this could be attributed to the fact that shorter sentences are more likely to be translated correctly, but annotators may not see them as contributive to the overall document translation quality as longer sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlation of document and segment-level judgements", "sec_num": "5.2" }, { "text": "\u2022 Regardless of the user interface, segments from the end of a document influence assessment of the entire document more than segments from the beginning of the document (Avg. of first 5 vs Avg. of last 5). From this, we do not observe that showing segments sequentially penalizes the very first segments in the document in contributing to the overall document score. However, the comparison of correlations for short and long documents (up to 10 segments, or more than 20 segments; bottom part of Table 3a ) reveals that WMT20 seems to improve the contribution of early segments to the document score for long documents.", "cite_spans": [], "ref_spans": [ { "start": 498, "end": 506, "text": "Table 3a", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Correlation of document and segment-level judgements", "sec_num": "5.2" }, { "text": "\u2022 In Figure 2 , we computed correlations for averaged segment-level scores in relation to the number of segments in documents. Interestingly, for WMT20, the correlation increases for the longest documents (more than 25 segments).", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 13, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Correlation of document and segment-level judgements", "sec_num": "5.2" }, { "text": "The same trends are observed if Spearman's or Kendall's rank correlation coefficients are used instead of Pearson's correlation coefficient.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlation of document and segment-level judgements", "sec_num": "5.2" }, { "text": "We compute annotator agreement as a measure of reliability between annotators with Cohen's kappa coefficient (Cohen, 1960) \u03ba = P a \u2212 P e 1 \u2212 P e , where P a is the observed proportion of times that two annotators agree, and P e is the expected mean proportion of agreement due to chance. Values of \u03ba close to 0 are interpreted as no agreement and \u03ba is equal to 1 if there is perfect agreement. P a is computed from pairwise comparisons of all documents that have been annotated by two or more annotators by counting the proportion of times that two annotators agree on the score. 4 It is assumed that two annotators agree if their assigned scores s i and s j differ no more than a predefined tolerance t, i.e. |s i \u2212 s j | \u2264 t. 
P e is constant for a given t and computed as the sum of probabilities of randomly assigning a score within the tolerance t (inclusive) over all possible scores from 1 to 100, i.e.:", "cite_spans": [ { "start": 109, "end": 122, "text": "(Cohen, 1960)", "ref_id": "BIBREF7" }, { "start": 580, "end": 581, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Inter-annotator agreement", "sec_num": "5.3" }, { "text": "P e = i\u2208[1,100] min(i + t, 100) \u2212 max(i \u2212 t, 0) + 1 100 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inter-annotator agreement", "sec_num": "5.3" }, { "text": "Examples of P e for different t are presented in Table 5 . We compute inter-annotator agreement (IAA) for t = 5, 10, 15, 20, 25, 30, and compare agreement for document-level and averaged segmentlevel scores, presenting the results in Table 4 . Since there are very few annotators who have annotated the same documents more than once, we do not compute document-level intra-annotator agreement.", "cite_spans": [], "ref_spans": [ { "start": 49, "end": 56, "text": "Table 5", "ref_id": "TABREF10" }, { "start": 234, "end": 241, "text": "Table 4", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Inter-annotator agreement", "sec_num": "5.3" }, { "text": "Here, our main observations are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inter-annotator agreement", "sec_num": "5.3" }, { "text": "\u2022 Obviously, the larger the tolerance t, the higher the agreement. Because the average dif- ference of document-level and segment-level scores for documents assessed multiple times is between 15.0 and 19.6 (not shown in the table), we can assume that a t value of 15 or 20 is the most reasonable. In this case, the inter-annotator agreement is fair or sometimes moderate according to the recommended interpretation scale proposed by Landis and Koch (1977) .", "cite_spans": [ { "start": 433, "end": 455, "text": "Landis and Koch (1977)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Inter-annotator agreement", "sec_num": "5.3" }, { "text": "\u2022 For both methods, agreement for documentlevel scores is lower than for segment-level scores. This confirms the finding of Castilho (2020) that document-level evaluation efforts where annotators assign one score per document leads to lower levels of inter-annotator agreement for adequacy when compared to segment-level evaluation. In contrary to that work, our analysis is done at a much larger scale and for multiple language pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inter-annotator agreement", "sec_num": "5.3" }, { "text": "\u2022 Inter-annotator agreement of document-level scores is higher for WMT20 than for WMT19 (4th column). Interestingly, the opposite is true for averaged segment-level scores (7th column), and it is even more prominent for the subset of four common languages. 
We will discuss this some more in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inter-annotator agreement", "sec_num": "5.3" }, { "text": "\u2022 As shown in Figure 3 , inter-annotator agreement decreases with increasing document length for WMT20, but it flattens for the longest documents in the case of WMT19.", "cite_spans": [], "ref_spans": [ { "start": 14, "end": 22, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Inter-annotator agreement", "sec_num": "5.3" }, { "text": "In Appendix A we provide inter-annotator agreement results computed with the Krippendorff's alpha coefficient (Hayes and Krippendorff, 2007) for reference.", "cite_spans": [ { "start": 110, "end": 140, "text": "(Hayes and Krippendorff, 2007)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Inter-annotator agreement", "sec_num": "5.3" }, { "text": "In the presented experiments, we have observed interesting differences in correlation and interannotator agreement for long documents. In WMT19, for long documents, the correlation between segment-level scores and document-level scores significantly decreases, while IAA flattens out and eventually ends up being higher than for WMT20. We think this might be an effect of cognitive overload when annotators are presented with long document translation text pairs without visual help in the form of sentence alignment and similar hints. 5 A large wall of text might discourage annotators and they might fall back to assigning default or less diverse \"safe\" scores. Analyzing annotation times in relation to the document length, presented in Figure 4 supports this explanation. The average time of document ratings flattens for documents longer than 20 segments for WMT19, while it increases for WMT20.", "cite_spans": [], "ref_spans": [ { "start": 740, "end": 748, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Another non-intuitive observation we have made is that the inter-annotator agreement for averaged segment scores is higher in WMT19 than in WMT20. The agreement for document scores is, as expected, consistently higher for WMT20. If this is not solely attributed to the different data sets used in both campaigns, we would explain it by a tendency of annotators to assign higher scores if they cannot identify errors due to insufficient context ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Annotation time (sec.) WMT20 Seg. WMT20 Doc. WMT19 Seg.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document length", "sec_num": null }, { "text": "WMT19 Doc. Figure 4 : Annotation times (sec.) for single segment or document score in relation to the number of segments in the document (all languages). (Laubli et al., 2020) , which may occur for WMT19 because of its limited inter-sentential context. Another explanation would be that the WMT20 interface presenting all sentences from the document at once, encourages annotators to assign more diversified scores across segments; this may then lower the agreement at segment level. 
However, we were not able to confirm this based on an analysis of histograms of segment scores and their standard deviations.", "cite_spans": [ { "start": 154, "end": 175, "text": "(Laubli et al., 2020)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 11, "end": 19, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Document length", "sec_num": null }, { "text": "Our study is conducted post-hoc, i.e. we cannot test for scenarios that were not anticipated during the actual evaluation campaigns. A more conclusive interpretation would require A/B testing with the same sets of documents, translations and annotators used for both evaluation methods. Nevertheless, we think that the presented comparison of two WMT evaluation campaigns supports the assumption that the document-centric evaluation conducted during WMT20 produced more reliable document ratings. We believe this to be an important finding because higher quality of collected document assessments should help to avoid statistical issues arising from low statistical power as observed by .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document length", "sec_num": null }, { "text": "In this work, we have compared two methods for document-level human evaluation of MT outputs through an analysis of the large-scale human assessment data from WMT evaluation campaigns, consisting of 8 different out-of-English language pairs. Our main findings are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "7" }, { "text": "\u2022 Showing the entire document can extend the annotation time of individual segments by as much as 68% -presumably because annotators make use of the available context during evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "7" }, { "text": "\u2022 Annotators rarely change their segment-level ratings even if this option is available to them. Nevertheless, in some instances they do.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "7" }, { "text": "\u2022 Annotators tend to rate documents more consistently with their segment ratings if the entire document context is available at all time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "7" }, { "text": "\u2022 In the document-centric evaluation, document ratings can be approximated reasonably well by averaged segment level scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "7" }, { "text": "\u2022 Inter-annotator agreement for document ratings increases if segment level evaluation is made in the global context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "7" }, { "text": "Our analysis suggests that not only the entire document context is needed for reliable human evaluation of news translations, as recent studies have shown, but that the method in which the context is presented to evaluators is also important for collecting good-quality segment and documentlevel assessments. 
We conclude that the WMT20 method produces more reliable ratings, and thus can be adopted for future editions of the WMT document-level human evaluation campaigns for all languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "7" }, { "text": "In future work, we plan to strengthen our findings by comparing the WMT19 and WMT20 methods in A/B testing with common sets of documents, translations and annotators for both settings. A Appendix Table 6 and Figure 5 provide inter-annotator agreement for document-level and averaged segmentlevel scores in the form of Krippendorff's alpha coefficient (Hayes and Krippendorff, 2007) for 4 common languages from WMT19 and WMT20. We present coefficients computed with interval and ratio metrics, and for a direct comparison with the results presented in Section 5.3, with the nominal metric with different tolerances t, i.e. two scores are assumed equal if they differ no more than t. Krippendorff's alpha coefficients computed using the interval or ratio metrics do not show the higher agreement on document ratings for WMT20 compared to WMT19 that has been observed with Cohen's Kappa, but the difference is again smaller than for averaged segment ratings. Coefficients computed using the nominal metric with tolerance thresholds align with the inter-annotator agreement results obtained with the other statistic measure. Document length Inter-annotator agreement (interval scale) WMT19 Avg. seg. WMT20 Doc. WMT20 Avg. seg. WMT19 Doc.", "cite_spans": [ { "start": 351, "end": 381, "text": "(Hayes and Krippendorff, 2007)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 196, "end": 203, "text": "Table 6", "ref_id": "TABREF14" }, { "start": 208, "end": 216, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Summary", "sec_num": "7" }, { "text": "Figure 5: Inter-annotator agreements (Krippendorff's alpha, interval metric) for document-level and averaged segment-level scores in relation to the number of segments in the document (4 common languages).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "7" }, { "text": "In this work, we use the terms evaluator and annotator interchangeably.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Please refer toBarrault et al. (2020) for more details on the quality control methods used at WMT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "If a document is annotated by more than two annotators, pairwise comparisons between all annotators are counted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "See the example onFigure 1bconsisting only of 6 segments. 
A thoughtful evaluation of an article with 20 or more segments would appear even more challenging.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Proceedings of the Fifth Conference on Machine Translation", "authors": [ { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Magdalena", "middle": [], "last": "Biesialska", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Marta", "middle": [ "R" ], "last": "Costa-Juss\u00e0", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Grundkiewicz", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Huck", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Joanis", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Kocmi", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Chi-Kiu", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Ljube\u0161i\u0107", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Makoto", "middle": [], "last": "Morishita", "suffix": "" }, { "first": "Masaaki", "middle": [], "last": "Nagata", "suffix": "" }, { "first": "Toshiaki", "middle": [], "last": "Nakazawa", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "1--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lo\u00efc Barrault, Magdalena Biesialska, Ond\u0159ej Bojar, Marta R. Costa-juss\u00e0, Christian Federmann, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Matthias Huck, Eric Joanis, Tom Kocmi, Philipp Koehn, Chi-kiu Lo, Nikola Ljube\u0161i\u0107, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshi- aki Nakazawa, Santanu Pal, Matt Post, and Marcos Zampieri. 2020. Findings of the 2020 conference on machine translation (WMT20). In Proceedings of the Fifth Conference on Machine Translation, pages 1-55, Online. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Findings of the 2019 conference on machine translation (WMT19)", "authors": [ { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Marta", "middle": [ "R" ], "last": "Costa-Juss\u00e0", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Fishel", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Huck", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Shervin", "middle": [], "last": "Malmasi", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Mathias", "middle": [], "last": "M\u00fcller", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Conference on Machine Translation", "volume": "2", "issue": "", "pages": "1--61", "other_ids": { "DOI": [ "10.18653/v1/W19-5301" ] }, "num": null, "urls": [], "raw_text": "Lo\u00efc Barrault, Ond\u0159ej Bojar, Marta R. Costa-juss\u00e0, Christian Federmann, Mark Fishel, Yvette Gra- ham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias M\u00fcller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine transla- tion (WMT19). In Proceedings of the Fourth Con- ference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1-61, Florence, Italy. As- sociation for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Findings of the 2017 conference on machine translation (WMT17)", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Rajen", "middle": [], "last": "Chatterjee", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Shujian", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Huck", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Varvara", "middle": [], "last": "Logacheva", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Matteo", "middle": [], "last": "Negri", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" }, { "first": "Raphael", "middle": [], "last": "Rubino", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Turchi", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Second Conference on Machine Translation", "volume": "", "issue": "", "pages": "169--214", "other_ids": { "DOI": [ "10.18653/v1/W17-4717" ] }, "num": null, "urls": [], "raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Lo- gacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, and Marco Turchi. 2017. 
Findings of the 2017 conference on machine translation (WMT17). In Proceedings of the Sec- ond Conference on Machine Translation, pages 169- 214, Copenhagen, Denmark. Association for Com- putational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Findings of the 2018 conference on machine translation (WMT18)", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Fishel", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers", "volume": "", "issue": "", "pages": "272--303", "other_ids": { "DOI": [ "10.18653/v1/W18-6401" ] }, "num": null, "urls": [], "raw_text": "Ond\u0159ej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 con- ference on machine translation (WMT18). In Pro- ceedings of the Third Conference on Machine Trans- lation: Shared Task Papers, pages 272-303, Bel- gium, Brussels. Association for Computational Lin- guistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "On the same page? Comparing inter-annotator agreement in sentence and document level human machine translation evaluation", "authors": [ { "first": "Sheila", "middle": [], "last": "Castilho", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fifth Conference on Machine Translation", "volume": "", "issue": "", "pages": "1150--1159", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sheila Castilho. 2020. On the same page? Compar- ing inter-annotator agreement in sentence and doc- ument level human machine translation evaluation. In Proceedings of the Fifth Conference on Machine Translation, pages 1150-1159, Online. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "On context span needed for machine translation evaluation", "authors": [ { "first": "Sheila", "middle": [], "last": "Castilho", "suffix": "" }, { "first": "Maja", "middle": [], "last": "Popovi\u0107", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Way", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "3735--3742", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sheila Castilho, Maja Popovi\u0107, and Andy Way. 2020. On context span needed for machine translation eval- uation. In Proceedings of the 12th Language Re- sources and Evaluation Conference, pages 3735- 3742, Marseille, France. 
European Language Re- sources Association.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Overview of the IWSLT 2017 evaluation campaign", "authors": [ { "first": "Mauro", "middle": [], "last": "Cettolo", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Niehues", "middle": [], "last": "Jan", "suffix": "" }, { "first": "St\u00fcker", "middle": [], "last": "Sebastian", "suffix": "" }, { "first": "Sudoh", "middle": [], "last": "Katsuitho", "suffix": "" }, { "first": "Yoshino", "middle": [], "last": "Koichiro", "suffix": "" }, { "first": "Federmann", "middle": [], "last": "Christian", "suffix": "" } ], "year": 2017, "venue": "International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "2--14", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mauro Cettolo, Marcello Federico, Luisa Bentivogli, Niehues Jan, St\u00fcker Sebastian, Sudoh Katsuitho, Yoshino Koichiro, and Federmann Christian. 2017. Overview of the IWSLT 2017 evaluation campaign. In International Workshop on Spoken Language Translation, pages 2-14.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A coefficient of agreement for nominal scales. Educational and Psychological Measurement", "authors": [ { "first": "J", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 1960, "venue": "", "volume": "20", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Cohen. 1960. A coefficient of agreement for nomi- nal scales. Educational and Psychological Measure- ment, 20(1):37.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Appraise evaluation framework for machine translation", "authors": [ { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "86--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Federmann. 2018. Appraise evaluation framework for machine translation. In Proceedings of the 27th International Conference on Computa- tional Linguistics: System Demonstrations, pages 86-88, Santa Fe, New Mexico. Association for Com- putational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A cognitive modeling account of simultaneous learning and fatigue effects", "authors": [ { "first": "Cleotilde", "middle": [], "last": "Gonzalez", "suffix": "" }, { "first": "Brad", "middle": [], "last": "Best", "suffix": "" }, { "first": "Alice", "middle": [ "F" ], "last": "Healy", "suffix": "" }, { "first": "James", "middle": [ "A" ], "last": "Kole", "suffix": "" }, { "first": "Lyle", "middle": [ "E" ], "last": "Bourne", "suffix": "" } ], "year": 2011, "venue": "Cognitive Systems Research", "volume": "12", "issue": "1", "pages": "19--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cleotilde Gonzalez, Brad Best, Alice F Healy, James A Kole, and Lyle E Bourne Jr. 2011. A cognitive mod- eling account of simultaneous learning and fatigue effects. 
Cognitive Systems Research, 12(1):19-32.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Continuous measurement scales in human evaluation of machine translation", "authors": [ { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Alistair", "middle": [], "last": "Moffat", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Zobel", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse", "volume": "", "issue": "", "pages": "33--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2013. Continuous measurement scales in human evaluation of machine translation. In Pro- ceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 33-41, Sofia, Bulgaria. Association for Computational Lin- guistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Statistical power and translationese in machine translation evaluation", "authors": [ { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "72--81", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.6" ] }, "num": null, "urls": [], "raw_text": "Yvette Graham, Barry Haddow, and Philipp Koehn. 2020. Statistical power and translationese in ma- chine translation evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 72-81, On- line. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Answering the call for a standard reliability measure for coding data", "authors": [ { "first": "Andrew", "middle": [ "F" ], "last": "Hayes", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Krippendorff", "suffix": "" } ], "year": 2007, "venue": "Communication Methods and Measures", "volume": "1", "issue": "1", "pages": "77--89", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew F. Hayes and Klaus Krippendorff. 2007. An- swering the call for a standard reliability measure for coding data. Communication Methods and Mea- sures, 1(1):77-89.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The measurement of observer agreement for categorical data", "authors": [ { "first": "J", "middle": [], "last": "", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Landis", "suffix": "" }, { "first": "Gary", "middle": [ "G" ], "last": "Koch", "suffix": "" } ], "year": 1977, "venue": "Biometrics", "volume": "", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Richard Landis and Gary G. Koch. 1977. The mea- surement of observer agreement for categorical data. 
Biometrics, 33(1).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A set of recommendations for assessing human-machine parity in language translation", "authors": [ { "first": "Samuel", "middle": [], "last": "Laubli", "suffix": "" }, { "first": "Sheila", "middle": [], "last": "Castilho", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Qinlan", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Toral", "suffix": "" } ], "year": 2020, "venue": "Journal of Artificial Intelligence Research", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel Laubli, Sheila Castilho, Graham Neubig, Rico Sennrich, Qinlan Shen, and Antonio Toral. 2020. A set of recommendations for assessing hu- man-machine parity in language translation. Jour- nal of Artificial Intelligence Research (JAIR), 67.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Has machine translation achieved human parity? a case for document-level evaluation", "authors": [ { "first": "Samuel", "middle": [], "last": "L\u00e4ubli", "suffix": "" }, { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Volk", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4791--4796", "other_ids": { "DOI": [ "10.18653/v1/D18-1512" ] }, "num": null, "urls": [], "raw_text": "Samuel L\u00e4ubli, Rico Sennrich, and Martin Volk. 2018. Has machine translation achieved human parity? a case for document-level evaluation. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4791-4796, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "RankME: Reliable human ratings for natural language generation", "authors": [ { "first": "Jekaterina", "middle": [], "last": "Novikova", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "72--78", "other_ids": { "DOI": [ "10.18653/v1/N18-2012" ] }, "num": null, "urls": [], "raw_text": "Jekaterina Novikova, Ond\u0159ej Du\u0161ek, and Verena Rieser. 2018. RankME: Reliable human ratings for natural language generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 72-78, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Informative manual evaluation of machine translation output", "authors": [], "year": null, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "5059--5069", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.444" ] }, "num": null, "urls": [], "raw_text": "Maja Popovi\u0107. 2020. Informative manual evalua- tion of machine translation output. 
In Proceed- ings of the 28th International Conference on Com- putational Linguistics, pages 5059-5069, Barcelona, Spain (Online). International Committee on Compu- tational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Efficient online scalar annotation with bounded support", "authors": [ { "first": "Keisuke", "middle": [], "last": "Sakaguchi", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "208--218", "other_ids": { "DOI": [ "10.18653/v1/P18-1020" ] }, "num": null, "urls": [], "raw_text": "Keisuke Sakaguchi and Benjamin Van Durme. 2018. Efficient online scalar annotation with bounded sup- port. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 208-218, Melbourne, Australia. Association for Computational Linguis- tics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Reassessing claims of human parity and super-human performance in machine translation at WMT 2019", "authors": [ { "first": "Antonio", "middle": [], "last": "Toral", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 22nd", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antonio Toral. 2020. Reassessing claims of human par- ity and super-human performance in machine trans- lation at WMT 2019. In Proceedings of the 22nd", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "(a) The segment-level portion of the WMT19 interface.(b) The document-rating portion of the WMT19 interface.(c) The document-centric WMT20 interface", "uris": null, "type_str": "figure" }, "FIGREF1": { "num": null, "text": "Screen shots of the Appraise interfaces used for the WMT19 (left) and WMT20 (right) human evaluation campaigns.", "uris": null, "type_str": "figure" }, "FIGREF2": { "num": null, "text": "Pearson correlations between document-level and the average of segment-level scores in relation to the number of segments in the document (4 common languages).", "uris": null, "type_str": "figure" }, "FIGREF3": { "num": null, "text": "Inter-annotator agreements (Cohen's kappa, t = 15) for document-level and averaged segment-level scores in relation to the number of segments in the document (4 common languages).", "uris": null, "type_str": "figure" }, "TABREF1": { "type_str": "table", "num": null, "content": "", "html": null, "text": "Statistics of data from the WMT19 and WMT20 campaigns, including languages, the total number of annotators and collected segment-level and document-level scores, after excluding documents with quality control items." }, "TABREF3": { "type_str": "table", "num": null, "content": "
", "html": null, "text": "Average annotation times with standard deviations for tasks, documents, parts of documents and segments in the (hours):minutes:seconds format." }, "TABREF5": { "type_str": "table", "num": null, "content": "
", "html": null, "text": "" }, "TABREF9": { "type_str": "table", "num": null, "content": "
t     5      10     15     20     25     30
P_e   0.107  0.199  0.286  0.368  0.445  0.517
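A minimal Python sketch (not the authors' released code) of the tolerance-based chance agreement and Cohen's kappa described in Section 5.3, under the assumption that direct-assessment scores are integers on a 1-100 scale and that the tolerance window is clamped to the ends of that scale; with these assumptions it reproduces the P_e values above, e.g. 0.286 for t = 15.

def chance_agreement(t, lo=1, hi=100):
    # P_e: probability that two independent uniform scores in [lo, hi] differ by at most t.
    n = hi - lo + 1
    # For each score i, count the scores j with |i - j| <= t, clamping the window to the scale.
    matches = sum(min(i + t, hi) - max(i - t, lo) + 1 for i in range(lo, hi + 1))
    return matches / (n * n)

def cohens_kappa(score_pairs, t):
    # Kappa = (P_a - P_e) / (1 - P_e), with P_a taken over pairwise comparisons of two
    # annotators' scores for the same item (document or averaged segment scores).
    pairs = list(score_pairs)
    p_a = sum(abs(a - b) <= t for a, b in pairs) / len(pairs)
    p_e = chance_agreement(t)
    return (p_a - p_e) / (1 - p_e)

for t in (5, 10, 15, 20, 25, 30):
    print(t, round(chance_agreement(t), 3))   # 0.107, 0.199, 0.286, 0.368, 0.445, 0.517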
", "html": null, "text": "Inter-annotator agreement (Cohen's kappa) on document-level scores and averaged segment-level scores for different tolerances t, i.e. two scores are assumed equal if they differ no more than t." }, "TABREF10": { "type_str": "table", "num": null, "content": "", "html": null, "text": "Examples of P e for different tolerances t." }, "TABREF12": { "type_str": "table", "num": null, "content": "
", "html": null, "text": "Annual Conference of the European Association for Machine Translation, pages 185-194, Lisboa, Portugal. European Association for Machine Translation. Antonio Toral, Sheila Castilho, Ke Hu, and Andy Way. 2018. Attaining the unattainable? reassessing claims of human parity in neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 113-123, Belgium, Brussels. Association for Computational Linguistics." }, "TABREF14": { "type_str": "table", "num": null, "content": "
Document length bins: 1-5, 6-10, 11-15, 16-20, 21-25, 25+
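For reference, a minimal sketch (not the authors' analysis code) of how Krippendorff's alpha can be computed from the collected ratings under the metrics used in this appendix, i.e. the interval metric (squared difference) and the nominal metric with a tolerance t, assuming each document is represented by the list of scores it received from different annotators.

def krippendorff_alpha(units, delta):
    # units: one list of scores per document (one score per annotator who rated it);
    # delta: distance between two scores, e.g. squared difference for the interval metric.
    units = [u for u in units if len(u) >= 2]   # only multiply-annotated documents are pairable
    values = [v for u in units for v in u]
    n = len(values)
    d_obs = sum(
        sum(delta(a, b) for i, a in enumerate(u) for j, b in enumerate(u) if i != j) / (len(u) - 1)
        for u in units
    ) / n
    d_exp = sum(
        delta(a, b) for i, a in enumerate(values) for j, b in enumerate(values) if i != j
    ) / (n * (n - 1))
    return 1.0 - d_obs / d_exp

interval = lambda a, b: float(a - b) ** 2       # interval metric
def nominal_with_tolerance(t):                  # scores count as equal if they differ by no more than t
    return lambda a, b: 0.0 if abs(a - b) <= t else 1.0

# e.g. krippendorff_alpha(scores_by_document, nominal_with_tolerance(15))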
", "html": null, "text": "Inter-annotator agreement (Krippendorff's alpha) on document-level and averaged segment-level scores for different metrics (4 common languages)." } } } }