{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:29:09.173494Z" }, "title": "Detecting Post-edited References and Their Effect on Human Evaluation", "authors": [ { "first": "V\u011bra", "middle": [], "last": "Kloudov\u00e1", "suffix": "", "affiliation": { "laboratory": "", "institution": "Charles University", "location": { "addrLine": "Malostransk\u00e9 n\u00e1m\u011bst\u00ed 25", "postCode": "118 00", "settlement": "Prague", "country": "Czech Republic" } }, "email": "kloudova@ufal.mff.cuni.cz" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "", "affiliation": { "laboratory": "", "institution": "Charles University", "location": { "addrLine": "Malostransk\u00e9 n\u00e1m\u011bst\u00ed 25", "postCode": "118 00", "settlement": "Prague", "country": "Czech Republic" } }, "email": "bojar@ufal.mff.cuni.cz" }, { "first": "Martin", "middle": [], "last": "Popel", "suffix": "", "affiliation": { "laboratory": "", "institution": "Charles University", "location": { "addrLine": "Malostransk\u00e9 n\u00e1m\u011bst\u00ed 25", "postCode": "118 00", "settlement": "Prague", "country": "Czech Republic" } }, "email": "popel@ufal.mff.cuni.cz" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper provides a quick overview of possible methods how to detect that reference translations were actually created by post-editing an MT system. Two methods based on automatic metrics are presented: BLEU difference between the suspected MT and some other good MT and BLEU difference using additional references. These two methods revealed a suspicion that the WMT 2020 Czech reference is based on MT. The suspicion was confirmed in a manual analysis by finding concrete proofs of the post-editing procedure in particular sentences. Finally, a typology of post-editing changes is presented where typical errors or changes made by the post-editor or errors adopted from the MT are classified.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "This paper provides a quick overview of possible methods how to detect that reference translations were actually created by post-editing an MT system. Two methods based on automatic metrics are presented: BLEU difference between the suspected MT and some other good MT and BLEU difference using additional references. These two methods revealed a suspicion that the WMT 2020 Czech reference is based on MT. The suspicion was confirmed in a manual analysis by finding concrete proofs of the post-editing procedure in particular sentences. Finally, a typology of post-editing changes is presented where typical errors or changes made by the post-editor or errors adopted from the MT are classified.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Over ten years of WMT (Conference on Machine Translation, Barrault et al., 2020) 1 saw a number of manual evaluation methods and established the best strategies for obtaining reference translations for automatic evaluation, see Appendix B in WMT 2020 Findings (Barrault et al., 2020) .", "cite_spans": [ { "start": 58, "end": 80, "text": "Barrault et al., 2020)", "ref_id": null }, { "start": 260, "end": 283, "text": "(Barrault et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One of the instructions for preparing the reference translations explicitly prohibits using any machine translation. 
Yet, in 2020, one of the agencies has not followed this instruction. Not only was it easy to recognize, but we learned several novel insights into manual evaluation of translation, by examining post-edited and independent reference translations and providing a small contrastive style of manual evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We used the English-Czech part of WMT2020 (Barrault et al., 2020) news test set, which consists of 130 documents (1418 segments) originally written 1 http://www.statmt.org/wmt06 till wmt20 in English -news stories downloaded from web. The test set comes with an official reference translation into Czech (REF1) provided by the WMT organizers and done by a professional translation agency. There are also 8 machine translations submitted by the participants of the WMT news translation shared task and 4 translations by online systems anonymized as ONLINE-A, ONLINE-B, ONLINE-G and ONLINE-Z.", "cite_spans": [ { "start": 42, "end": 65, "text": "(Barrault et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2" }, { "text": "We focused on three translations: the official reference, REF1; the best-performing MT system (according to the official WMT manual evaluation), CUNI-DOCTRANSFORMER (Popel, 2020) ; and the best-performing online system, ONLINE-B.", "cite_spans": [ { "start": 165, "end": 178, "text": "(Popel, 2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2" }, { "text": "We hired two professional translators (native Czech speakers) to translate the whole WMT20 test set, thus creating additional references REF2 and REF3. We also hired 18 annotators to judge the translation quality of REF1, CUNI-DOCTRANSFORMER and ONLINE-B. 2 The annotators assessed 90 of the 130 documents, using the RankME evaluation (Novikova et al., 2018) following the methodology of Popel et al. (2020) . In this RankME evaluation, fluency, adequacy and overall quality are evaluated in a source-based sentencelevel document-aware fashion, on a 0-10 scale, where all the evaluated translations are shown on the same screen, allowing thus better reliability in comparisons; see Section 5 for details. Table 1 shows the translation quality of the three references and two selected MT systems according to two manual evaluations, DA (Direct Assessment, Graham et al., 2013) and RankME, and four types of BLEU scores. The first three types use REF1, REF2 and REF3, respectively, as the reference translation in BLEU. The fourth type uses BLEU with two reference translations: REF2+3. While both manual evaluations, DA and RankME, agree that REF1 is better than both CUNI-DOCTRANSFORMER and ONLINE-B, the automatic metric BLEU evaluates one of the two MT systems as better than REF1. 3 For brevity, we report only BLEU, but we confirmed this with several other automatic metrics, e.g. chrF (Popovi\u0107, 2015) .", "cite_spans": [ { "start": 256, "end": 257, "text": "2", "ref_id": null }, { "start": 335, "end": 358, "text": "(Novikova et al., 2018)", "ref_id": "BIBREF4" }, { "start": 388, "end": 407, "text": "Popel et al. 
(2020)", "ref_id": "BIBREF6" }, { "start": 855, "end": 875, "text": "Graham et al., 2013)", "ref_id": "BIBREF2" }, { "start": 1390, "end": 1405, "text": "(Popovi\u0107, 2015)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 705, "end": 712, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Dataset", "sec_num": "2" }, { "text": "The reason for this surprising observation is that most sentences in REF1 are actually post-edited versions of ONLINE-B, as we show in Sections 3.1 and 4 and as was acknowledged by the agency after our investigation. Thus, REF1 and ONLINE-B are more similar than if REF1 had been translated from scratch. 4 It is well-known that BLEU (and other automatic metrics based on similarity with reference translations) is biased when evaluating a system which was used as a basis for post-editing the reference translation.", "cite_spans": [ { "start": 305, "end": 306, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Automatic analysis of references", "sec_num": "3" }, { "text": "It is important to note that the official (manual) evaluation carried out by WMT for the affected English-to-Czech translation direction was source-based DA. The annotators of the DA thus were not affected by the quality of references; instead, they blindly rated REF1 as if it was another competing translation system. The resulting scores of DA document that source-level DA is sufficiently reliable, robust to invalid references. At the same time, it was a little surprising to us that \"mere post-editing\" can increase translation quality so substantially that REF1 significantly outperformed all other systems. Despite the remaining translation errors in REF1, see below, the translator/post-editor did the job well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic analysis of references", "sec_num": "3" }, { "text": "For a finer analysis, we process the news test set at the level of individual documents in the following.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic analysis of references", "sec_num": "3" }, { "text": "Our first suspicion that REF1 is actually post-edited ONLINE-B stems from the fact that ONLINE-B achieved the highest BLEU REF1 score (i.e. BLEU with REF1 as the reference) out of all the MT systems, including the best one according to the manual evaluation, CUNI-DOCTRANSFORMER, as shown in Table 1 . In order to confirm this suspicion, we wanted to automatically find documents where the probability of being post-edited is the highest.", "cite_spans": [], "ref_spans": [ { "start": 292, "end": 299, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Automatic detection of post-editing", "sec_num": "3.1" }, { "text": "Below, we suggest two methods for such document-level automatic detection of post-editing. The first method needs just an output of another MT system. The second method needs one or more additional human references.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic detection of post-editing", "sec_num": "3.1" }, { "text": "We selected CUNI-DOCTRANSFORMER as the other MT system for two reasons. First, it is the best MT system in English-Czech WMT20 according to the official manual evaluation. 
Second, as far as we know, CUNI-DOCTRANSFORMER was not available online at the time of creating the WMT20 references, so it could not be used as the basis for post-editing REF1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detection using another MT system", "sec_num": "3.1.1" }, { "text": "For each document d, we computed", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detection using another MT system", "sec_num": "3.1.1" }, { "text": "Detection1 = BLEU REF1 (ONLINE-B, d)\u2212 BLEU REF1 (CUNI-DOCTRANSFORMER, d). (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detection using another MT system", "sec_num": "3.1.1" }, { "text": "This score was positive for 104 out of the 130 documents. In other words, for 80% of documents, the reference is more similar to ONLINE-B than to the best-performing CUNI-DOCTRANSFORMER, a likely indicator that most of the documents were post-edited.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detection using another MT system", "sec_num": "3.1.1" }, { "text": "We inspected manually three documents with the most negative Detection1 score and did not find any clues of post-editing, but we noticed the quality of ONLINE-B was low for these documents, so perhaps the translator throw away the MT output and translated these documents from scratch (or post-edited so heavily that the original MT output cannot be detected).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detection using another MT system", "sec_num": "3.1.1" }, { "text": "We inspected manually three documents with the most positive Detection1 score and observed these well translated with a reasonably high quality by ONLINE-B, and required just few minor postedits, as was done in REF1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detection using another MT system", "sec_num": "3.1.1" }, { "text": "Finally, we inspected sentences from other documents and found further signals of post-editing (even when the Detection1 score was not high enough to be a convincing proof alone). These examples are discussed in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detection using another MT system", "sec_num": "3.1.1" }, { "text": "A similar detection can be used with an additional human reference instead of an MT system. We had available two such references, REF2 and REF3. We thus opted for a slightly different detection formula which allows to use two-reference BLEU:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detection using additional references", "sec_num": "3.1.2" }, { "text": "For each document d, we computed", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detection using additional references", "sec_num": "3.1.2" }, { "text": "Detection2 = BLEU ONLINE-B (REF1, d)\u2212 BLEU REF2+3 (REF1, d). 
(2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detection using additional references", "sec_num": "3.1.2" }, { "text": "This resulted in a similar ordering of documents as the Detection1 method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detection using additional references", "sec_num": "3.1.2" }, { "text": "3.2 Human translation is more similar to MT than other humans", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detection using additional references", "sec_num": "3.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "For each document d, we computed Score3 = BLEU REF2+3 (REF1, d)\u2212 BLEU REF2+3 (ONLINE-B, d).", "eq_num": "(3)" } ], "section": "Detection using additional references", "sec_num": "3.1.2" }, { "text": "This Score3 score was negative for 96 out of the 130 documents. This means that the similarity of ONLINE-B to the additional references REF2 and REF3 is higher than the similarity of REF1 to REF2 and REF3. When focusing just on REF2, we can see that BLEU REF2 (ONLB) = 31.08 > 28.91 = BLEU REF2 (REF1). This is very surprising given our hypothesis that REF2 is actually translated from scratch without any post-editing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detection using additional references", "sec_num": "3.1.2" }, { "text": "We have two possible explanations for this. First, the REF1 translator tried to \"hide\" the fact that the translation is post-edited, by doing edits which do not affect the translation quality. Second, the REF1 translator actually improved the ONLINE-B translation quality by post-edits which result in less literal translations, while the REF2 translator opted more frequently translations which were likely to be independently produced also by the MT system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detection using additional references", "sec_num": "3.1.2" }, { "text": "Given the fact that both DA and RankME manual evaluations show REF1 is significantly better than ONLINE-B, we hypothesize most of the post-edits were actually improvements. We noticed just a few opposite cases (see below, category 3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detection using additional references", "sec_num": "3.1.2" }, { "text": "In this part of our study, we would like to present a classification of post-editing changes observed in texts which we claim to be post-edited machine translations. These changes signal that the reference translation REF1 has been actually created by post-editing a MT system. For this purpose, we used MT-ComparEval (Klejch et al., 2015) to select 27 sentences which show the highest n-gram overlap with the suspected MT system. We analyzed the edits made by a post-editor in a MT output and compared the source text (English), the MT output (Czech) and the output of the manual post-editing process (Czech).", "cite_spans": [ { "start": 318, "end": 339, "text": "(Klejch et al., 2015)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Post-editing changes typology", "sec_num": "4" }, { "text": "Based on the particular changes found after comparing these three versions of our sentences, we defined the following categories (with examples where SRC is the source sentence, ONLB is the MT ONLINE-B and REF1 is the human post-editing). 
In each category, we would like to present particularly noticeable changes which we assume to be clear evidence of the post-editing procedure. We classify these changes into three categories:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-editing changes typology", "sec_num": "4" }, { "text": "\u2022 Category 1: minor changes, particularly focused on grammar categories, in long adopted structures from the MT (preserving or improving the overall quality of the output),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-editing changes typology", "sec_num": "4" }, { "text": "\u2022 Category 2: unnecessary shifts (without a significant impact on the quality of the output),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-editing changes typology", "sec_num": "4" }, { "text": "\u2022 Category 3: negative shifts (errors made by the post-editor, preserving or even worsening the quality of the MT output).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-editing changes typology", "sec_num": "4" }, { "text": "Furthermore, we also found these errors or conspicuous structures in the post-edited output:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-editing changes typology", "sec_num": "4" }, { "text": "\u2022 Category 4: errors adopted from the MT which the post-editor has not discovered; therefore, they have been preserved in the final text of REF1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-editing changes typology", "sec_num": "4" }, { "text": "For all these four categories, we can state they prove the final output is a result of a post-editing process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-editing changes typology", "sec_num": "4" }, { "text": "4.1 Typology 1: Changes by the post-editor", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-editing changes typology", "sec_num": "4" }, { "text": "Category 3: Errors in writing: SRC their 17-month-old daughter ONLB jejich 17m\u011bs\u00ed\u010dn\u00ed dcerou REF1 jejich 17tim\u011bs\u00ed\u010dn\u00ed dcerou The standard Czech grammar allows only forms 17m\u011bs\u00ed\u010dn\u00ed or sedmn\u00e1ctim\u011bs\u00ed\u010dn\u00ed (17-month). The form 17tim\u011bs\u00ed\u010dn\u00ed (where \"ti\" reflects the pronunciation) is considered an error. The present tense (n\u00e1sleduje = follows) used in ONLB was changed into past tense (n\u00e1sledovala = followed). Changes between past and present tense (while keeping the same verb) occurred relatively often (8 cases) in the investigated 27 sentences. Despite the changes made, the semantic defectiveness of ONLB has been preserved. In Czech, the Nobel prize is Nobelova cena and the word laure\u00e1t\u016f (genitive sg.) is wrong. In this case, Laure\u00e1t Nobelovy ceny would be acceptable. The correct comparative of vn\u00edmav\u00fd = receptive is vn\u00edmav\u011bj\u0161\u00ed. The phrase v\u00edce vn\u00edmav\u00fd is considered non-standard and rather rare. 
5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Changes in spelling", "sec_num": "4.1.1" }, { "text": "Category 3: Changes incompatible with the meaning of the source text: SRC from some 3 billion cu ft at the start of 2019 ONLB z p\u0159ibli\u017en\u011b 3 miliard cu ft na za\u010d\u00e1tku roku 2019 REF1 ze sou\u010dasn\u00fdch 3 miliard kubick\u00fdch stop na za\u010d\u00e1tku roku 2019 ONLB uses p\u0159ibli\u017en\u011b = approximately, but REF1 changed it to sou\u010dasn\u00fdch = current, which is wrong because the article was written in September 2019, when the amount was already much higher than 3 billion cu ft, according to the source text. Given the context of the source article, officers should be translated as policist\u00e9 = police officers. It is questionable whether the translation d\u016fstojn\u00edci = commissioned officers is acceptable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Changes in accuracy of information", "sec_num": "4.1.3" }, { "text": "Category 4: Errors in meaning of a syntactic structure: SRC an invite from Khloe to a 'Taco Tuesday' dinner at her mansion ONLB pozv\u00e1n\u00ed od Khloe na ve\u010de\u0159i \"Taco Tuesday\" u jej\u00edho s\u00eddla REF1 pozv\u00e1n od Khlo\u00e9 na ve\u010de\u0159i v \"Taco Tuesday\" u jej\u00edho s\u00eddla", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Typology", "sec_num": "4.2" }, { "text": "In the highlighted example, REF1 added only the preposition v = in, which bears out the superficial reading of the machine translation output a dinner in a \"Taco Tuesday\" restaurant near her mansion. However, the original meaning is quite different: \"Taco Tuesday\" is a custom of going out to eat tacos, not a name of a restaurant. In this example, solely the verb prefix changed (vydat \u2192 podat), but not the noun (obvin\u011bn\u00ed = accusation, charge). The correct translation would be podat trestn\u00ed ozn\u00e1men\u00ed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Typology", "sec_num": "4.2" }, { "text": "In our blind RankME evaluation by 18 human judges (6 professional translators, 6 students from MA Study Program Translation: Czech and English at the Institute of Translation Studies, Charles University's Faculty of Arts, 6 and 6 nonprofessionals with excellent knowledge of the English language), 90 documents (887 segments, typically sentences) were evaluated on a sentence level in terms of adequacy, fluency and overall quality (as defined by Popel et al. (2020) ). Every document was scored by two evaluators, and every evaluator scored ten different documents. Then we could compare the ratings of the post-edited REF1 and of the suspected ONLB. According to the ratings, the translation quality is, in all cases, better in REF1 compared to ONLB. The post-editor improved the quality of the ONLB in all three categories: in adequacy (increased by 2.23 on average), fluency (2.70) and overall quality (2.55), on a 0-10 scale. As could be expected, the most significant improvement occurred in fluency. Apparently, the post-editor was more sensitive to errors in fluency rather than adequacy, as shown also in our analysis of post-editing changes.", "cite_spans": [ { "start": 447, "end": 466, "text": "Popel et al. 
(2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Human evaluation -RankME", "sec_num": "5" }, { "text": "We suspected that WMT20 reference translations were actually post-edits of one of the participating systems, ONLINE-B, and not created independently. We proposed two methods to detect this situation, both confirming our suspicion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "In a subsequent manual analysis, we provided numerous examples of translation choices in the reference translation which are extremely unlikely to happen when translating from scratch. 7 The result of this analysis is a draft typology of post-editing strategies. We see this typology as an interesting basis for further inspection of translations in a world where post-editing becomes the industry standard.", "cite_spans": [ { "start": 185, "end": 186, "text": "7", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "By contrasting Direct Assessment scores with our manual evaluation (RankME), we observed the post-editor improved primarily fluency of the translation and less so its adequacy. It would be useful to confirm this observation on a larger sample.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The additional references REF2 and REF3 were not available before our RankME evaluation started. We plan to evaluate them in future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Actually, both MT systems are better than all three references according to all BLEU scores, with a single exception of BLEUREF3(ONLINE-B) = 26.39 < 26.43 = BLEUREF3(REF2), which is not statistically significant (bootstrap resampling, p < 0.05). Obviously, we cannot use e.g. BLEUREF1 to judge the quality of REF1 (it would be 100, by definition).4 One of the instructions for the translation agency preparing references for WMT 2020 was: All translations should be \"from scratch\", without post-editing from MT. Using postediting would bias the evaluation, so we need to avoid it. We can detect post-editing so will reject translations that are post-edited.(Barrault et al., 2020). Unfortunately, the WMT organizers did not detect the post-editing in this case (as we do in this paper) and did not reject the translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Such analytic comparatives can be seen as a proof of translationese(Toury, 1995), i.e. in this case, influenced by the English analytic comparative more receptive.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://utrl.ff.cuni.cz/ 7 Such choices are also likely to be changed in paraphrasing, which opens a new view on the results ofFreitag et al. (2020), who show that using paraphrases as references in BLEU may lead to higher correlation with human evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The work was supported by the grants 19-26934X (NEUREM3) and GX20-16819X (LUSyD) by the Czech Science Foundation. 
The work has been using language resources developed and distributed by the LINDAT/CLARIAHCZ project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2018101).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Proceedings of the Fifth Conference on Machine Translation", "authors": [ { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Magdalena", "middle": [], "last": "Biesialska", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Marta", "middle": [ "R" ], "last": "Costa-Juss\u00e0", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Grundkiewicz", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Huck", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Joanis", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Kocmi", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Chi-Kiu", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Ljube\u0161i\u0107", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Makoto", "middle": [], "last": "Morishita", "suffix": "" }, { "first": "Masaaki", "middle": [], "last": "Nagata", "suffix": "" }, { "first": "Toshiaki", "middle": [], "last": "Nakazawa", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "1--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lo\u00efc Barrault, Magdalena Biesialska, Ond\u0159ej Bojar, Marta R. Costa-juss\u00e0, Christian Federmann, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Matthias Huck, Eric Joanis, Tom Kocmi, Philipp Koehn, Chi-kiu Lo, Nikola Ljube\u0161i\u0107, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshi- aki Nakazawa, Santanu Pal, Matt Post, and Marcos Zampieri. 2020. Findings of the 2020 conference on machine translation (WMT20). In Proceedings of the Fifth Conference on Machine Translation, pages 1-55, Online. Association for Computational Lin- guistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "BLEU might be guilty but references are not innocent", "authors": [ { "first": "Markus", "middle": [], "last": "Freitag", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Isaac", "middle": [], "last": "Caswell", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "61--71", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.5" ] }, "num": null, "urls": [], "raw_text": "Markus Freitag, David Grangier, and Isaac Caswell. 2020. BLEU might be guilty but references are not innocent. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 61-71, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Continuous measurement scales in human evaluation of machine translation", "authors": [ { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Alistair", "middle": [], "last": "Moffat", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Zobel", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse", "volume": "", "issue": "", "pages": "33--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2013. Continuous measurement scales in human evaluation of machine translation. In Pro- ceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 33-41, Sofia, Bulgaria. Association for Computational Lin- guistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "MT-ComparEval: Graphical evaluation interface for Machine Translation development", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Klejch", "suffix": "" }, { "first": "Eleftherios", "middle": [], "last": "Avramidis", "suffix": "" } ], "year": 2015, "venue": "The Prague Bulletin of Mathematical Linguistics", "volume": "104", "issue": "", "pages": "63--74", "other_ids": { "DOI": [ "10.1515/pralin-2015-0014" ] }, "num": null, "urls": [], "raw_text": "Ond\u0159ej Klejch, Eleftherios Avramidis, Aljoscha Bur- chardt, and Martin Popel. 2015. MT-ComparEval: Graphical evaluation interface for Machine Transla- tion development. The Prague Bulletin of Mathe- matical Linguistics, 104:63-74.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "RankME: Reliable human ratings for natural language generation", "authors": [ { "first": "Jekaterina", "middle": [], "last": "Novikova", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "72--78", "other_ids": { "DOI": [ "10.18653/v1/N18-2012" ] }, "num": null, "urls": [], "raw_text": "Jekaterina Novikova, Ond\u0159ej Du\u0161ek, and Verena Rieser. 2018. RankME: Reliable human ratings for natural language generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 72-78, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "-Polish Systems in WMT20: Robust Document-Level Training", "authors": [], "year": null, "venue": "Proceedings of the Fifth Conference on Machine Translation", "volume": "", "issue": "", "pages": "269--273", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Popel. 2020. CUNI English-Czech and English- Polish Systems in WMT20: Robust Document- Level Training. In Proceedings of the Fifth Confer- ence on Machine Translation, pages 269-273, On- line. 
Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Transforming machine translation: a deep learning system reaches news translation quality comparable to human professionals", "authors": [ { "first": "Martin", "middle": [], "last": "Popel", "suffix": "" }, { "first": "Marketa", "middle": [], "last": "Tomkova", "suffix": "" }, { "first": "Jakub", "middle": [], "last": "Tomek", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Zden\u011bk", "middle": [], "last": "\u017dabokrtsk\u00fd", "suffix": "" } ], "year": 2020, "venue": "Nature Communications", "volume": "11", "issue": "4381", "pages": "1--15", "other_ids": { "DOI": [ "10.1038/s41467-020-18073-9" ] }, "num": null, "urls": [], "raw_text": "Martin Popel, Marketa Tomkova, Jakub Tomek, \u0141ukasz Kaiser, Jakob Uszkoreit, Ond\u0159ej Bojar, and Zden\u011bk \u017dabokrtsk\u00fd. 2020. Transforming machine translation: a deep learning system reaches news translation quality comparable to human profession- als. Nature Communications, 11(4381):1-15.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "chrF: character n-gram F-score for automatic MT evaluation", "authors": [ { "first": "Maja", "middle": [], "last": "Popovi\u0107", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "392--395", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maja Popovi\u0107. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. ACL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A call for clarity in reporting BLEU scores", "authors": [ { "first": "Matt", "middle": [], "last": "Post", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", "volume": "", "issue": "", "pages": "186--191", "other_ids": { "DOI": [ "10.18653/v1/W18-6319" ] }, "num": null, "urls": [], "raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Belgium, Brussels. Association for Computa- tional Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Descriptive translation studies and beyond", "authors": [ { "first": "Gideon", "middle": [], "last": "Toury", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gideon Toury. 1995. Descriptive translation studies and beyond. John Benjamins Publishing.", "links": null } }, "ref_entries": {} } }