{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:29:06.144212Z" }, "title": "Is this translation error critical?: Classification-based Human and Automatic Machine Translation Evaluation Focusing on Critical Errors", "authors": [ { "first": "Katsuhito", "middle": [], "last": "Sudoh", "suffix": "", "affiliation": {}, "email": "sudoh@is.naist.jp" }, { "first": "Kosuke", "middle": [], "last": "Takahashi", "suffix": "", "affiliation": {}, "email": "kosuke.takahashi.th0@is.naist.jp" }, { "first": "Satoshi", "middle": [], "last": "Nakamura", "suffix": "", "affiliation": {}, "email": "s-nakamura@is.naist.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper discusses a classification-based approach to machine translation evaluation, as opposed to a common regression-based approach in the WMT Metrics task. Recent machine translation usually works well but sometimes makes critical errors due to just a few wrong word choices. Our classificationbased approach focuses on such errors using several error type labels, for practical machine translation evaluation in an age of neural machine translation. We have made additional annotations on the WMT 2015-2017 Metrics datasets with fluency and adequacy labels to distinguish different types of translation errors from syntactic and semantic viewpoints. We present our human evaluation criteria for the corpus development and automatic evaluation experiments using the corpus. The human evaluation corpus will be publicly available at https://github.com/ ksudoh/wmt15-17-humaneval.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "This paper discusses a classification-based approach to machine translation evaluation, as opposed to a common regression-based approach in the WMT Metrics task. Recent machine translation usually works well but sometimes makes critical errors due to just a few wrong word choices. Our classificationbased approach focuses on such errors using several error type labels, for practical machine translation evaluation in an age of neural machine translation. We have made additional annotations on the WMT 2015-2017 Metrics datasets with fluency and adequacy labels to distinguish different types of translation errors from syntactic and semantic viewpoints. We present our human evaluation criteria for the corpus development and automatic evaluation experiments using the corpus. The human evaluation corpus will be publicly available at https://github.com/ ksudoh/wmt15-17-humaneval.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Most machine translation (MT) studies still evaluate their results using BLEU (Papineni et al., 2002) because of its simple, language-agnostic, and model-free methodology. Recent remarkable advances in neural MT (NMT) have cast an important challenge in its evaluation; NMT usually generates a fluent translation that cannot always be evaluated precisely by simple surface-based evaluation metrics like BLEU.", "cite_spans": [ { "start": 78, "end": 101, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A recent trend in the MT evaluation is to use a large-scale pre-trained model like BERT (Devlin et al., 2019) . Shimanaka et al. 
(2019) proposed BERT Regressor based on sentence-level regression using a fine-tuned BERT model, as an extension of their prior study using sentence embeddings (Shimanaka et al., 2018) . Zhang et al. (2020) proposed BERTScore based on hard token-level alignment using cosine similarity of contextualized token embeddings. Zhao et al. (2019) proposed MoverScore based on soft token-level alignment using Word Mover's Distance (Kusner et al., 2015) . Sellam et al. (2020) proposed BLEURT that incorporates auxiliary task signals into the pre-training of a BERT-based sentence-level regression model. These methods aim to evaluate a translation hypothesis using the corresponding reference with a high correlation to human judgment.", "cite_spans": [ { "start": 88, "end": 109, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 112, "end": 135, "text": "Shimanaka et al. (2019)", "ref_id": "BIBREF21" }, { "start": 289, "end": 313, "text": "(Shimanaka et al., 2018)", "ref_id": "BIBREF20" }, { "start": 316, "end": 335, "text": "Zhang et al. (2020)", "ref_id": "BIBREF25" }, { "start": 451, "end": 469, "text": "Zhao et al. (2019)", "ref_id": "BIBREF26" }, { "start": 554, "end": 575, "text": "(Kusner et al., 2015)", "ref_id": "BIBREF10" }, { "start": 578, "end": 598, "text": "Sellam et al. (2020)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The evaluation of this kind of MT evaluation, often called meta-evaluation, is usually based on some benchmarks. The meta-evaluation in the recent studies uses the WMT Metrics task dataset consisting of human judgment on MT results. The human judgment is given in the form of Human Direct Assessment (DA) (Graham et al., 2016) , a 100-point rating scale. The Human DA results are standardized into z-scores (human DA scores, hereinafter) and used as the evaluation and optimization objective of regression-based MT evaluation methods. Recent MT evaluation methods achieved more than 0.8 in Pearson correlation on WMT 2017 test set 1 . However, Takahashi et al. (2020) reported a weaker correlation in low human DA score ranges. Such a finding suggests the difficulty of the MT evaluation on low-quality results.", "cite_spans": [ { "start": 305, "end": 326, "text": "(Graham et al., 2016)", "ref_id": "BIBREF6" }, { "start": 644, "end": 667, "text": "Takahashi et al. (2020)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we focus on the problem in the evaluation of low-quality translations that cause serious misunderstanding. Judging erroneous translations in the 100-point rating scale would be very difficult and unstable, because the extent of errors cannot be mapped easily into a one-dimensional space. Suppose we are evaluating a translation hypothesis, (1) It is our duty to remain at his sides with its reference, It is not our duty to remain at his sides. 
2 The 1 The correlation got worse in the newer WMT datasets (Ma et al., 2018 (Ma et al., , 2019 due to noise in human judgement (Sellam et al., 2020) .", "cite_spans": [ { "start": 460, "end": 461, "text": "2", "ref_id": null }, { "start": 466, "end": 467, "text": "1", "ref_id": null }, { "start": 520, "end": 536, "text": "(Ma et al., 2018", "ref_id": "BIBREF14" }, { "start": 537, "end": 555, "text": "(Ma et al., , 2019", "ref_id": "BIBREF15" }, { "start": 588, "end": 609, "text": "(Sellam et al., 2020)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 This example is taken from the Metrics dataset of WMT difference in this example is just in one missing word not in the hypothesis, but it may cause a serious misunderstanding. Such translation errors are considered as critical ones by professional translators. There are several metrics for translation quality assessment (QA) proposed in the translators' community, such as LISA QA Metric 3 and Multidimentional Quality Metrics (MQM) 4 . These metrics use a couple of error seriousness categories (Minor, Major, Critical) in several viewpoints, such as mistranslation, accuracy, and terminology. The missing negation is a kind of critical error. Nevertheless, most existing automatic MT evaluation metrics fail to penalize such errors. Human DA is also difficult from this viewpoint. Suppose we have other translation hypotheses, (2) He bought some bags at a duty-free store. and (3) Not is to duty remain it sides his at. for the same reference. We can easily identify these hypotheses are wrong. However, evaluating them together with (1) in the same 100-point rating scale by mapping these differences into one dimension is not trivial. This work pursues a classification-based human and automatic MT evaluation based on the multidimensional evaluation. Current NMT technologies would still be far from the level of professional human translators but are also utilized in various applications. MT in practical applications should be evaluated as same as human translations by practical metrics, not just by incremental and engineering-oriented metrics like BLEU.", "cite_spans": [ { "start": 501, "end": 525, "text": "(Minor, Major, Critical)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We propose a classification-based MT evaluation framework motivated by the discussion about critical errors. In human evaluation, we use conventional evaluation dimensions of fluency and adequacy (LDC, 2005) and define several categories different from a conventional 1-5 Likert scale. We developed a corpus with such additional annotations on WMT Metrics dataset and found that human DA scores penalize incomprehensible and unrelated MT hypotheses more than those with other critical errors that cause serious misunderstanding and contradiction. 
We then implemented a classification-based automatic MT evaluation using the corpus and conducted experiments on the 2015.", "cite_spans": [ { "start": 196, "end": 207, "text": "(LDC, 2005)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3 http://producthelp.sdl.com/SDL_ TMS_2011/en/Creating_and_Maintaining_ Organizations/Managing_QA_Models/LISA_ QA_Model.htm 4 https://www.dfki.de/en/web/ research/projects-and-publications/ publications-overview/publication/7717/ WMT Metrics test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "MT evaluation has evolved along with the advance of MT technologies. White et al. (1994) reviewed some attempts of human evaluation and presented adequacy, fluency, and comprehension results in the early 1990s. The Quality Panel approach presented in their paper was motivated by the evaluation of human translations, but it was finally abandoned due to human evaluation difficulties. Callison-Burch et al. (2007) presented meta-evaluation of the MT evaluation in WMT shared tasks. According to the findings there, the WMT shared tasks had employed ranking-based human evaluation for a while. Snover et al. (2006) defined Human-targeted Translation Edit Rate (HTER) that measures the translation quality by the required number of postedits on a translation hypothesis. Denkowski and Lavie (2010) and Graham et al. (2012) discussed the differences among those human evaluation approaches. Graham et al. (2016) proposed human DA for the MT evaluation, and DA has been used as standard human evaluation in recent WMT Metrics tasks.", "cite_spans": [ { "start": 69, "end": 88, "text": "White et al. (1994)", "ref_id": "BIBREF24" }, { "start": 385, "end": 413, "text": "Callison-Burch et al. (2007)", "ref_id": "BIBREF1" }, { "start": 593, "end": 613, "text": "Snover et al. (2006)", "ref_id": "BIBREF22" }, { "start": 769, "end": 795, "text": "Denkowski and Lavie (2010)", "ref_id": "BIBREF3" }, { "start": 800, "end": 820, "text": "Graham et al. (2012)", "ref_id": "BIBREF7" }, { "start": 888, "end": 908, "text": "Graham et al. (2016)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "There is another line of human MT evaluation studies focusing on semantics. Lo and Wu (2011) proposed MEANT and its human evaluation variant HMEANT based on semantic frames. Birch et al. (2016) proposed HUME based on a semantic representation called UCCA. This kind of fine-grained semantic evaluation requires some linguistic knowledge for annotators but enables explainable evaluation instead. However, the meaning of the sentence can be changed by small changes, as discussed later in section 3. Looking at sub-structures and using their coverage in the MT evaluation may suffer from this problem.", "cite_spans": [ { "start": 76, "end": 92, "text": "Lo and Wu (2011)", "ref_id": "BIBREF13" }, { "start": 174, "end": 193, "text": "Birch et al. (2016)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "One recent approach has been proposed by Popovic (Popovic, 2020; Popovi\u0107, 2020) . Her work analyzed the differences between comprehensibility and adequacy in machine translation outputs. The human annotations in her work are major and minor errors in comprehensibility and adequacy on words and phrases. 
These fine-grained annotations are helpful for detailed translation error detection. The focus of our work is different; we are going to develop sentence-level MT evaluation through simpler human and automatic evaluation schemes.", "cite_spans": [ { "start": 49, "end": 64, "text": "(Popovic, 2020;", "ref_id": "BIBREF18" }, { "start": 65, "end": 79, "text": "Popovi\u0107, 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this work, we suggest revisiting the classification-based evaluation with fluency and adequacy, for absolute human and automatic evaluation. DA-based human evaluation is beneficial in demonstrating the correlation with automatic evaluation metrics. However, it is not very intuitive in the evaluation of different kinds of translation errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our work is also related to some studies using semantic equivalence and contradiction. BLEURT (Sellam et al., 2020) employed NLI in its pretraining phase. NLI includes contradiction identification, which should also contribute to the MT evaluation. BLEURT has revealed its advantage in the example shown in Table 1 . Kry\u015bci\u0144ski et al. (2019) proposed a weakly-supervised method for training an abstractive summarization model using adversarial summaries to improve the factual consistency between a source document and a summary. They also focused on an NLI-like semantic classification for their adversarial training. Classification-based automatic MT evaluation models can be trained similarly, using related and adversarial data.", "cite_spans": [ { "start": 94, "end": 115, "text": "(Sellam et al., 2020)", "ref_id": "BIBREF19" }, { "start": 317, "end": 341, "text": "Kry\u015bci\u0144ski et al. (2019)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 307, "end": 314, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The main focus of this work is to penalize critical errors in translation hypotheses that cause serious misunderstanding. This kind of translation errors must be avoided, as well as possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Critical Translation Errors", "sec_num": "3" }, { "text": "Suppose we have some translation hypotheses with their reference, The Pleiades cluster is situated 445 light-years from Earth 5 . The translation hypotheses are artificial ones with some adversarial edits over the reference, as shown in the second column of Table 1 . The hypothesis hyp1 is a paraphrase, hyp2 and hyp3 have errors on \"light-years\", hyp4 has a wrong negation, hyp5 to hyp7 have errors on named entities, hyp8 is a shuffled word sentence, and hyp9 would come from a completely different sentence; the hypotheses have non-trivial problems except hyp1.", "cite_spans": [], "ref_spans": [ { "start": 258, "end": 265, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Critical Translation Errors", "sec_num": "3" }, { "text": "We put automatic evaluation scores in the table using BLEU-4 6 , chrF 7 , BERTScore 8 , and BLEURT 9 . hyp9 is correctly penalized by all the 5 This example is taken from the Metrics dataset of WMT 2017.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Critical Translation Errors", "sec_num": "3" }, { "text": "6 sacrebleu fingerprint: BLEU+case.lc+numrefs.1+smooth. 
exp+tok.13a+version.1.4.8 7 sacrebleu fingerprint: chrF2+case.lc+numchars.6+numref s.1+space.False+version.1.4.8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Critical Translation Errors", "sec_num": "3" }, { "text": "8 Authors' implementation https://github.com/ Tiiiger/bert_score with fingerprint: roberta-large L17 no-idf version=0.3.2(hug trans=2.8.0)-rescaled 9 Authors' implementation https://github.com/ google-research/bleurt metrics, but the other results are mixed. BLEU-4 penalizes hyp1 and hyp3 more than the others. chrF and BERTScore penalize hyp3. BLEURT penalizes hyp4 and gives lower scores on hyp2 and hyp5-7 than BERTScore. BLEU-4, BERTScore, and BLEURT penalize hyp8, while chrF gives the same score on it as hyp3. Here, we would regard hyp4, hyp8, and hyp9 as bad translations. However, we cannot identify the other erroneous translation just using the automatic scores. These observations suggest that current evaluation metrics do not always capture these critical translation errors by one or two wrong word choices. Recent NMT sometimes generates translations competitive with human translators, so they should be evaluated as same as human translations in practice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Critical Translation Errors", "sec_num": "3" }, { "text": "On the other hand, MT sometimes generates incomprehensible sentences with various kind of errors, even though NMT works much better than conventional statistical MT, especially in fluency. Such incomprehensible translations are also very problematic as well as content errors in easy-tounderstand and fluent translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Critical Translation Errors", "sec_num": "3" }, { "text": "However, it is not easy to penalize both of them in a single evaluation criterion. Existing automatic evaluation methods often fail to penalize content errors, although they work well for incomprehensible and unrelated sentences, as revealed by the adversarial examples in Table 1 . In this work, we aim to differentiate these errors motivated by the conventional evaluation dimensions of fluency and adequacy (LDC, 2005) .", "cite_spans": [ { "start": 410, "end": 421, "text": "(LDC, 2005)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 273, "end": 280, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Critical Translation Errors", "sec_num": "3" }, { "text": "We have developed a new human evaluation corpus from the viewpoints of fluency and adequacy. The evaluation corpus is available at GitHub 10 under CC BY-NC-SA 4.0 11 . In this section, we present the details of the corpus. Here, the human evaluation is designed in monolingual way; an MT hypothesis is evaluated against only its reference, supposing the reference is semantically equivalent to the source language input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Evaluation Corpus", "sec_num": "4" }, { "text": "We made a contract with a linguistic data development company to conduct the human evaluation 12 with three annotators who are native speaker of English and had work experiences of translation into English. We provide a set of English sentence pairs to the annotators: translation hypotheses and the corresponding references. No specific training was conducted before the evaluation. The annotators can ask questions to a moderator in the company, and the moderator asked them to the first author. 
The annotators conducted the evaluation independently, referring to the evaluation criteria below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Evaluation Corpus", "sec_num": "4" }, { "text": "We chose the WMT 2015-2017 Metrics datasets to give additional annotations. The MT results in the dataset and the corresponding human DA scores have been used in many existing automatic MT evaluation studies. The total number of pairs of hypothesis and reference sentences was 9,280, consisting of 2,000 pairs from WMT 2015, 3,360 pairs from WMT 2016, and 3,920 pairs from WMT 2017 datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "We propose the following evaluation criteria in fluency and adequacy, shown in Tables 2 and 3 , respectively.", "cite_spans": [], "ref_spans": [ { "start": 79, "end": 93, "text": "Tables 2 and 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Evaluation Criteria", "sec_num": "4.2" }, { "text": "The fluency criteria in Table 2 extend conventional ones by LDC (2005) , with a comprehension viewpoint in the lower range. The lowest judgment Incomprehensible corresponds to LDC's fluency criterion \"1: Incomprehensible,\" but is not limited to disfluency problems. The category Poor means the difficulty of comprehension. The other categories are defined mainly from a fluency viewpoint. When a sentence is incomprehensible such as hyp8 in Table 1 , we cannot evaluate its contents in the adequacy evaluation. On the other hand, hyp9 is not related to the reference and should be judged as a critical error in adequacy, even though it is easy-to-understand and looks fluent. These criteria were also motivated by the acceptability criteria (Goto et al., 2011) . By the acceptability criteria, a hypothesis that lacks important information (i.e., its adequacy is not 5 in the five-point scale) is always judged as the worst, and better labels are given according to grammatical correctness and fluency.", "cite_spans": [ { "start": 60, "end": 70, "text": "LDC (2005)", "ref_id": "BIBREF11" }, { "start": 741, "end": 760, "text": "(Goto et al., 2011)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 24, "end": 31, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 441, "end": 448, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Fluency", "sec_num": "4.2.1" }, { "text": "Our adequacy criteria in Table 3 are different from the conventional ones (LDC, 2005 ) that focused on the amount of important information. 
We defined the adequacy of a translation hypothesis focusing on the delivery of the correct information, based on the discussion in section 3. Our criteria put more focus on possible misunderstanding caused by a translation hypothesis; we consider that a translation may cause serious misunderstanding even if most parts of the translation are correct.", "cite_spans": [ { "start": 74, "end": 84, "text": "(LDC, 2005", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 25, "end": 32, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Adequacy", "sec_num": "4.2.2" }, { "text": "Table 2 (fluency criteria): Incomprehensible (F): The sentence is not comprehensible. Poor (D): Some contents are not easy to understand due to typographical/grammatical errors and problematic expressions. Fair (B): All the contents are easy to understand in spite of some typographical/grammatical errors. Good (A): All the contents are easy to understand and free from grammatical errors, but some expressions are not very fluent. Excellent (S): All the contents are easy to understand, and all the expressions are flawless.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fluency", "sec_num": "4.2.1" }, { "text": "First, we use the category Incomprehensible for hypotheses that are also classified as Incomprehensible in fluency. Then, we divide critical content errors into three types: Unrelated, Contradiction, and Serious. Unrelated indicates unrelatedness, as shown by hyp9 in Table 1 . It is expected to appear in poor translations under a very low-resourced condition. The category Contradiction indicates contradiction with the reference, such as the negation flip in hyp4 and the number error in hyp5 in Table 1 . This label was motivated by the task of natural language inference (NLI), which has also been used for the pre-training of MT evaluation (Sellam et al., 2020) . The category Serious covers other kinds of serious content errors, such as hyp6 and hyp7 in Table 1 . These hypotheses deliver somewhat related but different information compared to the reference. 
The intermediate categories Fair and Good are used for major and minor errors, respectively.", "cite_spans": [ { "start": 654, "end": 675, "text": "(Sellam et al., 2020)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 279, "end": 286, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 507, "end": 514, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 773, "end": 780, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Adequacy", "sec_num": "4.2.2" }, { "text": "Table 3 (adequacy criteria): Incomprehensible (N): The contents cannot be understood due to fluency and comprehension issues, so the hypothesis is not eligible for the adequacy evaluation. Unrelated (O): The hypothesis delivers information that is not related to the reference. Contradiction (C): The hypothesis delivers information that contradicts the reference. Serious (F): The hypothesis delivers information that may cause serious misunderstanding due to some content errors but does not contradict the reference. Fair (B): The hypothesis has some problems in its contents but does not cause a serious misunderstanding. Good (A): The hypothesis has some minor problems in its contents that do not cause a misunderstanding. Excellent (S): The hypothesis delivers information equivalent to the reference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adequacy", "sec_num": "4.2.2" }, { "text": "We conducted some analyses on the human evaluation corpus, mainly on the differences among the three annotators.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analyses", "sec_num": "4.3" }, { "text": "We analyzed annotation differences among the three annotators (named A, B, and C), especially their labeling biases. Tables 4 and 5 show the annotation distributions for the three annotators on fluency and adequacy, respectively. We can see some differences among the annotators; for example, annotator B was very strict in using the best category Excellent in both dimensions, and annotator C gave more bad labels (Contradiction and Serious) than the others.", "cite_spans": [], "ref_spans": [ { "start": 117, "end": 131, "text": "Tables 4 and 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Annotation Bias", "sec_num": "4.3.1" }, { "text": "On average, the translation hypotheses in the WMT 2015-2017 Metrics datasets still include many translation errors. The error tendency would be different on newer data consisting of many recent neural MT results, so it is worth investigating recent MT outputs in future studies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation Bias", "sec_num": "4.3.1" }, { "text": "We compared our human evaluation labels with the human DA scores (standardized z-scores) given in the WMT Metrics data. Tables 6 and 7 show the mean and standard deviation values of human DA scores for each human evaluation label. 
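For readers who want to reproduce this comparison from the released corpus, a minimal sketch follows. The file name and column names (annotator, adequacy, da_zscore, segment_id) are assumptions about the release format rather than anything specified here, and the pairwise \u03ba of section 4.3.3 is included only as an illustration using scikit-learn.

```python
# Sketch only: the file name and column names below ("annotator", "adequacy",
# "da_zscore", "segment_id") are assumptions about the released corpus format.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

df = pd.read_csv("wmt15-17-humaneval.tsv", sep="\t")  # hypothetical file name

# Mean/std of human DA z-scores per adequacy label and annotator
# (the kind of statistics reported in Tables 6 and 7).
print(df.groupby(["annotator", "adequacy"])["da_zscore"]
        .agg(["mean", "std", "count"]).round(3))

# Pairwise inter-annotator agreement (kappa), as in section 4.3.3.
a = df[df.annotator == "A"].sort_values("segment_id")["adequacy"]
b = df[df.annotator == "B"].sort_values("segment_id")["adequacy"]
print(cohen_kappa_score(a, b))
```
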
The human DA score ranges of the fluency and adequacy labels had almost the same partial orders among different annotators, although they still reflect the annotation bias shown in Tables 4 and 5 ; annotator B had a higher standard in fluency evaluation than the others.", "cite_spans": [], "ref_spans": [ { "start": 120, "end": 134, "text": "Tables 6 and 7", "ref_id": "TABREF7" }, { "start": 412, "end": 427, "text": "Tables 4 and 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Comparison with Human Direct Assessment Scores", "sec_num": "4.3.2" }, { "text": "One important finding here is the differences among the adequacy categories Incomprehensible, Unrelated, Contradiction and Serious in Table 7 . The sentences with Unrelated were scored the worst by the human DA. However, critical content errors suggested by the labels Contradiction and Serious were penalized less than the ones with Incomprehensible and Unrelated. Such content errors should also be identified as critical translation errors in practice.", "cite_spans": [], "ref_spans": [ { "start": 134, "end": 141, "text": "Table 7", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Comparison with Human Direct Assessment Scores", "sec_num": "4.3.2" }, { "text": "We also measured pairwise agreement among the three annotators using the \u03ba coefficient (Carletta, 1996) and label concordance rate. The results are shown in Table 8 . The inter-annotator agreement was not high enough but \u03ba values are also com-parable to the previous studies on older WMT datasets (Callison-Burch et al., 2007; Denkowski and Lavie, 2010) 13 . The agreement in fluency was lower than that in adequacy, especially on A-B and B-C, due to very high fluency standard of the annotator B. The agreement would improve with careful pre-annotation training and more example-based evaluation guidelines, because the annotators gave us feedback about the difficulty in discrimination among different categories.", "cite_spans": [ { "start": 87, "end": 103, "text": "(Carletta, 1996)", "ref_id": "BIBREF2" }, { "start": 297, "end": 326, "text": "(Callison-Burch et al., 2007;", "ref_id": "BIBREF1" }, { "start": 327, "end": 353, "text": "Denkowski and Lavie, 2010)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 157, "end": 164, "text": "Table 8", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Inter-annotator Agreement", "sec_num": "4.3.3" }, { "text": "We conducted experiments using the evaluation corpus, to investigate the performance of automatic classification-based MT evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "Among the evaluation corpus, we reserved the WMT 2017 portion (3,920 samples; 560 for each language pair -cs-en, de-en, fi-en, lv-en, ru-en, tr-en, and zh-en) for the test set, chose 536 samples randomly for the development set, and used the remained 4,824 samples for the training set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5.1.1" }, { "text": "We took agreements among the three different annotators for the experiments by the following heuristics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5.1.1" }, { "text": "\u2022 If two or three annotators gave the same label, it was used as the agreement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5.1.1" }, { "text": "\u2022 If the annotators' judgment were different from each other, the worst label was used as 
the agreement. The label order was Incomprehensible < Poor < Fair < Good < Excellent for fluency and Contradiction < Serious < Incomprehensible < Unrelated < Fair < Good < Excellent for adequacy 14 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5.1.1" }, { "text": "Tables 9 and 10 show the label statistics on the training, development, and test sets after applying the heuristics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5.1.1" }, { "text": "We used a simple sentence-level automatic MT evaluation framework, which takes hypothesis and reference sentences as the input and predicts the label. Since the task in the experiments was classification, the evaluation model was trained with the classification objective, softmax cross-entropy over the category distribution. We trained and used independent models for fluency and adequacy. We implemented the evaluator using Hugging Face Transformers 15 and its pre-trained RoBERTa model (roberta-large). The model was fine-tuned to predict a label through an additional feed-forward layer taking the vector for the [CLS] token as the input, using a softmax cross-entropy loss. Due to the label imbalance shown in Tables 9 and 10 , we applied a sample-wise loss scaling with weights that were inversely proportional to the number of training instances with the labels. A label weight for a category c was defined as:", "cite_spans": [], "ref_spans": [ { "start": 712, "end": 727, "text": "Tables 9 and 10", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Automatic Evaluation Method", "sec_num": "5.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w_c = \\frac{\\max_{c' \\in C} \\mathrm{count}_{c'}}{\\mathrm{count}_c} ,", "eq_num": "(1)" } ], "section": "Automatic Evaluation Method", "sec_num": "5.1.2" }, { "text": "where C is the set of categories. We employed the Adam optimizer (Kingma and Ba, 2015) and continued the training for 30 epochs with the initial learning rate of 1e-5. We tried different minibatch sizes (4, 8, 16) and dropout rates in the additional feed-forward layer (0.1, 0.3, 0.5, 0.75) 16 , and used the ones resulting in the best classification accuracy on the development set: 4 and 0.75 for fluency, 8 and 0.5 for adequacy, respectively. Table 12 : Precision, recall, and F1-score in fluency prediction.", "cite_spans": [ { "start": 201, "end": 204, "text": "(4,", "ref_id": null }, { "start": 205, "end": 207, "text": "8,", "ref_id": null }, { "start": 208, "end": 211, "text": "16)", "ref_id": null }, { "start": 289, "end": 291, "text": "16", "ref_id": null } ], "ref_spans": [ { "start": 444, "end": 452, "text": "Table 12", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Automatic Evaluation Method", "sec_num": "5.1.2" }, { "text": "We show the statistics of the prediction results with confusion matrices and precision/recall/F1-scores. Tables 11 and 12 are from the fluency prediction, and Tables 13 and 14 are from the adequacy prediction.", "cite_spans": [], "ref_spans": [ { "start": 105, "end": 176, "text": "Tables 11 and 12 are from the fluency prediction, and Tables 13 and 14", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "In the fluency prediction, the classification accuracy on the test set was 0.578 (2,267 correct predictions out of 3,920), and that on the training and development sets was 0.999 and 0.647, respectively. 
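Before turning to the error analysis, the following sketch makes the setup of sections 5.1.1 and 5.1.2 concrete. It is an illustration only, assuming current PyTorch and Hugging Face Transformers APIs: the variable names, the example label counts, and the single training step are placeholders, not the authors' released code.

```python
# Minimal sketch of the label aggregation (Sec. 5.1.1) and the evaluator
# (Sec. 5.1.2). Hyperparameters follow the paper where stated; variable names,
# the example label counts, and the single training step are placeholders.
import torch
from torch import nn
from transformers import RobertaModel, RobertaTokenizer

FLUENCY = ["Incomprehensible", "Poor", "Fair", "Good", "Excellent"]  # worst to best

def aggregate(labels, order):
    """Majority label if at least two annotators agree; otherwise the worst
    label according to the given severity order (the Sec. 5.1.1 heuristics)."""
    for label in set(labels):
        if labels.count(label) >= 2:
            return label
    return min(labels, key=order.index)

class Evaluator(nn.Module):
    """RoBERTa encoder plus a feed-forward classifier over the first (<s>/[CLS]) token."""
    def __init__(self, num_labels, dropout=0.75):   # 0.75 for fluency, 0.5 for adequacy
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-large")
        self.drop = nn.Dropout(dropout)
        self.out = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        return self.out(self.drop(hidden[:, 0]))     # logits over the label categories

# Eq. (1): w_c = max_c' count_c' / count_c; the counts here are placeholders.
counts = torch.tensor([300.0, 700.0, 1500.0, 1600.0, 724.0])
loss_fn = nn.CrossEntropyLoss(weight=counts.max() / counts)

tokenizer = RobertaTokenizer.from_pretrained("roberta-large")
model = Evaluator(num_labels=len(FLUENCY))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # 30 epochs, batch size 4 or 8

# One illustrative step on a hypothesis/reference pair.
batch = tokenizer(["It is our duty to remain at his sides"],
                  ["It is not our duty to remain at his sides"],
                  padding=True, truncation=True, return_tensors="pt")
loss = loss_fn(model(batch["input_ids"], batch["attention_mask"]),
               torch.tensor([FLUENCY.index("Excellent")]))
loss.backward(); optimizer.step()
```

In the paper's setup, one model of this form was trained for fluency and another for adequacy.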
Most of the incorrect predictions were in adjacent categories, and the fraction of serious misrecognition in distant categories (Incomprehensible \u2192 {Good, Fair}, Poor \u2192 Excellent, Good \u2192 Incomprehensible, and Excellent \u2192 {Incomprehensible, Poor}) was not so large (0.43%; 17 out of 3,920).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "The prediction performance in Table 12 suggests the best and worst categories (Excellent and Incomprehensible) can be predicted more accurately than the intermediate categories.", "cite_spans": [], "ref_spans": [ { "start": 30, "end": 38, "text": "Table 12", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "In the adequacy prediction, the classification accuracy on the test set was 0.600 (2,351 correct predictions out of 3,920), and the results on the training and development sets were 0.998 and 0.632, respectively. The prediction of less frequent categories (Unrelated and Contradiction) did not work well despite the instance weighting in training. The result suggests we should use more negative examples in training for more accurate predictions on them. The prediction performance in Table 14 suggests the hypotheses with Incomprehensible can be identified more accurately than the others. Predictions of the other categories were still difficult. However, 93.5% of the hypotheses with the predicted label Excellent were good translations labeled Excellent or Good (144 out of 154); this finding would be beneficial in practice. The most serious confusion in this result was between Serious (critical) and Fair (okay). More fine-grained discrimination is needed to judge them. Figure 1 (a) and (b) show the learning curves. The training set accuracy was almost saturated around 20 training epochs, but the development set accuracy was not stable until 30 epochs.", "cite_spans": [], "ref_spans": [ { "start": 486, "end": 494, "text": "Table 14", "ref_id": "TABREF1" }, { "start": 979, "end": 987, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "In summary, these experiments suggest our classification-based MT evaluation with absolute categories is promising, while we still need more negative examples. More data collections, including data augmentation, would be helpful, along with a further investigation of prediction models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "In this paper, we present our approach to classification-based human and automatic MT evaluation, focusing on critical translation errors in MT outputs. We revisited the use of fluency and adequacy metrics with some modifications on evaluation criteria, motivated by our thoughts on the critical content errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "We developed a human evaluation corpus based on the criteria using the WMT Metrics dataset, which will be publicly available upon publication. Our corpus analyses revealed the human DA penalizes unrelated and incomprehensible hypotheses much more than contradiction and other critical errors in the content. We also conducted automatic r\\p Inc. Unr. Con. Ser. Fair Good Exc. 
Incomprehensible 224 0 0 83 38 4 1 Unrelated 0 1 0 13 5 0 0 Contradiction 0 0 8 9 13 10 0 Serious 37 0 8 385 242 45 0 Fair 29 0 13 Table 14 : Precision, recall, and F1-score in adequacy prediction.", "cite_spans": [], "ref_spans": [ { "start": 375, "end": 537, "text": "Incomprehensible 224 0 0 83 38 4 1 Unrelated 0 1 0 13 5 0 0 Contradiction 0 0 8 9 13 10 0 Serious 37 0 8 385 242 45 0 Fair 29 0 13", "ref_id": "TABREF1" }, { "start": 538, "end": 546, "text": "Table 14", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "MT evaluation experiments using the human evaluation corpus and achieved around 60% classification accuracy both in fluency and adequacy. Our future work includes further development of human evaluation corpora that are not limited to WMT Metrics data, and data augmentation methods to tackle the label imbalance problem. It is also promising to apply the classification-based automatic MT evaluation to the neural MT training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "https://github.com/ksudoh/ wmt15-17-humaneval 11 https://creativecommons.org/licenses/ by-nc-sa/4.0/12 The human evaluation was conducted without formal ethical review.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that we had three annotators who evaluated all the sentences.14 We used this heuristic order because of the importance of content errors suggested by Contradiction and Serious.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The dropout rate in RoBERTa was kept unchanged from its default value of 0.1. We also tried to increase it in the pilot test, but that resulted worse.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors would like to thank anonymous reviewers for their comments and suggestions. This work is supported by JST PRESTO (JPMJPR1856).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "HUME: Human UCCA-based evaluation of machine translation", "authors": [ { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Omri", "middle": [], "last": "Abend", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1264--1274", "other_ids": { "DOI": [ "10.18653/v1/D16-1134" ] }, "num": null, "urls": [], "raw_text": "Alexandra Birch, Omri Abend, Ond\u0159ej Bojar, and Barry Haddow. 2016. HUME: Human UCCA-based evaluation of machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 1264-1274, Austin, Texas. 
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "(meta-) evaluation of machine translation", "authors": [ { "first": "Chris", "middle": [], "last": "Callison", "suffix": "" }, { "first": "-", "middle": [], "last": "Burch", "suffix": "" }, { "first": "Cameron", "middle": [], "last": "Fordyce", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Schroeder", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Second Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "136--158", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. (meta-) evaluation of machine translation. In Pro- ceedings of the Second Workshop on Statistical Ma- chine Translation, pages 136-158, Prague, Czech Republic. Association for Computational Linguis- tics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Assessing agreement on classification tasks: The kappa statistic", "authors": [ { "first": "Jean", "middle": [], "last": "Carletta", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "2", "pages": "249--254", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jean Carletta. 1996. Assessing agreement on classifica- tion tasks: The kappa statistic. Computational Lin- guistics, 22(2):249-254.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Choosing the Right Evaluation for Machine Translation: an Examination of Annotator and Automatic Metric Performance on Human Judgment Tasks", "authors": [ { "first": "Michael", "middle": [], "last": "Denkowski", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Ninth Biennial Conference of AMTA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Denkowski and Alon Lavie. 2010. Choos- ing the Right Evaluation for Machine Translation: an Examination of Annotator and Automatic Metric Performance on Human Judgment Tasks . In Pro- ceedings of the Ninth Biennial Conference of AMTA 2010.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. 
Associ- ation for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Overview of the Patent Machine Translation Task at the NTCIR-9 Workshop", "authors": [ { "first": "Isao", "middle": [], "last": "Goto", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Ka", "middle": [ "Po" ], "last": "Chow", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Tsou", "suffix": "" } ], "year": 2011, "venue": "Proceedings of NTCIR-9 Workshop Meeting", "volume": "", "issue": "", "pages": "559--578", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isao Goto, Bin Lu, Ka Po Chow, Eiichiro Sumita, and Benjamin Tsou. 2011. Overview of the Patent Machine Translation Task at the NTCIR-9 Work- shop. In Proceedings of NTCIR-9 Workshop Meet- ing, pages 559-578.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Is all that glitters in machine translation quality estimation really gold?", "authors": [ { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Meghan", "middle": [], "last": "Dowling", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Eskevich", "suffix": "" }, { "first": "Teresa", "middle": [], "last": "Lynn", "suffix": "" }, { "first": "Lamia", "middle": [], "last": "Tounsi", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COL-ING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "3124--3134", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yvette Graham, Timothy Baldwin, Meghan Dowling, Maria Eskevich, Teresa Lynn, and Lamia Tounsi. 2016. Is all that glitters in machine translation qual- ity estimation really gold? In Proceedings of COL- ING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3124-3134, Osaka, Japan. The COLING 2016 Orga- nizing Committee.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Measurement of progress in machine translation", "authors": [ { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Harwood", "suffix": "" }, { "first": "Alistair", "middle": [], "last": "Moffat", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Zobel", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Australasian Language Technology Association Workshop", "volume": "", "issue": "", "pages": "70--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yvette Graham, Timothy Baldwin, Aaron Harwood, Alistair Moffat, and Justin Zobel. 2012. Measure- ment of progress in machine translation. In Proceed- ings of the Australasian Language Technology Asso- ciation Workshop 2012, pages 70-78, Dunedin, New Zealand.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Adam: A Method for Stochastic Optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Third International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 
2015. Adam: A Method for Stochastic Optimization. In Proceed- ings of the Third International Conference on Learn- ing Representations.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Evaluating the Factual Consistency of Abstractive Text Summarization", "authors": [ { "first": "Wojciech", "middle": [], "last": "Kry\u015bci\u0144ski", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Mccann", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wojciech Kry\u015bci\u0144ski, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Evaluating the Fac- tual Consistency of Abstractive Text Summarization. arXiv preprint arXiv: 1910:12840.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "From Word Embeddings To Document Distances", "authors": [ { "first": "Matt", "middle": [], "last": "Kusner", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Kolkin", "suffix": "" }, { "first": "Kilian", "middle": [], "last": "Weinberger", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 32nd International Conference on Machine Learning", "volume": "", "issue": "", "pages": "957--966", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From Word Embeddings To Doc- ument Distances. In Proceedings of the 32nd In- ternational Conference on Machine Learning, vol- ume 37 of Proceedings of Machine Learning Re- search, pages 957-966, Lille, France. PMLR.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Linguistic Data Annotation Specification", "authors": [ { "first": "", "middle": [], "last": "Ldc", "suffix": "" } ], "year": 2005, "venue": "Assessment of Fluency and Adequacy in Translations Revision", "volume": "1", "issue": "5", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "LDC. 2005. Linguistic Data Annotation Specifica- tion: Assessment of Fluency and Adequacy in Trans- lations Revision 1.5, January 25, 2005 . Technical report, Linguistic Data Consortium.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretrain- ing Approach. 
arXiv preprint arXiv: 1907:11692.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "MEANT: An inexpensive, high-accuracy, semi-automatic metric for evaluating translation utility based on semantic roles", "authors": [ { "first": "Chi-Kiu", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "220--229", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chi-kiu Lo and Dekai Wu. 2011. MEANT: An inex- pensive, high-accuracy, semi-automatic metric for evaluating translation utility based on semantic roles. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 220-229, Portland, Oregon, USA. Association for Computational Lin- guistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Results of the WMT18 metrics shared task: Both characters and embeddings achieve good performance", "authors": [ { "first": "Qingsong", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers", "volume": "", "issue": "", "pages": "671--688", "other_ids": { "DOI": [ "10.18653/v1/W18-6450" ] }, "num": null, "urls": [], "raw_text": "Qingsong Ma, Ond\u0159ej Bojar, and Yvette Graham. 2018. Results of the WMT18 metrics shared task: Both characters and embeddings achieve good perfor- mance. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 671-688, Belgium, Brussels. Association for Com- putational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Results of the WMT19 metrics shared task: Segment-level and strong MT systems pose big challenges", "authors": [ { "first": "Qingsong", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Johnny", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Conference on Machine Translation", "volume": "2", "issue": "", "pages": "62--90", "other_ids": { "DOI": [ "10.18653/v1/W19-5302" ] }, "num": null, "urls": [], "raw_text": "Qingsong Ma, Johnny Wei, Ond\u0159ej Bojar, and Yvette Graham. 2019. Results of the WMT19 metrics shared task: Segment-level and strong MT sys- tems pose big challenges. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 62-90, Flo- rence, Italy. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Informative manual evaluation of machine translation output", "authors": [], "year": null, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "5059--5069", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.444" ] }, "num": null, "urls": [], "raw_text": "Maja Popovi\u0107. 2020. Informative manual evalua- tion of machine translation output. In Proceed- ings of the 28th International Conference on Com- putational Linguistics, pages 5059-5069, Barcelona, Spain (Online). International Committee on Compu- tational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "On the differences between human translations", "authors": [ { "first": "Maja", "middle": [], "last": "Popovic", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", "volume": "", "issue": "", "pages": "365--374", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maja Popovic. 2020. On the differences between hu- man translations. In Proceedings of the 22nd An- nual Conference of the European Association for Machine Translation, pages 365-374, Lisboa, Portu- gal. European Association for Machine Translation.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "BLEURT: Learning robust metrics for text generation", "authors": [ { "first": "Thibault", "middle": [], "last": "Sellam", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7881--7892", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.704" ] }, "num": null, "urls": [], "raw_text": "Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 7881-7892, Online. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "RUSE: Regressor using sentence embeddings for automatic machine translation evaluation", "authors": [ { "first": "Hiroki", "middle": [], "last": "Shimanaka", "suffix": "" }, { "first": "Tomoyuki", "middle": [], "last": "Kajiwara", "suffix": "" }, { "first": "Mamoru", "middle": [], "last": "Komachi", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers", "volume": "", "issue": "", "pages": "751--758", "other_ids": { "DOI": [ "10.18653/v1/W18-6456" ] }, "num": null, "urls": [], "raw_text": "Hiroki Shimanaka, Tomoyuki Kajiwara, and Mamoru Komachi. 2018. RUSE: Regressor using sentence embeddings for automatic machine translation eval- uation. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 751-758, Belgium, Brussels. Association for Com- putational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Machine Translation Evaluation with BERT Regressor", "authors": [ { "first": "Hiroki", "middle": [], "last": "Shimanaka", "suffix": "" }, { "first": "Tomoyuki", "middle": [], "last": "Kajiwara", "suffix": "" }, { "first": "Mamoru", "middle": [], "last": "Komachi", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.12679" ] }, "num": null, "urls": [], "raw_text": "Hiroki Shimanaka, Tomoyuki Kajiwara, and Mamoru Komachi. 2019. Machine Translation Evalua- tion with BERT Regressor. arXiv preprint arXiv: 1907.12679.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A Study of Translation Edit Rate with Targeted Human Annotation", "authors": [ { "first": "Matthew", "middle": [], "last": "Snover", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Dorr", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Micciulla", "middle": [], "last": "Linnea", "suffix": "" }, { "first": "John", "middle": [], "last": "Makhoul", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 7th Conference of the Association for Machine Translation in the Americas", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea, Micciulla, and John Makhoul. 2006. A Study of Translation Edit Rate with Targeted Human Annota- tion. In Proceedings of the 7th Conference of the As- sociation for Machine Translation in the Americas (AMTA-2006).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Automatic machine translation evaluation using source language inputs and cross-lingual language model", "authors": [ { "first": "Kosuke", "middle": [], "last": "Takahashi", "suffix": "" }, { "first": "Katsuhito", "middle": [], "last": "Sudoh", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Nakamura", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3553--3558", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.327" ] }, "num": null, "urls": [], "raw_text": "Kosuke Takahashi, Katsuhito Sudoh, and Satoshi Naka- mura. 2020. Automatic machine translation evalua- tion using source language inputs and cross-lingual language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 3553-3558, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "The ARPA MT evaluation methodologies: Evolution, lessons, and future approaches", "authors": [ { "first": "John", "middle": [ "S" ], "last": "White", "suffix": "" }, { "first": "Theresa", "middle": [ "A" ], "last": "O'connell", "suffix": "" }, { "first": "Francis", "middle": [ "E" ], "last": "O'mara", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the First Conference of the Association for Machine Translation in the Americas", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John S. White, Theresa A. O'Connell, and Francis E. O'Mara. 1994. The ARPA MT evaluation method- ologies: Evolution, lessons, and future approaches. In Proceedings of the First Conference of the As- sociation for Machine Translation in the Americas, Columbia, Maryland, USA.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "BERTScore: Evaluating Text Generation with BERT", "authors": [ { "first": "Tianyi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Varsha", "middle": [], "last": "Kishore", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Kilian", "middle": [ "Q" ], "last": "Weinberger", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Eighth International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating Text Generation with BERT. In Pro- ceedings of the Eighth International Conference on Learning Representations.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance", "authors": [ { "first": "Wei", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Maxime", "middle": [], "last": "Peyrard", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Christian", "middle": [ "M" ], "last": "Meyer", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Eger", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "563--578", "other_ids": { "DOI": [ "10.18653/v1/D19-1053" ] }, "num": null, "urls": [], "raw_text": "Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Chris- tian M. Meyer, and Steffen Eger. 2019. MoverScore: Text generation evaluating with contextualized em- beddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 563-578, Hong Kong, China. 
Association for Computational Lin- guistics.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "Mean (standard deviation) of Direct Assessment scores for labels by the three annotators (fluency)", "num": null }, "FIGREF1": { "uris": null, "type_str": "figure", "text": "Learning curves in classification accuracy over training epochs.", "num": null }, "TABREF1": { "content": "", "text": "", "num": null, "type_str": "table", "html": null }, "TABREF2": { "content": "
", "text": "Evaluation criteria in Fluency. Labels in parentheses are the ones used in the evaluation corpus.", "num": null, "type_str": "table", "html": null }, "TABREF3": { "content": "
", "text": "Evaluation criteria in Adequacy. Labels in parentheses are the ones used in the evaluation corpus.", "num": null, "type_str": "table", "html": null }, "TABREF5": { "content": "
Adequacy            A       B       C       Ave.
Incomprehensible    0.098   0.099   0.098   0.098
Unrelated           0.004   0.001   0.011   0.005
Contradiction       0.009   0.019   0.086   0.038
Serious             0.205   0.187   0.311   0.234
Fair                0.374   0.343   0.146   0.288
Good                0.233   0.296   0.271   0.267
Excellent           0.076   0.055   0.076   0.069
", "text": "Annotation distributions for the three annotators (fluency).", "num": null, "type_str": "table", "html": null }, "TABREF6": { "content": "", "text": "", "num": null, "type_str": "table", "html": null }, "TABREF7": { "content": "
Metric      Stat     A-B      A-C      B-C
Fluency     \u03ba   0.2860   0.3773   0.2489
Fluency     r        0.4512   0.5113   0.4014
Adequacy    \u03ba   0.3947   0.2684   0.2774
Adequacy    r        0.5459   0.5870   0.5752
", "text": "Mean (standard deviation) of Direct Assessment scores for labels by the three annotators (adequacy)", "num": null, "type_str": "table", "html": null }, "TABREF8": { "content": "
Inter-annotator agreement in \u03ba coefficient and label concordance rate (r) on our human evaluation corpus. The fluency metric has five categories and the adequacy metric has seven categories.
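For concreteness, the following is a minimal sketch of how pairwise agreement figures of this kind can be computed. The exact \u03ba variant is not spelled out here, so the sketch assumes Cohen's \u03ba for a single annotator pair, with the concordance rate taken as the fraction of identically labelled items; the function names and label lists are illustrative toy data, not the released corpus.

```python
from collections import Counter

def concordance_rate(labels_a, labels_b):
    """Fraction of items on which two annotators chose the same label."""
    assert len(labels_a) == len(labels_b)
    return sum(a == b for a, b in zip(labels_a, labels_b)) / len(labels_a)

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two annotators."""
    n = len(labels_a)
    p_o = concordance_rate(labels_a, labels_b)            # observed agreement
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    labels = set(labels_a) | set(labels_b)
    # Expected agreement if the two annotators labelled independently,
    # each according to their own marginal label distribution.
    p_e = sum((count_a[l] / n) * (count_b[l] / n) for l in labels)
    return (p_o - p_e) / (1.0 - p_e)

# Hypothetical toy example using the five fluency labels.
ann_a = ["Fair", "Good", "Poor", "Excellent", "Fair", "Incomprehensible"]
ann_b = ["Fair", "Fair", "Poor", "Good", "Fair", "Incomprehensible"]
print(cohen_kappa(ann_a, ann_b), concordance_rate(ann_a, ann_b))
```

In this setup, \u03ba and r would be computed once per annotator pair (A-B, A-C, B-C), separately for the fluency and adequacy label sets.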
Fluency             Training   Dev.    Test
Incomprehensible    545        74      282
Poor                992        96      602
Fair                1,655      196     1,341
Good                808        80      899
Excellent           824        90      796
", "text": "", "num": null, "type_str": "table", "html": null }, "TABREF9": { "content": "", "text": "Label statistics of fluency dataset.", "num": null, "type_str": "table", "html": null }, "TABREF11": { "content": "
", "text": "Label statistics of adequacy dataset.", "num": null, "type_str": "table", "html": null }, "TABREF13": { "content": "
Fluency             Precision   Recall   F1-score
Incomprehensible    0.769       0.730    0.749
Poor                0.581       0.438    0.499
Fair                0.613       0.583    0.598
Good                0.439       0.623    0.515
Excellent           0.698       0.569    0.627
Ave.                0.620       0.589    0.598
", "text": "Confusion matrix in fluency prediction. The bold numbers represent correct predictions. The overall classification accuracy was 0.578.", "num": null, "type_str": "table", "html": null }, "TABREF15": { "content": "
Adequacy            Precision   Recall   F1-score
Incomprehensible    0.762       0.640    0.696
Unrelated           1.000       0.053    0.100
Contradiction       0.211       0.200    0.205
Serious             0.515       0.537    0.526
Fair                0.592       0.609    0.600
Good                0.642       0.662    0.652
Excellent           0.545       0.447    0.491
Ave.                0.609       0.450    0.467
", "text": "Confusion matrix in adequacy prediction. The bold numbers represent correct predictions. The overall classification accuracy was 0.600.", "num": null, "type_str": "table", "html": null } } } }