{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:38:54.875057Z"
},
"title": "Explainable Quality Estimation: CUNI Eval4NLP Submission",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Pol\u00e1k",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Charles University <polak",
"location": {}
},
"email": ""
},
{
"first": "Muskaan",
"middle": [],
"last": "Singh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Charles University <polak",
"location": {}
},
"email": "singh@ufal.mff.cuni.cz"
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Charles University <polak",
"location": {}
},
"email": "bojar>@ufal.mff.cuni.cz"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes our participating system in the shared task Explainable quality estimation of 2nd Workshop on Evaluation & Comparison of NLP Systems. The task of quality estimation (QE, a.k.a. reference-free evaluation) is to predict the quality of MT output at inference time without access to reference translations. In this proposed work, we first build a word-level quality estimation model, then we finetune this model for sentence-level QE. Our proposed models achieve near stateof-the-art results. In the word-level QE, we place 2nd and 3rd on the supervised Ro-En and Et-En test sets. In the sentence-level QE, we achieve a relative improvement of 8.86% (Ro-En) and 10.6% (Et-En) in terms of the Pearson correlation coefficient over the baseline model.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes our participating system in the shared task Explainable quality estimation of 2nd Workshop on Evaluation & Comparison of NLP Systems. The task of quality estimation (QE, a.k.a. reference-free evaluation) is to predict the quality of MT output at inference time without access to reference translations. In this proposed work, we first build a word-level quality estimation model, then we finetune this model for sentence-level QE. Our proposed models achieve near stateof-the-art results. In the word-level QE, we place 2nd and 3rd on the supervised Ro-En and Et-En test sets. In the sentence-level QE, we achieve a relative improvement of 8.86% (Ro-En) and 10.6% (Et-En) in terms of the Pearson correlation coefficient over the baseline model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Quality Estimation (QE) or Confidence Estimation (CE) is a task of assessing the quality of machinetranslated text given the source without accessing the reference (Blatz et al., 2004; Specia et al., 2009) . QE can be assessed on sentence-level, wordlevel granularity or even document-level (Ive et al., 2018) . Sentence-level scores predict what score would a human annotator assign to the whole sentence; most commonly, direct assessment (Graham et al., 2017) or HTER (Snover et al., 2006) serve as the golden standard. Word-level QE indicates word-level errors in machine translation output or incorrectly translated words in the source. While automatic word-level scores are usually continuous, the gold truth is binary: some words are labeled as correct while some are labeled as wrong. This estimation provides an aid in the translation workflow. For instance, it can help to determine if the machine-translated sentence is good enough to be used as-is or if it requires a human translator for post-editing or translating from scratch (Kepler et al., 2019b) .",
"cite_spans": [
{
"start": 164,
"end": 184,
"text": "(Blatz et al., 2004;",
"ref_id": "BIBREF0"
},
{
"start": 185,
"end": 205,
"text": "Specia et al., 2009)",
"ref_id": "BIBREF18"
},
{
"start": 291,
"end": 309,
"text": "(Ive et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 440,
"end": 461,
"text": "(Graham et al., 2017)",
"ref_id": "BIBREF4"
},
{
"start": 470,
"end": 491,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF14"
},
{
"start": 1041,
"end": 1063,
"text": "(Kepler et al., 2019b)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present our submission to the shared task of Explainable quality estimation of 2nd Workshop on Evaluation & Comparison of NLP Systems (Fomicheva et al., 2021 ). 1 Our solution is based on the XLM-R multilingual pretrained model (Conneau et al., 2020) . We first build a word-level quality estimation model. Then we finetune this model for sentence-level QE.",
"cite_spans": [
{
"start": 152,
"end": 175,
"text": "(Fomicheva et al., 2021",
"ref_id": "BIBREF2"
},
{
"start": 246,
"end": 268,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We make our code publicly available. 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the past decade, most of the quality estimation systems depended heavily on feature engineering, linguistic information, and machine learning algorithms such as support vector machines or randomized decision trees (Specia et al., 2013 (Specia et al., , 2015 . In recent years, emerging neural-based QE have been outperforming earlier approaches on leaderboards of MT quality estimation (Kepler et al., 2019a) . For instance, POSTECH (Kim et al., 2017) , a purely neural system based on encoder-decoder recurrent neural network (referred to as a predictor) is stacked with bidirectional RNN (referred to as an estimator). This predictor-estimator QE system was the bestperforming one in WMT 2017. 3 It was further extended in DeepQuest architecture (Ive et al., 2018) . These systems required extensive pretraining, which makes them dependent on large parallel corpora and computationally expensive. To overcome this problem, cross-lingual embeddings (Ruder et al., 2019) were used to reduce the burden of deep neural network architecture. TransQuest used these cross-lingual embeddings and was the best-performing sentence-level QE model at WMT 2020 QE Shared Task . For the sentence-level QE task, the model was finetuned on multilingual pre-trained representations. For the word-level QE task, the authors used direct assessment (DA) quality scores from the MLQE-PE dataset. Motivated by this work, we finetune our word-level model to yield sentence-level QE.",
"cite_spans": [
{
"start": 217,
"end": 237,
"text": "(Specia et al., 2013",
"ref_id": "BIBREF17"
},
{
"start": 238,
"end": 260,
"text": "(Specia et al., , 2015",
"ref_id": "BIBREF16"
},
{
"start": 389,
"end": 411,
"text": "(Kepler et al., 2019a)",
"ref_id": "BIBREF6"
},
{
"start": 436,
"end": 454,
"text": "(Kim et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 751,
"end": 769,
"text": "(Ive et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 953,
"end": 973,
"text": "(Ruder et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Overall, the tremendous progress in the field of quality estimation is achieved thanks to the annual focus of the shared task organized by WMT and thanks to the annotated data released in these tasks, leading to the development of various opensource systems such as QuEst (Specia et al., 2013) , QuEst++ (Specia et al., 2015) , deepQuest (Ive et al., 2018) , OpenKiwi (Kepler et al., 2019b) and Tran-sQuest (Ranasinghe et al., 2020) .",
"cite_spans": [
{
"start": 272,
"end": 293,
"text": "(Specia et al., 2013)",
"ref_id": "BIBREF17"
},
{
"start": 304,
"end": 325,
"text": "(Specia et al., 2015)",
"ref_id": "BIBREF16"
},
{
"start": 338,
"end": 356,
"text": "(Ive et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 368,
"end": 390,
"text": "(Kepler et al., 2019b)",
"ref_id": "BIBREF7"
},
{
"start": 407,
"end": 432,
"text": "(Ranasinghe et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The task consists of building a quality estimation system that (1) predicts the quality score for an input pair of the source text and MT hypothesis, and (2) provides word-level evidence for its predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "3"
},
{
"text": "The dataset for the shared task consists of training, development, and test sets. The training and development sets are Estonian-English (Et-En) and Romanian-English (Ro-En) partitions of the MLQE-PE dataset (Fomicheva et al., 2020) . The test set consists of sentence-level quality scores and word-level error annotations for these two language pairs. The goal of the shared task is to estimate the word quality in unsupervised settings (no training data for word-level QE whatsoever). However, participating systems can also be labeled as \"unconstrained\" and use word-level QE training data.",
"cite_spans": [
{
"start": 208,
"end": 232,
"text": "(Fomicheva et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Description",
"sec_num": "4"
},
{
"text": "Additionally, there are zero-shot test sets for two language pairs, i.e., German-Chinese (De-Zh) and Russian-German (Ru-De), where no sentence-level OR word-level annotations were available at training time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Description",
"sec_num": "4"
},
{
"text": "In the proposed system, we use a pre-trained XLM-R model (Conneau et al., 2020) to obtain representations of input sentences in continuous space. XML-R is trained on large-scale multilingual Com-monCrawl datasets. We have two separate models for the word-level and sentence-level quality estimation.",
"cite_spans": [
{
"start": 57,
"end": 79,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "5"
},
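{
"text": "To make the setup concrete, the following minimal sketch shows how such continuous-space representations can be obtained from the pre-trained XLM-R model via Hugging Face Transformers; this is our illustration of the general recipe, not the authors' exact code:\n\nimport torch\nfrom transformers import AutoModel, AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')\nmodel = AutoModel.from_pretrained('xlm-roberta-large')\n\nwith torch.no_grad():\n    outputs = model(**tokenizer('Hello world', return_tensors='pt'))\n\n# One contextual vector per input token: (batch, seq_len, hidden_size).\nhidden_states = outputs.last_hidden_state",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "5"
},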
{
"text": "The pre-trained XLM-R model uses BPE encoding (Sennrich et al., 2016) for input tokenization. In order to input two sentences to the model (i.e., the source and the MT candidate), the sentences are concatenated with two </s> tokens 4 between them:",
"cite_spans": [
{
"start": 46,
"end": 69,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Representation",
"sec_num": "5.1"
},
{
"text": "<s> s 1 , ..., s m </s></s> t 1 , ..., t n </s>.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Representation",
"sec_num": "5.1"
},
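{
"text": "A minimal sketch of this input construction, assuming the Hugging Face tokenizer for xlm-roberta-large (the example pair is borrowed from Table 3; the paper only specifies the resulting token layout):\n\nfrom transformers import AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')\nsrc = 'Salv\u00e1dor\u010dania protestovali proti bitcoinu'\nmt = 'Salvadorianer protestierten gegen Bitcoin'\n\n# Passing the source and the MT candidate as a sentence pair yields\n# <s> src </s></s> mt </s>, i.e., the layout shown above.\nencoding = tokenizer(src, mt)\nprint(tokenizer.convert_ids_to_tokens(encoding['input_ids']))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Representation",
"sec_num": "5.1"
},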
{
"text": "In the word-level QE, the task is to predict for each source and target word if it was translated correctly. We use the XLM-R model extended a linear layer on top of the hidden-states output (see Figure 1 ). We use cross-entropy loss. The BPE encoding might break a word into more tokens; this is especially true for the less frequent or misspelled words (Pol\u00e1k, 2020) . E.g., \"misstake\" (with double s) might be broken down into two tokens: \"_miss\" and \"take\". Because we are interested in wordlevel predictions and not token-level predictions, we label only the first token of each word and put the \"ignore\" label to others (including the special tokens <s> and </s>).",
"cite_spans": [
{
"start": 355,
"end": 368,
"text": "(Pol\u00e1k, 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 196,
"end": 204,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Word-level QE",
"sec_num": "5.2"
},
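{
"text": "A minimal sketch of this first-token labeling scheme, assuming a fast tokenizer (which exposes word_ids()) and the conventional -100 ignore index of PyTorch's cross-entropy loss; the word list and gold labels are hypothetical:\n\nfrom transformers import AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')\nwords = ['There', 'is', 'a', 'misstake']  # hypothetical MT output\nword_labels = [0, 0, 0, 1]                # hypothetical gold: 1 = badly translated\n\nencoding = tokenizer(words, is_split_into_words=True)\nlabels, previous = [], None\nfor word_id in encoding.word_ids():\n    if word_id is None or word_id == previous:\n        labels.append(-100)  # 'ignore' label: special tokens and non-first subwords\n    else:\n        labels.append(word_labels[word_id])\n    previous = word_id\n\n# These labels then train XLM-R with a token-classification head (a linear\n# layer over the hidden states) under cross-entropy loss, as described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-level QE",
"sec_num": "5.2"
},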
{
"text": "We also experiment with an alternative representation: we put a <cls> token after each word. In this case, we ignore all labels except for <cls>. But as we will document later, this alternative does not bring any improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-level QE",
"sec_num": "5.2"
},
{
"text": "For the sentence-level QE, we use the XLM-R model extended with a linear layer on top of the pooled output. We finetune the model with mean square error loss. We normalize the scores to interval [0; 1].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-level QE",
"sec_num": "5.3"
},
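{
"text": "One plausible realization of this setup is the sequence-classification wrapper from Transformers: with num_labels=1 it places a single linear output on the pooled representation and, given float labels, applies MSE loss. A sketch under these assumptions (the checkpoint path is hypothetical; initializing from the word-level model follows Section 6.1.2):\n\nfrom transformers import AutoModelForSequenceClassification\n\n# Start from the finetuned word-level QE checkpoint (hypothetical path).\nmodel = AutoModelForSequenceClassification.from_pretrained(\n    'checkpoints/word-level-qe', num_labels=1)\n\n# Min-max normalization of the gold scores to [0, 1] before finetuning\n# (assumes hi > lo).\ndef normalize(scores):\n    lo, hi = min(scores), max(scores)\n    return [(s - lo) / (hi - lo) for s in scores]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-level QE",
"sec_num": "5.3"
},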
{
"text": "In all our experiments, we use XLM-R large model 5 using Hugging Face Transformers (Wolf et al., 2020) . We run all our experiments on NVIDIA RTX 3090. To find the best parameters, we run a grid search. The optimal hyper-parameters are 2e \u22125 learning rate, three epochs, and batch size of 16. ",
"cite_spans": [
{
"start": 83,
"end": 102,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "6"
},
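{
"text": "For reference, these selected hyper-parameters map directly onto a standard Transformers training configuration; a sketch assuming the Trainer API, which the paper does not explicitly state it used:\n\nfrom transformers import TrainingArguments\n\ntraining_args = TrainingArguments(\n    output_dir='qe-model',  # hypothetical output path\n    learning_rate=2e-5,\n    num_train_epochs=3,\n    per_device_train_batch_size=16,\n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "6"
},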
{
"text": "In this section, we provide the analysis across each level of experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantitative Analysis",
"sec_num": "6.1"
},
{
"text": "First, we consider having separate models for source and target sentence word-labels prediction. We compared the separate models with a joint one, but we did not find any statistically significant evidence of difference. We also consider the variant with <cls> token. Again, the model does not perform better. Additionally, we noticed a much larger GPU-memory consumption.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-level QE",
"sec_num": "6.1.1"
},
{
"text": "We were also interested in whether to use all training data (Ro-En and Et-En) or to match the test pair (two test sets are supervised with matching language pairs). We found out that it is better to include both language pairs (see Table 1 ). This suggests the model is learning the tasks independent of language pair and benefits from more training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 232,
"end": 239,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Word-level QE",
"sec_num": "6.1.1"
},
{
"text": "We first tried a simple approach computing the sentence-level QE using the word labels. We defined the score as 1 minus the geometric mean of the word-level scores (i.e., the probability of a word being incorrectly translated). The scores surpassed the shared task author's XMOVER-SHAP (Zhao et al., 2020) baseline with Pearson correlation coefficients of 0.501 and 0.648 compared to 0.415 and 0.638 on Et-En or Ro-En development sets, respectively.",
"cite_spans": [
{
"start": 286,
"end": 305,
"text": "(Zhao et al., 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-level QE",
"sec_num": "6.1.2"
},
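{
"text": "A minimal sketch of this heuristic, where word_error_probs stands for the model's per-word probabilities of incorrect translation (the example values are ours):\n\nimport numpy as np\n\ndef sentence_score(word_error_probs):\n    # score = 1 - (p_1 * ... * p_n) ** (1 / n), computed in log space for stability\n    log_probs = np.log(np.asarray(word_error_probs))\n    return 1.0 - float(np.exp(log_probs.mean()))\n\nprint(sentence_score([0.02, 0.01, 0.40, 0.02]))  # ~0.96: mostly-correct sentence scores high",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-level QE",
"sec_num": "6.1.2"
},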
{
"text": "Our second approach is again based on the XLM-R model. We tried to use the vanilla XLM-R pre-trained model as in the word-level QE task. The model failed to converge. To circumvent this, we finetuned the best-performing word-level QE model. This provided much better results (0.776 and 0.880 on Et-En or Ro-En development sets respectively).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-level QE",
"sec_num": "6.1.2"
},
{
"text": "To leverage all the models we trained during the hyper-parameter search we employ ensembling. We tried two approaches: (1) weighted geometric mean and (2) weighted arithmetic mean. The former failed to produce results as the number of models was too large (36 models).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensembling",
"sec_num": "6.1.3"
},
{
"text": "We estimate the model weights using Bayesian optimization with target sentence AUC as an objective function. We optimized the parameters on the development sets (for each Et-En and Ro-En separately). For the zero-shot test sets (De-Zh or Ru-De), we averaged the weights obtained for both development sets (Et-En and Ro-En). Table 2 documents slight improvement using ensembling compared to the best model.",
"cite_spans": [],
"ref_spans": [
{
"start": 324,
"end": 331,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Ensembling",
"sec_num": "6.1.3"
},
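{
"text": "A sketch of this weight estimation, assuming scikit-optimize's gp_minimize as the Bayesian optimizer and scikit-learn's roc_auc_score; the paper does not name the libraries it used, and the array names are ours:\n\nimport numpy as np\nfrom sklearn.metrics import roc_auc_score\nfrom skopt import gp_minimize\n\ndef fit_ensemble_weights(model_probs, gold_labels, n_calls=50):\n    # model_probs: (n_models, n_words) per-model error probabilities on the dev set\n    # gold_labels: (n_words,) binary gold word-error labels\n    def neg_auc(weights):\n        w = np.asarray(weights)\n        # Weighted arithmetic mean of the per-model probabilities.\n        ensemble = (w[:, None] * model_probs).sum(axis=0) / (w.sum() + 1e-9)\n        return -roc_auc_score(gold_labels, ensemble)\n\n    result = gp_minimize(neg_auc, dimensions=[(0.0, 1.0)] * len(model_probs),\n                         n_calls=n_calls, random_state=0)\n    return np.asarray(result.x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensembling",
"sec_num": "6.1.3"
},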
{
"text": "We were interested to see, how well the model performed on unseen language pairs. We had no bilingual speaker of the provided language pairs. Therefore, we assembled our own Slovak-German test set (we used Google Translate to obtain the translation and we introduced artificial errors). We draw three examples in Table 3 . We see the model also works on an unseen language pair (recall the XLM-R model was pre-trained on both languages, but we use different languages for the QE task finetuning).",
"cite_spans": [],
"ref_spans": [
{
"start": 313,
"end": 320,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "6.2"
},
{
"text": "In the first example (a correctly translated sentence), the model labels the words correctly (although we see a bit of uncertainty over the first, less frequent word). The second example changes the meaning (protested for instead of protested against). We see some hesitation in the third word, still, the predicted probabilities are incorrect. We hypothesize this may be due to the fact the model prefers predicting based on co-occurrences, not necessarily on the meaning (Kim et al., 2019) . In the third example, the model correctly detects the first mistranslated word both in the source and target sentence.",
"cite_spans": [
{
"start": 473,
"end": 491,
"text": "(Kim et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "6.2"
},
{
"text": "In the shared task, 11 teams from different organizations participated. Out of the 11 different submissions, two teams submitted to the unconstrained track. The rest of the teams submitted to the constrained track as they did not use any supervision at the word level. Our submission was also in the unconstrained category. We report all the results from the leaderboard 6 in Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 376,
"end": 383,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparative analysis of our submission",
"sec_num": "6.3"
},
{
"text": "The paper describes our submission to the shared task, EVAL4NLP, co-located with EMNLP. First, we built a word-level quality estimation model. Then we finetuned it for obtaining sentence-level QE. In the word-level QE, we placed 2nd and 3rd on the supervised Ro-En and Et-En test sets. In the sentence-level QE, we achieved a relative improvement of 8.86% (Ro-En) and 10.6% (Et-En) in terms of the Pearson correlation coefficient over the baseline model. Table 4 : Comparative results from the leaderboard. The scores were evaluated against error labels resulting from manual annotation. Missing word errors were ignored in this track. The main metrics for evaluation were AUC and AUPRC scores for word-level explanations.",
"cite_spans": [],
"ref_spans": [
{
"start": 455,
"end": 462,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://eval4nlp.github.io/sharedtask. html 2 https://github.com/pe-trik/ eval4nlp-20213 https://www.statmt.org/wmt17/ quality-estimation-task.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We follow the tokenization procedure in https: //huggingface.co/transformers/model_doc/ xlmroberta.html#xlmrobertatokenizerfast.5 https://huggingface.co/ xlm-roberta-large",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://competitions.codalab.org/ competitions/33038#results",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The work was supported by the grant 825303 \"Bergamot\" of European Union's Horizon 2020 research and innovation programme, 19-26934X \"NEUREM3\" of the Grant Agency of the Czech Republic, and START/SCI/089 (Babel Octopus: Robust Multi-Source Speech Translation) of the START Programme of Charles University.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Confidence estimation for machine translation",
"authors": [
{
"first": "John",
"middle": [],
"last": "Blatz",
"suffix": ""
},
{
"first": "Erin",
"middle": [],
"last": "Fitzgerald",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Simona",
"middle": [],
"last": "Gandrabur",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Goutte",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Kulesza",
"suffix": ""
}
],
"year": 2004,
"venue": "Coling 2004: Proceedings of the 20th international conference on computational linguistics",
"volume": "",
"issue": "",
"pages": "315--321",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Blatz, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto San- chis, and Nicola Ueffing. 2004. Confidence esti- mation for machine translation. In Coling 2004: Proceedings of the 20th international conference on computational linguistics, pages 315-321.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.747"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The eval4nlp shared task on explainable quality estimation: Overview and results",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Piyawat",
"middle": [],
"last": "Lertvittayakumjorn",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Fomicheva, Piyawat Lertvittayakumjorn, Wei Zhao, Steffen Eger, and Yang Gao. 2021. The eval4nlp shared task on explainable quality estima- tion: Overview and results. In Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unsupervised quality estimation for neural machine translation",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Yankovskaya",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": ""
},
{
"first": "Nikolaos",
"middle": [],
"last": "Aletras",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "539--555",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Fomicheva, Shuo Sun, Lisa Yankovskaya, Fr\u00e9d\u00e9ric Blain, Francisco Guzm\u00e1n, Mark Fishel, Nikolaos Aletras, Vishrav Chaudhary, and Lucia Specia. 2020. Unsupervised quality estimation for neural machine translation. Transactions of the As- sociation for Computational Linguistics, 8:539-555.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Can machine translation systems be evaluated by the crowd alone",
"authors": [
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Alistair",
"middle": [],
"last": "Moffat",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Zobel",
"suffix": ""
}
],
"year": 2017,
"venue": "Natural Language Engineering",
"volume": "23",
"issue": "1",
"pages": "3--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2017. Can machine translation sys- tems be evaluated by the crowd alone. Natural Lan- guage Engineering, 23(1):3-30.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Deepquest: a framework for neural-based quality estimation",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Ive",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3146--3157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Ive, Fr\u00e9d\u00e9ric Blain, and Lucia Specia. 2018. Deepquest: a framework for neural-based quality es- timation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3146-3157. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Unbabel's participation in the wmt19 translation quality estimation shared task",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Kepler",
"suffix": ""
},
{
"first": "Jonay",
"middle": [],
"last": "Tr\u00e9nous",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Treviso",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Vera",
"suffix": ""
},
{
"first": "Ant\u00f3nio",
"middle": [],
"last": "G\u00f3is",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Amin Farajian",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Ant\u00f3nio",
"suffix": ""
},
{
"first": "Andr\u00e9 Ft",
"middle": [],
"last": "Lopes",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Martins",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.10352"
]
},
"num": null,
"urls": [],
"raw_text": "Fabio Kepler, Jonay Tr\u00e9nous, Marcos Treviso, Miguel Vera, Ant\u00f3nio G\u00f3is, M Amin Farajian, Ant\u00f3nio V Lopes, and Andr\u00e9 FT Martins. 2019a. Unbabel's participation in the wmt19 translation quality estima- tion shared task. arXiv preprint arXiv:1907.10352.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Openkiwi: An open source framework for quality estimation",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Kepler",
"suffix": ""
},
{
"first": "Jonay",
"middle": [],
"last": "Tr\u00e9nous",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Treviso",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Vera",
"suffix": ""
},
{
"first": "Andr\u00e9 Ft",
"middle": [],
"last": "Martins",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.08646"
]
},
"num": null,
"urls": [],
"raw_text": "Fabio Kepler, Jonay Tr\u00e9nous, Marcos Treviso, Miguel Vera, and Andr\u00e9 FT Martins. 2019b. Openkiwi: An open source framework for quality estimation. arXiv preprint arXiv:1902.08646.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Predictor-estimator using multilevel task learning with stack propagation for neural quality estimation",
"authors": [
{
"first": "Hyun",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jong-Hyeok",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Seung-Hoon",
"middle": [],
"last": "Na",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Second Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "562--568",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hyun Kim, Jong-Hyeok Lee, and Seung-Hoon Na. 2017. Predictor-estimator using multilevel task learning with stack propagation for neural quality es- timation. In Proceedings of the Second Conference on Machine Translation, pages 562-568.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Qe bert: bilingual bert using multi-task learning for neural quality estimation",
"authors": [
{
"first": "Hyun",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Joon-Ho",
"middle": [],
"last": "Lim",
"suffix": ""
},
{
"first": "Hyun-Ki",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Seung-Hoon",
"middle": [],
"last": "Na",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "85--89",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hyun Kim, Joon-Ho Lim, Hyun-Ki Kim, and Seung- Hoon Na. 2019. Qe bert: bilingual bert using multi-task learning for neural quality estimation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 85-89.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Spoken Language Translation via Phoneme Representation of the Source Language",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Pol\u00e1k",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Pol\u00e1k. 2020. Spoken Language Translation via Phoneme Representation of the Source Language. Master's thesis, Charles University, Faculty of Math- ematics and Physics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Transquest at wmt2020: Sentence-level direct assessment",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Constantin",
"middle": [],
"last": "Orasan",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.05318"
]
},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe, Constantin Orasan, and Rus- lan Mitkov. 2020. Transquest at wmt2020: Sentence-level direct assessment. arXiv preprint arXiv:2010.05318.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A survey of cross-lingual word embedding models",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Artificial Intelligence Research",
"volume": "65",
"issue": "",
"pages": "569--631",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder, Ivan Vuli\u0107, and Anders S\u00f8gaard. 2019. A survey of cross-lingual word embedding models. Journal of Artificial Intelligence Research, 65:569-631.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers",
"volume": "",
"issue": "",
"pages": "223--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the 7th Conference of the Associa- tion for Machine Translation in the Americas: Tech- nical Papers, pages 223-231.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Findings of the WMT 2018 shared task on quality estimation",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
},
{
"first": "Ram\u00f3n",
"middle": [
"F"
],
"last": "Astudillo",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "689--709",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6451"
]
},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Fr\u00e9d\u00e9ric Blain, Varvara Logacheva, Ram\u00f3n F. Astudillo, and Andr\u00e9 F. T. Martins. 2018. Findings of the WMT 2018 shared task on quality estimation. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 689-709, Belgium, Brussels. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Multi-level translation quality prediction with quest++",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [],
"last": "Paetzold",
"suffix": ""
},
{
"first": "Carolina",
"middle": [],
"last": "Scarton",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL-IJCNLP 2015 System Demonstrations",
"volume": "",
"issue": "",
"pages": "115--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Gustavo Paetzold, and Carolina Scarton. 2015. Multi-level translation quality prediction with quest++. In Proceedings of ACL-IJCNLP 2015 Sys- tem Demonstrations, pages 115-120.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Quest-a translation quality estimation framework",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Kashif",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Jos\u00e9 Gc De",
"middle": [],
"last": "Souza",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "79--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Kashif Shah, Jos\u00e9 GC De Souza, and Trevor Cohn. 2013. Quest-a translation quality es- timation framework. In Proceedings of the 51st An- nual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 79-84.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Estimating the sentence-level quality of machine translation systems",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Cancedda",
"suffix": ""
},
{
"first": "Nello",
"middle": [],
"last": "Cristianini",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Dymetman",
"suffix": ""
}
],
"year": 2009,
"venue": "EAMT",
"volume": "9",
"issue": "",
"pages": "28--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Marco Turchi, Nicola Cancedda, Nello Cristianini, and Marc Dymetman. 2009. Estimating the sentence-level quality of machine translation sys- tems. In EAMT, volume 9, pages 28-35.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language pro- cessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "On the limitations of cross-lingual encoders as exposed by reference-free machine translation evaluation",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Maxime",
"middle": [],
"last": "Peyrard",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "West",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1656--1671",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Zhao, Goran Glava\u0161, Maxime Peyrard, Yang Gao, Robert West, and Steffen Eger. 2020. On the lim- itations of cross-lingual encoders as exposed by reference-free machine translation evaluation. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1656- 1671, Online. Association for Computational Lin- guistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Our model architecture for the word-level quality estimation task."
},
"TABREF1": {
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "",
"html": null
},
"TABREF3": {
"content": "<table><tr><td colspan=\"5\">Results are word-level target side QE on development</td></tr><tr><td>sets.</td><td/><td/><td/><td/></tr><tr><td/><td>/Salvadorans/</td><td colspan=\"3\">/protested/ /against/ /bitcoin/</td></tr><tr><td/><td colspan=\"2\">Salv\u00e1dor\u010dania protestovali</td><td>proti</td><td>bitcoinu</td></tr><tr><td>1.</td><td colspan=\"2\">0.40 Salvadorianer protestierten 0.02</td><td>0.02 gegen</td><td>0.02 Bitcoin</td></tr><tr><td/><td>0.27</td><td>0.01</td><td>0.02</td><td>0.00</td></tr><tr><td/><td colspan=\"2\">Salv\u00e1dor\u010dania protestovali</td><td>proti</td><td>bitcoinu</td></tr><tr><td>2.</td><td colspan=\"4\">0.49 Salvadorianer protestierten f\u00fcr /for/ Bitcoin 0.03 0.14 0.04</td></tr><tr><td/><td>0.44</td><td>0.04</td><td>0.27</td><td>0.02</td></tr><tr><td/><td colspan=\"2\">Salv\u00e1dor\u010dania protestovali</td><td>proti</td><td>bitcoinu</td></tr><tr><td>3.</td><td>0.98 Somalis</td><td>0.03 protestierten</td><td>0.03 gegen</td><td>0.04 Bitcoin</td></tr><tr><td/><td>0.97</td><td>0.02</td><td>0.02</td><td>0.01</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Comparison of the best model and ensemble.",
"html": null
},
"TABREF4": {
"content": "<table><tr><td>: Examples of unsupervised Slovak-German</td></tr><tr><td>MT QE. Each number below a word represents a prob-</td></tr><tr><td>ability of being incorrectly translated generated by our</td></tr><tr><td>best-performing model. Words in slashes present En-</td></tr><tr><td>glish translations. Words in bold denote translation er-</td></tr><tr><td>rors.</td></tr></table>",
"type_str": "table",
"num": null,
"text": "",
"html": null
}
}
}
}