{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:57:34.070473Z"
},
"title": "Unsupervised Quality Estimation for Neural Machine Translation",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {}
},
"email": "m.fomicheva@sheffield.ac.uk"
},
{
"first": "Shuo",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Yankovskaya",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Tartu",
"location": {}
},
"email": "lisa.yankovskaya@ut.ee"
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {}
},
"email": "f.blain@sheffield.ac.uk"
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": "",
"affiliation": {},
"email": "fguzman@fb.com"
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Tartu",
"location": {}
},
"email": "fishel@ut.ee"
},
{
"first": "Nikolaos",
"middle": [],
"last": "Aletras",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {}
},
"email": "n.aletras@sheffield.ac.uk"
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": "",
"affiliation": {},
"email": "vishrav@fb.com"
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {}
},
"email": "l.specia@sheffield.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Quality Estimation (QE) is an important component in making Machine Translation (MT) useful in real-world applications, as it is aimed to inform the user on the quality of the MT output at test time. Existing approaches require large amounts of expert annotated data, computation, and time for training. As an alternative, we devise an unsupervised approach to QE where no training or access to additional resources besides the MT system itself is required. Different from most of the current work that treats the MT system as a black box, we explore useful information that can be extracted from the MT system as a by-product of translation. By utilizing methods for uncertainty quantification, we achieve very good correlation with human judgments of quality, rivaling state-of-the-art supervised QE models. To evaluate our approach we collect the first dataset that enables work on both black-box and glass-box approaches to QE.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Quality Estimation (QE) is an important component in making Machine Translation (MT) useful in real-world applications, as it is aimed to inform the user on the quality of the MT output at test time. Existing approaches require large amounts of expert annotated data, computation, and time for training. As an alternative, we devise an unsupervised approach to QE where no training or access to additional resources besides the MT system itself is required. Different from most of the current work that treats the MT system as a black box, we explore useful information that can be extracted from the MT system as a by-product of translation. By utilizing methods for uncertainty quantification, we achieve very good correlation with human judgments of quality, rivaling state-of-the-art supervised QE models. To evaluate our approach we collect the first dataset that enables work on both black-box and glass-box approaches to QE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With the advent of neural models, Machine Translation (MT) systems have made substantial progress, reportedly achieving near-human quality for high-resource language pairs (Hassan et al., 2018; Barrault et al., 2019) . However, translation quality is not consistent across language pairs, domains, and datasets. This is problematic for low-resource scenarios, where there is not enough training data and translation quality significantly lags behind. Additionally, neural MT (NMT) systems can be deceptive to the end user as they can generate fluent translations that differ in meaning from the original (Bentivogli et al., 2016; Castilho et al., 2017) .",
"cite_spans": [
{
"start": 172,
"end": 193,
"text": "(Hassan et al., 2018;",
"ref_id": "BIBREF26"
},
{
"start": 194,
"end": 216,
"text": "Barrault et al., 2019)",
"ref_id": null
},
{
"start": 604,
"end": 629,
"text": "(Bentivogli et al., 2016;",
"ref_id": "BIBREF2"
},
{
"start": 630,
"end": 652,
"text": "Castilho et al., 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Thus, it is crucial to have a feedback mechanism to inform users about the trustworthiness of a given MT output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Quality estimation (QE) aims to predict the quality of the output provided by an MT system at test time when no gold-standard human translation is available. State-of-the-art (SOTA) QE models require large amounts of parallel data for pretraining and in-domain translations annotated with quality labels for training (Kim et al., 2017a; Fonseca et al., 2019) . However, such large collections of data are only available for a small set of languages in limited domains.",
"cite_spans": [
{
"start": 317,
"end": 336,
"text": "(Kim et al., 2017a;",
"ref_id": "BIBREF33"
},
{
"start": 337,
"end": 358,
"text": "Fonseca et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Current work on QE typically treats the MT system as a black box. In this paper we propose an alternative glass-box approach to QE that allows us to address the task as an unsupervised problem. We posit that encoder-decoder NMT models Bahdanau et al., 2015; Vaswani et al., 2017) offer a rich source of information for directly estimating translation quality: (a) the output probability distribution from the NMT system (i.e., the probabilities obtained by applying the softmax function over the entire vocabulary of the target language); and (b) the attention mechanism used during decoding. Our assumption is that the more confident the decoder is, the higher the quality of the translation.",
"cite_spans": [
{
"start": 235,
"end": 257,
"text": "Bahdanau et al., 2015;",
"ref_id": "BIBREF0"
},
{
"start": 258,
"end": 279,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF65"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While sequence-level probabilities of the top MT hypothesis have been used for confidence estimation in statistical MT (Specia et al., 2013; Blatz et al., 2004) , the output probabilities from deep Neural Networks (NNs) are generally not well calibrated, that is, not representative of the true likelihood of the predictions (Nguyen and O'Connor, 2015; Guo et al., 2017; Lakshminarayanan et al., 2017) . Moreover, softmax output probabilities tend to be overconfident and can assign a large probability mass to predictions that are far from the training data (Gal and Ghahramani, 2016) . To overcome such deficiencies, we propose ways to exploit output distributions beyond the top-1 prediction by exploring uncertainty quantification methods for better probability estimates (Gal and Ghahramani, 2016; Lakshminarayanan et al., 2017) . In our experiments, we account for different factors that can affect the reliability of model probability estimates in NNs, such as model architecture, training, and search (Guo et al., 2017) .",
"cite_spans": [
{
"start": 119,
"end": 140,
"text": "(Specia et al., 2013;",
"ref_id": "BIBREF59"
},
{
"start": 141,
"end": 160,
"text": "Blatz et al., 2004)",
"ref_id": "BIBREF4"
},
{
"start": 325,
"end": 352,
"text": "(Nguyen and O'Connor, 2015;",
"ref_id": "BIBREF46"
},
{
"start": 353,
"end": 370,
"text": "Guo et al., 2017;",
"ref_id": "BIBREF24"
},
{
"start": 371,
"end": 401,
"text": "Lakshminarayanan et al., 2017)",
"ref_id": "BIBREF39"
},
{
"start": 559,
"end": 585,
"text": "(Gal and Ghahramani, 2016)",
"ref_id": "BIBREF17"
},
{
"start": 776,
"end": 802,
"text": "(Gal and Ghahramani, 2016;",
"ref_id": "BIBREF17"
},
{
"start": 803,
"end": 833,
"text": "Lakshminarayanan et al., 2017)",
"ref_id": "BIBREF39"
},
{
"start": 1009,
"end": 1027,
"text": "(Guo et al., 2017)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
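{
"text": "To make the preceding point concrete, the following is a minimal sketch (an illustration under stated assumptions, not the exact implementation used in this paper) of Monte Carlo dropout applied to an NMT model in PyTorch. It assumes a hypothetical model whose forward pass model(src, tgt_in) returns per-token vocabulary logits of shape (tgt_len, vocab). Keeping dropout active at test time and averaging the sequence log-probability over several stochastic passes gives a smoother confidence estimate than a single softmax output, and the variance across passes is itself an uncertainty signal.\n\nimport torch\nimport torch.nn.functional as F\n\ndef mc_dropout_confidence(model, src, tgt_in, tgt_out, n_runs=30):\n    # Keep dropout layers stochastic at inference time.\n    model.train()\n    samples = []\n    with torch.no_grad():\n        for _ in range(n_runs):\n            # Hypothetical forward signature: per-token logits, shape (tgt_len, vocab).\n            logits = model(src, tgt_in)\n            log_probs = F.log_softmax(logits, dim=-1)\n            # Log-probability of each produced token, averaged over the sentence.\n            tok_lp = log_probs.gather(-1, tgt_out.unsqueeze(-1)).squeeze(-1)\n            samples.append(tok_lp.mean())\n    samples = torch.stack(samples)\n    # Mean over passes estimates the expected translation probability;\n    # variance over passes measures disagreement, i.e. model uncertainty.\n    return samples.mean().item(), samples.var().item()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},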
{
"text": "In addition, we study attention mechanism as another source of information on NMT quality. Attention can be interpreted as a soft alignment, providing an indication of the strength of relationship between source and target words (Bahdanau et al., 2015) . Although this interpretation is straightforward for NMT based on Recurrent Neural Networks (RNN) (Rikters and Fishel, 2017) , its application to current SOTA Transformer models with multihead attention (Vaswani et al., 2017) is challenging. We analyze to what extent meaningful information on translation quality can be extracted from multihead attention.",
"cite_spans": [
{
"start": 229,
"end": 252,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 352,
"end": 378,
"text": "(Rikters and Fishel, 2017)",
"ref_id": "BIBREF53"
},
{
"start": 457,
"end": 479,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF65"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
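{
"text": "As an illustration of the kind of attention-based indicator considered here, the sketch below (an assumption-laden example, not the paper's exact formulation) computes the entropy of source-target attention weights with NumPy, assuming an array of shape (n_heads, tgt_len, src_len) whose rows sum to 1 over the source dimension. For an RNN with a single attention distribution the aggregation is trivial; for multihead Transformer attention, how to combine per-head entropies is exactly what makes the problem harder.\n\nimport numpy as np\n\ndef attention_entropy(attn, eps=1e-12):\n    # attn: attention weights, shape (n_heads, tgt_len, src_len); assumed layout.\n    # Entropy over the source dimension, per head and per target token.\n    ent = -(attn * np.log(attn + eps)).sum(axis=-1)  # (n_heads, tgt_len)\n    # Lower entropy = sharper, more alignment-like attention. Averaging over\n    # heads and target tokens is one simple aggregation; per-head scores may\n    # carry more signal, since heads differ in function.\n    return float(ent.mean())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},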
{
"text": "To evaluate our approach in challenging settings, we collect a new dataset for QE with 6 language pairs representing NMT training in high, medium, and low-resource scenarios. To reduce the chance of overfitting to particular domains, our dataset is constructed from Wikipedia documents. We annotate 10K segments per language pair. By contrast to the vast majority of work on QE that uses semi-automatic metrics based on post-editing distance as gold standard, we perform quality labeling based on the Direct Assessment (DA) methodology (Graham et al., 2015b) , which has been widely used for popular MT evaluation campaigns in the recent years. At the same time, the collected data differs from the existing datasets annotated with DA judgments for the well known WMT Metrics task 1 in two important ways: We provide enough data to train supervised QE models and access to the NMT systems used to generate the translations, thus allowing for further exploration of the glass-box unsupervised approach to QE for NMT introduced in this paper.",
"cite_spans": [
{
"start": 536,
"end": 558,
"text": "(Graham et al., 2015b)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our main contributions can be summarized as follows: (i) A new, large-scale dataset for sentence-level 2 QE annotated with DA rather than post-edit ing metrics ( \u00a74); (ii) A set of unsupervised quality indicators that can be produced as a by-product of NMT decoding and a thorough evaluation of how they correlate with human judgments of translation quality ( \u00a73 and \u00a75); (iii) The first attempt at analysing the attention distribution for the purposes of unsupervised QE in Transformer models ( \u00a73 and \u00a75); and (iv) The analysis on how model confidence relates to translation quality for different NMT systems ( \u00a76). Our experiments show that unsupervised QE indicators obtained from well-calibrated NMT model probabilities rival strong supervised SOTA models in terms of correlation with human judgments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "QE QE is typically addressed as a supervised machine learning task where the goal is to predict MT quality in the absence of reference translation. Traditional feature-based approaches relied on manually designed features, extracted from the MT system (glass-box features) or obtained from the source and translated sentences, as well as external resources, such as monolingual or parallel corpora (black-box features) (Specia et al., 2009) .",
"cite_spans": [
{
"start": 419,
"end": 440,
"text": "(Specia et al., 2009)",
"ref_id": "BIBREF60"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Currently, the best performing approaches to QE use NNs to learn useful representations for source and target sentences (Kim et al., 2017b; Wang et al., 2018; Kepler et al., 2019a) . A notable example is the Predictor-Estimator (PredEst) model (Kim et al., 2017b) , which consists of an encoder-decoder RNN (predictor) trained on parallel data for a word prediction task and a unidirectional RNN (estimator) that produces quality estimates leveraging the context representations generated by the predictor. Despite achieving strong performances, neural-based approaches are resource-heavy and require a significant amount of in-domain labeled data for training. They do not use any internal information from the MT system.",
"cite_spans": [
{
"start": 120,
"end": 139,
"text": "(Kim et al., 2017b;",
"ref_id": "BIBREF35"
},
{
"start": 140,
"end": 158,
"text": "Wang et al., 2018;",
"ref_id": "BIBREF69"
},
{
"start": 159,
"end": 180,
"text": "Kepler et al., 2019a)",
"ref_id": "BIBREF31"
},
{
"start": 244,
"end": 263,
"text": "(Kim et al., 2017b)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Existing work on glass-box QE is limited to features extracted from statistical MT, such as language model probabilities or number of hypotheses in the n-best list (Blatz et al., 2004; Specia et al., 2013) . The few approaches for unsupervised QE are also inspired by the work on statistical MT and perform significantly worse than supervised approaches (Popovi\u0107, 2012; Moreau and Vogel, 2012; Etchegoyhen et al., 2018) . For example, Etchegoyhen et al. (2018) use lexical translation probabilities from word alignment models and language model probabilities. Their unsupervised approach averages these features to produce the final score. However, it is largely outperformed by the neural-based supervised QE systems .",
"cite_spans": [
{
"start": 164,
"end": 184,
"text": "(Blatz et al., 2004;",
"ref_id": "BIBREF4"
},
{
"start": 185,
"end": 205,
"text": "Specia et al., 2013)",
"ref_id": "BIBREF59"
},
{
"start": 354,
"end": 369,
"text": "(Popovi\u0107, 2012;",
"ref_id": "BIBREF52"
},
{
"start": 370,
"end": 393,
"text": "Moreau and Vogel, 2012;",
"ref_id": "BIBREF45"
},
{
"start": 394,
"end": 419,
"text": "Etchegoyhen et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 422,
"end": 460,
"text": "For example, Etchegoyhen et al. (2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The only works that explore internal information from neural models as an indicator of translation quality rely on the entropy of attention weights in RNN-based NMT systems (Rikters and Fishel, 2017; Yankovskaya et al., 2018) . However, attention-based indicators perform competitively only when combined with other QE features in a supervised framework. Furthermore, this approach is not directly applicable to the SOTA Transformer model that uses multihead attention mechanism. Recent work on attention interpretability showed that attention weights in Transformer networks might not be readily interpretable (Vashishth et al., 2019; Vig and Belinkov, 2019) . Voita et al. (2019) show that different attention heads of Transformer have different functions and some of them are more important than others. This makes it challenging to extract information from attention weights in Transformer (see \u00a75).",
"cite_spans": [
{
"start": 173,
"end": 199,
"text": "(Rikters and Fishel, 2017;",
"ref_id": "BIBREF53"
},
{
"start": 200,
"end": 225,
"text": "Yankovskaya et al., 2018)",
"ref_id": "BIBREF73"
},
{
"start": 611,
"end": 635,
"text": "(Vashishth et al., 2019;",
"ref_id": "BIBREF64"
},
{
"start": 636,
"end": 659,
"text": "Vig and Belinkov, 2019)",
"ref_id": "BIBREF66"
},
{
"start": 662,
"end": 681,
"text": "Voita et al. (2019)",
"ref_id": "BIBREF68"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To the best of our knowledge, our work is the first on glass-box unsupervised QE for NMT that performs competitively with respect to the SOTA supervised systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The performance of QE systems has been typically assessed using the semi-automatic Human-mediated Translation Edit Rate (Snover et al., 2006) metric as gold standard. However, the reliability of this metric for assessing the performance of QE systems has been shown to be questionable . The current practice in MT evaluation is the so-called Direct Assessment (DA) of MT quality (Graham et al., 2015b) , where raters evaluate the MT on a continuous 1-100 scale. This method has been shown to improve the reproducibility of manual evaluation and to provide a more reliable gold standard for automatic evaluation metrics (Graham et al., 2015a) .",
"cite_spans": [
{
"start": 120,
"end": 141,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF57"
},
{
"start": 379,
"end": 401,
"text": "(Graham et al., 2015b)",
"ref_id": "BIBREF22"
},
{
"start": 619,
"end": 641,
"text": "(Graham et al., 2015a)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "QE Datasets",
"sec_num": null
},
{
"text": "DA methodology is currently used for manual evaluation of MT quality at the WMT translation tasks, as well as for assessing the performance of reference-based automatic MT evaluation metrics at the WMT Metrics Task (Bojar et al., 2016 (Bojar et al., , 2017 Ma et al., 2018 Ma et al., , 2019 . Existing datasets with sentence-level DA judgments from the WMT Metrics Task could in principle be used for benchmarking QE systems. However, they contain only a few hundred segments per language pair and thus hardly allow for training supervised systems, as illustrated by the weak correlation results for QE on DA judgments based on the Metrics Task data recently reported by Fonseca et al. (2019) . Furthermore, for each language pair the data contains translations from a number of MT systems often using different architectures, and these MT systems are not readily available, making it impossible for experiments on glass-box QE. Finally, the judgments are either crowd-sourced or collected from task participants and not professional translators, which may hinder the reliability of the labels. We collect a new dataset for QE that addresses these limitations ( \u00a74).",
"cite_spans": [
{
"start": 215,
"end": 234,
"text": "(Bojar et al., 2016",
"ref_id": "BIBREF6"
},
{
"start": 235,
"end": 256,
"text": "(Bojar et al., , 2017",
"ref_id": "BIBREF5"
},
{
"start": 257,
"end": 272,
"text": "Ma et al., 2018",
"ref_id": "BIBREF41"
},
{
"start": 273,
"end": 290,
"text": "Ma et al., , 2019",
"ref_id": "BIBREF42"
},
{
"start": 671,
"end": 692,
"text": "Fonseca et al. (2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "QE Datasets",
"sec_num": null
},
{
"text": "Uncertainty Quantification Uncertainty quantification in NNs is typically addressed using a Bayesian framework where the point estimates of their weights are replaced with probability distributions (MacKay, 1992; Graves, 2011; Welling and Teh, 2011; Tran et al., 2019) . Various approximations have been developed to avoid high training costs of Bayesian NNs, such as Monte Carlo Dropout (Gal and Ghahramani, 2016) or model ensembling (Lakshminarayanan et al., 2017) . The performance of uncertainty quantification methods is commonly evaluated by measuring calibration, that is, the relation between predictive probabilities and the empirical frequencies of the predicted labels, or by assessing generalization of uncertainty under domain shift (see \u00a76).",
"cite_spans": [
{
"start": 198,
"end": 212,
"text": "(MacKay, 1992;",
"ref_id": "BIBREF43"
},
{
"start": 213,
"end": 226,
"text": "Graves, 2011;",
"ref_id": "BIBREF23"
},
{
"start": 227,
"end": 249,
"text": "Welling and Teh, 2011;",
"ref_id": "BIBREF71"
},
{
"start": 250,
"end": 268,
"text": "Tran et al., 2019)",
"ref_id": "BIBREF63"
},
{
"start": 388,
"end": 414,
"text": "(Gal and Ghahramani, 2016)",
"ref_id": "BIBREF17"
},
{
"start": 435,
"end": 466,
"text": "(Lakshminarayanan et al., 2017)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "QE Datasets",
"sec_num": null
},
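{
"text": "For contrast with Monte Carlo dropout, the sketch below (a hedged example under the same assumptions as the earlier dropout sketch, i.e. hypothetical models whose forward pass model(src, tgt_in) returns per-token vocabulary logits) computes the analogous confidence estimate with a deep ensemble: sequence log-probabilities are averaged over independently trained members rather than over stochastic dropout passes.\n\nimport torch\nimport torch.nn.functional as F\n\ndef ensemble_confidence(models, src, tgt_in, tgt_out):\n    samples = []\n    with torch.no_grad():\n        for model in models:\n            # Members run deterministically; diversity comes from independent training.\n            model.eval()\n            log_probs = F.log_softmax(model(src, tgt_in), dim=-1)\n            tok_lp = log_probs.gather(-1, tgt_out.unsqueeze(-1)).squeeze(-1)\n            samples.append(tok_lp.mean())\n    samples = torch.stack(samples)\n    # As with MC dropout: mean = expected confidence, variance = model uncertainty.\n    return samples.mean().item(), samples.var().item()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "QE Datasets",
"sec_num": null
},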
{
"text": "Only a few studies have analyzed calibration in NMT and they came to contradictory conclusions. Kumar and Sarawagi (2019) measure calibration error by comparing model probabilities and the percentage of times NMT output matches reference translation, and conclude that NMT probabilities are poorly calibrated. However, the calibration error metrics they use are designed for binary classification tasks and cannot be easily transferred to NMT (Kuleshov and Liang, 2015) . analyze uncertainty in NMT by comparing predictive probability distributions with the empirical distribution observed in human translation data. They conclude that NMT models are well calibrated. However, this approach is limited by the fact that there are many possible correct translations for a given sentence and only one human translation is available in practice. Although the goal of this paper is to devise an unsupervised solution for the QE task, the analysis presented here provides new insights into calibration in NMT. Different from existing work, we study the relation between model probabilities and human judgments of translation correctness.",
"cite_spans": [
{
"start": 96,
"end": 121,
"text": "Kumar and Sarawagi (2019)",
"ref_id": "BIBREF38"
},
{
"start": 443,
"end": 469,
"text": "(Kuleshov and Liang, 2015)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "QE Datasets",
"sec_num": null
},
{
"text": "Uncertainty quantification methods have been successfully applied to various practical tasks, for example, neural semantic parsing (Dong et al., 2018) , hate speech classification (Miok et al., 2019) , or back-translation for NMT (Wang et al., 2019) . Wang et al. (2019) , whose work is the closest to our work, explore a small set of uncertaintybased metrics to minimize the weight of erroneous synthetic sentence pairs for back translation in NMT. However, improved NMT training with weighted synthetic data does not necessarily imply better prediction of MT quality. In fact, metrics that Wang et al. (2019) report to perform the best for back-translation do not perform well for QE (see \u00a73.2).",
"cite_spans": [
{
"start": 131,
"end": 150,
"text": "(Dong et al., 2018)",
"ref_id": "BIBREF11"
},
{
"start": 180,
"end": 199,
"text": "(Miok et al., 2019)",
"ref_id": "BIBREF44"
},
{
"start": 230,
"end": 249,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF70"
},
{
"start": 252,
"end": 270,
"text": "Wang et al. (2019)",
"ref_id": "BIBREF70"
},
{
"start": 592,
"end": 610,
"text": "Wang et al. (2019)",
"ref_id": "BIBREF70"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "QE Datasets",
"sec_num": null
},
{
"text": "We assume a sequence-to-sequence NMT architecture consisting of encoder-decoder networks using attention (Bahdanau et al., 2015) . The encoder maps the input sequence x = x 1 , . . . , x I into a sequence of hidden states, which is summarized into a single vector using attention mechanism (Bahdanau et al., 2015; Vaswani et al., 2017) . Given this representation the decoder generates an output sequence y = y 1 , . . . , y T of length T . The probability of generating y is factorized as:",
"cite_spans": [
{
"start": 105,
"end": 128,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 290,
"end": 313,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF0"
},
{
"start": 314,
"end": 335,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF65"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised QE for NMT",
"sec_num": "3"
},
{
"text": "p( y| x, \u03b8) = T t=1 p(y t | y