{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:57:34.070473Z"
},
"title": "Unsupervised Quality Estimation for Neural Machine Translation",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {}
},
"email": "m.fomicheva@sheffield.ac.uk"
},
{
"first": "Shuo",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Yankovskaya",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Tartu",
"location": {}
},
"email": "lisa.yankovskaya@ut.ee"
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {}
},
"email": "f.blain@sheffield.ac.uk"
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": "",
"affiliation": {},
"email": "fguzman@fb.com"
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Tartu",
"location": {}
},
"email": "fishel@ut.ee"
},
{
"first": "Nikolaos",
"middle": [],
"last": "Aletras",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {}
},
"email": "n.aletras@sheffield.ac.uk"
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": "",
"affiliation": {},
"email": "vishrav@fb.com"
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {}
},
"email": "l.specia@sheffield.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Quality Estimation (QE) is an important component in making Machine Translation (MT) useful in real-world applications, as it is aimed to inform the user on the quality of the MT output at test time. Existing approaches require large amounts of expert annotated data, computation, and time for training. As an alternative, we devise an unsupervised approach to QE where no training or access to additional resources besides the MT system itself is required. Different from most of the current work that treats the MT system as a black box, we explore useful information that can be extracted from the MT system as a by-product of translation. By utilizing methods for uncertainty quantification, we achieve very good correlation with human judgments of quality, rivaling state-of-the-art supervised QE models. To evaluate our approach we collect the first dataset that enables work on both black-box and glass-box approaches to QE.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Quality Estimation (QE) is an important component in making Machine Translation (MT) useful in real-world applications, as it is aimed to inform the user on the quality of the MT output at test time. Existing approaches require large amounts of expert annotated data, computation, and time for training. As an alternative, we devise an unsupervised approach to QE where no training or access to additional resources besides the MT system itself is required. Different from most of the current work that treats the MT system as a black box, we explore useful information that can be extracted from the MT system as a by-product of translation. By utilizing methods for uncertainty quantification, we achieve very good correlation with human judgments of quality, rivaling state-of-the-art supervised QE models. To evaluate our approach we collect the first dataset that enables work on both black-box and glass-box approaches to QE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With the advent of neural models, Machine Translation (MT) systems have made substantial progress, reportedly achieving near-human quality for high-resource language pairs (Hassan et al., 2018; Barrault et al., 2019) . However, translation quality is not consistent across language pairs, domains, and datasets. This is problematic for low-resource scenarios, where there is not enough training data and translation quality significantly lags behind. Additionally, neural MT (NMT) systems can be deceptive to the end user as they can generate fluent translations that differ in meaning from the original (Bentivogli et al., 2016; Castilho et al., 2017) .",
"cite_spans": [
{
"start": 172,
"end": 193,
"text": "(Hassan et al., 2018;",
"ref_id": "BIBREF26"
},
{
"start": 194,
"end": 216,
"text": "Barrault et al., 2019)",
"ref_id": null
},
{
"start": 604,
"end": 629,
"text": "(Bentivogli et al., 2016;",
"ref_id": "BIBREF2"
},
{
"start": 630,
"end": 652,
"text": "Castilho et al., 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Thus, it is crucial to have a feedback mechanism to inform users about the trustworthiness of a given MT output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Quality estimation (QE) aims to predict the quality of the output provided by an MT system at test time when no gold-standard human translation is available. State-of-the-art (SOTA) QE models require large amounts of parallel data for pretraining and in-domain translations annotated with quality labels for training (Kim et al., 2017a; Fonseca et al., 2019) . However, such large collections of data are only available for a small set of languages in limited domains.",
"cite_spans": [
{
"start": 317,
"end": 336,
"text": "(Kim et al., 2017a;",
"ref_id": "BIBREF33"
},
{
"start": 337,
"end": 358,
"text": "Fonseca et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Current work on QE typically treats the MT system as a black box. In this paper we propose an alternative glass-box approach to QE that allows us to address the task as an unsupervised problem. We posit that encoder-decoder NMT models Bahdanau et al., 2015; Vaswani et al., 2017) offer a rich source of information for directly estimating translation quality: (a) the output probability distribution from the NMT system (i.e., the probabilities obtained by applying the softmax function over the entire vocabulary of the target language); and (b) the attention mechanism used during decoding. Our assumption is that the more confident the decoder is, the higher the quality of the translation.",
"cite_spans": [
{
"start": 235,
"end": 257,
"text": "Bahdanau et al., 2015;",
"ref_id": "BIBREF0"
},
{
"start": 258,
"end": 279,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF65"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While sequence-level probabilities of the top MT hypothesis have been used for confidence estimation in statistical MT (Specia et al., 2013; Blatz et al., 2004) , the output probabilities from deep Neural Networks (NNs) are generally not well calibrated, that is, not representative of the true likelihood of the predictions (Nguyen and O'Connor, 2015; Guo et al., 2017; Lakshminarayanan et al., 2017) . Moreover, softmax output probabilities tend to be overconfident and can assign a large probability mass to predictions that are far from the training data (Gal and Ghahramani, 2016) . To overcome such deficiencies, we propose ways to exploit output distributions beyond the top-1 prediction by exploring uncertainty quantification methods for better probability estimates (Gal and Ghahramani, 2016; Lakshminarayanan et al., 2017) . In our experiments, we account for different factors that can affect the reliability of model probability estimates in NNs, such as model architecture, training, and search (Guo et al., 2017) .",
"cite_spans": [
{
"start": 119,
"end": 140,
"text": "(Specia et al., 2013;",
"ref_id": "BIBREF59"
},
{
"start": 141,
"end": 160,
"text": "Blatz et al., 2004)",
"ref_id": "BIBREF4"
},
{
"start": 325,
"end": 352,
"text": "(Nguyen and O'Connor, 2015;",
"ref_id": "BIBREF46"
},
{
"start": 353,
"end": 370,
"text": "Guo et al., 2017;",
"ref_id": "BIBREF24"
},
{
"start": 371,
"end": 401,
"text": "Lakshminarayanan et al., 2017)",
"ref_id": "BIBREF39"
},
{
"start": 559,
"end": 585,
"text": "(Gal and Ghahramani, 2016)",
"ref_id": "BIBREF17"
},
{
"start": 776,
"end": 802,
"text": "(Gal and Ghahramani, 2016;",
"ref_id": "BIBREF17"
},
{
"start": 803,
"end": 833,
"text": "Lakshminarayanan et al., 2017)",
"ref_id": "BIBREF39"
},
{
"start": 1009,
"end": 1027,
"text": "(Guo et al., 2017)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In addition, we study attention mechanism as another source of information on NMT quality. Attention can be interpreted as a soft alignment, providing an indication of the strength of relationship between source and target words (Bahdanau et al., 2015) . Although this interpretation is straightforward for NMT based on Recurrent Neural Networks (RNN) (Rikters and Fishel, 2017) , its application to current SOTA Transformer models with multihead attention (Vaswani et al., 2017) is challenging. We analyze to what extent meaningful information on translation quality can be extracted from multihead attention.",
"cite_spans": [
{
"start": 229,
"end": 252,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 352,
"end": 378,
"text": "(Rikters and Fishel, 2017)",
"ref_id": "BIBREF53"
},
{
"start": 457,
"end": 479,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF65"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To evaluate our approach in challenging settings, we collect a new dataset for QE with 6 language pairs representing NMT training in high, medium, and low-resource scenarios. To reduce the chance of overfitting to particular domains, our dataset is constructed from Wikipedia documents. We annotate 10K segments per language pair. By contrast to the vast majority of work on QE that uses semi-automatic metrics based on post-editing distance as gold standard, we perform quality labeling based on the Direct Assessment (DA) methodology (Graham et al., 2015b) , which has been widely used for popular MT evaluation campaigns in the recent years. At the same time, the collected data differs from the existing datasets annotated with DA judgments for the well known WMT Metrics task 1 in two important ways: We provide enough data to train supervised QE models and access to the NMT systems used to generate the translations, thus allowing for further exploration of the glass-box unsupervised approach to QE for NMT introduced in this paper.",
"cite_spans": [
{
"start": 536,
"end": 558,
"text": "(Graham et al., 2015b)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our main contributions can be summarized as follows: (i) A new, large-scale dataset for sentence-level 2 QE annotated with DA rather than post-edit ing metrics ( \u00a74); (ii) A set of unsupervised quality indicators that can be produced as a by-product of NMT decoding and a thorough evaluation of how they correlate with human judgments of translation quality ( \u00a73 and \u00a75); (iii) The first attempt at analysing the attention distribution for the purposes of unsupervised QE in Transformer models ( \u00a73 and \u00a75); and (iv) The analysis on how model confidence relates to translation quality for different NMT systems ( \u00a76). Our experiments show that unsupervised QE indicators obtained from well-calibrated NMT model probabilities rival strong supervised SOTA models in terms of correlation with human judgments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "QE QE is typically addressed as a supervised machine learning task where the goal is to predict MT quality in the absence of reference translation. Traditional feature-based approaches relied on manually designed features, extracted from the MT system (glass-box features) or obtained from the source and translated sentences, as well as external resources, such as monolingual or parallel corpora (black-box features) (Specia et al., 2009) .",
"cite_spans": [
{
"start": 419,
"end": 440,
"text": "(Specia et al., 2009)",
"ref_id": "BIBREF60"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Currently, the best performing approaches to QE use NNs to learn useful representations for source and target sentences (Kim et al., 2017b; Wang et al., 2018; Kepler et al., 2019a) . A notable example is the Predictor-Estimator (PredEst) model (Kim et al., 2017b) , which consists of an encoder-decoder RNN (predictor) trained on parallel data for a word prediction task and a unidirectional RNN (estimator) that produces quality estimates leveraging the context representations generated by the predictor. Despite achieving strong performances, neural-based approaches are resource-heavy and require a significant amount of in-domain labeled data for training. They do not use any internal information from the MT system.",
"cite_spans": [
{
"start": 120,
"end": 139,
"text": "(Kim et al., 2017b;",
"ref_id": "BIBREF35"
},
{
"start": 140,
"end": 158,
"text": "Wang et al., 2018;",
"ref_id": "BIBREF69"
},
{
"start": 159,
"end": 180,
"text": "Kepler et al., 2019a)",
"ref_id": "BIBREF31"
},
{
"start": 244,
"end": 263,
"text": "(Kim et al., 2017b)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Existing work on glass-box QE is limited to features extracted from statistical MT, such as language model probabilities or number of hypotheses in the n-best list (Blatz et al., 2004; Specia et al., 2013) . The few approaches for unsupervised QE are also inspired by the work on statistical MT and perform significantly worse than supervised approaches (Popovi\u0107, 2012; Moreau and Vogel, 2012; Etchegoyhen et al., 2018) . For example, Etchegoyhen et al. (2018) use lexical translation probabilities from word alignment models and language model probabilities. Their unsupervised approach averages these features to produce the final score. However, it is largely outperformed by the neural-based supervised QE systems .",
"cite_spans": [
{
"start": 164,
"end": 184,
"text": "(Blatz et al., 2004;",
"ref_id": "BIBREF4"
},
{
"start": 185,
"end": 205,
"text": "Specia et al., 2013)",
"ref_id": "BIBREF59"
},
{
"start": 354,
"end": 369,
"text": "(Popovi\u0107, 2012;",
"ref_id": "BIBREF52"
},
{
"start": 370,
"end": 393,
"text": "Moreau and Vogel, 2012;",
"ref_id": "BIBREF45"
},
{
"start": 394,
"end": 419,
"text": "Etchegoyhen et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 422,
"end": 460,
"text": "For example, Etchegoyhen et al. (2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The only works that explore internal information from neural models as an indicator of translation quality rely on the entropy of attention weights in RNN-based NMT systems (Rikters and Fishel, 2017; Yankovskaya et al., 2018) . However, attention-based indicators perform competitively only when combined with other QE features in a supervised framework. Furthermore, this approach is not directly applicable to the SOTA Transformer model that uses multihead attention mechanism. Recent work on attention interpretability showed that attention weights in Transformer networks might not be readily interpretable (Vashishth et al., 2019; Vig and Belinkov, 2019) . Voita et al. (2019) show that different attention heads of Transformer have different functions and some of them are more important than others. This makes it challenging to extract information from attention weights in Transformer (see \u00a75).",
"cite_spans": [
{
"start": 173,
"end": 199,
"text": "(Rikters and Fishel, 2017;",
"ref_id": "BIBREF53"
},
{
"start": 200,
"end": 225,
"text": "Yankovskaya et al., 2018)",
"ref_id": "BIBREF73"
},
{
"start": 611,
"end": 635,
"text": "(Vashishth et al., 2019;",
"ref_id": "BIBREF64"
},
{
"start": 636,
"end": 659,
"text": "Vig and Belinkov, 2019)",
"ref_id": "BIBREF66"
},
{
"start": 662,
"end": 681,
"text": "Voita et al. (2019)",
"ref_id": "BIBREF68"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To the best of our knowledge, our work is the first on glass-box unsupervised QE for NMT that performs competitively with respect to the SOTA supervised systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The performance of QE systems has been typically assessed using the semi-automatic Human-mediated Translation Edit Rate (Snover et al., 2006) metric as gold standard. However, the reliability of this metric for assessing the performance of QE systems has been shown to be questionable . The current practice in MT evaluation is the so-called Direct Assessment (DA) of MT quality (Graham et al., 2015b) , where raters evaluate the MT on a continuous 1-100 scale. This method has been shown to improve the reproducibility of manual evaluation and to provide a more reliable gold standard for automatic evaluation metrics (Graham et al., 2015a) .",
"cite_spans": [
{
"start": 120,
"end": 141,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF57"
},
{
"start": 379,
"end": 401,
"text": "(Graham et al., 2015b)",
"ref_id": "BIBREF22"
},
{
"start": 619,
"end": 641,
"text": "(Graham et al., 2015a)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "QE Datasets",
"sec_num": null
},
{
"text": "DA methodology is currently used for manual evaluation of MT quality at the WMT translation tasks, as well as for assessing the performance of reference-based automatic MT evaluation metrics at the WMT Metrics Task (Bojar et al., 2016 (Bojar et al., , 2017 Ma et al., 2018 Ma et al., , 2019 . Existing datasets with sentence-level DA judgments from the WMT Metrics Task could in principle be used for benchmarking QE systems. However, they contain only a few hundred segments per language pair and thus hardly allow for training supervised systems, as illustrated by the weak correlation results for QE on DA judgments based on the Metrics Task data recently reported by Fonseca et al. (2019) . Furthermore, for each language pair the data contains translations from a number of MT systems often using different architectures, and these MT systems are not readily available, making it impossible for experiments on glass-box QE. Finally, the judgments are either crowd-sourced or collected from task participants and not professional translators, which may hinder the reliability of the labels. We collect a new dataset for QE that addresses these limitations ( \u00a74).",
"cite_spans": [
{
"start": 215,
"end": 234,
"text": "(Bojar et al., 2016",
"ref_id": "BIBREF6"
},
{
"start": 235,
"end": 256,
"text": "(Bojar et al., , 2017",
"ref_id": "BIBREF5"
},
{
"start": 257,
"end": 272,
"text": "Ma et al., 2018",
"ref_id": "BIBREF41"
},
{
"start": 273,
"end": 290,
"text": "Ma et al., , 2019",
"ref_id": "BIBREF42"
},
{
"start": 671,
"end": 692,
"text": "Fonseca et al. (2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "QE Datasets",
"sec_num": null
},
{
"text": "Uncertainty Quantification Uncertainty quantification in NNs is typically addressed using a Bayesian framework where the point estimates of their weights are replaced with probability distributions (MacKay, 1992; Graves, 2011; Welling and Teh, 2011; Tran et al., 2019) . Various approximations have been developed to avoid high training costs of Bayesian NNs, such as Monte Carlo Dropout (Gal and Ghahramani, 2016) or model ensembling (Lakshminarayanan et al., 2017) . The performance of uncertainty quantification methods is commonly evaluated by measuring calibration, that is, the relation between predictive probabilities and the empirical frequencies of the predicted labels, or by assessing generalization of uncertainty under domain shift (see \u00a76).",
"cite_spans": [
{
"start": 198,
"end": 212,
"text": "(MacKay, 1992;",
"ref_id": "BIBREF43"
},
{
"start": 213,
"end": 226,
"text": "Graves, 2011;",
"ref_id": "BIBREF23"
},
{
"start": 227,
"end": 249,
"text": "Welling and Teh, 2011;",
"ref_id": "BIBREF71"
},
{
"start": 250,
"end": 268,
"text": "Tran et al., 2019)",
"ref_id": "BIBREF63"
},
{
"start": 388,
"end": 414,
"text": "(Gal and Ghahramani, 2016)",
"ref_id": "BIBREF17"
},
{
"start": 435,
"end": 466,
"text": "(Lakshminarayanan et al., 2017)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "QE Datasets",
"sec_num": null
},
{
"text": "Only a few studies have analyzed calibration in NMT and they came to contradictory conclusions. Kumar and Sarawagi (2019) measure calibration error by comparing model probabilities and the percentage of times NMT output matches reference translation, and conclude that NMT probabilities are poorly calibrated. However, the calibration error metrics they use are designed for binary classification tasks and cannot be easily transferred to NMT (Kuleshov and Liang, 2015) . analyze uncertainty in NMT by comparing predictive probability distributions with the empirical distribution observed in human translation data. They conclude that NMT models are well calibrated. However, this approach is limited by the fact that there are many possible correct translations for a given sentence and only one human translation is available in practice. Although the goal of this paper is to devise an unsupervised solution for the QE task, the analysis presented here provides new insights into calibration in NMT. Different from existing work, we study the relation between model probabilities and human judgments of translation correctness.",
"cite_spans": [
{
"start": 96,
"end": 121,
"text": "Kumar and Sarawagi (2019)",
"ref_id": "BIBREF38"
},
{
"start": 443,
"end": 469,
"text": "(Kuleshov and Liang, 2015)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "QE Datasets",
"sec_num": null
},
{
"text": "Uncertainty quantification methods have been successfully applied to various practical tasks, for example, neural semantic parsing (Dong et al., 2018) , hate speech classification (Miok et al., 2019) , or back-translation for NMT (Wang et al., 2019) . Wang et al. (2019) , whose work is the closest to our work, explore a small set of uncertaintybased metrics to minimize the weight of erroneous synthetic sentence pairs for back translation in NMT. However, improved NMT training with weighted synthetic data does not necessarily imply better prediction of MT quality. In fact, metrics that Wang et al. (2019) report to perform the best for back-translation do not perform well for QE (see \u00a73.2).",
"cite_spans": [
{
"start": 131,
"end": 150,
"text": "(Dong et al., 2018)",
"ref_id": "BIBREF11"
},
{
"start": 180,
"end": 199,
"text": "(Miok et al., 2019)",
"ref_id": "BIBREF44"
},
{
"start": 230,
"end": 249,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF70"
},
{
"start": 252,
"end": 270,
"text": "Wang et al. (2019)",
"ref_id": "BIBREF70"
},
{
"start": 592,
"end": 610,
"text": "Wang et al. (2019)",
"ref_id": "BIBREF70"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "QE Datasets",
"sec_num": null
},
{
"text": "We assume a sequence-to-sequence NMT architecture consisting of encoder-decoder networks using attention (Bahdanau et al., 2015) . The encoder maps the input sequence x = x 1 , . . . , x I into a sequence of hidden states, which is summarized into a single vector using attention mechanism (Bahdanau et al., 2015; Vaswani et al., 2017) . Given this representation the decoder generates an output sequence y = y 1 , . . . , y T of length T . The probability of generating y is factorized as:",
"cite_spans": [
{
"start": 105,
"end": 128,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 290,
"end": 313,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF0"
},
{
"start": 314,
"end": 335,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF65"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised QE for NMT",
"sec_num": "3"
},
{
"text": "p( y| x, \u03b8) = T t=1 p(y t | y <t , x, \u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised QE for NMT",
"sec_num": "3"
},
{
"text": "where \u03b8 represents model parameters. The decoder produces the probability distribution p(y t | y <t , x, \u03b8) over the system vocabulary at each time step using the softmax function. The model is trained to minimize cross-entropy loss. We use SOTA Transformers (Vaswani et al., 2017) for the encoder and decoder in our experiments.",
"cite_spans": [
{
"start": 259,
"end": 281,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF65"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised QE for NMT",
"sec_num": "3"
},
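To make the factorization concrete, the following is a minimal PyTorch sketch (not the authors' code) of how the sequence-level probability can be assembled from per-step decoder outputs; `logits` and `target_ids` are hypothetical stand-ins for the decoder scores and the generated token sequence.

```python
import torch
import torch.nn.functional as F

def sequence_log_prob(logits: torch.Tensor, target_ids: torch.Tensor) -> torch.Tensor:
    """Compute log p(y | x, theta) = sum_t log p(y_t | y_<t, x, theta).

    logits: [T, V] unnormalized decoder scores over the vocabulary at each step.
    target_ids: [T] indices of the generated tokens y_1 .. y_T.
    """
    log_probs = F.log_softmax(logits, dim=-1)  # per-step softmax distributions
    token_log_probs = log_probs.gather(1, target_ids.unsqueeze(1)).squeeze(1)
    return token_log_probs.sum()  # log of the factorized product

# Toy example: T = 3 decoding steps over a vocabulary of size V = 5.
logits = torch.randn(3, 5)
target_ids = torch.tensor([2, 0, 4])
print(sequence_log_prob(logits, target_ids))
```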
{
"text": "In what follows, we propose unsupervised quality indicators based on: (i) output probability distribution obtained either from a standard deter-ministic NMT ( \u00a73.1) or (ii) using uncertainty quantification ( \u00a73.2), and (iii) attention weights ( \u00a73.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised QE for NMT",
"sec_num": "3"
},
{
"text": "We start by defining a simple QE measure based on sequence-level translation probability normalized by length:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting the Softmax Distribution",
"sec_num": "3.1"
},
{
"text": "TP = 1 T T t=1 log p(y t | y <t , x, \u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting the Softmax Distribution",
"sec_num": "3.1"
},
{
"text": "However, 1-best probability estimates from the softmax output distribution may tend towards overconfidence, which would result in high probability for unreliable MT outputs. We propose two metrics that exploit output probability distribution beyond the average of top-1 predictions. First, we compute the entropy of softmax output distribution over target vocabulary of size V at each decoding step and take an average to obtain a sentence-level measure:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting the Softmax Distribution",
"sec_num": "3.1"
},
{
"text": "Softmax-Ent = \u2212 1 T T t=1 V v=1 p(y v t ) log p(y v t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting the Softmax Distribution",
"sec_num": "3.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting the Softmax Distribution",
"sec_num": "3.1"
},
{
"text": "p(y t ) represents the conditional distribu- tion p(y t | x, y <t , \u03b8).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting the Softmax Distribution",
"sec_num": "3.1"
},
{
"text": "If most of the probability mass is concentrated on a few vocabulary words, the generated target word is likely to be correct. By contrast, if softmax probabilities approach a uniform distribution picking any word from the vocabulary is equally likely and the quality of the resulting translation is expected to be low.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting the Softmax Distribution",
"sec_num": "3.1"
},
{
"text": "Second, we hypothesize that the dispersion of probabilities of individual words might provide useful information that is inevitably lost when taking an average. Consider, as an illustration, that the sequences of word probabilities [0.1, 0.9] and [0.5, 0.5] have the same mean, but might indicate very different behavior of the NMT system, and consequently, different output quality. To formalize this intuition we compute the standard deviation of word-level log-probabilities,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting the Softmax Distribution",
"sec_num": "3.1"
},
{
"text": "Sent-Std = E[P 2 ] \u2212 (E[P]) 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting the Softmax Distribution",
"sec_num": "3.1"
},
{
"text": "where P = p(y 1 ), . . . , p(y T ) represents wordlevel log-probabilities for a given sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting the Softmax Distribution",
"sec_num": "3.1"
},
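A minimal sketch of the three softmax-based indicators defined above, assuming the per-step distributions have already been extracted from the decoder (the `probs` matrix and `target_ids` are hypothetical inputs; the paper does not prescribe an implementation):

```python
import numpy as np

def softmax_metrics(probs: np.ndarray, target_ids: np.ndarray):
    """probs: [T, V] softmax distribution at each decoding step;
    target_ids: [T] indices of the generated tokens."""
    T = probs.shape[0]
    token_log_probs = np.log(probs[np.arange(T), target_ids])
    tp = token_log_probs.mean()  # TP: length-normalized log-probability
    softmax_ent = -(probs * np.log(probs)).sum(axis=1).mean()  # Softmax-Ent
    sent_std = token_log_probs.std()  # Sent-Std: sqrt(E[P^2] - E[P]^2)
    return tp, softmax_ent, sent_std

# Toy example: two decoding steps over a three-word vocabulary.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
print(softmax_metrics(probs, np.array([0, 1])))
```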
{
"text": "It has been argued in recent work that deep neural networks do not properly represent model uncertainty (Gal and Ghahramani, 2016; Lakshminarayanan et al., 2017) . Uncertainty quantification in deep learning typically relies on the Bayesian formalism (MacKay, 1992; Graves, 2011; Welling and Teh, 2011; Gal and Ghahramani, 2016; Tran et al., 2019) . Bayesian NNs learn a posterior distribution over parameters that quantifies model or epistemic uncertainty, i.e., our lack of knowledge as to which model generated the training data. 3 Bayesian NNs usually come with prohibitive computational costs and various approximations have been developed to alleviate this. In this paper we explore the Monte Carlo (MC) dropout (Gal and Ghahramani, 2016) .",
"cite_spans": [
{
"start": 104,
"end": 130,
"text": "(Gal and Ghahramani, 2016;",
"ref_id": "BIBREF17"
},
{
"start": 131,
"end": 161,
"text": "Lakshminarayanan et al., 2017)",
"ref_id": "BIBREF39"
},
{
"start": 251,
"end": 265,
"text": "(MacKay, 1992;",
"ref_id": "BIBREF43"
},
{
"start": 266,
"end": 279,
"text": "Graves, 2011;",
"ref_id": "BIBREF23"
},
{
"start": 280,
"end": 302,
"text": "Welling and Teh, 2011;",
"ref_id": "BIBREF71"
},
{
"start": 303,
"end": 328,
"text": "Gal and Ghahramani, 2016;",
"ref_id": "BIBREF17"
},
{
"start": 329,
"end": 347,
"text": "Tran et al., 2019)",
"ref_id": "BIBREF63"
},
{
"start": 718,
"end": 744,
"text": "(Gal and Ghahramani, 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifying Uncertainty",
"sec_num": "3.2"
},
{
"text": "Dropout is a method introduced by Srivastava et al. 2014to reduce overfitting when training neural models. It consists in randomly masking neurons to zero based on a Bernoulli distribution. Gal and Ghahramani (2016) use dropout at test time before every weight layer. They perform several forward passes through the network and collect posterior probabilities generated by the model with parameters perturbed by dropout. Mean and variance of the resulting distribution can then be used to represent model uncertainty.",
"cite_spans": [
{
"start": 190,
"end": 215,
"text": "Gal and Ghahramani (2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifying Uncertainty",
"sec_num": "3.2"
},
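In a framework such as PyTorch this amounts to keeping only the dropout modules in training mode at inference time; a minimal sketch under that assumption (not the authors' implementation):

```python
import torch.nn as nn

def enable_mc_dropout(model: nn.Module) -> None:
    """Switch dropout layers back to training mode so that every forward
    pass samples a fresh dropout mask, leaving all other layers in eval mode."""
    model.eval()
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()
```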
{
"text": "We propose two flavors of MC dropout-based measures for unsupervised QE. First, we compute the expectation and variance for the set of sentencelevel probability estimates obtained by running N stochastic forward passes through the MT model with model parameters\u03b8 perturbed by dropout:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifying Uncertainty",
"sec_num": "3.2"
},
{
"text": "D-TP = 1 N N n=1 TP\u03b8 n D-Var = E[TP 2 \u03b8 ] \u2212 (E[TP\u03b8]) 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifying Uncertainty",
"sec_num": "3.2"
},
{
"text": "where TP is sentence-level probability as defined in \u00a73.1. We also look at a combination of the two:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifying Uncertainty",
"sec_num": "3.2"
},
{
"text": "D-Combo = 1 \u2212 D-TP D-Var",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifying Uncertainty",
"sec_num": "3.2"
},
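A sketch of the three dropout-based indicators; `score_with_dropout` is a hypothetical callable that runs one stochastic forward pass (with dropout enabled, as above) and returns the TP of the fixed MT output:

```python
import numpy as np

def mc_dropout_metrics(score_with_dropout, n_passes: int = 30):
    """Expectation, variance, and the combined indicator over N
    dropout-perturbed estimates of the sentence-level probability TP."""
    tps = np.array([score_with_dropout() for _ in range(n_passes)])
    d_tp = tps.mean()               # D-TP
    d_var = tps.var()               # D-Var
    d_combo = (1.0 - d_tp) / d_var  # D-Combo, following the formula above
    return d_tp, d_var, d_combo
```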
{
"text": "We note that these metrics have also been used by Wang et al. (2019) , but with the purpose of minimizing the effect of low-quality outputs on NMT training with back translations.",
"cite_spans": [
{
"start": 50,
"end": 68,
"text": "Wang et al. (2019)",
"ref_id": "BIBREF70"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifying Uncertainty",
"sec_num": "3.2"
},
{
"text": "Second, we measure lexical variation between the MT outputs generated for the same source segment when running inference with dropout. We posit that differences between likely MT hypotheses may also capture uncertainty and potential ambiguity and complexity of the original sentence. We compute an average similarity score (sim) between the set H of translation hypotheses:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifying Uncertainty",
"sec_num": "3.2"
},
{
"text": "D-Lex-Sim = 1 C |H| i=1 |H| j=1 sim(h i , h j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifying Uncertainty",
"sec_num": "3.2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifying Uncertainty",
"sec_num": "3.2"
},
{
"text": "h i , h j \u2208 H, i = j and C = 2 \u22121 |H|(|H| \u2212 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifying Uncertainty",
"sec_num": "3.2"
},
{
"text": "is the number of pairwise comparisons for |H| hypotheses. We use Meteor (Denkowski and Lavie, 2014) to compute similarity scores.",
"cite_spans": [
{
"start": 72,
"end": 99,
"text": "(Denkowski and Lavie, 2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifying Uncertainty",
"sec_num": "3.2"
},
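A sketch of D-Lex-Sim over the hypotheses produced by the dropout-perturbed decoding runs; for illustration, sentence-level BLEU from sacrebleu stands in for the Meteor scorer used in the paper:

```python
from itertools import combinations
from sacrebleu import sentence_bleu  # illustrative stand-in for Meteor

def d_lex_sim(hypotheses):
    """Average pairwise similarity over |H| hypotheses, with
    C = |H|(|H| - 1)/2 pairwise comparisons."""
    pairs = list(combinations(hypotheses, 2))
    # The similarity is made symmetric by averaging both directions.
    sims = [(sentence_bleu(a, [b]).score + sentence_bleu(b, [a]).score) / 2
            for a, b in pairs]
    return sum(sims) / len(pairs)
```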
{
"text": "Attention weights represent the strength of connection between source and target tokens, which may be indicative of translation quality (Rikters and Fishel, 2017) . One way to measure it is to compute the entropy of the attention distribution:",
"cite_spans": [
{
"start": 136,
"end": 162,
"text": "(Rikters and Fishel, 2017)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "3.3"
},
{
"text": "Att-Ent = \u2212 1 I I i=1 J j=1 \u03b1 ji log \u03b1 ji",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "3.3"
},
{
"text": "where \u03b1 represents attention weights, I is the number of target tokens and J is the number of source tokens. This mechanism can be applied to any NMT model with encoder-decoder attention. We focus on attention in Transformer models, as it is currently the most widely used NMT architecture. Transformers rely on various types of attention, multiple attention heads, and multiple encoder and decoder layers. Encoder-decoder attention weights are computed for each head (H) and for each layer (L) of the decoder, as a result we get [H \u00d7 L] matrices with attention weights. It is not clear which combination would give the best results for QE. To summarize the information from different heads and layers, we propose to compute the entropy scores for each possible head/layer combination and then choose the minimum value or compute the average:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "3.3"
},
{
"text": "AW:Ent-Min = min {hl} (Att-Ent hl ) AW:Ent-Avg = 1 H \u00d7 L H h=1 L l=1 Att-Ent hl 4 Multilingual Dataset for QE",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "3.3"
},
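A sketch of the attention-based indicators, assuming the encoder-decoder attention has been collected into a hypothetical array `attn` of shape [L, H, I, J] (layers, heads, target tokens, source tokens):

```python
import numpy as np

def attention_entropy_metrics(attn: np.ndarray):
    """attn: [L, H, I, J] attention weights, where attn[l, h, i] is a
    distribution over the J source tokens for target position i."""
    # Att-Ent per layer/head: entropy over source positions,
    # averaged over the I target positions.
    ent = -(attn * np.log(attn)).sum(axis=-1).mean(axis=-1)  # [L, H]
    return ent.min(), ent.mean()  # AW:Ent-Min, AW:Ent-Avg
```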
{
"text": "The quality of NMT translations is strongly affected by the amount of training data. To study our unsupervised QE indicators under different conditions, we collected data for 6 language pairs that includes high-, medium-, and low-resource conditions. To add diversity, we varied the directions into and out-of English, when permitted by the availability of expert annotators into non-English languages. Thus our dataset is composed by the high-resource English-German (En-De) and English-Chinese (En-Zh) pairs; by the mediumresource Romanian-English (Ro-En) and Estonian-English (Et-En) pairs; and by the low-resource Sinhala-English (Si-En) and Nepali-English (Ne-En) pairs. The dataset contains sentences extracted from Wikipedia and the MT outputs manually annotated for quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "3.3"
},
{
"text": "We follow the sampling process outlined in FLORES . First, we sampled documents from Wikipedia for English, Estonian, Romanian, Sinhala, and Nepali. Second, we selected the top 100 documents containing the largest number of sentences that are: (i) in the intended source language according to a language-id classifier 4 and (ii) have the length between 50 and 150 characters. In addition, we filtered out sentences that have been released as part of recent Wikipedia parallel corpora (Schwenk et al., 2019) , ensuring that our dataset is not part of parallel data commonly used for NMT training.",
"cite_spans": [
{
"start": 484,
"end": 506,
"text": "(Schwenk et al., 2019)",
"ref_id": "BIBREF54"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document and Sentence Sampling",
"sec_num": null
},
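A sketch of the two sentence-level filtering criteria; the fastText language-id model is used here as a hypothetical stand-in for the classifier referenced above, and the model path is illustrative:

```python
import fasttext  # pip install fasttext; lid.176.bin from the fastText website

lid = fasttext.load_model("lid.176.bin")  # illustrative path to a LID model

def keep_sentence(sentence: str, lang: str = "et") -> bool:
    """Keep sentences that (i) are in the intended source language and
    (ii) are between 50 and 150 characters long."""
    if not 50 <= len(sentence) <= 150:
        return False
    (label,), _ = lid.predict(sentence.replace("\n", " "))
    return label == f"__label__{lang}"
```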
{
"text": "For every language, we randomly selected 10K sentences from the sampled documents and then translated them into English using the MT models described below. For German and Chinese we selected 20K sentences from the top 100 documents in English Wikipedia. To ensure sufficient representation of high-and low-quality translations for high-resource language pairs, we selected the sentences with minimal lexical overlap with respect to the NMT training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document and Sentence Sampling",
"sec_num": null
},
{
"text": "NMT systems For medium-and high-resource language pairs we trained the MT models based on the standard Transformer architecture (Vaswani et al., 2017) and followed the implementation details described in Ott et al. (2018b) . We used publicly available MT datasets such as Paracrawl (Espl\u00e0 et al., 2019) and Europarl (Koehn, 2005) .",
"cite_spans": [
{
"start": 128,
"end": 150,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF65"
},
{
"start": 204,
"end": 222,
"text": "Ott et al. (2018b)",
"ref_id": "BIBREF50"
},
{
"start": 282,
"end": 302,
"text": "(Espl\u00e0 et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 316,
"end": 329,
"text": "(Koehn, 2005)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document and Sentence Sampling",
"sec_num": null
},
{
"text": "Si-En and Ne-En MT systems were trained based on Big-Transformer architecture as defined in Vaswani et al. (2017) . For the low-resource language pairs, the models were trained following the FLORES semi-supervised setting , 5 which involves two iterations of backtranslation using the source and the target monolingual data. Table 1 specifies the amount of data used for training.",
"cite_spans": [
{
"start": 92,
"end": 113,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF65"
}
],
"ref_spans": [
{
"start": 325,
"end": 332,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Document and Sentence Sampling",
"sec_num": null
},
{
"text": "We followed the FLORES setup , which presents a form of DA (Graham et al., 2013) . The annotators are asked to rate each sentence from 0-100 according to the perceived translation quality. Specifically, the 0-10 range represents an incorrect translation; 11-29, a translation with few correct keywords, but the overall meaning is different from the source; 30-50, a translation with major mistakes; 51-69, a translation which is understandable and conveys the overall meaning of the source but contains typos or grammatical errors; 70-90, a translation that closely preserves the semantics of the source sentence; and 91-100, a perfect translation.",
"cite_spans": [
{
"start": 59,
"end": 80,
"text": "(Graham et al., 2013)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DA Judgments",
"sec_num": null
},
{
"text": "Each segment was evaluated independently by three professional translators from a single language service provider. To improve annotation consistency, any evaluation in which the range of scores among the raters was above 30 points was rejected, and an additional rater was requested to replace the most diverging translation rating until convergence was achieved. To further increase the reliability of the test and development partitions of the dataset, we requested an additional set of three annotations from a different group of annotators (i.e., from another language service provider) following the same annotation protocol, thus resulting in a total of six annotations per segment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DA Judgments",
"sec_num": null
},
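The rejection protocol can be read as a simple convergence loop; a sketch under the assumption that `request_new_rating` is a hypothetical call that obtains a replacement score from an additional rater, and interpreting the most diverging rating as the one furthest from the mean:

```python
def converge_ratings(ratings, request_new_rating, max_range=30):
    """Replace the most diverging rating until the spread across
    raters is at most max_range points."""
    while max(ratings) - min(ratings) > max_range:
        mean = sum(ratings) / len(ratings)
        # Drop the score furthest from the mean and ask for a new one.
        ratings.remove(max(ratings, key=lambda r: abs(r - mean)))
        ratings.append(request_new_rating())
    return ratings
```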
{
"text": "Raw human scores were converted into zscores, that is, standardized according to each individual annotator's overall mean and standard deviation. The scores collected for each segment were averaged to obtain the final score. Such setting allows for the fact that annotators may genuinely disagree on some aspects of quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DA Judgments",
"sec_num": null
},
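A sketch of the standardization step, assuming a hypothetical mapping `raw_scores` from annotator id to the list of raw 0-100 scores produced by that annotator:

```python
import numpy as np

def z_standardize(raw_scores: dict) -> dict:
    """Standardize each annotator's raw DA scores by that annotator's
    own overall mean and standard deviation."""
    return {
        annotator: (np.asarray(scores) - np.mean(scores)) / np.std(scores)
        for annotator, scores in raw_scores.items()
    }

# The final score of a segment is then the average of the z-scores
# assigned to it by the different annotators.
```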
{
"text": "In Table 1 we show a summary of the statistics from human annotations. Besides the NMT training corpus size and the distribution of the DA scores for each language pair, we report mean 37.7 23.3 33.7 49.0 11.5 5.9 Table 1 : Multilingual QE dataset. Size of the NMT training corpus (size) and summary statistics for the raw DA scores (average, 25th percentile, median, and 75th percentile). As an indicator of annotators' consistency, the last two columns show the mean (avg) and standard deviation (std) of the absolute differences (diff) between the scores assigned by different annotators to the same segment.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": null
},
{
"start": 214,
"end": 221,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "DA Judgments",
"sec_num": null
},
{
"text": "and standard deviation of the average differences between the scores assigned by different annotators to each segment, as an indicator of annotation consistency. First, we observe that, as expected, the amount of training data per language pair correlates with the average quality of an NMT system. Second, we note that the distribution of human scores changes substantially across language pairs. In particular, we see very little variability in quality for En-De, which makes QE for this language pair especially challenging (see \u00a75). Finally, as shown in the right-most columns, annotation consistency is similar across language pairs and comparable to existing work that follows DA methodology for data collection. For example, Graham et al. (2013) report an average difference of 25 across annotators' scores.",
"cite_spans": [
{
"start": 732,
"end": 752,
"text": "Graham et al. (2013)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DA Judgments",
"sec_num": null
},
{
"text": "Data Splits To enable comparison between supervised and unsupervised approaches to QE, we split the data into 7K training partition, 1K development set, and two test sets of 1K sentences each. One of these test sets is used for the experiments in this paper, the other is kept blind for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DA Judgments",
"sec_num": null
},
{
"text": "Additional Data To support our discussion of the effect of NMT training on the correlation between predictive probabilities and perceived translation quality presented in \u00a76, we trained various alternative NMT system variants, translated and annotated 400 original Estonian sentences from our test set with each system variant. The data, the NMT models, and the DA judgments are available at https://github. com/facebookresearch/mlqe.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DA Judgments",
"sec_num": null
},
{
"text": "Below we analyze how our unsupervised QE indicators correlate with human judgments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "Benchmark Supervised QE Systems We compare the performance of the proposed unsupervised QE indicators against the best performing supervised approaches with available open-source implementation, namely, the Predictor-Estimator (PredEst) architecture (Kim et al., 2017b) provided by OpenKiwi toolkit (Kepler et al., 2019b) , and an improved version of the BiRNN model provided by DeepQuest toolkit (Ive et al., 2018) , which we refer to as BERT-BiRNN (Blain et al., 2020) .",
"cite_spans": [
{
"start": 250,
"end": 269,
"text": "(Kim et al., 2017b)",
"ref_id": "BIBREF35"
},
{
"start": 299,
"end": 321,
"text": "(Kepler et al., 2019b)",
"ref_id": "BIBREF32"
},
{
"start": 397,
"end": 415,
"text": "(Ive et al., 2018)",
"ref_id": "BIBREF29"
},
{
"start": 450,
"end": 470,
"text": "(Blain et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "5.1"
},
{
"text": "PredEst. We trained PredEst models (see \u00a72) using the same parameters as in the default configurations provided by Kepler et al. (2019b) . Predictor models were trained for 6 epochs on the same training and development data as the NMT systems, while the Estimator models were trained for 10 epochs on the training and development sets of our dataset (see \u00a74). Unlike Kepler et al. (2019b) , the Estimator was not trained using multitask learning, as our dataset currently does not contain any word-level annotation. We use the model corresponding to the best epoch as identified by the metric of reference on the development set: perplexity for the Predictor and Pearson correlation for the Estimator.",
"cite_spans": [
{
"start": 115,
"end": 136,
"text": "Kepler et al. (2019b)",
"ref_id": "BIBREF32"
},
{
"start": 367,
"end": 388,
"text": "Kepler et al. (2019b)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "5.1"
},
{
"text": "BERT-BiRNN. This model, similarly to the recent SOTA QE systems (Kepler et al., 2019a) , uses a large-scale pre-trained BERT model to obtain token-level representations that are then fed into two independent bidirectional RNNs to encode both the source sentence and its translation independently. The two resulting sentence representations are then concatenated as a weighted sum of their word vectors, using an attention mechanism. The final sentence-level representation is then fed to a sigmoid layer to produce the sentence-level quality estimates. During training, BERT was fine-tuned by unfreezing the weights of the last four layers along with the embedding layer. We used early stopping based on Pearson correlation on the development set, with a patience of 5.",
"cite_spans": [
{
"start": 64,
"end": 86,
"text": "(Kepler et al., 2019a)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "5.1"
},
{
"text": "Unsupervised QE For the dropout-based indicators (see \u00a73.2), we use dropout rate of 0.3, the same as for training the NMT models (see \u00a74). We perform N = 30 inference passes to obtain the posterior probability distribution. N was chosen following the experiments in related work (Dong et al., 2018; Wang et al., 2019) . However, we note that increasing N beyond 10 results in very small improvements on the development set. The implementation of stochastic decoding with MC dropout is available as part of the fairseq toolkit at https://github.com/ pytorch/fairseq. Table 2 shows Pearson correlation with DA for our unsupervised QE indicators and for the supervised QE systems. Unsupervised QE indicators are grouped as follows: Group I corresponds to the measurements obtained with standard decoding ( \u00a73.1); Group II contains indicators computed using MC dropout ( \u00a73.2); and Group III contains the results for attention-based indicators ( \u00a73.3). Group IV corresponds to the supervised QE models presented in \u00a75.1. We use the Hotelling-Williams test to compute significance of the difference between dependent correlations (Williams, 1959) with p-value < 0.05. For each language pair, results that are not significantly outperformed by any method are marked in bold; results that are not significantly outperformed by any other method from the same group are underlined.",
"cite_spans": [
{
"start": 279,
"end": 298,
"text": "(Dong et al., 2018;",
"ref_id": "BIBREF11"
},
{
"start": 299,
"end": 317,
"text": "Wang et al., 2019)",
"ref_id": "BIBREF70"
},
{
"start": 1125,
"end": 1141,
"text": "(Williams, 1959)",
"ref_id": "BIBREF72"
}
],
"ref_spans": [
{
"start": 566,
"end": 573,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Settings",
"sec_num": "5.1"
},
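For reference, a sketch of the Hotelling-Williams test for two dependent correlations that share the human scores as the common variable, following the standard formulation of Williams (1959); r12 and r13 are the metric-human correlations, r23 the metric-metric correlation, and n the sample size:

```python
import numpy as np
from scipy.stats import t as t_dist

def williams_test(r12: float, r13: float, r23: float, n: int) -> float:
    """One-tailed p-value for H0: r12 <= r13 with dependent correlations."""
    K = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23
    denom = np.sqrt(2 * K * (n - 1) / (n - 3)
                    + ((r12 + r13) ** 2 / 4) * (1 - r23) ** 3)
    t_stat = (r12 - r13) * np.sqrt((n - 1) * (1 + r23)) / denom
    return 1 - t_dist.cdf(t_stat, df=n - 3)
```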
{
"text": "We observe that the simplest measure that can be extracted from NMT, sequence-level probability (TP), already performs competitively, in particular for the medium-resource language pairs. TP is consistently outperformed by D-TP, indicating that NMT output probabilities are not well calibrated. This confirms our hypothesis that estimating model uncertainty improves correlation with perceived translation quality. Furthermore, our approach performs competitively with strong supervised QE models. Dropout-based indicators significantly outperform PredEst and rival BERT-BiRNN for four language pairs. 6 These results position the proposed unsupervised QE methods as an attractive alternative to the supervised approach in the scenario where the NMT model used to generate the translations can be accessed.",
"cite_spans": [
{
"start": 602,
"end": 603,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation with Human Judgments",
"sec_num": "5.2"
},
{
"text": "For both unsupervised and supervised methods performance varies considerably across language pairs. The highest correlation is achieved for the medium-resource languages, whereas for highresource language pairs it is drastically lower. The main reason for this difference is a lower variability in translation quality for high-resource language pairs. Figure 2 shows scatter plots for Ro-En, which has the best correlation results, and En-De with the lowest correlation for all quality indicators. Ro-En has a substantial number of high-quality sentences, but the rest of the translations are uniformly distributed across the quality range. The distribution for En-De is highly skewed, as the vast majority of the translations are of high quality. In this case capturing meaningful variation appears to be more challenging, as the differences reflected by the DA may be more subtle than any of the QE methods is able to reveal.",
"cite_spans": [],
"ref_spans": [
{
"start": 352,
"end": 360,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Correlation with Human Judgments",
"sec_num": "5.2"
},
{
"text": "The reason for a lower correlation for Sinhala and Nepalese is different. For unsupervised indicators it can be due to the difference in model capacity 7 and the amount of training data. On the one hand, increasing depth and width of the model may negatively affect calibration (Guo et al., 2017) . On the other hand, due to the small amount of training data the model can overfit, resulting in inferior results both in terms of translation quality and correlation. It is noteworthy, however, that supervised QE system suffers a larger drop in performance than unsupervised indicators, as its predictor component requires large amounts of parallel data for training. We suggest, therefore, that unsupervised QE is more stable in lowresource scenarios than supervised approaches. We now look in more detail at the three groups of unsupervised measurements in Table 2 .",
"cite_spans": [
{
"start": 278,
"end": 296,
"text": "(Guo et al., 2017)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 858,
"end": 865,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Correlation with Human Judgments",
"sec_num": "5.2"
},
{
"text": "Group I Average entropy of the softmax output (Softmax-Ent) and dispersion of the values of token-level probabilities (Sent-Std) achieve a significantly higher correlation than TP metric for four language pairs. Softmax-Ent captures uncertainty of the output probability distribution, which appears to be a more accurate reflection of the overall translation quality. Sent-Std captures a pattern in the sequence of token-level probabilities that helps detect low-quality translation illustrated in Figure 1 . Figure 1 shows two Et-En translations that have drastically different absolute DA scores of 62 and 1, but the difference in their sentencelevel log-probability is negligible: \u22120.50 and \u22120.48 for the first and second translations, respectively. By contrast, the sequences of tokenlevel probabilities are very different, as the second sentence has larger variation in the logprobabilities for adjacent words, with very high probabilities for high-frequency function words and low probabilities for content words.",
"cite_spans": [],
"ref_spans": [
{
"start": 498,
"end": 506,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 509,
"end": 517,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Correlation with Human Judgments",
"sec_num": "5.2"
},
{
"text": "Group II The best results are achieved by the D-Lex-Sim and D-TP metrics. Interestingly, D-Var has a much lower correlation, because Low Quality",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation with Human Judgments",
"sec_num": "5.2"
},
{
"text": "Tanganjikast p\u00fc\u00fctakse niiluse ahvenat ja kapentat.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Original",
"sec_num": null
},
{
"text": "Nile perch and kapenta are fished from Lake Tanganyika. MT Output There is a silver thread and candle from Tanzeri.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference",
"sec_num": null
},
{
"text": "There will be a silver thread and a penny from Tanzer. There is an attempt at a silver greed and a carpenter from Tanzeri. There will be a silver bullet and a candle from Tanzer. The puzzle is being caught in the chicken's gavel and the coffin.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dropout",
"sec_num": null
},
{
"text": "Original Siis aga v\u00f5ib tekkida seesmise ja v\u00e4lise vaate vahele l\u00f5he.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "High Quality",
"sec_num": null
},
{
"text": "This could however lead to a split between the inner and outer view. MT Output Then there may be a split between internal and external viewpoints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference",
"sec_num": null
},
{
"text": "Then, however, there may be a split between internal and external viewpoints. Then, however, there may be a gap between internal and external viewpoints. Then there may be a split between internal and external viewpoints. Then there may be a split between internal and external viewpoints. by only capturing variance it ignores the actual probability estimate assigned by the model to the given output. 8 Table 3 provides an illustration of how model uncertainty captured by MC dropout reflects the quality of MT output. The first example contains a low quality translation, with a high variability in MT hypotheses obtained with MC dropout. By contrast, MC dropout hypotheses for the second high-quality example are very similar and, in fact, constitute valid linguistic paraphrases of each other. This fact is directly exploited by the D-Lex-Sim metric that measures the variability between MT hypotheses generated with perturbed model parameters and performs on pair with D-TP. Besides capturing model uncertainty, D-Lex-Sim reflects the potential complexity of the source segments, as the number of different possible translations of the sentences is an indicator of their inherent ambiguity. 9",
"cite_spans": [],
"ref_spans": [
{
"start": 405,
"end": 412,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dropout",
"sec_num": null
},
{
"text": "Group III While our attention-based metrics also achieve a sensible correlation with human judgments, it is considerably lower than the rest of the unsupervised indicators. Attention may not provide enough information to be used as a quality indicator of its own, since there is no direct mapping between words in different languages, and, therefore, high entropy in attention weights does not necessarily indicate low translation quality. We leave experiments with combined attention and probability-based measures to future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dropout",
"sec_num": null
},
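{
"text": "As a minimal sketch, assuming the attention weights for a sentence are available as a target-by-source matrix whose rows are probability distributions over source tokens, the entropy-based indicator can be computed as follows:

import numpy as np

def attention_entropy(attn):
    # attn: (target_len, source_len); each row is the attention
    # distribution over source tokens for one target token.
    attn = np.clip(attn, 1e-12, 1.0)
    token_entropies = -(attn * np.log(attn)).sum(axis=-1)
    # Sentence-level indicator: mean entropy over target positions;
    # higher entropy means more dispersed (less confident) attention.
    return float(token_entropies.mean())

# Uniform (maximally dispersed) vs. sharply peaked attention:
print(attention_entropy(np.full((1, 4), 0.25)))                 # ~1.386
print(attention_entropy(np.array([[0.94, 0.02, 0.02, 0.02]])))  # ~0.293",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation with Human Judgments",
"sec_num": "5.2"
},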
{
"text": "The use of multihead attention with multiple layers in Transformer may also negatively affect the results. As shown by Voita et al. (2019) , different attention heads are responsible for different functions. Therefore, combining the information coming from different heads and layers in a simple way may not be an optimal solution. To test whether this is the case, we computed attention entropy and its correlation with DA for all possible combinations of heads and layers. As shown in Table 2 , the best head/layer combination (AW : best head/layer) indeed significantly outperforms other attention-based measurements for all language pairs suggesting that this method should be preferred over simple averaging. Using the best head/layer combination for QE is limited by the fact that it requires validation on a dataset annotated with DA and thus is not fully unsupervised. This outcome opens an interesting direction for further experiments to automatically discover the best possible head/layer combination.",
"cite_spans": [
{
"start": 119,
"end": 138,
"text": "Voita et al. (2019)",
"ref_id": "BIBREF68"
}
],
"ref_spans": [
{
"start": 487,
"end": 494,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dropout",
"sec_num": null
},
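{
"text": "A sketch of the selection step, assuming per-segment attention entropies have been pre-computed for every layer/head combination and a DA-annotated validation set is available:

import numpy as np
from scipy.stats import pearsonr

def best_head_layer(entropies, da_scores):
    # entropies: (n_layers, n_heads, n_segments) per-segment attention
    # entropies; da_scores: (n_segments,) human DA judgments.
    # Exhaustively score every head/layer combination; this validation
    # step is what makes the variant no longer fully unsupervised.
    n_layers, n_heads, _ = entropies.shape
    scores = {
        (l, h): pearsonr(entropies[l, h], da_scores)[0]
        for l in range(n_layers) for h in range(n_heads)
    }
    return max(scores, key=lambda lh: abs(scores[lh]))

# Toy example: 2 layers x 2 heads over 5 segments of random entropies.
rng = np.random.default_rng(0)
print(best_head_layer(rng.random((2, 2, 5)), rng.random(5)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation with Human Judgments",
"sec_num": "5.2"
},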
{
"text": "In the previous section we studied the performance of our unsupervised quality indicators for different language pairs. In this section we validate our results by looking at two additional factors: domain shift and underlying NMT system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "One way to evaluate how well a model represents uncertainty is to measure the difference in model confidence under domain shift (Hendrycks and Gimpel, 2016; Lakshminarayanan et al., 2017; Snoek et al., 2019) . A well-calibrated model should produce low confidence estimates when tested on data points that are far away from the training data.",
"cite_spans": [
{
"start": 128,
"end": 156,
"text": "(Hendrycks and Gimpel, 2016;",
"ref_id": "BIBREF28"
},
{
"start": 157,
"end": 187,
"text": "Lakshminarayanan et al., 2017;",
"ref_id": "BIBREF39"
},
{
"start": 188,
"end": 207,
"text": "Snoek et al., 2019)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Shift",
"sec_num": "6.1"
},
{
"text": "Overconfident predictions on out-of-domain sentences would undermine the benefits of unsupervised QE for NMT. This is particularly relevant given the current wide use of NMT for translating mixed domain data online. Therefore, we conduct a small experiment to compare model confidence on in-domain and out-of-domain data. We focus on the Et-En language pair. We use the test partition of the MT training dataset as our in-domain sample. To generate the out-ofdomain sample, we sort our Wikipedia data (prior to sentence sampling stage in \u00a74) by distance to the training data and select the top 500 segments with the largest distance score. To compute distance scores we follow the strategy of Niehues and Pham (2019) that measures the test/training data distance based on the hidden states of NMT encoder.",
"cite_spans": [
{
"start": 693,
"end": 716,
"text": "Niehues and Pham (2019)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Shift",
"sec_num": "6.1"
},
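{
"text": "The exact distance computation follows Niehues and Pham (2019); one simple instantiation of the stated idea (distances over mean-pooled NMT encoder hidden states) is sketched below, and may differ in detail from their formulation:

import numpy as np

def sentence_vector(encoder_states):
    # Mean-pool the encoder hidden states (seq_len, hidden_dim)
    # of one sentence into a single vector.
    return np.asarray(encoder_states).mean(axis=0)

def distance_to_training_data(test_vec, train_vecs):
    # Euclidean distance from the test sentence vector to the centroid
    # of the training sentence vectors; segments are then sorted by this
    # score and the most distant ones taken as out-of-domain.
    centroid = np.asarray(train_vecs).mean(axis=0)
    return float(np.linalg.norm(test_vec - centroid))

# Toy usage with a 2-token sentence in a 2-dimensional hidden space:
v = sentence_vector([[0.1, 0.2], [0.3, 0.4]])
print(distance_to_training_data(v, [[0.0, 0.0], [1.0, 1.0]]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Shift",
"sec_num": "6.1"
},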
{
"text": "We compute model posterior probabilities for the translations of the in-domain and out-ofdomain sample either obtained through standard decoding, or using MC dropout. TP obtains average values of \u22120.440 and \u22120.445 for indomain and out-of-domain data, respectively, whereas for D-TP these values are \u22120.592 and \u22120.685. The difference between in-domain and out-of-domain confidence estimates obtained by standard decoding is negligible. The difference between MC-dropout average probabilities for indomain vs. out-of-domain samples was found to be statistically significant under Student's t-test, with p-value < 0.01. Thus, expectation over predictive probabilities with MC dropout indeed provides a better estimation of model uncertainty for NMT, and therefore can improve the robustness of unsupervised QE on out-of-domain data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Shift",
"sec_num": "6.1"
},
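{
"text": "The significance test above is a standard two-sample t-test; a sketch with hypothetical per-segment D-TP values:

from scipy.stats import ttest_ind

in_domain = [-0.58, -0.60, -0.55, -0.63, -0.59]      # hypothetical D-TP scores
out_of_domain = [-0.70, -0.66, -0.71, -0.68, -0.67]  # hypothetical D-TP scores
t_stat, p_value = ttest_ind(in_domain, out_of_domain)
print(f't = {t_stat:.2f}, p = {p_value:.4f}')  # significant if p < 0.01",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Shift",
"sec_num": "6.1"
},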
{
"text": "Findings in the previous section suggest that using model probabilities results in fairly high correlation with human judgments for various language pairs. In this section we study how well these findings generalize to different NMT systems. The list of model variants that we explore is by no means exhaustive and was motivated by common practices in MT and by the factors that can negatively affect model calibration (number of training epochs) or help represent uncertainty (model ensembling). For this smallscale experiment we focus on Et-En. For each system variant we translated 400 sentences from the test partition of our dataset and collected the DA accordingly. As baseline, we use a standard Transformer model with beam search decoding. All system variants are trained using Fairseq implementation for 30 epochs, with the best checkpoint chosen according to the validation loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NMT Calibration across NMT Systems",
"sec_num": "6.2"
},
{
"text": "First, we consider three system variants with differences in architecture or training: RNN-based NMT (Bahdanau et al., 2015; Luong et al., 2015) , Mixture of Experts (MoE, He et al., 2018; Shen et al., 2019; Cho et al., 2019) , and model ensemble (Garmash and Monz, 2016) . Shen et al. (2019) use the MoE framework to capture the inherent uncertainty of the MT task where the same input sentence can have multiple Table 4 : Pearson correlation (r) between sequence-level output probabilities (TP) and average DA for translations generated by different NMT systems.",
"cite_spans": [
{
"start": 101,
"end": 124,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF0"
},
{
"start": 125,
"end": 144,
"text": "Luong et al., 2015)",
"ref_id": "BIBREF40"
},
{
"start": 166,
"end": 188,
"text": "(MoE, He et al., 2018;",
"ref_id": null
},
{
"start": 189,
"end": 207,
"text": "Shen et al., 2019;",
"ref_id": "BIBREF55"
},
{
"start": 208,
"end": 225,
"text": "Cho et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 247,
"end": 271,
"text": "(Garmash and Monz, 2016)",
"ref_id": "BIBREF18"
},
{
"start": 274,
"end": 292,
"text": "Shen et al. (2019)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [
{
"start": 414,
"end": 421,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "NMT Calibration across NMT Systems",
"sec_num": "6.2"
},
{
"text": "correct translations. A mixture model introduces a multinomial latent variable to control generation and produce a diverse set of MT hypotheses. In our experiment we use hard mixture model with uniform prior and 5 mixture components. To produce the translations we generate from a randomly chosen component with standard beam search. To obtain the probability estimates we average the probabilities from all mixture components.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NMT Calibration across NMT Systems",
"sec_num": "6.2"
},
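{
"text": "With a uniform prior, averaging the component probabilities amounts to taking the arithmetic mean of p(y | x, z = k) over the K components, computed in log space via log-sum-exp for numerical stability; a sketch, assuming per-component sequence log-probabilities are available:

import numpy as np

def moe_sequence_log_prob(component_log_probs):
    # component_log_probs: the sequence log-probability of the same
    # translation under each of the K mixture components.
    lp = np.asarray(component_log_probs)
    return float(np.logaddexp.reduce(lp) - np.log(len(lp)))

# Toy example with K = 5 components (hypothetical scores):
print(moe_sequence_log_prob([-0.5, -0.7, -0.6, -0.9, -0.4]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NMT Calibration across NMT Systems",
"sec_num": "6.2"
},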
{
"text": "Previous work has used model ensembling as a strategy for representing model uncertainty (Lakshminarayanan et al., 2017; Pearce et al., 2018) . 10 In NMT, ensembling has been used to improve translation quality. We train four Transformer models initialized with different random seeds. At decoding time predictive distributions from different models are combined by averaging.",
"cite_spans": [
{
"start": 89,
"end": 120,
"text": "(Lakshminarayanan et al., 2017;",
"ref_id": "BIBREF39"
},
{
"start": 121,
"end": 141,
"text": "Pearce et al., 2018)",
"ref_id": "BIBREF51"
},
{
"start": 144,
"end": 146,
"text": "10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NMT Calibration across NMT Systems",
"sec_num": "6.2"
},
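{
"text": "A sketch of the combination step, assuming each of the independently trained models exposes its next-token distribution at every decoding step:

import numpy as np

def ensemble_next_token_dist(model_dists):
    # model_dists: (n_models, vocab_size) next-token distributions from
    # the individually trained Transformers; the ensemble predictive
    # distribution is their arithmetic mean.
    return np.asarray(model_dists).mean(axis=0)

p1 = np.array([0.70, 0.10, 0.10, 0.10])  # toy 4-token vocabulary
p2 = np.array([0.40, 0.30, 0.20, 0.10])
print(ensemble_next_token_dist([p1, p2]))  # [0.55 0.2 0.15 0.1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NMT Calibration across NMT Systems",
"sec_num": "6.2"
},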
{
"text": "Second, we consider two alternatives to beam search: diverse beam search (Vijayakumar et al., 2016) and sampling. For sampling, we generate translations one token at a time by sampling from the model conditional distribution p(y j | y <j , x, \u03b8), until the end of sequence symbol is generated. For comparison, we also compute the D-TP metric for the standard Transformer model on the subset of 400 segments considered for this experiment. Table 4 shows the results. Interestingly, the correlation between output probabilities and DA is not necessarily related to the quality of MT outputs. For example, sampling produces much higher correlation although the quality is much 10 Note that MC dropout discussed in \u00a73.2 can be interpreted as an ensemble model combination where the predictions are averaged over an ensemble of NNs (Lakshminarayanan et al., 2017) . lower. This is in line with previous work that indicates that sampling results in better calibrated probability distribution than beam search (Ott et al., 2018a) . System variants that promote diversity in NMT outputs (diverse beam search and MoE) do not achieve any improvement in correlation over standard Transformer model.",
"cite_spans": [
{
"start": 1003,
"end": 1022,
"text": "(Ott et al., 2018a)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [
{
"start": 439,
"end": 446,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "NMT Calibration across NMT Systems",
"sec_num": "6.2"
},
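{
"text": "A sketch of the sampling decoder, with a stand-in next_token_dist function in place of a real NMT decoder step:

import numpy as np

def sample_translation(next_token_dist, eos_id, max_len=100, rng=None):
    # next_token_dist(prefix) -> probability vector over the vocabulary,
    # i.e. p(y_j | y_<j, x, theta) for the current target prefix.
    rng = rng or np.random.default_rng()
    tokens = []
    for _ in range(max_len):
        p = next_token_dist(tokens)
        token = int(rng.choice(len(p), p=p))
        tokens.append(token)
        if token == eos_id:  # stop once end-of-sequence is generated
            break
    return tokens

# Toy 3-token vocabulary where id 2 is the end-of-sequence symbol:
print(sample_translation(lambda prefix: np.array([0.5, 0.3, 0.2]), eos_id=2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NMT Calibration across NMT Systems",
"sec_num": "6.2"
},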
{
"text": "The best results both in quality and QE are achieved by ensembling, which provides additional evidence that better uncertainty quantification in NMT improves correlation with human judgments. MC dropout achieves very similar results. We recommend using either of these two methods for NMT systems with unsupervised QE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NMT Calibration across NMT Systems",
"sec_num": "6.2"
},
{
"text": "The final question we address is how the correlation between translation probabilities and translation quality is affected by the amount of training. We train our base Et-En Transformer system for 60 epochs. We generate and evaluate translations after each epoch. We use the test partition of the MT training set and assess translation quality with Meteor evaluation metric. Figure 3 shows the average Meteor scores (blue) and Pearson correlation (orange) between segment-level Meteor scores and translation probabilities from the MT system for each epoch. Interestingly, as the training continues test quality stabilizes whereas the relation between model probabilities and translation quality is deteriorated. During training, after the model is able to correctly classify most of the training examples, the loss can be further minimized by increasing the confidence of predictions (Guo et al., 2017) . Thus longer training does not affect output quality but damages calibration.",
"cite_spans": [
{
"start": 884,
"end": 902,
"text": "(Guo et al., 2017)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 375,
"end": 383,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "NMT Calibration across Training Epochs",
"sec_num": "6.3"
},
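{
"text": "A sketch of the per-epoch evaluation, assuming per-segment Meteor scores and sequence-level translation probabilities have been collected for each checkpoint:

from scipy.stats import pearsonr

def epoch_curves(per_epoch_tp, per_epoch_meteor):
    # One entry per epoch: per-segment translation (log-)probabilities
    # and Meteor scores for the same held-out segments.
    for epoch, (tp, met) in enumerate(zip(per_epoch_tp, per_epoch_meteor), 1):
        r, _ = pearsonr(tp, met)
        print(f'epoch {epoch:2d}: avg Meteor = {sum(met) / len(met):.3f}, '
              f'r(TP, Meteor) = {r:.3f}')

# Toy single-epoch example with three segments:
epoch_curves([[-0.5, -0.6, -0.7]], [[0.55, 0.50, 0.42]])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NMT Calibration across Training Epochs",
"sec_num": "6.3"
},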
{
"text": "We have devised an unsupervised approach to QE where no training or access to any additional resources besides the MT system is required. Besides exploiting softmax output probability distribution and the entropy of attention weights from the NMT model, we leverage uncertainty quantification for unsupervised QE. We show that, if carefully designed, the indicators extracted from the NMT system constitute a rich source of information, competitive with supervised QE methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "We analyzed how different MT architectures and training settings affect the relation between predictive probabilities and translation quality. We showed that improved translation quality does not necessarily imply a stronger correlation between translation quality and predictive probabilities. Model ensemble have been shown to achieve optimal results both in terms of translation quality and when using output probabilities as an unsupervised quality indicator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Finally, we created a new multilingual dataset for QE covering various scenarios for MT development including low-and high-resource language pairs. Both the dataset and the MT models needed to reproduce the results of our experiments are available at https://github.com/ facebookresearch/mlqe. This work can be extended in many directions. First, our sentence-level unsupervised metrics could be adapted for QE at other levels (word, phrase, and document). Second, the proposed metrics can be combined as features in supervised QE approaches. Finally, other methods for uncertainty quantification, as well as other types of uncertainty, can be explored.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "h t t p : / /www.statmt.org/wmt19/metricstask.html.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "While the paper covers QE at sentence level, the extension of our unsupervised metrics to word-level QE would be straightforward and we leave it for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A distinction is typically made between epistemic and aleatoric uncertainty, where the latter captures the noise inherent to the observations(Kendall and Gal, 2017). We leave modeling this distinction in NMT for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://fasttext.cc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://bit.ly/36YaBlU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We note that PredEst models are systematically and significantly outperformed by BERT-BiRNN. This is not surprising, as large-scale pretrained representations have been shown to boost model performance for QE(Kepler et al., 2019a) and other natural language processing tasks(Devlin et al., 2019).7 Models for these languages were trained using Transformer-Big architecture fromVaswani et al. (2017).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is in contrast with the work byWang et al. (2019) where D-Var appears to be one of the best performing metric for NMT training with back-translation demonstrating an essential difference between this task and QE.9 Note that D-Lex-Sim involves generating N additional translation hypotheses, whereas the D-TP only requires re-scoring an existing translation output and is thus less expensive in terms of time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that MC dropout discussed in \u00a73.2 can be interpreted as an ensemble model combination where the predictions are averaged over an ensemble of NNs (Lakshminarayanan et al., 2017).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Marina Fomicheva, Lisa Yankovskaya, Fr\u00e9d\u00e9ric Blain, Mark Fishel, Nikolaos Aletras, and Lucia Specia were supported by funding from the Bergamot project (EU H2020 grant no. 825303).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Repre- sentations, ICLR 2015.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 Conference on Machine Translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation",
"authors": [
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Shervina",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Mathias",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Santanu",
"middle": [],
"last": "Pal",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "2",
"issue": "",
"pages": "1--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lo\u00efc Barrault, Ond\u0159ej Bojar, Marta R. Costa-juss\u00e0, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervina Malmasi, Christof Monz, Mathias M\u00fcller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 Conference on Machine Translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1-61.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural versus phrase-based machine translation quality: A case study",
"authors": [
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Arianna",
"middle": [],
"last": "Bisazza",
"suffix": ""
},
{
"first": "Mauro",
"middle": [],
"last": "Cettolo",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "257--267",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luisa Bentivogli, Arianna Bisazza, Mauro Cettolo, and Marcello Federico. 2016. Neural versus phrase-based machine translation qual- ity: A case study. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 257-267.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Quality in, quality out: Learning from actual mistakes",
"authors": [
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Nikolaos",
"middle": [],
"last": "Aletras",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fr\u00e9d\u00e9ric Blain, Nikolaos Aletras, and Lucia Specia. 2020. Quality in, quality out: Learning from actual mistakes. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Confidence Estimation for Machine Translation",
"authors": [
{
"first": "John",
"middle": [],
"last": "Blatz",
"suffix": ""
},
{
"first": "Erin",
"middle": [],
"last": "Fitzgerald",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Simona",
"middle": [],
"last": "Gandrabur",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Goutte",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Kulesza",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Sanchis",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Ueffing",
"suffix": ""
}
],
"year": 2004,
"venue": "COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "315--321",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Blatz, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto Sanchis, and Nicola Ueffing. 2004. Confidence Estimation for Machine Transla- tion. In COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics, pages 315-321.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Results of the WMT17 metrics shared task",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Kamran",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Second Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "489--513",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Yvette Graham, and Amir Kamran. 2017. Results of the WMT17 metrics shared task. In Proceedings of the Second Conference on Machine Translation, pages 489-513, Copenhagen, Denmark. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Results of the WMT16 metrics shared task",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Kamran",
"suffix": ""
},
{
"first": "Milo\u0161",
"middle": [],
"last": "Stanojevi\u0107",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "199--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Yvette Graham, Amir Kamran, and Milo\u0161 Stanojevi\u0107. 2016. Results of the WMT16 metrics shared task. In Proceedings of the First Conference on Machine Translation: Vol- ume 2, Shared Task Papers, pages 199-231, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Is neural machine translation the new state of the art?",
"authors": [
{
"first": "Sheila",
"middle": [],
"last": "Castilho",
"suffix": ""
},
{
"first": "Joss",
"middle": [],
"last": "Moorkens",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Gaspari",
"suffix": ""
},
{
"first": "Iacer",
"middle": [],
"last": "Calixto",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Tinsley",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2017,
"venue": "The Prague Bulletin of Mathematical Linguistics",
"volume": "108",
"issue": "1",
"pages": "109--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sheila Castilho, Joss Moorkens, Federico Gaspari, Iacer Calixto, John Tinsley, and Andy Way. 2017. Is neural machine translation the new state of the art? The Prague Bulletin of Mathematical Linguistics, 108(1):109-120.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Mixture content selection for diverse sequence generation",
"authors": [
{
"first": "Jaemin",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Minjoon",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3112--3122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jaemin Cho, Minjoon Seo, and Hannaneh Hajishirzi. 2019. Mixture content selection for diverse sequence generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP- IJCNLP), pages 3112-3122.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Meteor universal: Language specific translation evaluation for any target language",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the EACL 2014 Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation eval- uation for any target language. In Proceedings of the EACL 2014 Workshop on Statistical Ma- chine Translation.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Lan- guage Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Confidence modeling for neural semantic parsing",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "743--753",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Dong, Chris Quirk, and Mirella Lapata. 2018. Confidence modeling for neural semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 743-753. Association for Computational Linguistics, Melbourne, Australia.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "ParaCrawl: Web-scale parallel corpora for the languages of the EU",
"authors": [
{
"first": "Miquel",
"middle": [],
"last": "Espl\u00e0",
"suffix": ""
},
{
"first": "Mikel",
"middle": [],
"last": "Forcada",
"suffix": ""
},
{
"first": "Gema",
"middle": [],
"last": "Ram\u00edrez-S\u00e1nchez",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of Machine Translation Summit XVII",
"volume": "2",
"issue": "",
"pages": "118--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miquel Espl\u00e0, Mikel Forcada, Gema Ram\u00edrez- S\u00e1nchez, and Hieu Hoang. 2019. ParaCrawl: Web-scale parallel corpora for the languages of the EU. In Proceedings of Machine Translation Summit XVII Volume 2: Translator, Project and User Tracks, pages 118-119, Dublin, Ireland. European Association for Machine Translation.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Supervised and unsupervised minimalist quality estimators: Vicomtechs participation in the",
"authors": [
{
"first": "Thierry",
"middle": [],
"last": "Etchegoyhen",
"suffix": ""
},
{
"first": "Eva",
"middle": [
"Mart\u00ednez"
],
"last": "Garcia",
"suffix": ""
},
{
"first": "Andoni",
"middle": [],
"last": "Azpeitia",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thierry Etchegoyhen, Eva Mart\u00ednez Garcia, and Andoni Azpeitia. 2018. Supervised and unsupervised minimalist quality estimators: Vicomtechs participation in the WMT 2018",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Quality Estimation Task",
"authors": [],
"year": null,
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "782--787",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quality Estimation Task. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 782-787.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Findings of the WMT 2019 Shared Tasks on Quality Estimation",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Fishel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Federmann",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martins, Mark Fishel, and Christian Federmann. 2019. Findings of the WMT 2019 Shared Tasks on Quality Estimation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 1-10.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning",
"authors": [
{
"first": "Yarin",
"middle": [],
"last": "Gal",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
}
],
"year": 2016,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1050--1059",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In Inter- national Conference on Machine Learning, pages 1050-1059.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Ensemble learning for multi-source neural machine translation",
"authors": [
{
"first": "Ekaterina",
"middle": [],
"last": "Garmash",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "1409--1418",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ekaterina Garmash and Christof Monz. 2016. Ensemble learning for multi-source neural machine translation. In Proceedings of COLING 2016, the 26th International Confer- ence on Computational Linguistics: Technical Papers, pages 1409-1418.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Is all that glitters in machine translation quality estimation really gold?",
"authors": [
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Meghan",
"middle": [],
"last": "Dowling",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Eskevich",
"suffix": ""
},
{
"first": "Teresa",
"middle": [],
"last": "Lynn",
"suffix": ""
},
{
"first": "Lamia",
"middle": [],
"last": "Tounsi",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "3124--3134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yvette Graham, Timothy Baldwin, Meghan Dowling, Maria Eskevich, Teresa Lynn, and Lamia Tounsi. 2016. Is all that glitters in machine translation quality estimation really gold? In Proceedings of COLING 2016, the 26th International Con- ference on Computational Linguistics: Techni- cal Papers, pages 3124-3134.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Accurate evaluation of segment-level machine translation metrics",
"authors": [
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Nitika",
"middle": [],
"last": "Mathur",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1183--1191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yvette Graham, Timothy Baldwin, and Nitika Mathur. 2015a. Accurate evaluation of segment-level machine translation metrics. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1183-1191.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Continuous measurement scales in human evaluation of machine translation",
"authors": [
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Alistair",
"middle": [],
"last": "Moffat",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Zobel",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse",
"volume": "",
"issue": "",
"pages": "33--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2013. Continuous measure- ment scales in human evaluation of machine translation. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 33-41.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Can machine translation systems be evaluated by the crowd alone",
"authors": [
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2015,
"venue": "Natural Language Engineering",
"volume": "",
"issue": "",
"pages": "1--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2015b. Can machine translation systems be evaluated by the crowd alone. Natural Language Engineering, pages 1-28.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Practical variational inference for neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
}
],
"year": 2011,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "2348--2356",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves. 2011. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems, pages 2348-2356.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "On calibration of modern neural networks",
"authors": [
{
"first": "Chuan",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Geoff",
"middle": [],
"last": "Pleiss",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "1321--1330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning- Volume 70, pages 1321-1330. JMLR. org.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The FLORES evaluation datasets for lowresource machine translation: Nepali-English and Sinhala-English",
"authors": [
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Peng-Jen",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "6097--6110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francisco Guzm\u00e1n, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. The FLORES evaluation datasets for low- resource machine translation: Nepali-English and Sinhala-English. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 6097-6110, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Achieving human parity on automatic Chinese to English news translation",
"authors": [
{
"first": "Hany",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Aue",
"suffix": ""
},
{
"first": "Chang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Vishal",
"middle": [],
"last": "Chowdhary",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Xuedong",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.05567"
]
},
"num": null,
"urls": [],
"raw_text": "Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys- Dowmunt, William Lewis, Mu Li, et al. 2018. Achieving human parity on automatic Chinese to English news translation. arXiv preprint arXiv:1803.05567.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Sequence to sequence mixture model for diverse machine translation",
"authors": [
{
"first": "Xuanli",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "583--592",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuanli He, Gholamreza Haffari, and Mohammad Norouzi. 2018. Sequence to sequence mixture model for diverse machine translation. In Proceed- ings of the 22nd Conference on Computational Natural Language Learning, pages 583-592.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A baseline for detecting misclassified and out-ofdistribution examples in neural networks",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Hendrycks",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1610.02136"
]
},
"num": null,
"urls": [],
"raw_text": "Dan Hendrycks and Kevin Gimpel. 2016. A baseline for detecting misclassified and out-of- distribution examples in neural networks. arXiv preprint arXiv:1610.02136.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "DeepQuest: a framework for neural-based quality estimation",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Ive",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of COLING 2018, the 27th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Ive, Fr\u00e9d\u00e9ric Blain, and Lucia Specia. 2018. DeepQuest: a framework for neural-based quality estimation. In Proceedings of COLING 2018, the 27th International Conference on Computational Linguistics: Technical Papers. Santa Fe, New Mexico.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "What uncertainties do we need in Bayesian deep learning for computer vision?",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Kendall",
"suffix": ""
},
{
"first": "Yarin",
"middle": [],
"last": "Gal",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5574--5584",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Kendall and Yarin Gal. 2017. What uncertainties do we need in Bayesian deep learning for computer vision? In Advances in Neural Information Processing Systems, pages 5574-5584.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Unbabel's Participation in the WMT19 Translation Quality Estimation Shared Task",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Kepler",
"suffix": ""
},
{
"first": "Jonay",
"middle": [],
"last": "Tr\u00e9nous",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Treviso",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Vera",
"suffix": ""
},
{
"first": "Ant\u00f3nio",
"middle": [],
"last": "G\u00f3is",
"suffix": ""
},
{
"first": "Ant\u00f3nio",
"middle": [
"V"
],
"last": "Amin Farajian",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F T"
],
"last": "Lopes",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Martins",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "78--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabio Kepler, Jonay Tr\u00e9nous, Marcos Treviso, Miguel Vera, Ant\u00f3nio G\u00f3is, M Amin Farajian, Ant\u00f3nio V. Lopes, and Andr\u00e9 F. T. Martins. 2019a. Unbabel's Participation in the WMT19 Translation Quality Estimation Shared Task. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 78-84.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "OpenKiwi: An open source framework for quality estimation",
"authors": [
{
"first": "F\u00e1bio",
"middle": [],
"last": "Kepler",
"suffix": ""
},
{
"first": "Jonay",
"middle": [],
"last": "Tr\u00e9nous",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Treviso",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Vera",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics-System Demonstrations",
"volume": "",
"issue": "",
"pages": "117--122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F\u00e1bio Kepler, Jonay Tr\u00e9nous, Marcos Treviso, Miguel Vera, and Andr\u00e9 F. T. Martins. 2019b. OpenKiwi: An open source framework for quality estimation. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics-System Demonstrations, pages 117-122, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Predictor-estimator using multilevel task learning with stack propagation for neural quality estimation",
"authors": [
{
"first": "Hyun",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jong-Hyeok",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Seung-Hoon",
"middle": [],
"last": "Na",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Second Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "562--568",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hyun Kim, Jong-Hyeok Lee, and Seung-Hoon Na. 2017a. Predictor-estimator using multilevel task learning with stack propagation for neural quality estimation. In Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Tasks Papers, pages 562-568.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Predictor-estimator using multilevel task learning with stack propagation for neural quality estimation",
"authors": [
{
"first": "Hyun",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jong-Hyeok",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Seung-Hoon",
"middle": [],
"last": "Na",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Second Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "562--568",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hyun Kim, Jong-Hyeok Lee, and Seung-Hoon Na. 2017b. Predictor-estimator using multilevel task learning with stack propagation for neural quality estimation. In Proceedings of the Second Conference on Machine Translation, pages 562-568.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Europarl: A parallel corpus for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2005,
"venue": "MT summit",
"volume": "5",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit, volume 5, pages 79-86.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Calibrated structured prediction",
"authors": [
{
"first": "Volodymyr",
"middle": [],
"last": "Kuleshov",
"suffix": ""
},
{
"first": "Percy",
"middle": [
"S"
],
"last": "Liang",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3474--3482",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Volodymyr Kuleshov and Percy S. Liang. 2015. Calibrated structured prediction. In Advances in Neural Information Processing Systems, pages 3474-3482.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Calibration of encoder decoder models for neural machine translation",
"authors": [
{
"first": "Aviral",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Sunita",
"middle": [],
"last": "Sarawagi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.00802"
]
},
"num": null,
"urls": [],
"raw_text": "Aviral Kumar and Sunita Sarawagi. 2019. Calibration of encoder decoder models for neural machine translation. arXiv preprint arXiv:1903.00802.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Simple and scalable predictive uncertainty estimation using deep ensembles",
"authors": [
{
"first": "Balaji",
"middle": [],
"last": "Lakshminarayanan",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Pritzel",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Blundell",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "6402--6413",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. 2017. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems, pages 6402-6413.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Pro- cessing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Results of the WMT18 metrics shared task: Both characters and embeddings achieve good performance",
"authors": [
{
"first": "Qingsong",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "671--688",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qingsong Ma, Ond\u0159ej Bojar, and Yvette Graham. 2018. Results of the WMT18 metrics shared task: Both characters and embeddings achieve good performance. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 671-688, Belgium, Brussels. Association for Computational Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Results of the WMT19 metrics shared task: Segment-level and strong MT systems pose big challenges",
"authors": [
{
"first": "Qingsong",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Johnny",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "62--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qingsong Ma, Johnny Wei, Ond\u0159ej Bojar, and Yvette Graham. 2019. Results of the WMT19 metrics shared task: Segment-level and strong MT systems pose big challenges. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 62-90, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Bayesian Methods for Adaptive Models",
"authors": [
{
"first": "J",
"middle": [
"C"
],
"last": "David",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mackay",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David J. C. MacKay. 1992. Bayesian Methods for Adaptive Models. Ph.D. thesis, California Institute of Technology.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Prediction uncertainty estimation for hate speech classification",
"authors": [
{
"first": "Kristian",
"middle": [],
"last": "Miok",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Nguyen-Doan",
"suffix": ""
},
{
"first": "Bla\u017e",
"middle": [],
"last": "\u0160krlj",
"suffix": ""
},
{
"first": "Daniela",
"middle": [],
"last": "Zaharie",
"suffix": ""
},
{
"first": "Marko",
"middle": [],
"last": "Robnik-\u0160ikonja",
"suffix": ""
}
],
"year": 2019,
"venue": "Statistical Language and Speech Processing",
"volume": "",
"issue": "",
"pages": "286--298",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristian Miok, Dong Nguyen-Doan, Bla\u017e\u0160krlj, Daniela Zaharie, and Marko Robnik-\u0160ikonja. 2019. Prediction uncertainty estimation for hate speech classification. Carlos Mart\u00edn-Vide, Matthew Purver, and Senja Pollak, editors, In Statistical Language and Speech Processing, pages 286-298, Cham. Springer International Publishing.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Quality estimation: an experimental study using unsupervised similarity measures",
"authors": [
{
"first": "Erwan",
"middle": [],
"last": "Moreau",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2012,
"venue": "7th Workshop on Statistical Machine Translation",
"volume": "120",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erwan Moreau and Carl Vogel. 2012. Quality estimation: an experimental study using unsu- pervised similarity measures. In 7th Workshop on Statistical Machine Translation, page 120.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Posterior calibration and exploratory analysis for natural language processing models",
"authors": [
{
"first": "Khanh",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Brendan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Connor",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1587--1598",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Khanh Nguyen and Brendan O'Connor. 2015. Posterior calibration and exploratory analy- sis for natural language processing models. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Pro- cessing, pages 1587-1598, Lisbon, Portugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Modeling confidence in sequence-to-sequence models",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "Ngoc-Quan",
"middle": [],
"last": "Pham",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of The 12th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "575--583",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Niehues and Ngoc-Quan Pham. 2019. Mod- eling confidence in sequence-to-sequence models. In Proceedings of The 12th Interna- tional Conference on Natural Language Gen- eration, pages 575-583.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Analyzing uncertainty in neural machine translation",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. 2018a. Analyzing uncertainty in neural machine translation. In International Conference on Machine Learning.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL-HLT 2019: Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Dem- onstrations.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Scaling neural machine translation",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018b. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1-9.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Uncertainty in neural networks: Bayesian ensembling",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Pearce",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Zaki",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Brintrup",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Neel",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.05546"
]
},
"num": null,
"urls": [],
"raw_text": "Tim Pearce, Mohamed Zaki, Alexandra Brintrup, and Andy Neel. 2018. Uncertainty in neural networks: Bayesian ensembling. arXiv preprint arXiv:1810.05546.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Morpheme-and pos-based ibm1 scores and language model scores for translation quality estimation",
"authors": [
{
"first": "Maja",
"middle": [],
"last": "Popovi\u0107",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Seventh Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "133--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maja Popovi\u0107. 2012. Morpheme-and pos-based ibm1 scores and language model scores for translation quality estimation. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 133-137. Association for Computational Linguistics.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Confidence through attention",
"authors": [
{
"first": "Mat\u012bss",
"middle": [],
"last": "Rikters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of MT Summit XVI",
"volume": "",
"issue": "",
"pages": "299--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mat\u012bss Rikters and Mark Fishel. 2017. Confidence through attention. In Proceedings of MT Summit XVI, pages 299-311. Nagoya, Japan.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia",
"authors": [
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Hongyu",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.05791"
]
},
"num": null,
"urls": [],
"raw_text": "Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzm\u00e1n. 2019. WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia. arXiv preprint arXiv:1907.05791.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Mixture models for diverse machine translation: Tricks of the trade",
"authors": [
{
"first": "Tianxiao",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.07816"
]
},
"num": null,
"urls": [],
"raw_text": "Tianxiao Shen, Myle Ott, Michael Auli, and Marc'Aurelio Ranzato. 2019. Mixture models for diverse machine translation: Tricks of the trade. arXiv preprint arXiv:1902.07816.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift",
"authors": [
{
"first": "Jasper",
"middle": [],
"last": "Snoek",
"suffix": ""
},
{
"first": "Yaniv",
"middle": [],
"last": "Ovadia",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Fertig",
"suffix": ""
},
{
"first": "Balaji",
"middle": [],
"last": "Lakshminarayanan",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Nowozin",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Sculley",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Dillon",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Nado",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "13969--13980",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jasper Snoek, Yaniv Ovadia, Emily Fertig, Balaji Lakshminarayanan, Sebastian Nowozin, D. Sculley, Joshua Dillon, Jie Ren, and Zachary Nado. 2019. Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift. In Advances in Neural Information Processing Systems, pages 13969-13980.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of association for machine translation in the Americas",
"volume": "200",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of associa- tion for machine translation in the Americas, volume 200.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Findings of the WMT 2018 Shared Task on Quality Estimation",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
},
{
"first": "Ram\u00f3n",
"middle": [],
"last": "Astudillo",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "689--709",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Fr\u00e9d\u00e9ric Blain, Varvara Logacheva, Ram\u00f3n Astudillo, and Andr\u00e9 F. T. Martins. 2018. Findings of the WMT 2018 Shared Task on Quality Estimation. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 689-709.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "QuEst-A Translation Quality Estimation Framework",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Kashif",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [
"G C"
],
"last": "De Souza",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "79--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Kashif Shah, Jos\u00e9 G. C. De Souza, and Trevor Cohn. 2013. QuEst-A Translation Quality Estimation Framework. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 79-84.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Estimating the sentence-level quality of machine translation systems",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Cancedda",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Dymetman",
"suffix": ""
},
{
"first": "Nello",
"middle": [],
"last": "Cristianini",
"suffix": ""
}
],
"year": 2009,
"venue": "13th Conference of the European Association for Machine Translation",
"volume": "",
"issue": "",
"pages": "28--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Marco Turchi, Nicola Cancedda, Marc Dymetman, and Nello Cristianini. 2009. Estimating the sentence-level quality of machine translation systems. In 13th Conference of the European Association for Machine Translation, pages 28-37.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Dropout: a simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "The Journal of Machine Learning Research",
"volume": "15",
"issue": "1",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neu- ral networks. In Advances in Neural Informa- tion Processing Systems, pages 3104-3112.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Bayesian layers: A module for neural network uncertainty",
"authors": [
{
"first": "Dustin",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Dusenberry",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "van der Wilk",
"suffix": ""
},
{
"first": "Danijar",
"middle": [],
"last": "Hafner",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "14633--14645",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dustin Tran, Mike Dusenberry, Mark van der Wilk, and Danijar Hafner. 2019. Bayesian layers: A module for neural network uncertainty. In Advances in Neural Information Processing Systems, pages 14633-14645.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "Attention interpretability across NLP tasks",
"authors": [
{
"first": "Shikhar",
"middle": [],
"last": "Vashishth",
"suffix": ""
},
{
"first": "Shyam",
"middle": [],
"last": "Upadhyay",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Singh Tomar",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.11218"
]
},
"num": null,
"urls": [],
"raw_text": "Shikhar Vashishth, Shyam Upadhyay, Gaurav Singh Tomar, and Manaal Faruqui. 2019. Attention interpretability across NLP tasks. arXiv preprint arXiv:1909.11218.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Analyzing the structure of attention in a transformer language model",
"authors": [
{
"first": "Jesse",
"middle": [],
"last": "Vig",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "63--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jesse Vig and Yonatan Belinkov. 2019. Analyzing the structure of attention in a transformer language model. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 63-76.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "Diverse beam search: Decoding diverse solutions from neural sequence models",
"authors": [
{
"first": "Ashwin",
"middle": [
"K"
],
"last": "Vijayakumar",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Cogswell",
"suffix": ""
},
{
"first": "Ramprasath",
"middle": [
"R"
],
"last": "Selvaraju",
"suffix": ""
},
{
"first": "Qing",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Crandall",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1610.02424"
]
},
"num": null,
"urls": [],
"raw_text": "Ashwin K. Vijayakumar, Michael Cogswell, Ramprasath R. Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2016. Di- verse beam search: Decoding diverse solutions from neural sequence models. arXiv preprint arXiv:1610.02424.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Voita",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Talbot",
"suffix": ""
},
{
"first": "Fedor",
"middle": [],
"last": "Moiseev",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5797--5808",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5797-5808, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "Alibaba submission for WMT18 quality estimation task",
"authors": [
{
"first": "Jiayi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Fengming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Boxing",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yangbin",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Luo",
"middle": [],
"last": "Si",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "809--815",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiayi Wang, Kai Fan, Bo Li, Fengming Zhou, Boxing Chen, Yangbin Shi, and Luo Si. 2018. Alibaba submission for WMT18 quality estimation task. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 809-815, Belgium, Brussels. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "Improving backtranslation with uncertainty-based confidence estimation",
"authors": [
{
"first": "Shuo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Chao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Huanbo",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "791--802",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuo Wang, Yang Liu, Chao Wang, Huanbo Luan, and Maosong Sun. 2019. Improving back- translation with uncertainty-based confidence estimation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 791-802.",
"links": null
},
"BIBREF71": {
"ref_id": "b71",
"title": "Bayesian learning via stochastic gradient langevin dynamics",
"authors": [
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
},
{
"first": "Yee",
"middle": [
"W"
],
"last": "Teh",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 28th International Conference on Machine Learning (ICML-11)",
"volume": "",
"issue": "",
"pages": "681--688",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Max Welling and Yee W. Teh. 2011. Bayesian learning via stochastic gradient langevin dy- namics. In Proceedings of the 28th Inter- national Conference on Machine Learning (ICML-11), pages 681-688.",
"links": null
},
"BIBREF72": {
"ref_id": "b72",
"title": "Regression Analysis",
"authors": [
{
"first": "Evan",
"middle": [
"James"
],
"last": "Williams",
"suffix": ""
}
],
"year": 1959,
"venue": "",
"volume": "14",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evan James Williams. 1959. Regression Analysis, 14, Wiley, New York.",
"links": null
},
"BIBREF73": {
"ref_id": "b73",
"title": "Quality estimation with force-decoded attention and cross-lingual embeddings",
"authors": [
{
"first": "Elizaveta",
"middle": [],
"last": "Yankovskaya",
"suffix": ""
},
{
"first": "Andre",
"middle": [],
"last": "Tattar",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elizaveta Yankovskaya, Andre Tattar, and Mark Fishel. 2018. Quality estimation with force-decoded attention and cross-lingual em- beddings. In Proceedings of the Third Con- ference on Machine Translation, Volume 2: Shared Tasks Papers. Brussels, Belgium.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "0.546 0.635 0.763 0.273 0.371",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "Token-level probabilities of high-quality (left) and low-quality (right) Et-En translations.",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "Scatter plots for the correlation between D-TP (x-axis) and standardized DA scores (y-axis) for Ro-En (top) and En-De (bottom).",
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"num": null,
"text": "Pearson correlation between translation quality and model probabilities (orange), and Meteor (blue) over training epochs.",
"type_str": "figure"
},
"TABREF1": {
"html": null,
"text": "Pearson (r) correlation between unsupervised QE indicators and human DA judgments. Results that are not significantly outperformed by any method are marked in bold; results that are not significantly outperformed by any other method from the same group are underlined.",
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF2": {
"html": null,
"text": "Example of MC dropout for a low-quality (top) and a high-quality (bottom) MT outputs.",
"num": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}