{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:38:52.504094Z" }, "title": "On the Evaluation of Machine Translation n-best Lists", "authors": [ { "first": "Jacob", "middle": [], "last": "Bremerman", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Maryland", "location": { "settlement": "College Park" } }, "email": "" }, { "first": "Huda", "middle": [], "last": "Khayrallah", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University", "location": {} }, "email": "" }, { "first": "Douglas", "middle": [ "W" ], "last": "Oard", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Maryland", "location": { "settlement": "College Park" } }, "email": "oard@umd.edu" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University", "location": {} }, "email": "post@cs.jhu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The standard machine translation evaluation framework measures the single-best output of machine translation systems. There are, however, many situations where n-best lists are needed, yet there is no established way of evaluating them. This paper establishes a framework for addressing n-best evaluation by outlining three different questions one could consider when determining how one would define a 'good' n-best list and proposing evaluation measures for each question. The first and principal contribution is an evaluation measure that characterizes the translation quality of an entire n-best list by asking whether many of the valid translations are placed near the top of the list. The second is a measure that uses gold translations with preference annotations to ask to what degree systems can produce ranked lists in preference order. The third is a measure that rewards partial matches, evaluating the closeness of the many items in an n-best list to a set of many valid references. These three perspectives make clear that having access to many references can be useful when n-best evaluation is the goal.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "The standard machine translation evaluation framework measures the single-best output of machine translation systems. There are, however, many situations where n-best lists are needed, yet there is no established way of evaluating them. This paper establishes a framework for addressing n-best evaluation by outlining three different questions one could consider when determining how one would define a 'good' n-best list and proposing evaluation measures for each question. The first and principal contribution is an evaluation measure that characterizes the translation quality of an entire n-best list by asking whether many of the valid translations are placed near the top of the list. The second is a measure that uses gold translations with preference annotations to ask to what degree systems can produce ranked lists in preference order. The third is a measure that rewards partial matches, evaluating the closeness of the many items in an n-best list to a set of many valid references. 
These three perspectives make clear that having access to many references can be useful when n-best evaluation is the goal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Machine translation evaluation has traditionally focused on one-best translation results because many common use cases (translating a user manual, reading a news article, etc.) require only a single translation. There are, however, many scenarios in which n-best translation can be useful; examples include cross-language information retrieval, where query terms may not match in the single-best output, or language learning, where a learner is interested in whether their translation is acceptable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Optimizing translation systems for such applications might benefit from evaluation measures that focus on choosing among systems based on which produces the best list of translated sentences, which we refer to here for brevity as an n-best list. Often in these n-best scenarios, researchers first select 'good' MT systems (i.e., by BLEU) in the hope that these good systems will also produce good results beyond the top translation candidate. In this paper we test that hypothesis, using a newly available dataset to measure the quality of n-best lists directly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To look at the problem in this way we must first decide what properties of an n-best list we would consider 'good'. In this paper we explore three questions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. How well does an n-best list include correct translations and rank correct translations above incorrect ones? (Section 3: Head-weighted Precision)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. How well does an n-best list rank translations in preference order, with the better (e.g., more commonly used) translations ahead of those that are valid, but less preferred? (Section 4: Preference Correlation)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. How close are all of the translations in an n-best list to one or more reference translations? (Section 5: Unweighted Partial Match)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We introduce measures for each of the three questions, using a ranking quality measure already widely used in information retrieval for question 1, correlation measures to address question 2, and variants of BLEU for question 3. In this latter study, we particularly note that n-best evaluation done in this way contrasts with a current standard used for both n-best and 1-best MT evaluation, 1-best single-reference BLEU. 
However, our purpose is not to argue for a single n-best evaluation measure, but rather to highlight that different measures produce different system rankings, and therefore it is crucial that researchers carefully consider what questions to ask when evaluating systems. The measures we propose are illustrative as answers to our research questions, but are not the only solutions; many others might work. We aim to provide groundwork and encourage future work on the topic. Our investigation is made possible by the recent availability of annotations created for the Duolingo Simultaneous Translation and Paraphrase for Language Education (STAPLE) shared task, which contains an extensive (although not necessarily exhaustive) set of valid translations for each of several thousand \"input prompt\" sentences (Mayhew et al., 2020).", "cite_spans": [ { "start": 1342, "end": 1363, "text": "(Mayhew et al., 2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Duolingo STAPLE dataset consists of thousands of English prompts, with a large set of valid translations for each, often numbering in the hundreds, each labeled with the relative frequency with which it was selected by language learners. Table 1 shows the five highest-frequency Japanese translations for the prompt \"I will feel well\" (\u79c1\u306f\u6c17\u5206\u304c\u826f\u304f\u306a\u308b\u3060\u308d\u3046\u3002 0.015, \u79c1\u306f\u6c17\u5206\u304c\u826f\u304f\u306a\u308b\u3067\u3057\u3087\u3046\u3002 0.008, \u79c1\u306f\u3044\u3044\u6c17\u5206\u306b\u306a\u308b\u3060\u308d\u3046\u3002 0.007, \u6c17\u5206\u304c\u826f\u304f\u306a\u308b\u3060\u308d\u3046\u3002 0.007, \u79c1\u306f\u6c17\u5206\u304c\u826f\u3044\u3060\u308d\u3046\u3002 0.006), where the weights of all 480 translations sum to one. As this example illustrates, the prompts are relatively short and simple sentences.", "cite_spans": [], "ref_spans": [ { "start": 260, "end": 267, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "The STAPLE Shared Task", "sec_num": "2" }, { "text": "In the 2020 STAPLE task, participating systems were asked to produce all and only the valid translations. Doing well at this task, which was evaluated using a variant of the F1 measure, requires both ranking translations well and deciding where to truncate the n-best list (i.e., the choice of n). Our focus in this paper is on ranking quality, leaving the question of how best to evaluate truncation to other work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The STAPLE Shared Task", "sec_num": "2" }, { "text": "We compare systems from Khayrallah et al. (2020)'s submission to the 2020 Duolingo STAPLE Shared Task. They were built using the data described in Table 2: Europarl (Koehn, 2005; no JA, 2,408k PT), GlobalVoices (822k JA, 1,585k PT), OpenSubtitles (Lison and Tiedemann, 2016; 13,097k JA, 196,960k PT), Tatoeba (tatoeba.org; 1,537k JA, 1,215k PT), WikiMatrix (Schwenk et al., 2019; 9,013k JA, 45,147k PT), JW300 (Agi\u0107 and Vuli\u0107, 2019; 34,325k JA, 39,023k PT), and QED (Abdelali et al., 2014; 9,064k JA, 8,542k PT). In total, we compare 38 Portuguese and 44 Japanese systems. This includes some bad systems, many good ones, and many incremental variations in between, especially at the top end. These systems ranked among the best for these languages on the STAPLE leaderboard. All were variations of the following standard training procedure. 
We used Transformer architectures (Vaswani et al., 2017) trained with fairseq (Ott et al., 2019). Models included 6 encoder and decoder layers, a model dimension of 512, a feed-forward layer size of 2048, and 8 attention heads. Models were trained with the Adam optimizer (Kingma and Ba, 2015) with a dropout of 0.1 and an effective batch size of 200k tokens. Model training was terminated when validation perplexity failed to improve for 10 consecutive epoch-level checkpoints.", "cite_spans": [ { "start": 24, "end": 48, "text": "Khayrallah et al. (2020)", "ref_id": "BIBREF6" }, { "start": 255, "end": 267, "text": "(Koehn, 2005", "ref_id": "BIBREF8" }, { "start": 319, "end": 346, "text": "(Lison and Tiedemann, 2016)", "ref_id": "BIBREF9" }, { "start": 411, "end": 433, "text": "(Schwenk et al., 2019)", "ref_id": "BIBREF13" }, { "start": 455, "end": 477, "text": "(Agi\u0107 and Vuli\u0107, 2019)", "ref_id": "BIBREF1" }, { "start": 498, "end": 521, "text": "(Abdelali et al., 2014)", "ref_id": "BIBREF0" }, { "start": 824, "end": 846, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF14" }, { "start": 868, "end": 886, "text": "(Ott et al., 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 149, "end": 156, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 364, "end": 385, "text": "Tatoeba (tatoeba.org)", "ref_id": null } ], "eq_spans": [], "section": "The STAPLE Shared Task", "sec_num": "2" }, { "text": "Our systems varied in the following experimental parameters:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The STAPLE Shared Task", "sec_num": "2" }, { "text": "\u2022 Training on all the data in Table 2, or just the data above the midline.", "cite_spans": [], "ref_spans": [ { "start": 30, "end": 37, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "The STAPLE Shared Task", "sec_num": "2" }, { "text": "\u2022 Whether or not we fine-tuned on Duolingo STAPLE training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The STAPLE Shared Task", "sec_num": "2" }, { "text": "\u2022 Training on just the first million lines of each corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The STAPLE Shared Task", "sec_num": "2" }, { "text": "\u2022 Varying the effective batch size.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The STAPLE Shared Task", "sec_num": "2" }, { "text": "\u2022 Limiting the training data to sentences containing at most 20% of tokens outside the Duolingo STAPLE training data vocabulary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The STAPLE Shared Task", "sec_num": "2" }, { "text": "We begin with our first question: how well does a system produce valid translations and rank them above invalid translations?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Head-weighted Precision", "sec_num": "3" }, { "text": "A task aligned with such a question is one that requires a strict, binary score for validity and is agnostic to where the list might be truncated. For example, a language learner hoping to learn several valid ways to express a sentence in a target language may want to peruse many translation outputs, starting at the top of the list. 
Where the user would stop is not known in advance, so it would be important that the translations be fully valid.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Head-weighted Precision", "sec_num": "3" }, { "text": "Of course the space of valid translations in this framework could be enormous, so it is important to consider the effect of incompleteness in the set of valid references. We consider that in Section 3.2, but first we introduce a measure for head-weighted precision under the simplifying assumption that the set of valid references is complete. Given this assumption, we say for the purpose of question 1 that a good n-best list would contain many valid translations, and that it would place them near the head (i.e., the top) of the ranked list. We refer to this framework as head-weighted precision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Head-weighted Precision", "sec_num": "3" }, { "text": "We are not the first to need a measure of the quality of a ranked list; this is a central question in the evaluation of search engines, which produce ranked lists of documents. The simplest setup of the evaluation task in information retrieval is that documents are either on topic or off topic (i.e., relevant or not), and that it is only the order of the documents that matters. One widely used rank quality measure is uninterpolated Average Precision (AP), computed for a ranked list L of length k as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Head-Weighted Precision Measure", "sec_num": "3.1" }, { "text": "AP(L) = \\frac{1}{N} \\sum_{i=1}^{k} \\frac{T_i}{i} s_i \\qquad (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Head-Weighted Precision Measure", "sec_num": "3.1" }, { "text": "where N is the number of valid items, T_i is the number of valid items at or above rank i, and s_i is the true binary relevance for the item at rank i (0 or 1). The core of the computation is T_i/i, which in information retrieval is called precision; AP is the expected value of precision, measured only at optimal stopping points (i.e., where precision is maximized by just having added one more valid item). With this measure, ranked lists earn perfect scores for ranking all valid items at the top and are punished for invalid items occurring between valid ones (with invalid items nearer the top having a more deleterious impact). Because variance in system behavior across conditions may be high, it is common to compare systems using Mean AP values (MAP) computed over a representative set of conditions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Head-Weighted Precision Measure", "sec_num": "3.1" }, { "text": "In information retrieval, conditions are topics, items are documents, and validity is relevance to the topic. In n-best MT, the conditions are a representative set of sentences to be translated (in the STAPLE task, the prompts), the items are system-produced translations, and validity is whether a translation is proper (i.e., present in the STAPLE gold translations).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Head-Weighted Precision Measure", "sec_num": "3.1" }
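, { "text": "To make this concrete, the following is a minimal Python sketch of AP (equation (1)) and of MAP over a set of prompts. The helper names are hypothetical and this is not the authors' code; in particular, reading N as the size of the gold set is our interpretation of the definition above.

```python
from typing import Dict, List, Set

def average_precision(hypotheses: List[str], gold: Set[str]) -> float:
    # Equation (1): s_i is 1 if the hypothesis at rank i is in the gold set,
    # T_i counts the valid hypotheses at or above rank i, and N is read here
    # as the number of valid (gold) translations.
    if not gold:
        return 0.0
    t_i = 0
    total = 0.0
    for i, hyp in enumerate(hypotheses, start=1):
        if hyp in gold:           # s_i = 1
            t_i += 1
            total += t_i / i      # precision at an optimal stopping point
    return total / len(gold)      # the 1/N normalization

def mean_average_precision(nbest: Dict[str, List[str]],
                           gold: Dict[str, Set[str]]) -> float:
    # MAP: the mean of AP over a representative set of prompts.
    scores = [average_precision(hyps, gold[p]) for p, hyps in nbest.items()]
    return sum(scores) / len(scores)
```

For example, average_precision(['a', 'x', 'b'], {'a', 'b'}) is (1/1 + 2/3) / 2, i.e., about 0.83: the invalid hypothesis at rank 2 costs the list part of its score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Head-Weighted Precision Measure", "sec_num": "3.1" }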
, { "text": "MAP's reliance on binary validity rather than the preference order among valid translations simplified the generation of a gold standard, but the implicit assumption that the reference set of valid translations is complete is a potential concern. Due to the richness of human language, most sentences would admit an immense number of valid translations (Dreyer and Marcu, 2012). Even the STAPLE dataset used in this paper, which contains hundreds of valid reference translations for many sentences, is surely still not complete. This incompleteness results in systems being penalized for false negatives, receiving lower MAP scores than they should. However, when our goal is to compare systems, we are most interested in relative, not absolute, scores. So the question to be answered is whether missing data in the ground truth adversely affects comparisons between systems. Zobel (1998) introduced a clever way to characterize such an effect. The key idea is to ablate the ground truth, and to examine the effect of that ablation on system comparisons. If removing, say, half the ground truth resulted in few reversals in the preference order between systems, then one might reasonably assume that adding even more ground truth would have similarly small effects.", "cite_spans": [ { "start": 353, "end": 377, "text": "(Dreyer and Marcu, 2012)", "ref_id": "BIBREF3" }, { "start": 869, "end": 881, "text": "Zobel (1998)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Dealing With Incomplete Gold Data", "sec_num": "3.2" }, { "text": "The art in this approach is to design the ablation so that it removes items that most resemble those that are likely to be missing. Zobel, using this technique to study the stability of MAP in information retrieval test collections, ablated relevance judgments that would not have been available had less effort been devoted to generating such judgments; in information retrieval these are the documents that no participating system retrieved at high rank. For the STAPLE dataset, a natural choice is to ablate the least frequent translations, since it seems reasonable to presume that if Duolingo was not aware of the validity of some translation, that translation is likely to be rather uncommon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dealing With Incomplete Gold Data", "sec_num": "3.2" }, { "text": "Such an ablation study requires a suite of systems and a measure that characterizes the swaps in MAP-based system rankings that occur. We therefore use the aforementioned 38 Japanese MT models and 44 Portuguese MT models. From each model we generate a 1000-best list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dealing With Incomplete Gold Data", "sec_num": "3.2" }
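, { "text": "One way to realize this ablation, sketched below under our own assumptions (the function names are hypothetical, the paper's exact ablation schedule may differ, and the mean_average_precision helper from the sketch above is reused), is to keep only a fraction of the most frequent gold translations per prompt, recompute MAP for every system, and check how strongly the resulting system ranking agrees with the ranking from the full gold data.

```python
from typing import Dict, List, Set
from scipy.stats import spearmanr  # rank correlation between system orderings

def ablate_gold(weighted_gold: Dict[str, Dict[str, float]],
                keep_fraction: float) -> Dict[str, Set[str]]:
    # Keep only the most frequent gold translations for each prompt.
    ablated = {}
    for prompt, weights in weighted_gold.items():
        ranked = sorted(weights, key=weights.get, reverse=True)
        k = max(1, int(len(ranked) * keep_fraction))
        ablated[prompt] = set(ranked[:k])
    return ablated

def ranking_stability(system_nbests: Dict[str, Dict[str, List[str]]],
                      weighted_gold: Dict[str, Dict[str, float]],
                      keep_fraction: float) -> float:
    # Spearman's rho between per-system MAP under the full gold data and
    # under the ablated gold data; a high value means few system swaps.
    full_gold = {p: set(w) for p, w in weighted_gold.items()}
    ablated_gold = ablate_gold(weighted_gold, keep_fraction)
    systems = sorted(system_nbests)
    full_map = [mean_average_precision(system_nbests[s], full_gold) for s in systems]
    ablated_map = [mean_average_precision(system_nbests[s], ablated_gold) for s in systems]
    rho, _ = spearmanr(full_map, ablated_map)
    return rho
```

Sweeping keep_fraction from small to large values produces the kind of curve discussed next.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dealing With Incomplete Gold Data", "sec_num": "3.2" }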
, { "text": "We can use Spearman's \u03c1 to count the number of times the relative order of two systems is swapped. One limitation of \u03c1, however, is that we might care more about swaps near the top of the list of system rankings than lower down (i.e., a head-weighted measure). Another limitation is that we might care more about swaps between systems with very different MAP values than we do about swaps between systems with closer values (i.e., a gap-sensitive measure). In addition to \u03c1, we therefore also report Pearson Rank (Gao et al., 2016), a more recently introduced correlation measure that is head-weighted and gap-sensitive. Figure 1 plots these correlations as progressively more common translations are ablated. The left side of each plot shows how system rankings from an ablated data condition that includes only the most common translations correlate with rankings from the full data. Moving right, the correlations are computed for conditions containing more and more data, with the penultimate point representing a data condition from which only the rarest translations have been removed. The flatness of the curve on the right side of the plot suggests (based on extrapolation to the right) that the presence of additional relatively uncommon translations would have been unlikely to result in many system swaps.", "cite_spans": [ { "start": 513, "end": 531, "text": "(Gao et al., 2016)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 622, "end": 630, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Dealing With Incomplete Gold Data", "sec_num": "3.2" }, { "text": "Pearson Rank, which accounts for head-weightedness, additionally shows that few of the system swaps are between relatively good systems. This is likely because good systems will output high-frequency translations near the top of the n-best list, so as low-frequency translations are ablated, these good systems are less likely to be affected. From this we can conclude that, at least for Japanese and Portuguese, the binarized STAPLE task ground truth is sufficiently complete to support computation of MAP scores for individual systems that can reasonably be compared, allowing us to answer Question 1 using this measure. As introduced earlier, the STAPLE task uses a weighted macro F1 measure for evaluation. The weighted macro F1 measure is the same as the standard macro-averaged F1, but recall is replaced with weighted recall. Weighted recall is calculated by using the frequency weight sums provided in the gold translation data for the weighted true positive and weighted false negative terms, instead of the standard raw counts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dealing With Incomplete Gold Data", "sec_num": "3.2" }
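, { "text": "As an illustration only, here is a minimal Python sketch of a weighted macro F1 of this kind, following our reading of the description above (precision over raw counts, recall over gold frequency weights, F1 macro-averaged over prompts); the official STAPLE scorer may differ in its details.

```python
from typing import Dict, List

def weighted_macro_f1(outputs: Dict[str, List[str]],
                      weighted_gold: Dict[str, Dict[str, float]]) -> float:
    # outputs: prompt -> the (truncated) list of translations a system returns
    # weighted_gold: prompt -> {gold translation: frequency weight}
    f1_scores = []
    for prompt, hyps in outputs.items():
        gold = weighted_gold[prompt]
        produced = set(hyps)              # duplicates in the output are collapsed
        true_pos = [t for t in produced if t in gold]
        precision = len(true_pos) / len(produced) if produced else 0.0
        weighted_tp = sum(gold[t] for t in true_pos)
        weighted_fn = sum(w for t, w in gold.items() if t not in produced)
        denom = weighted_tp + weighted_fn
        weighted_recall = weighted_tp / denom if denom > 0 else 0.0
        if precision + weighted_recall > 0:
            f1_scores.append(2 * precision * weighted_recall / (precision + weighted_recall))
        else:
            f1_scores.append(0.0)
    return sum(f1_scores) / len(f1_scores)
```

Because precision here is count-based, every invalid translation kept in a truncated list lowers the score, which is exactly the sensitivity to truncation discussed next.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dealing With Incomplete Gold Data", "sec_num": "3.2" }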
, { "text": "One difference between this measure and MAP is that it does not evaluate a model's ability to generate n-best lists in a pure sense (agnostic to where the list may be cut off). This is because the STAPLE systems must not only generate n-best lists, but also decide where to truncate each list, in order to maximize weighted macro F1. This means that an n-best list that places hundreds of valid translations at the top of the list, but is truncated at rank 1000, would score poorly, due to precision issues. However, MAP is robust to this, since values at the very bottom of the list have only a small effect. The definition of MAP allows it to function properly for lists of any size, potentially even infinite lists, which weighted macro F1 does not. This is not to argue that MAP is a better measure than F1, but rather that they are different measures and may be better suited to separate goals.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to the STAPLE Metric", "sec_num": "3.3" }, { "text": "Despite this, we produce a correlation plot in Figure 2 to compare system MAP scores with system weighted macro F1 scores. For these weighted macro F1 scores, we used a thresholding technique that truncates n-best lists at a manually tuned fraction of the top hypothesis' model probability for each prompt. We find that the weighted macro F1 values correlate very strongly with MAP (Spearman's \u03c1 = 0.956). We note that the correlation is particularly strong at the top end of systems compared to the bottom end. This is ideal, since understanding and trusting evaluation measures is particularly vital when choosing among the best systems (and less important when choosing among the worst). From this we conclude that MAP could have been a useful formative evaluation measure when tuning n-best MT systems for the STAPLE shared task, and that these two measures may actually be answering a similar question despite the differences in their properties.", "cite_spans": [], "ref_spans": [ { "start": 47, "end": 55, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Comparison to the STAPLE Metric", "sec_num": "3.3" }, { "text": "Our second question for n-best evaluation is how well models can rank translations in preference order. Since we have model scores from the translation model and relative prevalence from the STAPLE dataset, one easily computed measure of quality for the model scores is their degree of correlation with the STAPLE score for each translation (which indicates which translations are more commonly used, i.e., their relative prevalence).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preference Correlation", "sec_num": "4" }, { "text": "One interesting aspect of this type of measure is that it relies on having frequency (or some other preference score) annotations for each reference translation. Certain tasks are well positioned to take advantage of such data. For example, a task in which models need to generate diverse translations may want to sample from valid outputs in a way that more closely reflects natural human variance. That is, it should sample a frequent translation more often than an infrequent one. Correlating a model's scores for translations with gold frequency scores may then be useful for such a case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preference Correlation", "sec_num": "4" }, { "text": "We consider how these Preference Correlation scores could be used for system rankings. We calculate both Spearman's \u03c1 and Pearson's r on all of our models in Japanese and Portuguese. 3 We then construct a scatterplot of the Preference Correlation and MAP scores for each system, as shown in Figure 3. From the near-zero slope of a linear fit and the near-zero r 2 values, it is clear that both Spearman's \u03c1 and Pearson's r are measuring something very different from MAP. That is not to say that they are not good measures; rather, it says that they are measuring something different. MAP measures how reliably systems can place valid translations early in a ranked list; correlation to the gold standard preference order measures how reliably systems can place preferred translations ahead of less preferred translations.", "cite_spans": [ { "start": 183, "end": 184, "text": "3", "ref_id": null } ], "ref_spans": [ { "start": 291, "end": 299, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Preference Correlation", "sec_num": "4" }
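, { "text": "Under our own assumptions (model scores taken to be strictly positive probabilities, and only translations present in both the n-best list and the gold data considered, as in footnote 3), a per-prompt Preference Correlation might be sketched as follows; the function name is hypothetical and this is not the authors' implementation.

```python
import math
from typing import Dict, Tuple
from scipy.stats import pearsonr, spearmanr

def preference_correlation(model_scores: Dict[str, float],
                           gold_weights: Dict[str, float]) -> Tuple[float, float]:
    # model_scores: translation -> model probability for one prompt
    # gold_weights: translation -> gold frequency weight for the same prompt
    shared = [t for t in model_scores if t in gold_weights]  # the intersection
    if len(shared) < 2:
        return float('nan'), float('nan')   # too little overlap to correlate
    model = [model_scores[t] for t in shared]
    gold = [gold_weights[t] for t in shared]
    rho, _ = spearmanr(model, gold)
    # Pearson's r in log space, assuming strictly positive values.
    r, _ = pearsonr([math.log(x) for x in model], [math.log(w) for w in gold])
    return rho, r
```

A system-level score can then be obtained, for example, by averaging these per-prompt correlations over prompts before comparing against MAP as in Figure 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preference Correlation", "sec_num": "4" }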
, { "text": "Both of these measures have additional limitations, which discourage their use. First is the requirement for more nuanced gold data. While MAP only requires access to several valid translations, these measures require either a preference order or preference scores for those translations, which may be difficult to obtain. A second limitation is the handling of missing data: the correlation can only be computed over translations that appear in both the n-best list and the gold data, i.e., in the intersection of the two sets. If the intersection between these two sets is very small, it limits the usefulness of the measures. Though we have many frequency scores and relative rankings in the gold translations, if the n-best lists we use to compare do not contain many of those translations, the measures could be less reliable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preference Correlation", "sec_num": "4" }, { "text": "Finally, our third question is how close the translations are to the reference translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unweighted Partial Match", "sec_num": "5" }, { "text": "A task that may benefit from such a measure is cross-language information retrieval (CLIR). In such a task, the machine translation system serves as an upstream component of the pipeline. Translations are less likely to need to be fully valid to be useful than in some other MT tasks: CLIR could benefit from combining the terms from several translation outputs regardless of whether each entire sentence is perfectly valid. A measure that assigns partial credit to translations by matching n-grams, and that weights all translations equally, may therefore be appropriate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unweighted Partial Match", "sec_num": "5" }, { "text": "For this, we turn to BLEU (Papineni et al., 2002), which computes n-gram overlap between a system's translations and the available references. This raises the question of how many references we should use when we have very many available, and which of the system translations we should be using in this computation.", "cite_spans": [ { "start": 26, "end": 49, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Unweighted Partial Match", "sec_num": "5" }, { "text": "The STAPLE dataset provides an opportunity to explore this question. In this section, we compute BLEU with different numbers of references and to different depths in the n-best list. We find that at deep depths with many references BLEU ranks systems similarly to MAP, but that with fewer references its behavior is quite different.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unweighted Partial Match", "sec_num": "5" }, { "text": "In order to set up various configurations for our n-best BLEU measures, we perform a grid search over {1,5,10,100,1000}-best {1,5,10,100,all 4 }-reference BLEU. In what we call x-best y-reference BLEU, x refers to how many top hypotheses from the system output are used, and y refers to how many references are used. When working with multiple hypotheses in an n-best list, we simply treat them as independent translations in a larger pseudo-corpus, pairing each with the relevant reference(s) for evaluation with BLEU.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unweighted Partial Match", "sec_num": "5" }
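, { "text": "A minimal sketch of this pseudo-corpus construction, using sacrebleu's Python API under our own simplifications (the function name is hypothetical, and prompts with fewer than y references are padded by repeating the last one, which need not match the authors' choice):

```python
from typing import Dict, List
from sacrebleu.metrics import BLEU  # assumes sacrebleu >= 2.0 is installed

def xbest_yref_bleu(nbest: Dict[str, List[str]],
                    gold: Dict[str, List[str]],
                    x: int, y: int) -> float:
    # Treat the top-x hypotheses of every prompt as independent segments of a
    # pseudo-corpus, each scored against up to y references for that prompt.
    hypotheses = []
    ref_streams = [[] for _ in range(y)]
    for prompt, hyps in nbest.items():
        refs = gold[prompt][:y]
        if not refs:
            continue
        refs = refs + [refs[-1]] * (y - len(refs))  # pad; duplicate references add no matches
        for hyp in hyps[:x]:
            hypotheses.append(hyp)
            for j in range(y):
                ref_streams[j].append(refs[j])
    return BLEU().corpus_score(hypotheses, ref_streams).score
```

Running this over the grid of x and y values above yields one BLEU score per system and configuration, which is what the correlations below are computed from.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unweighted Partial Match", "sec_num": "5" }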
, { "text": "After obtaining system scores under each of the BLEU configurations, we calculate the Spearman's \u03c1 and Pearson Rank correlation coefficients between system rankings from BLEU and those from MAP. We find similar patterns for both languages and both correlation metrics, so we show Spearman's correlation for Japanese in Figure 4.", "cite_spans": [], "ref_spans": [ { "start": 327, "end": 335, "text": "Figure 4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Unweighted Partial Match", "sec_num": "5" }, { "text": "An important observation is that 1-best, 1-reference BLEU does not correlate well with MAP. From this we conclude that when placing many valid translations near the top of the n-best list is important, as is the case in some applications, optimizing for 1-best 1-reference BLEU may be suboptimal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unweighted Partial Match", "sec_num": "5" }, { "text": "This situation is not improved by adding hypotheses to the pseudo-corpus, so long as only a single reference is used. However, increasing the number of references does bring the correlation to moderate strength even when still only evaluating at 1-best. Once the evaluation has access to several references, evaluating deeper in the n-best list further improves the correlation with MAP, and correlations at moderately deep depths are quite substantial (e.g., \u03c1 = 0.86 for 100-best 100-reference).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unweighted Partial Match", "sec_num": "5" }, { "text": "In Figure 5, we zoom in on two of these BLEU configurations. We choose 1-best 1-reference, which represents a standard BLEU evaluation framework, and also 100-best 100-reference, since it showed nearly the highest correlation with MAP (increasing the depth to 1000 and the references to 1000 yields only slightly stronger correlation). As the slope and r 2 of the linear fit indicate, 1-best 1-reference BLEU has little value for predicting MAP, whereas 100-best 100-reference BLEU has substantial predictive power. Moreover, this relationship is strongest for higher values of MAP; this is important, because when seeking to choose the best system for some task, a system builder would choose from among the best-performing ones.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 5", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Unweighted Partial Match", "sec_num": "5" }, { "text": "To help explain this difference, we also performed a qualitative analysis. This revealed that systems with high 1-best 1-reference BLEU scores but low MAP produced valid translations at the top of the n-best list, but very poor translations (e.g., Latin characters in Japanese) deeper in the list. Systems with both higher MAP and 100-best 100-reference BLEU scores were better at producing reasonable sentences throughout the entire list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unweighted Partial Match", "sec_num": "5" }, { "text": "This makes sense, as a system with a great translation at rank 1 but terrible translations between ranks 2 and 100, for example, will have a great 1-best 1-reference BLEU score but will be heavily punished by MAP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unweighted Partial Match", "sec_num": "5" }, { "text": "Access to frequency scores in STAPLE's gold data has provided a unique chance to use correlation between those scores and model scores as a way to evaluate systems. 
However, we see that these measures behave quite differently from MAP, and thus we would recommend use of Preference Correlation (\u00a74) only in cases in which fine-grained distinctions between preference scores are important for the intended application.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "We also looked into using BLEU for n-best evaluation. MAP and BLEU seem like quite different ways of evaluating in terms of how they approach the problem. MAP relies on binary scores, gives no partial credit, and weights translations at the top of the list more heavily. BLEU, on the other hand, looks at closeness, allowing for partial credit, and treats all sentences equally, no matter where they appear in the n-best list. It could be expected, then, that these measures would differ, as we see when comparing MAP to 1-best 1-reference BLEU. However, as we increase the depth and the number of references for BLEU, the correlation of the resulting system rankings increases substantially, and ultimately they yield quite similar system rankings. From this we can conclude that, at least for the systems we have experimented with, and for the language learning task that the STAPLE dataset models, systems that find many good translations also tend to rank those translations well. Thus, with enough references the choice between MAP and BLEU might be made based on efficiency. We suspect that the ability of many-best many-reference BLEU to work with partial matches might give it advantages over MAP when the number of available references is more limited than in STAPLE, but we leave ablation studies to test that hypothesis to future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Of course, the requirement for large numbers of references, which are generally expensive to obtain, is a limitation. However, this is a separate consideration; in the Duolingo dataset that was the center of our study, the references were produced organically within that task; in other settings, if evaluation of n-best lists were to be important enough, the requisite investments to create the required resources could in some cases be made. In such a setting, techniques such as crowdsourcing or monolingual paraphrase generation might be leveraged to reduce costs. Moreover, our ablation study indicates that the ground truth need not be completely comprehensive to be useful.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "A second limitation is that our experiments were conducted on the relatively simple sentences used for the Duolingo STAPLE shared task. As with any study, it remains to be seen how well it generalizes to other settings, including other datasets. But this does not detract from our findings on the STAPLE dataset, which was after all motivated by a real language learning task that benefits a large number of people.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Perhaps our most salient general observation is that having access to more references and evaluating deeper in the list makes for better evaluation of n-best lists. 
Of course, this benefit must be balanced against the cost of generating the requisite number of references.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "We have shown how different metrics can be used to characterize n-best list quality. In particular, we have introduced MAP as a measure of n-best list quality for machine translation systems. MAP rewards systems that place good translations near the top of the list. BLEU, computed over a pseudo-corpus built from n-best lists, and against large reference sets, ranks systems similarly to MAP. In both cases, the key distinguishing feature from typical MT system evaluation is the use of large reference sets, which yields insights unavailable with shallower evaluations using only a single reference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "MAP is but one measure among many that have been used to characterize the quality of ranked lists in other settings. As future work, we would be interested in exploring the use of measures such as inferred average precision (infAP) that are designed to be particularly robust to missing data in the gold standard (Aslam and Yilmaz, 2007), and measures such as normalized Discounted Cumulative Gain (nDCG) (J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002) that represent multiple degrees of utility (thus requiring more nuanced ground truth, such as what we have in STAPLE).", "cite_spans": [ { "start": 313, "end": 337, "text": "(Aslam and Yilmaz, 2007)", "ref_id": "BIBREF2" }, { "start": 406, "end": 437, "text": "(J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Spearman's \u03c1 considers only relative rankings, while Pearson's r additionally considers the difference in scores. Neither is head-weighted. We compute Pearson's r in log space, excluding system translations not in the STAPLE references. Another option would have been to use the model to force-decode the STAPLE references. We observe similar trends when doing so.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "all is up to 720 in Portuguese, and 1536 in Japanese", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research has been supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract FA8650-17-C-9117. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies of ODNI, IARPA, or the U.S. 
Government.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The AMARA corpus: Building parallel language resources for the educational domain", "authors": [ { "first": "Ahmed", "middle": [], "last": "Abdelali", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzman", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", "volume": "", "issue": "", "pages": "1856--1862", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ahmed Abdelali, Francisco Guzman, Hassan Sajjad, and Stephan Vogel. 2014. The AMARA corpus: Building parallel language resources for the educa- tional domain. In Proceedings of the Ninth Interna- tional Conference on Language Resources and Eval- uation (LREC'14), pages 1856-1862, Reykjavik, Iceland. European Language Resources Association (ELRA).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "JW300: A widecoverage parallel corpus for low-resource languages", "authors": [ { "first": "Zeljko", "middle": [], "last": "Agi\u0107", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3204--3210", "other_ids": { "DOI": [ "10.18653/v1/P19-1310" ] }, "num": null, "urls": [], "raw_text": "Zeljko Agi\u0107 and Ivan Vuli\u0107. 2019. JW300: A wide- coverage parallel corpus for low-resource languages. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3204-3210, Florence, Italy. Association for Compu- tational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Inferring document relevance from incomplete information", "authors": [ { "first": "A", "middle": [], "last": "Javed", "suffix": "" }, { "first": "Emine", "middle": [], "last": "Aslam", "suffix": "" }, { "first": "", "middle": [], "last": "Yilmaz", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Sixteenth ACM Conference on Information and Knowledge Management, CIKM 2007", "volume": "", "issue": "", "pages": "633--642", "other_ids": { "DOI": [ "10.1145/1321440.1321529" ] }, "num": null, "urls": [], "raw_text": "Javed A. Aslam and Emine Yilmaz. 2007. Infer- ring document relevance from incomplete informa- tion. In Proceedings of the Sixteenth ACM Con- ference on Information and Knowledge Manage- ment, CIKM 2007, Lisbon, Portugal, November 6- 10, 2007, pages 633-642. ACM.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "HyTER: Meaning-equivalent semantics for translation evaluation", "authors": [ { "first": "Markus", "middle": [], "last": "Dreyer", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "162--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Markus Dreyer and Daniel Marcu. 2012. HyTER: Meaning-equivalent semantics for translation evalu- ation. 
In Proceedings of the 2012 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 162-171, Montr\u00e9al, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Pearson rank: A head-weighted gap-sensitive scorebased correlation coefficient", "authors": [ { "first": "Ning", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Mossaab", "middle": [], "last": "Bagdouri", "suffix": "" }, { "first": "Douglas", "middle": [], "last": "Oard", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "941--944", "other_ids": { "DOI": [ "10.1145/2911451.2914728" ] }, "num": null, "urls": [], "raw_text": "Ning Gao, Mossaab Bagdouri, and Douglas Oard. 2016. Pearson rank: A head-weighted gap-sensitive score- based correlation coefficient. pages 941-944.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Cumulated gain-based evaluation of IR techniques", "authors": [ { "first": "Kalervo", "middle": [], "last": "J\u00e4rvelin", "suffix": "" }, { "first": "Jaana", "middle": [], "last": "Kek\u00e4l\u00e4inen", "suffix": "" } ], "year": 2002, "venue": "ACM Trans. Inf. Syst", "volume": "20", "issue": "4", "pages": "422--446", "other_ids": { "DOI": [ "10.1145/582415.582418" ] }, "num": null, "urls": [], "raw_text": "Kalervo J\u00e4rvelin and Jaana Kek\u00e4l\u00e4inen. 2002. Cumu- lated gain-based evaluation of IR techniques. ACM Trans. Inf. Syst., 20(4):422-446.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The JHU submission to the 2020 Duolingo shared task on simultaneous translation and paraphrase for language education", "authors": [ { "first": "Huda", "middle": [], "last": "Khayrallah", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Bremerman", "suffix": "" }, { "first": "D", "middle": [], "last": "Arya", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Mc-Carthy", "suffix": "" }, { "first": "Winston", "middle": [], "last": "Murray", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Wu", "suffix": "" }, { "first": "", "middle": [], "last": "Post", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fourth Workshop on Neural Generation and Translation", "volume": "", "issue": "", "pages": "188--197", "other_ids": { "DOI": [ "10.18653/v1/2020.ngt-1.22" ] }, "num": null, "urls": [], "raw_text": "Huda Khayrallah, Jacob Bremerman, Arya D. Mc- Carthy, Kenton Murray, Winston Wu, and Matt Post. 2020. The JHU submission to the 2020 Duolingo shared task on simultaneous translation and para- phrase for language education. In Proceedings of the Fourth Workshop on Neural Generation and Translation, pages 188-197, Online. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. 
In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Europarl: A parallel corpus for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2005, "venue": "MT summit", "volume": "5", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit, vol- ume 5, pages 79-86.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "OpenSub-titles2016: Extracting large parallel corpora from movie and TV subtitles", "authors": [ { "first": "Pierre", "middle": [], "last": "Lison", "suffix": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "923--929", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pierre Lison and J\u00f6rg Tiedemann. 2016. OpenSub- titles2016: Extracting large parallel corpora from movie and TV subtitles. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 923-929, Por- toro\u017e, Slovenia. European Language Resources As- sociation (ELRA).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Simultaneous translation and paraphrase for language education", "authors": [ { "first": "Stephen", "middle": [], "last": "Mayhew", "suffix": "" }, { "first": "Klinton", "middle": [], "last": "Bicknell", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brust", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Mcdowell", "suffix": "" }, { "first": "Will", "middle": [], "last": "Monroe", "suffix": "" }, { "first": "Burr", "middle": [], "last": "Settles", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Mayhew, Klinton Bicknell, Chris Brust, Bill McDowell, Will Monroe, and Burr Settles. 2020. Si- multaneous translation and paraphrase for language education. In Proceedings of the ACL Workshop on Neural Generation and Translation (WNGT). ACL.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "fairseq: A fast, extensible toolkit for sequence modeling", "authors": [ { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Alexei", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Ng", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)", "volume": "", "issue": "", "pages": "48--53", "other_ids": { "DOI": [ "10.18653/v1/N19-4009" ] }, "num": null, "urls": [], "raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. 
fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Min- nesota. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Wikimatrix: Mining 135m parallel sentences in 1620 language pairs from Wikipedia. CoRR", "authors": [ { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Shuo", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Hongyu", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzm\u00e1n. 2019. Wikimatrix: Mining 135m parallel sentences in 1620 language pairs from Wikipedia. CoRR, abs/1907.05791.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "How reliable are the results of large-scale information retrieval experiments? 
SI-GIR '98", "authors": [ { "first": "Justin", "middle": [], "last": "Zobel", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "307--314", "other_ids": { "DOI": [ "10.1145/290941.291014" ] }, "num": null, "urls": [], "raw_text": "Justin Zobel. 1998. How reliable are the results of large-scale information retrieval experiments? SI- GIR '98, pages 307-314, New York, NY, USA. As- sociation for Computing Machinery.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Spearman's \u03c1 and Pearson Rank correlation scores for MAP system rankings for Japanese (left) and Portugese (right) under different data ablation settings. MAP results obtained for STAPLE are reliable for system ranking despite incomplete data, with results slightly more reliable for Japanese than Portuguese.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF2": { "text": "Correlation of system rankings based onWeighted Macro F 1 and MAP in Japanese. r 2 = 0.836, slope = 1.058.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF3": { "text": "Correlation of system rankings based on Preference Correlation scores (Spearman's \u03c1 and Pearson's r) vs. MAP score in Japanese (left) and Portuguese (right). Rankings do not correlate. Japanese: r 2 for Spearman's = 0.009, slope = -0.145; r 2 for Pearson's = 0.046, slope = -0.356. Portuguese: r 2 for Spearman's = -0.001, slope = -0.010; r 2 for Pearson's = -0.002, slope = -0.083. Best viewed in color.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF4": { "text": "System ranking correlation scores (Spearman's \u03c1) between MAP and different configurations of BLEU (x-references and y-best outputs per source) in", "uris": null, "num": null, "type_str": "figure" }, "FIGREF5": { "text": "System scores in Japanese by MAP and BLEU (left: 1-best 1-ref, right: 100-best 100-ref). 1-best 1reference BLEU does not correlate well with MAP (\u03c1 = \u22120.14), but 100-best 100-reference BLEU correlates highly (\u03c1 = 0.86). According to MAP, choosing the system with the highest BLEU would result in poor n-best lists in 1-best 1-ref (left) and strong n-best lists in 100-best 100-ref(right). 1-best: r 2 = 0.022, slope = -27.817 100-best: r 2 = 0.487, slope = 98.448.", "uris": null, "num": null, "type_str": "figure" }, "TABREF0": { "text": "The top five valid Japanese translations for the STAPLE prompt i will feel well.", "content": "