{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:10:42.998030Z" }, "title": "Using PRMSE to evaluate automated scoring systems in the presence of label noise", "authors": [ { "first": "Anastassia", "middle": [], "last": "Loukina", "suffix": "", "affiliation": {}, "email": "aloukina@ets.org" }, { "first": "Nitin", "middle": [], "last": "Madnani", "suffix": "", "affiliation": {}, "email": "nmadnani@ets.org" }, { "first": "Aoife", "middle": [], "last": "Cahill", "suffix": "", "affiliation": {}, "email": "acahill@ets.org" }, { "first": "Lili", "middle": [], "last": "Yao", "suffix": "", "affiliation": {}, "email": "lili.yao@gmail.com" }, { "first": "Matthew", "middle": [ "S" ], "last": "Johnson", "suffix": "", "affiliation": {}, "email": "msjohnson@ets.org" }, { "first": "Brian", "middle": [], "last": "Riordan", "suffix": "", "affiliation": {}, "email": "briordan@ets.org" }, { "first": "Daniel", "middle": [ "F" ], "last": "Mccaffrey", "suffix": "", "affiliation": {}, "email": "dmccaffrey@ets.org" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The effect of noisy labels on the performance of NLP systems has been studied extensively for system training. In this paper, we focus on the effect that noisy labels have on system evaluation. Using automated scoring as an example, we demonstrate that the quality of human ratings used for system evaluation have a substantial impact on traditional performance metrics, making it impossible to compare system evaluations on labels with different quality. We propose that a new metric, proportional reduction in mean squared error (PRMSE), developed within the educational measurement community, can help address this issue, and provide practical guidelines on using PRMSE.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "The effect of noisy labels on the performance of NLP systems has been studied extensively for system training. In this paper, we focus on the effect that noisy labels have on system evaluation. Using automated scoring as an example, we demonstrate that the quality of human ratings used for system evaluation have a substantial impact on traditional performance metrics, making it impossible to compare system evaluations on labels with different quality. We propose that a new metric, proportional reduction in mean squared error (PRMSE), developed within the educational measurement community, can help address this issue, and provide practical guidelines on using PRMSE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "NLP systems are usually trained and evaluated using human labels. For automated scoring systems, these would be scores assigned by human raters. However, human raters do not always agree on the scores they assign (Eckes, 2008; Ling et al., 2014; Davis, 2016; Carey et al., 2011) and the inter-rater agreement can vary substantially across prompts as well as across applications. 
For example, in the ASAP-AES data (Shermis, 2014), the agreement varies from Pearson's r=0.63 to r=0.85 across \"essay sets\" (writing prompts) .", "cite_spans": [ { "start": 213, "end": 226, "text": "(Eckes, 2008;", "ref_id": "BIBREF4" }, { "start": 227, "end": 245, "text": "Ling et al., 2014;", "ref_id": "BIBREF13" }, { "start": 246, "end": 258, "text": "Davis, 2016;", "ref_id": "BIBREF2" }, { "start": 259, "end": 278, "text": "Carey et al., 2011)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In many automated scoring studies, the data for training and evaluating the system are randomly sampled from the same dataset, which means that the quality of human labels may affect both system training and evaluation. Notably, the effect of label quality on training and evaluation may not be the same. Previous studies (Reidsma and Carletta, 2008; Loukina et al., 2018) suggest that when annotation noise is relatively random, a system trained on noisier annotations may perform as well as a system trained on clean annotations. On the other hand, noise in the human labels used for evaluation can have a substantial effect on the estimates of system performance even if the noise is random.", "cite_spans": [ { "start": 322, "end": 350, "text": "(Reidsma and Carletta, 2008;", "ref_id": "BIBREF21" }, { "start": 351, "end": 372, "text": "Loukina et al., 2018)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, our focus is the effect of noise in human labels on system evaluation. How do we compare two systems evaluated on datasets with different quality of human labels? While there exist several public data sets that can be used to benchmark and compare automated scoring systems, in many practical and research applications the scoring systems are customized for a particular task and, thus, cannot be evaluated appropriately on a public dataset. As a result, the research community has to rely on estimates of system performance to judge the effectiveness of the proposed approach. In an industry context, the decision to deploy a system is often contingent on system performance meeting certain thresholds which may even be codified as company-or industry-wide standards.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A typical solution to the problem of different human-human agreement across evaluation datasets is to use human-human agreement itself as a baseline when evaluating a system (Shermis, 2014) . In this case, the system can be evaluated either via a binary distinction (did its performance reach human-human agreement?) or by looking at the differences in agreement metrics as measured between two humans and between a single human and the machine, known as \"degradation\" (Williamson et al., 2012 ). Yet how do we interpret these numbers? 
Is a system that exceeds a humanhuman agreement of r=0.4 on one dataset better than another that performs just below a humanhuman agreement of r=0.9 on a different dataset?", "cite_spans": [ { "start": 174, "end": 189, "text": "(Shermis, 2014)", "ref_id": "BIBREF24" }, { "start": 469, "end": 493, "text": "(Williamson et al., 2012", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we use simulated data to demonstrate that the rate of human-human agreement has a substantial effect on estimates of system performance, making it difficult to compare systems that are evaluated on different datasets. We also show that this problem cannot be resolved by simply looking at the difference between human-human and machine-human agreement. We then show that one possible solution is to use proportional reduction in mean squared error (PRMSE) (Haberman, 2008) , a metric developed in the educational measurement community, which relies on classical test theory and can adjust for human error when computing estimates of system performance.", "cite_spans": [ { "start": 471, "end": 487, "text": "(Haberman, 2008)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The effect of noisy labels on machine learning algorithms has been extensively studied in terms of their effect on system training in both general machine learning literature (see, for example, Fr\u00e9nay and Verleysen (2014) for a comprehensive review), NLP (Reidsma and Carletta, 2008; Beigman Klebanov and Beigman, 2009; Schwartz et al., 2011; Plank et al., 2014; Mart\u00ednez Alonso et al., 2015; Jamison and Gurevych, 2015) and automated scoring (Horbach et al., 2014; Zesch et al., 2015) .", "cite_spans": [ { "start": 255, "end": 283, "text": "(Reidsma and Carletta, 2008;", "ref_id": "BIBREF21" }, { "start": 284, "end": 319, "text": "Beigman Klebanov and Beigman, 2009;", "ref_id": "BIBREF0" }, { "start": 320, "end": 342, "text": "Schwartz et al., 2011;", "ref_id": "BIBREF23" }, { "start": 343, "end": 362, "text": "Plank et al., 2014;", "ref_id": "BIBREF19" }, { "start": 363, "end": 392, "text": "Mart\u00ednez Alonso et al., 2015;", "ref_id": "BIBREF18" }, { "start": 393, "end": 420, "text": "Jamison and Gurevych, 2015)", "ref_id": "BIBREF11" }, { "start": 443, "end": 465, "text": "(Horbach et al., 2014;", "ref_id": "BIBREF10" }, { "start": 466, "end": 485, "text": "Zesch et al., 2015)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "One key insight that emerged from such work is that the nature of the noise is extremely important for the system performance. Machine learning algorithms are greatly affected by systematic noise but are less sensitive to random noise (Reidsma and Carletta, 2008; Reidsma and op den Akker, 2008) . A typical case of random noise is when the labeling is done by multiple annotators which minimizes the individual bias introduced by any single annotator. For example, in a study on crowdsourcing NLP tasks, Snow et al. 
(2008) showed that a system trained on a set of non-expert annotations obtained from multiple annotators outperformed a system trained with labels from one expert, on average.", "cite_spans": [ { "start": 235, "end": 263, "text": "(Reidsma and Carletta, 2008;", "ref_id": "BIBREF21" }, { "start": 264, "end": 295, "text": "Reidsma and op den Akker, 2008)", "ref_id": "BIBREF20" }, { "start": 505, "end": 523, "text": "Snow et al. (2008)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "The studies discussed so far vary the model training set, or training regime, or both while keeping the evaluation set constant. Fewer studies have considered how inter-annotator agreement may affect system evaluation when the training set is held constant. These studies have shown that in the case of evaluation, the label quality is likely to have a substantial impact on the estimates of system performance even if the annotation noise is random. Reidsma and Carletta (2008) used simulated data to explore the effect of noisy labels on classifier performance. They showed that the performance of the model, measured using Cohen's Kappa, when evaluated against the 'real' (or gold-standard) labels was higher than the performance when evaluated against the 'observed' labels with added random noise. This is because for some instances, the classifier's predictions were correct, but the 'observed' labels contained errors. Loukina et al. (2018) used two different datasets to train and evaluate an automated system for scoring spoken language proficiency. They showed that training an automated system on perfect labels did not give any advantage over training the system on noisier labels, confirming previous findings that automated scoring systems are likely to be robust to random noise in the data. At the same time, the choice of evaluation set led to very different estimates of system performance regardless of what data was used to train the system.", "cite_spans": [ { "start": 451, "end": 478, "text": "Reidsma and Carletta (2008)", "ref_id": "BIBREF21" }, { "start": 926, "end": 947, "text": "Loukina et al. (2018)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Metrics such as Pearson's correlation or quadratically-weighted kappa, commonly used to evaluate automated scoring systems (Williamson et al., 2012; Yannakoudakis and Cummins, 2015; Haberman, 2019) , compare automated scores to observed human scores without correcting for any errors in human scores. In order to account for differences in human-human agreement, these are then compared to the same metrics computed for the human raters using measures such as \"degradation\": the difference between human-human and humanmachine agreement (Williamson et al., 2012) .", "cite_spans": [ { "start": 123, "end": 148, "text": "(Williamson et al., 2012;", "ref_id": "BIBREF26" }, { "start": 149, "end": 181, "text": "Yannakoudakis and Cummins, 2015;", "ref_id": "BIBREF27" }, { "start": 182, "end": 197, "text": "Haberman, 2019)", "ref_id": "BIBREF7" }, { "start": 537, "end": 562, "text": "(Williamson et al., 2012)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In this paper, we build on findings from the educational measurement community to explore an alternative approach where estimates of system performance are corrected for measurement error in the human labels. 
Classical test theory (Lord and Novick, 1968) assumes that the human holistic score is composed of the test's true score and some measurement error. A \"true\" score is defined as the expected score over an infinite number of independent administrations of the test. While such true scores are latent variables, unobservable in real life, their underlying distribution and measurement error can be estimated if a subset of responses is scored by two independently and randomly chosen raters. Haberman (2008, 2019) proposed a new metric called proportional reduction in mean squared error (PRMSE) which evaluates how well the machine scores predict the true score, after adjusting for the measurement error. The main contribution of this paper is a further demonstration of the utility of this metric in the context of automated scoring. Outside of educational measurement, a similar approach has been explored in pattern recognition by Lam and Stork (2003) , for example, who used estimated error rates in human labels to adjust performance estimates.", "cite_spans": [ { "start": 231, "end": 254, "text": "(Lord and Novick, 1968)", "ref_id": "BIBREF14" }, { "start": 1144, "end": 1164, "text": "Lam and Stork (2003)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "We further explore how agreement between human raters affects the evaluation of automated scoring systems. We focus on a specific case where the human rating process is organized in such a way that annotator bias is minimized. In other words, the label noise can be considered random. We also assume that the scores produced by an automated scoring system are on a continuous scale. This is typical for many automated scoring contexts including essay scoring (Shermis, 2014), speech scoring (Zechner et al., 2009) and, to some extent, content scoring (Madnani et al., 2017a; Riordan et al., 2019) but, of course, not for all possible contexts: for example, some of the SemEval 2013 shared tasks on short answer scoring (Dzikovska et al., 2016) use a different scoring approach.", "cite_spans": [ { "start": 491, "end": 513, "text": "(Zechner et al., 2009)", "ref_id": "BIBREF30" }, { "start": 551, "end": 574, "text": "(Madnani et al., 2017a;", "ref_id": "BIBREF16" }, { "start": 575, "end": 596, "text": "Riordan et al., 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In this paper, we use simulated gold-standard (or \"true\") scores, human scores and system scores for a set of 10,000 responses. Since \"true\" scores are not available for real data, using simulated data allows us to compare multiple raters and systems to the known ground-truth. 1 We focus on evaluation only and make no assumptions about the quality of the labels in the training set or any other aspects of system training. The only thing we know is that different human raters and different systems in our data set assign different scores and have different performances when evaluated against true scores.", "cite_spans": [ { "start": 278, "end": 279, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Simulated data", "sec_num": "3" }, { "text": "As our gold-standard, we use a set of continuous scores simulated for each response and consider these to be the correct \"true\" score for the response. 
Note that the continuous nature of goldstandard scores allows us to capture the intuition that some responses fall between the ordinal score points usually assigned by human raters. To create such gold-standard scores, we randomly sampled 10,000 values from a normal distribution using the mean and standard deviation of human scores observed in a large-scale assessment (mean=3.844, std=0.74). Since the scores in the large-scale assessment we use as reference varied from 1 to 6, the gold-standard scores below 1 and above 6 were also truncated to 1 and 6 respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simulated data", "sec_num": "3" }, { "text": "Next, we simulated scores from 200 human raters for each of these 10,000 \"responses\". For each rater, its score for a response was modeled as the gold-standard score for the response plus a random error. We model different groups of raters: with low (inter-rater correlation r=0.4), moderate (r=0.55), average (r=0.65) and high (r=0.8) agreement. The correlations for different categories were informed by correlations we have observed in empirical data from various studies. The errors for each rater were drawn from a normal distribution with a mean of 0. We chose the standard deviation values used to sample the errors in order to create 4 categories of 50 raters, each defined by a specific average inter-rater correlation. Since in most operational scenarios, human raters assign an integer score, all our simulated human scores were rounded to integers and truncated to lie in [1, 6], if necessary. Table 1 : A description of the 4 categories of simulated human raters used in this study. The table shows the label of each category, the number of raters in the category, the average correlation between pairs of raters within the category, and the mean and standard deviation of the scores assigned by raters in the category.", "cite_spans": [], "ref_spans": [ { "start": 906, "end": 913, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Simulated data", "sec_num": "3" }, { "text": "For each response, we also simulated 25 automated scores. Like human scores, automated scores were simulated as gold-standard scores plus random error. We chose the standard deviation values used to sample the random errors so as to obtain specific levels of performance against the gold-standard scores: the worst system had a Root Mean Squared Error (RMSE) of 0.74 score points while the best system had an error of 0.07 score points. Since the interpretation of RMSE depends on the score scale, we chose these values as the percentage of gold-standard score variance. Table 2 summarizes different automated systems simulated for this study. We created 5 categories of systems with 5 systems in each category. For the worst systems (\"poor\"), the mean squared error was equal to the variance of gold-standard scores (R 2 =0). In other words, in terms of scoring error, a system from the \"poor\" category performed no better than a constant. 2 For the best system (from the \"perfect\" category), the mean squared error was only 0.1% of gold-standard score variance with the system achieving an R 2 of 0.99. The systems within each category were very close in terms of performance as measured by mean squared error but the actual simulated scores for each system were different. 
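As a concrete illustration, the simulation just described could be sketched as follows (a minimal sketch, not the released simulation code; the error standard deviations below are placeholder assumptions rather than the values actually chosen to hit the reported agreement and RMSE targets):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000

# Gold-standard ('true') scores: normal with the reported mean and SD,
# truncated to the 1-6 score scale.
true_scores = np.clip(rng.normal(loc=3.844, scale=0.74, size=N), 1, 6)

def simulate_rater(true_scores, error_sd, rng):
    # One simulated human rater: true score plus random error,
    # rounded to integers and truncated to [1, 6].
    noisy = true_scores + rng.normal(0, error_sd, size=true_scores.shape)
    return np.clip(np.rint(noisy), 1, 6).astype(int)

def simulate_system(true_scores, error_sd, rng):
    # One simulated system: true score plus random error, left continuous.
    return true_scores + rng.normal(0, error_sd, size=true_scores.shape)

# 50 raters per agreement category; the SDs here are illustrative only.
rater_error_sds = {'low': 1.0, 'moderate': 0.8, 'average': 0.65, 'high': 0.45}
raters = {name: [simulate_rater(true_scores, sd, rng) for _ in range(50)]
          for name, sd in rater_error_sds.items()}

# One simulated system with a relatively small scoring error.
system_scores = simulate_system(true_scores, error_sd=0.35, rng=rng)
```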
These simulated systems will help evaluate whether performance metrics can both differentiate systems with different performance and correctly determine when two systems have similar performance. Table 2 : A description of the 5 categories of simulated systems used in this study. The table shows the label of each category, the number of systems in the category, the average R 2 of the systems within the category, and the r when evaluating the systems in the category against the gold-standard scores (\"GS\"). The last column shows the average correlation of the systems' scores with simulated rater scores from the \"Average\" category.", "cite_spans": [], "ref_spans": [ { "start": 571, "end": 578, "text": "Table 2", "ref_id": null }, { "start": 1472, "end": 1479, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Simulated data", "sec_num": "3" }, { "text": "To summarize, the final simulated dataset consisted of 10,000 \"responses\". Each response had 1 \"gold-standard\" score, 200 \"human\" scores and 25 \"system\" scores. 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simulated data", "sec_num": "3" }, { "text": "We first considered how the quality of human labels affects the estimates of the metrics that are typically used to evaluate automated scoring engines. For the analyses in this section, we used the scores from one of our simulated systems from the \"High\" system category (R 2 with gold-standard scores =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problems with traditional metrics 4.1 Rating quality and performance", "sec_num": "4" }, { "text": "2 $R^2 = 1 - \\frac{\\sum_i (y_i - \\hat{y}_i)^2}{\\sum_i (y_i - \\bar{y})^2}$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problems with traditional metrics 4.1 Rating quality and performance", "sec_num": "4" }, { "text": "where $y_i$ are the observed values (human scores), $\\hat{y}_i$ are the predicted values and $\\bar{y}$ is the mean of the observed scores. R 2 standardizes the MSE by the total variance of the observed values leading to a more interpretable metric that generally varies from 0 to 1, where 1 corresponds to perfect prediction and 0 indicates that the model is no more accurate than simply using the mean value as the prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problems with traditional metrics 4.1 Rating quality and performance", "sec_num": "4" }, { "text": "3 The data and the code are publicly available at https://github.com/EducationalTestingService/prmse-simulations. We encourage the readers to use this code to run further simulations with varying input parameters. 0.8). We then randomly sampled 50 pairs of simulated raters from each rater category and evaluated the human-machine agreement for each pair. We used both the score from the first rater in the pair as well as the average of the two rater scores in the pair as our reference score and computed four metrics: Pearson's r 4 , quadratically-weighted kappa (QWK) 5 , R 2 , and degradation (correlation between the scores of the two humans minus the correlation between scores of our chosen system and the reference human score). 
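For concreteness, these four metrics can be computed along the following lines (an illustrative sketch rather than the exact evaluation code; in particular, the continuous-score QWK below follows one common formulation and is an assumption on our part):

```python
import numpy as np
from scipy.stats import pearsonr

def r2_score(reference, system):
    # R^2 of system scores against the reference (human) scores.
    reference, system = np.asarray(reference, float), np.asarray(system, float)
    ss_res = np.sum((reference - system) ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def qwk_continuous(reference, system):
    # Quadratically-weighted kappa for continuous scores (one common formulation).
    reference, system = np.asarray(reference, float), np.asarray(system, float)
    covariance = np.cov(reference, system, bias=True)[0, 1]
    denom = reference.var() + system.var() + (reference.mean() - system.mean()) ** 2
    return 2.0 * covariance / denom

def evaluate(system, rater1, rater2, use_average=True):
    # Evaluate one set of system scores against one pair of raters.
    reference = (rater1 + rater2) / 2.0 if use_average else rater1
    r_hm = pearsonr(system, reference)[0]   # human-machine correlation
    r_hh = pearsonr(rater1, rater2)[0]      # human-human correlation
    return {'pearson_r': r_hm,
            'QWK': qwk_continuous(reference, system),
            'R2': r2_score(reference, system),
            'degradation': r_hh - r_hm}
```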
Figure 1 shows how these metrics for the same system vary depending on the human agreement in the evaluation dataset.", "cite_spans": [], "ref_spans": [ { "start": 743, "end": 751, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Problems with traditional metrics 4.1 Rating quality and performance", "sec_num": "4" }, { "text": "As the figure shows, the estimates of performance for the same set of scores vary drastically depending on the quality of human ratings whether we use the score from the first human rater or the average of the two scores. For example, estimates of correlation vary from mean r = 0.69 when computed against the average scores of two raters with low agreement to r = 0.86 when computed against the average score of two raters with high agreement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problems with traditional metrics 4.1 Rating quality and performance", "sec_num": "4" }, { "text": "The difference between r = 0.69 and r = 0.86 is considerable and, at face value, could influence both deployment decisions in an industry context as well as conclusions in a research context. Yet all it actually reflects is the amount of noise in human labels: both correlations were computed using the same set of automated scores. Looking at degradation does not resolve the issue: the degradation in our simulation varied from \u22120.05 to \u22120.30. It is obvious that the metrics improve when the humanhuman agreement goes from low to high, regardless of which metric is used, and do not provide a stable estimate of model performance. This pattern is consistent across different sets of automated scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problems with traditional metrics 4.1 Rating quality and performance", "sec_num": "4" }, { "text": "Given how much the estimates of system performance vary depending on the quality of human ratings, it is clear that the quality of human ratings will also affect the comparison between different systems if they are evaluated on different datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rating quality and ranking", "sec_num": "4.2" }, { "text": "To demonstrate this, we randomly sampled 25 pairs of simulated raters with different levels of human-human agreement, the same as the number of simulated systems in our data, and \"assigned\" a different pair to each system. Each pair of raters Figure 1 : The effect of human-human agreement on the evaluation results for the same set of automated scores against either the first human rater or the average of two human raters. Note that the metrics are on different scales.", "cite_spans": [], "ref_spans": [ { "start": 243, "end": 251, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Rating quality and ranking", "sec_num": "4.2" }, { "text": "is always sampled from the same rater category but different systems are evaluated on pairs from different rater categories. Thus, for example, 3 of 5 systems in the \"low\" system category were evaluated against rater pairs with \"high\" agreement, while the remaining two systems in that category were evaluated against rater pairs with \"average\" agreement. At the same time, for \"medium\" category systems, 3 out of 5 systems were evaluated on raters with \"low\" agreement (see also Table 1 in the Appendix). 
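The evaluation and ranking procedure described next can be sketched as follows (illustrative only; the helper name is ours and the rater pairs are assumed to have already been randomly assigned to systems):

```python
def rank_systems(systems, assigned_rater_pairs, metric_fn):
    # systems: list of score arrays; assigned_rater_pairs: one (rater1, rater2)
    # pair per system, drawn from possibly different rater categories.
    values = []
    for system_scores, (r1, r2) in zip(systems, assigned_rater_pairs):
        reference = (r1 + r2) / 2.0
        values.append(metric_fn(reference, system_scores))
    # Rank 1 = best metric value (higher is better for r, QWK and R^2).
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    ranks = [0] * len(values)
    for rank, index in enumerate(order, start=1):
        ranks[index] = rank
    return ranks

# Example usage with the R^2 helper defined earlier:
# ranks_r2 = rank_systems(systems, assigned_rater_pairs, r2_score)
```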
This simulation was designed to mimic, in a simplified fashion, a situation where different research studies might evaluate their systems on datasets with different quality of human ratings 6 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rating quality and ranking", "sec_num": "4.2" }, { "text": "We then evaluated each system against its assigned rater pair using the standard agreement metrics and ranked the systems based on each of the metrics. The results are presented in the first four subplots in Figure 2 . 7 For comparison, we also evaluated the systems against a single pair of raters from the \"average\" rater category, i.e., using the same rater pair for each system. The system ranking when systems are evaluated against this same rater pair is shown as red dots. The figure shows that when different systems are evaluated against the same pair of raters, their ranking is consistent with what we know to be the correct ranking in our simulated dataset. However, when different systems are evaluated against different pairs of raters, their ranking can vary depending on the quality of the ratings and the chosen metric. All metrics, except degradation, correctly ranked the worst performing systems (in the \"poor\" system category), 6 Note that the random assignment between rater categories and systems is a key aspect of this simulation since we are exploring a situation where the system performance is independent of the quality of human labels used to evaluate such systems. 7 The last subplot will be explained in \u00a75.2.", "cite_spans": [ { "start": 951, "end": 952, "text": "6", "ref_id": null }, { "start": 1198, "end": 1199, "text": "7", "ref_id": null } ], "ref_spans": [ { "start": 211, "end": 219, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Rating quality and ranking", "sec_num": "4.2" }, { "text": "but they could not reliably differentiate between the other categories. In our simulated dataset, we see substantial overlaps in R 2 between systems in the \"medium\", \"high\", and \"perfect\" system categories, with even larger overlaps for other metrics. Notably, when rater quality differs across the datasets used to evaluate a system, the degradation between human-human and system-human agreement, a common way to control for differences in said rater quality, does not always provide accurate system rankings. In our simulated dataset, based on degradation, some of the systems from the \"perfect\" system category ranked lower than some of the systems from the \"medium\" system category.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rating quality and ranking", "sec_num": "4.2" }, { "text": "Figure 1 showed that evaluating system scores against the average of two raters leads to higher estimates of agreement than when the system is evaluated against a single rater. This is not surprising: in our simulated dataset, the rater error is modeled as random and averaging across several simulated raters means that errors can cancel out when the number of raters is sufficiently large. In fact, we expect that evaluating the system against the average of multiple raters should provide performance estimates close to the known performance against the gold-standard scores. 
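One way to carry out this check is sketched below (assuming the simulated rater scores for one category are stacked into a NumPy array with one row per rater; the function name is ours):

```python
import numpy as np

def metric_vs_number_of_raters(system_scores, rater_matrix, metric_fn):
    # rater_matrix: shape (n_raters, n_responses), raters in a random order.
    # Returns the metric computed against the cumulative average of the
    # first k raters, for k = 1 .. n_raters.
    results = []
    running_sum = np.zeros(rater_matrix.shape[1])
    for k, rater_scores in enumerate(rater_matrix, start=1):
        running_sum += rater_scores
        reference = running_sum / k
        results.append(metric_fn(reference, system_scores))
    return results

# As more raters are averaged, their random errors cancel out and the metric
# should approach its value against the gold-standard scores.
```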
In this section, we simulated a situation where each response is scored by up to 50 raters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What if we had more than two raters?", "sec_num": "4.3" }, { "text": "For each category of raters, we randomly ordered the raters within this category and computed the cumulative average score of an increasing number of raters. We then evaluated the same system from the \"high\" system category used in \u00a74.1 against this cumulative average score. The results are presented in Figure 3 . The red lines indicate the values when evaluating the system's performance against the gold-standard scores. As expected, for all rater categories, the performance estimates for the system approach the known gold-standard performance as the number of raters increases.", "cite_spans": [], "ref_spans": [ { "start": 305, "end": 313, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "What if we had more than two raters?", "sec_num": "4.3" }, { "text": "The simulations in the previous sections demonstrate that the values of metrics usually used to evaluate automated scoring systems are directly dependent on the quality of human ratings used to evaluate the system. In fact, the effect of human label quality can be so large that two identical systems may appear drastically different while the performance of two very different systems may appear very similar. One possible solution is to collect additional ratings for each response from multiple raters as we showed in \u00a74.3. This solution is likely to be too expensive to be feasible: for example, in our simulated dataset, we would need to collect at least 10 additional ratings for each response in order to obtain stable estimates of system performance, more if the rater agreement is low.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PRMSE with reference to true scores", "sec_num": "5" }, { "text": "The solution we propose comes from the educational measurement community and draws on test theory methods to adjust the system performance estimates for measurement error.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PRMSE with reference to true scores", "sec_num": "5" }, { "text": "The main idea behind PRMSE is to evaluate the automated scores against the true scores rather than the observed human scores. Classical test theory assumes that the human label H consists of the true score T and a measurement error E and Var(H) = Var(T ) + Var(E). While it is impossible to compare system scores to the latent true scores for each individual response, it is possible to use the variability in human ratings to estimate the rater error and to compute an overall measure of agreement between automated scores and true scores after subtracting the rater error from the variance of the human labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The definition of PRMSE", "sec_num": "5.1" }, { "text": "Just like R 2 , PRMSE relies on the concepts of mean squared error (MSE) and proportional reduction in mean squared error (hence PRMSE), but in this case, these measures are computed between the automated score M and the true score T instead of the human label H, where MSE = E[(M \u2212 T)^2] and PRMSE = 1 \u2212 MSE/Var(T). Also similar to R 2 , PRMSE is expected to fall between 0 and 1. 
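In the simulation, where the true scores are actually known, this definition can be computed directly, as in the short sketch below (for intuition only; with real data the true scores are latent and PRMSE has to be estimated from the observed human scores, as described next and in the Appendix):

```python
import numpy as np

def prmse_known_true(system_scores, true_scores):
    # PRMSE when true scores are available (simulation only): same form as
    # R^2, but computed against the true scores instead of human scores.
    system_scores = np.asarray(system_scores, float)
    true_scores = np.asarray(true_scores, float)
    mse = np.mean((system_scores - true_scores) ** 2)
    return 1.0 - mse / np.var(true_scores)
```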
A value of 0 indicates that system scores explain none of the variance of the true scores, while a value of 1 implies that system scores explain all the variance of true scores. In general, the higher the PRMSE, the better the system scores are at predicting the true scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The definition of PRMSE", "sec_num": "5.1" }, { "text": "We provide a detailed derivation for PRMSE in the Appendix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The definition of PRMSE", "sec_num": "5.1" }, { "text": "A Python implementation of PRMSE is available in RSMTool in the rsmtool.utils.prmse module 8 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The definition of PRMSE", "sec_num": "5.1" }, { "text": "In this section, we show how PRMSE can help address the issues discussed in \u00a74. We first considered the case where the same system is evaluated against ratings of different quality. As shown in \u00a74.1, all traditional metrics of system performance are affected by human-human agreement and, therefore, estimates for these metrics vary depending on which pair of raters is used to evaluate the system. Therefore, in this section, we only compare PRMSE to R 2 . Figure 4 : R 2 with average human score and PRMSE for the same system when evaluated against human ratings with different levels of agreement. The red line shows the value of R 2 when evaluating system performance against gold-standard scores.", "cite_spans": [], "ref_spans": [ { "start": 458, "end": 466, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "PRMSE and human-human agreement", "sec_num": "5.2" }, { "text": "We used the same pairs of raters and the same systems as in \u00a74.1 to compute PRMSE and then compared its values to the values of R 2 for the same pair of raters. Both these metrics rely on comparing the mean prediction error to the variance of gold-standard scores. For R 2 , the gold-standard scores are the observed human-assigned scores that are available and can be used for computation. The gold-standard scores for PRMSE are the latent true scores that cannot be used directly: the metric is instead computed using the observed human scores and the estimates of rater variance as explained in the previous section. 9 Figure 4 shows the values of R 2 when evaluating the same system against different categories of human raters and the values of PRMSE for the same evaluations. While R 2 , as we have already seen, varies between 0.43 and 0.71 depending on the quality of human ratings, PRMSE remains relatively stable between 0.76 and 0.82. We also note that the values of PRMSE are centered around the R 2 between system scores and gold-standard scores (0.8 in this case), as expected.", "cite_spans": [], "ref_spans": [ { "start": 622, "end": 630, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "PRMSE and human-human agreement", "sec_num": "5.2" }, { "text": "Next, we considered whether PRMSE can help obtain stable system rankings when systems are evaluated against human ratings with different qualities. We used the same combinations of simulated rater pairs and systems as in \u00a74.2 and computed PRMSE for each system and rater pair. We then ranked the systems based on their PRMSE values. The results are presented in the last subplot in Figure 2. 
The figure shows that even though different systems were evaluated against human ratings of different quality, their final ranking based on PRMSE was consistent with the known correct ranking based on the gold-standard scores.", "cite_spans": [], "ref_spans": [ { "start": 382, "end": 388, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "PRMSE and human-human agreement", "sec_num": "5.2" }, { "text": "In summary, PRMSE is more robust to the quality of human ratings used for system evaluation and can reliably rank systems regardless of the quality of human labels used to evaluate them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PRMSE and human-human agreement", "sec_num": "5.2" }, { "text": "In \u00a75.2, we considered a situation where all responses are double-scored. In reality, often only a subset of responses has several scores available to compute inter-rater agreement. The formula for PRMSE presented in the Appendix also allows us to compute PRMSE in such a situation: in this case, the variance of human errors is computed using only the double-scored responses. The prediction error Figure 5 : The distribution of PRMSE values depending on the percentage (left) or number (right) of double-scored responses. Different colors indicate levels of inter-rater agreement, i.e, rater category. The dotted line shows the known R 2 against gold-standard scores. Some PRMSE values for N=100 and \"low\" agreement were around 1.6 and are omitted for clarity. PRMSE values > 1 indicate that sample size is too small to reliably estimate error variance. and variance are computed using all responses in the sample and either the average of two scores when available or the single available score. The numbers are adjusted for the percentage of the total number of ratings available for each response.", "cite_spans": [], "ref_spans": [ { "start": 399, "end": 407, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "PRMSE and double-scoring", "sec_num": "5.3" }, { "text": "To test how PRMSE values depend on the percentage of double scored responses, we randomly sampled 50 pairs of raters from each rater category and created, for each of these 200 pairs, 7 new datasets each with a different percentage of doublescored responses. We then computed PRMSE for a randomly selected system from the \"high\" category for each of these 1,400 datasets. To check whether it is the percentage of double-scored responses that matters or the number of double-scored responses, we also computed a second PRMSE value over only the double-scored responses available in each case. For example, when simulating the scenario where we only have 10% of the responses doublescored, we compute two PRMSE values: (a) over the full dataset (10,000 responses) with 10% (1,000) double-scored and 90% (9,000) single-scored responses and (b) over a smaller dataset that only includes the 1,000 double-scored responses. The results are shown in Figure 5 (see also Table 2 in the Appendix). These results show that PRMSE values are much more stable with a larger number of double-scored responses and what matters is the total number of double-scored responses, not their percentage in the sample. There is substantial variability in PRMSE values when the number of double-scored responses is low, especially when computed on human ratings with low inter-rater agreement. 
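To make the computation concrete, the sketch below implements the PRMSE estimator for the simple case where every response is double-scored, following the formulas given in the Appendix (this is our own illustration, not the RSMTool implementation referenced earlier, which also handles partially double-scored data):

```python
import numpy as np

def prmse_double_scored(system_scores, h1, h2):
    # PRMSE estimate when every response has two human scores h1 and h2.
    system_scores = np.asarray(system_scores, float)
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    n = len(system_scores)
    h_bar = (h1 + h2) / 2.0               # per-response average human score

    # Rater error variance, Eq. (3): V = 1/(2N) * sum (H_i2 - H_i1)^2
    v_error = np.sum((h2 - h1) ** 2) / (2.0 * n)

    # True score variance, Eq. (6), with c_i = 2 for every response,
    # so that c_dot = 2N and sum(c_i^2) = 4N.
    c_dot = 2.0 * n
    v_true = (np.sum(2.0 * (h_bar - h_bar.mean()) ** 2) - (n - 1) * v_error) / (
        c_dot - 4.0 * n / c_dot)

    # Mean squared error against the true scores, Eq. (7).
    mse_true = (np.sum(2.0 * (h_bar - system_scores) ** 2) - n * v_error) / c_dot

    # PRMSE, Eq. (8).
    return 1.0 - mse_true / v_true
```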
In our simulated experiments, consistent values of PRMSE (to the first decimal) were achieved with 1,000 responses if the quality of human ratings was moderate-to-high. More responses would be necessary to reliably estimate PRMSE with low inter-rater agreement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PRMSE and double-scoring", "sec_num": "5.3" }, { "text": "The performance of automated systems is often lower on data with lower human-human agreement. While this may mean that responses harder to score for humans are also harder to score for machines, our analyses show that this is not always true. Furthermore, since subsets of the same dataset are often used for both system training and evaluation, separating the effect of noisy labels on training from that on evaluation may be impossible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "In this paper, we showed that even for the same set of automated scores, estimates of system performance depend directly on the quality of the human labels used to compute the agreement metrics. We also showed that using standard performance metrics to compare two systems may be misleading if the systems are evaluated against human scores with different inter-rater agreements. Comparing system performance to human-human agreement using degradation does not resolve this issue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "We proposed that PRMSE, a new metric developed within the educational measurement community, is an effective way to obtain estimates of system performance that are adjusted for human-human agreement. PRMSE provides system evaluation against 'true' scores, thus making it possible to compare different systems on the same scale and offering a performance metric that is robust to the quality of human labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "We emphasize that PRMSE does not affect the evaluation results when the systems are evaluated on the same set of human labels, for example, in the context of a shared task or a benchmark dataset. However, it can help compare system performance across studies as well as within studies, for example, when the dataset includes multiple items with varying levels of human-human agreement in their respective human scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "The theory behind PRMSE makes certain assumptions about the nature of the rater error: it is assumed to be random with a mean of 0 and finite variance. Furthermore, the rater error is assumed to be independent of the item and its true score. There are several steps one can take to make sure the data meets these assumptions. For example, a standard way to randomize rater error is to set up the scoring process in such a way that multiple raters each score a different set of responses. Furthermore, one should additionally check whether human ratings have similar mean and variance. We note that other models discussed in the NLP literature (see \u00a72) made other assumptions, for example that noisier labeling is more likely for some items (\"hard\" cases) than others. 
The performance of PRMSE under such conditions remains a subject for future studies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Finally, while PRMSE can adjust estimates of system performance for human error, it does not fully address the issue of different datasets. Users of automated scoring still need to use their judgement, or additional extrinsic criteria, to decide whether two systems can be deemed comparable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "We conclude with guidelines for using PRMSE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Practical guidelines for PRMSE", "sec_num": "7" }, { "text": "\u2022 PRMSE estimates of system performance are robust to human-human agreement and can be used to compare systems across datasets. \u2022 PRMSE computation assumes that the rating process is set up to randomize rater error: e.g. even if most responses only have a single score, the scoring process should involve multiple raters each scoring a different set of responses to minimize individual rater bias. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Practical guidelines for PRMSE", "sec_num": "7" }, { "text": "The table below shows how systems from different categories were assigned to different pairs of raters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A The distribution between system and rater categories", "sec_num": null }, { "text": "The rows are system categories and the columns are rater-pair agreement categories (Low, Moderate, Average, High): Poor 1, 3, 0, 1; Low 0, 0, 2, 3; Medium 3, 0, 1, 1; High 2, 1, 1, 1; Perfect 2, 0, 2, 1. Table 3 : The distribution between different systems and different pairs of raters. The table shows how many systems from each system category were evaluated using pairs of raters from different rater categories.", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 129, "text": "Average High Poor 1 3 0 1 Low 0 0 2 3 Medium 3 0 1 1 High 2 1 1 1 Perfect 2 0 2 1 Table 3", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Human-human agreement System", "sec_num": null }, { "text": "B Deriving the PRMSE formula Let \u2022 $N$ denote the total number of responses in the evaluation set, \u2022 $c_i$ denote the number of human ratings for response $i$, \u2022 $H_{ij}$ denote human rating $j = 1, \\ldots, c_i$ for response $i$, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human-human agreement System", "sec_num": null }, { "text": "\u2022 $\\bar{H}_i = \\frac{1}{c_i} \\sum_{j=1}^{c_i} H_{ij}$ denote the average human rating for response $i$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human-human agreement System", "sec_num": null }, { "text": "\u2022 $\\bar{H} = \\frac{\\sum_i c_i \\bar{H}_i}{\\sum_i c_i}$ denote the average of all human ratings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human-human agreement System", "sec_num": null }, { "text": "\u2022 Let $M_i$ denote the predicted score for response $i$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human-human agreement System", "sec_num": null }, { "text": "The true human score model assumes a hypothetical infinite population/sequence of human raters that could score responses and assumes that the raters a response actually receives are an unbiased sample from this population. The ratings $H_{ij}$ are assumed to have the same error variance and the errors $e_{ij}$ are uncorrelated. 
The model defines the true human score by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human-human agreement System", "sec_num": null }, { "text": "$T_i = \\lim_{c_i \\to \\infty} \\frac{1}{c_i} \\sum_{j=1}^{c_i} H_{ij} = E[H_{ij}]$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human-human agreement System", "sec_num": null }, { "text": "(1) and the error $e_{ij}$ as $e_{ij} = H_{ij} - T_i$, or stated differently $H_{ij} = T_i + e_{ij}$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human-human agreement System", "sec_num": null }, { "text": "If we have only two ratings per response then we estimate the error variance by recognizing", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.1 Estimating the error variance", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "V = \\frac{1}{2} E[(H_{i2} - H_{i1})^2]", "eq_num": "(2)" } ], "section": "B.1 Estimating the error variance", "sec_num": null }, { "text": "which can easily be estimated with the unbiased estimator", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.1 Estimating the error variance", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\hat{V} = \\frac{1}{2N} \\sum_{i=1}^{N} (H_{i2} - H_{i1})^2", "eq_num": "(3)" } ], "section": "B.1 Estimating the error variance", "sec_num": null }, { "text": "When we have more than two raters, the variance of rater errors is computed as a pooled variance estimator. We first calculate the within-subject variance of human ratings $V_i$ for each response $i$ using denominator $c_i - 1$:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.1 Estimating the error variance", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "V_i = \\frac{\\sum_{j=1}^{c_i} (H_{i,j} - \\bar{H}_i)^2}{c_i - 1}", "eq_num": "(4)" } ], "section": "B.1 Estimating the error variance", "sec_num": null }, { "text": "We then take a weighted average of those within-response variances:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.1 Estimating the error variance", "sec_num": null }, { "text": "$\\hat{V} = \\frac{\\sum_{i=1}^{N} V_i (c_i - 1)}{\\sum_{i=1}^{N} (c_i - 1)}$ (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.1 Estimating the error variance", "sec_num": null }, { "text": "B.2 Estimating true score variance An unbiased estimator of the true score variance is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.1 Estimating the error variance", "sec_num": null }, { "text": "$\\hat{V}_T \\equiv \\widehat{\\mathrm{Var}}(T) = \\frac{\\sum_{i=1}^{N} c_i (\\bar{H}_i - \\bar{H})^2 - (N - 1)\\hat{V}}{c_\\bullet - \\frac{\\sum_{i=1}^{N} c_i^2}{c_\\bullet}}$ (6) where $c_\\bullet = \\sum_{i=1}^{N} c_i$ is the total number of observed human scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.1 Estimating the error variance", "sec_num": null }, { "text": "We estimate the mean squared error of the automated scores $M_i$ with the following unbiased estimator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.3 Estimating mean squared error", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION",
"ref_id": "EQREF", "raw_str": ") = 1 c \u2022 N i=1 c i (H i \u2212 M i ) 2 \u2212 NV", "eq_num": "(7)" } ], "section": "MSE(T |M", "sec_num": null }, { "text": "29 B.4 Estimating PRMSE With estimators for the MSE and the variance of the true score available, estimation of PRMSE is simple.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MSE(T |M", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "PRMSE = 1 \u2212 MSE(T |M ) V T", "eq_num": "(8)" } ], "section": "MSE(T |M", "sec_num": null }, { "text": "C Impact of double-scoring ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MSE(T |M", "sec_num": null }, { "text": "cf.Reidsma and Carletta (2008);Yannakoudakis and Cummins (2015) who also used simulated data to model system evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use raw correlation coefficients, not z-transforms, as is the norm in automated scoring literature.5 QWK for continuous scores was computed cf. Haberman (2019) as implemented in RSMTool(Madnani et al., 2017b)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://rsmtool.readthedocs.io/en/stable/api. html#prmse-api", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Although the true scores are known in our simulation, the values of PRMSE in this and the following sections are computed using observed human scores only following the formulas in the Appendix, without using the simulated true scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank Beata Beigman Klebanov, Oren Livne, Paul Deane and the three anonymous BEA reviewers for their comments and suggestions that greatly improved this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "From Annotator Agreement to Noise Models", "authors": [ { "first": "Eyal", "middle": [], "last": "Beata Beigman Klebanov", "suffix": "" }, { "first": "", "middle": [], "last": "Beigman", "suffix": "" } ], "year": 2009, "venue": "Computational Linguistics", "volume": "35", "issue": "4", "pages": "495--503", "other_ids": { "DOI": [ "10.1162/coli.2009.35.4.35402" ] }, "num": null, "urls": [], "raw_text": "Beata Beigman Klebanov and Eyal Beigman. 2009. From Annotator Agreement to Noise Models. Com- putational Linguistics, 35(4):495-503.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Does a Rater's Familiarity with a Candidate's Pronunciation Affect the Rating in Oral Proficiency Interviews? Language Testing", "authors": [ { "first": "Michael", "middle": [ "D" ], "last": "Carey", "suffix": "" }, { "first": "Robert", "middle": [ "H" ], "last": "Mannell", "suffix": "" }, { "first": "Peter", "middle": [ "K" ], "last": "Dunn", "suffix": "" } ], "year": 2011, "venue": "", "volume": "28", "issue": "", "pages": "201--219", "other_ids": { "DOI": [ "10.1177/0265532210393704" ] }, "num": null, "urls": [], "raw_text": "Michael D. Carey, Robert H. Mannell, and Peter K. Dunn. 2011. Does a Rater's Familiarity with a Can- didate's Pronunciation Affect the Rating in Oral Pro- ficiency Interviews? 
Language Testing, 28(2):201- 219.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The influence of training and experience on rater performance in scoring spoken language. Language Testing", "authors": [ { "first": "Larry", "middle": [], "last": "Davis", "suffix": "" } ], "year": 2016, "venue": "", "volume": "33", "issue": "", "pages": "117--135", "other_ids": { "DOI": [ "10.1177/0265532215582282" ] }, "num": null, "urls": [], "raw_text": "Larry Davis. 2016. The influence of training and ex- perience on rater performance in scoring spoken lan- guage. Language Testing, 33(1):117-135.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The joint student response analysis and recognizing textual entailment challenge: making sense of student responses in educational applications. Language Resources and Evaluation", "authors": [ { "first": "O", "middle": [], "last": "Myroslava", "suffix": "" }, { "first": "Rodney", "middle": [ "D" ], "last": "Dzikovska", "suffix": "" }, { "first": "Claudia", "middle": [], "last": "Nielsen", "suffix": "" }, { "first": "", "middle": [], "last": "Leacock", "suffix": "" } ], "year": 2016, "venue": "", "volume": "50", "issue": "", "pages": "67--93", "other_ids": {}, "num": null, "urls": [], "raw_text": "Myroslava O. Dzikovska, Rodney D. Nielsen, and Claudia Leacock. 2016. The joint student response analysis and recognizing textual entailment chal- lenge: making sense of student responses in educa- tional applications. Language Resources and Evalu- ation, 50(1):67-93.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Rater types in writing performance assessments: A classification approach to rater variability", "authors": [ { "first": "Thomas", "middle": [], "last": "Eckes", "suffix": "" } ], "year": 2008, "venue": "Language Testing", "volume": "25", "issue": "2", "pages": "155--185", "other_ids": { "DOI": [ "10.1177/0265532207086780" ] }, "num": null, "urls": [], "raw_text": "Thomas Eckes. 2008. Rater types in writing perfor- mance assessments: A classification approach to rater variability. Language Testing, 25(2):155-185.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Classification in the presence of label noise: A survey", "authors": [ { "first": "Beno\u00eet", "middle": [], "last": "Fr\u00e9nay", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Verleysen", "suffix": "" } ], "year": 2014, "venue": "IEEE Transactions on Neural Networks and Learning Systems", "volume": "25", "issue": "5", "pages": "845--869", "other_ids": { "DOI": [ "10.1109/TNNLS.2013.2292894" ] }, "num": null, "urls": [], "raw_text": "Beno\u00eet Fr\u00e9nay and Michel Verleysen. 2014. Classifica- tion in the presence of label noise: A survey. IEEE Transactions on Neural Networks and Learning Sys- tems, 25(5):845-869.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "When can subscores have value?", "authors": [ { "first": "J", "middle": [], "last": "Shelby", "suffix": "" }, { "first": "", "middle": [], "last": "Haberman", "suffix": "" } ], "year": 2008, "venue": "Journal of Educational and Behavioral Statistics", "volume": "33", "issue": "", "pages": "204--229", "other_ids": { "DOI": [ "10.3102/1076998607302636" ] }, "num": null, "urls": [], "raw_text": "Shelby J. Haberman. 2008. When can subscores have value? 
Journal of Educational and Behavioral Statistics, 33:204-229.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Measures of Agreement Versus Measures of Prediction Accuracy", "authors": [ { "first": "J", "middle": [], "last": "Shelby", "suffix": "" }, { "first": "", "middle": [], "last": "Haberman", "suffix": "" } ], "year": 2019, "venue": "ETS Research Report Series", "volume": "2019", "issue": "1", "pages": "1--23", "other_ids": { "DOI": [ "10.1002/ets2.12258" ] }, "num": null, "urls": [], "raw_text": "Shelby J. Haberman. 2019. Measures of Agreement Versus Measures of Prediction Accuracy. ETS Re- search Report Series, 2019(1):1-23.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Repeater analysis for combining information from different assessments", "authors": [ { "first": "J", "middle": [], "last": "Shelby", "suffix": "" }, { "first": "L", "middle": [], "last": "Haberman", "suffix": "" }, { "first": "", "middle": [], "last": "Yao", "suffix": "" } ], "year": 2015, "venue": "Journal of Educational Measurement", "volume": "52", "issue": "", "pages": "223--251", "other_ids": { "DOI": [ "10.1111/jedm.12075" ] }, "num": null, "urls": [], "raw_text": "Shelby J. Haberman and L. Yao. 2015. Repeater anal- ysis for combining information from different as- sessments. Journal of Educational Measurement, 52:223-251.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Prediction of true test scores from observed item scores and ancillary data", "authors": [ { "first": "J", "middle": [], "last": "Shelby", "suffix": "" }, { "first": "L", "middle": [], "last": "Haberman", "suffix": "" }, { "first": "S", "middle": [], "last": "Yao", "suffix": "" }, { "first": "", "middle": [], "last": "Sinharay", "suffix": "" } ], "year": 2015, "venue": "British Journal of Mathematical and Statistical Psychology", "volume": "68", "issue": "", "pages": "363--385", "other_ids": { "DOI": [ "10.1111/bmsp.12052" ] }, "num": null, "urls": [], "raw_text": "Shelby J. Haberman, L. Yao, and S. Sinharay. 2015. Prediction of true test scores from observed item scores and ancillary data. British Journal of Math- ematical and Statistical Psychology, 68:363-385.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Finding a Tradeoff between Accuracy and Rater's Workload in Grading Clustered Short Answers", "authors": [ { "first": "Andrea", "middle": [], "last": "Horbach", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Magdalena", "middle": [], "last": "Wolska", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", "volume": "", "issue": "", "pages": "588--595", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrea Horbach, Alexis Palmer, and Magdalena Wol- ska. 2014. Finding a Tradeoff between Accuracy and Rater's Workload in Grading Clustered Short Answers. Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 588-595.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Noise or additional information? Leveraging crowdsource annotation item agreement for natural language tasks", "authors": [ { "first": "Emily", "middle": [ "K" ], "last": "Jamison", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2015, "venue": "Proceedings of EMNLP 2015", "volume": "", "issue": "", "pages": "291--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily K. 
Jamison and Iryna Gurevych. 2015. Noise or additional information? Leveraging crowdsource an- notation item agreement for natural language tasks. In Proceedings of EMNLP 2015, pages 291-297, Lisbon, Portugal. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Evaluating classifiers by means of test data with noisy labels", "authors": [ { "first": "Chuck", "middle": [ "P" ], "last": "Lam", "suffix": "" }, { "first": "David", "middle": [ "G" ], "last": "Stork", "suffix": "" } ], "year": 2003, "venue": "IJCAI International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "513--518", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chuck P. Lam and David G. Stork. 2003. Evaluating classifiers by means of test data with noisy labels. IJCAI International Joint Conference on Artificial Intelligence, pages 513-518.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A Study on the Impact of Fatigue on Human Raters when Scoring Speaking Responses", "authors": [ { "first": "Guangming", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Pamela", "middle": [], "last": "Mollaun", "suffix": "" }, { "first": "Xiaoming", "middle": [], "last": "Xi", "suffix": "" } ], "year": 2014, "venue": "", "volume": "31", "issue": "", "pages": "479--499", "other_ids": { "DOI": [ "10.1177/0265532214530699" ] }, "num": null, "urls": [], "raw_text": "Guangming Ling, Pamela Mollaun, and Xiaoming Xi. 2014. A Study on the Impact of Fatigue on Hu- man Raters when Scoring Speaking Responses. Lan- guage Testing, 31:479-499.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Statistical Theories of Mental Test Scores", "authors": [ { "first": "Frederic", "middle": [ "M" ], "last": "Lord", "suffix": "" }, { "first": "Melvin", "middle": [ "R" ], "last": "Novick", "suffix": "" } ], "year": 1968, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frederic M. Lord and Melvin R. Novick. 1968. Statisti- cal Theories of Mental Test Scores. Addison Wesley, Reading, MA.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Using exemplar responses for training and evaluating automated speech scoring systems", "authors": [ { "first": "Anastassia", "middle": [], "last": "Loukina", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Zechner", "suffix": "" }, { "first": "James", "middle": [], "last": "Bruno", "suffix": "" }, { "first": "Beata", "middle": [ "Beigman" ], "last": "Klebanov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "1--12", "other_ids": { "DOI": [ "10.18653/v1/W18-0501" ] }, "num": null, "urls": [], "raw_text": "Anastassia Loukina, Klaus Zechner, James Bruno, and Beata Beigman Klebanov. 2018. Using exemplar responses for training and evaluating automated speech scoring systems. In Proceedings of the Thir- teenth Workshop on Innovative Use of NLP for Build- ing Educational Applications, pages 1-12, Strouds- burg, PA, USA. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A Large Scale Quantitative Exploration of Modeling Strategies for Content Scoring", "authors": [ { "first": "Nitin", "middle": [], "last": "Madnani", "suffix": "" }, { "first": "Anastassia", "middle": [], "last": "Loukina", "suffix": "" }, { "first": "Aoife", "middle": [], "last": "Cahill", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "457--467", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitin Madnani, Anastassia Loukina, and Aoife Cahill. 2017a. A Large Scale Quantitative Exploration of Modeling Strategies for Content Scoring. In Pro- ceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 457-467, Copenhagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Building Better Open-Source Tools to Support Fairness in Automated Scoring", "authors": [ { "first": "Nitin", "middle": [], "last": "Madnani", "suffix": "" }, { "first": "Anastassia", "middle": [], "last": "Loukina", "suffix": "" }, { "first": "Alina", "middle": [ "Von" ], "last": "Davier", "suffix": "" }, { "first": "Jill", "middle": [], "last": "Burstein", "suffix": "" }, { "first": "Aoife", "middle": [], "last": "Cahill", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the First Workshop on ethics in Natural Language Processing", "volume": "", "issue": "", "pages": "41--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitin Madnani, Anastassia Loukina, Alina Von Davier, Jill Burstein, and Aoife Cahill. 2017b. Building Bet- ter Open-Source Tools to Support Fairness in Auto- mated Scoring. In Proceedings of the First Work- shop on ethics in Natural Language Processing, Va- lencia, Spain, April 4th, 2017, pages 41-52, Valen- cia. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Learning to parse with IAA-weighted loss", "authors": [ { "first": "Barbara", "middle": [], "last": "H\u00e9ctor Mart\u00ednez Alonso", "suffix": "" }, { "first": "Arne", "middle": [], "last": "Plank", "suffix": "" }, { "first": "Anders", "middle": [], "last": "Skjaerholt", "suffix": "" }, { "first": "", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1357--1361", "other_ids": {}, "num": null, "urls": [], "raw_text": "H\u00e9ctor Mart\u00ednez Alonso, Barbara Plank, Arne Skjaerholt, and Anders S\u00f8gaard. 2015. Learning to parse with IAA-weighted loss. In Proceedings of the 2015 Conference of the North American Chap- ter of the Association for Computational Linguis- tics: Human Language Technologies, pages 1357- 1361, Denver, Colorado. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Learning part-of-speech taggers with inter-annotator agreement loss", "authors": [ { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "742--751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbara Plank, Dirk Hovy, and Anders S\u00f8gaard. 2014. Learning part-of-speech taggers with inter-annotator agreement loss. In Proceedings of the 14th Confer- ence of the European Chapter of the Association for Computational Linguistics, pages 742-751, Gothen- burg, Sweden. Association for Computational Lin- guistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Exploiting subjective annotations", "authors": [ { "first": "Dennis", "middle": [], "last": "Reidsma", "suffix": "" }, { "first": "Rieks", "middle": [], "last": "Op Den Akker", "suffix": "" } ], "year": 2008, "venue": "COLING 2008 workshop on Human Judgments in Computational Linguistics", "volume": "", "issue": "", "pages": "8--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dennis Reidsma and Rieks op den Akker. 2008. Ex- ploiting subjective annotations. In COLING 2008 workshop on Human Judgments in Computational Linguistics, pages 8-16, Manchester, UK.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Reliability Measurement without Limits", "authors": [ { "first": "Dennis", "middle": [], "last": "Reidsma", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Carletta", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "3", "pages": "319--326", "other_ids": { "DOI": [ "10.1162/coli.2008.34.3.319" ] }, "num": null, "urls": [], "raw_text": "Dennis Reidsma and Jean Carletta. 2008. Reliability Measurement without Limits. Computational Lin- guistics, 34(3):319-326.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "How to account for mispellings: Quantifying the benefit of character representations in neural content scoring models", "authors": [ { "first": "Brian", "middle": [], "last": "Riordan", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Flor", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Pugh", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 14th Workshop on Innovative Use of NLP for Building Educational Applications (BEA)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brian Riordan, Michael Flor, and Robert Pugh. 2019. How to account for mispellings: Quantifying the benefit of character representations in neural content scoring models. 
In Proceedings of the 14th Work- shop on Innovative Use of NLP for Building Educa- tional Applications (BEA).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Neutralizing linguistically problematic annotations in unsupervised dependency parsing evaluation", "authors": [ { "first": "Roy", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Omri", "middle": [], "last": "Abend", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "663--672", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roy Schwartz, Omri Abend, Roi Reichart, and Ari Rap- poport. 2011. Neutralizing linguistically problem- atic annotations in unsupervised dependency parsing evaluation. In Proceedings of the 49th Annual Meet- ing of the Association for Computational Linguistics: Human Language Technologies -Volume 1, HLT '11, pages 663-672, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "State-of-the-art automated essay scoring: Competition, results, and future directions from a United States demonstration", "authors": [ { "first": "D", "middle": [], "last": "Mark", "suffix": "" }, { "first": "", "middle": [], "last": "Shermis", "suffix": "" } ], "year": 2014, "venue": "Assessing Writing", "volume": "20", "issue": "", "pages": "53--76", "other_ids": { "DOI": [ "10.1016/j.asw.2013.04.001" ] }, "num": null, "urls": [], "raw_text": "Mark D. Shermis. 2014. State-of-the-art automated es- say scoring: Competition, results, and future direc- tions from a United States demonstration. Assessing Writing, 20:53-76.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Cheap and fast-but is it good?: Evaluating non-expert annotations for natural language tasks", "authors": [ { "first": "Rion", "middle": [], "last": "Snow", "suffix": "" }, { "first": "O'", "middle": [], "last": "Brendan", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Connor", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Jurafsky", "suffix": "" }, { "first": "", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '08", "volume": "", "issue": "", "pages": "254--263", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Y. Ng. 2008. Cheap and fast-but is it good?: Evaluating non-expert annotations for natu- ral language tasks. In Proceedings of the Conference on Empirical Methods in Natural Language Process- ing, EMNLP '08, pages 254-263, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A Framework for Evaluation and Use of Automated Scoring", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Williamson", "suffix": "" }, { "first": "Xiaoming", "middle": [], "last": "Xi", "suffix": "" }, { "first": "F", "middle": [ "Jay" ], "last": "Breyer", "suffix": "" } ], "year": 2012, "venue": "Educational Measurement: Issues and Practice", "volume": "31", "issue": "1", "pages": "2--13", "other_ids": { "DOI": [ "10.1111/j.1745-3992.2011.00223.x" ] }, "num": null, "urls": [], "raw_text": "David M. 
Williamson, Xiaoming Xi, and F. Jay Breyer. 2012. A Framework for Evaluation and Use of Au- tomated Scoring. Educational Measurement: Issues and Practice, 31(1):2-13.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Evaluating the performance of Automated Text Scoring systems", "authors": [ { "first": "Helen", "middle": [], "last": "Yannakoudakis", "suffix": "" }, { "first": "Ronan", "middle": [], "last": "Cummins", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 10th Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "213--223", "other_ids": { "DOI": [ "10.3115/v1/W15-0625" ] }, "num": null, "urls": [], "raw_text": "Helen Yannakoudakis and Ronan Cummins. 2015. Evaluating the performance of Automated Text Scor- ing systems. In Proceedings of the 10th Workshop on Innovative Use of NLP for Building Educational Applications, pages 213-223.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Penalized best linear prediction of true test scores", "authors": [ { "first": "Lili", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Shelby", "middle": [ "J" ], "last": "Haberman", "suffix": "" }, { "first": "Mo", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2019, "venue": "Psychometrika", "volume": "84", "issue": "1", "pages": "186--211", "other_ids": { "DOI": [ "10.1007/s11336-018-9636-7" ] }, "num": null, "urls": [], "raw_text": "Lili Yao, Shelby J. Haberman, and Mo Zhang. 2019a. Penalized best linear prediction of true test scores. Psychometrika, 84 (1):186-211.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Prediction of writing true scores in automated scoring of essays by best linear predictors and penalized best linear predictors", "authors": [ { "first": "Lili", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Shelby", "middle": [ "J" ], "last": "Haberman", "suffix": "" }, { "first": "Mo", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2019, "venue": "ETS", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1002/ets2.12248." ] }, "num": null, "urls": [], "raw_text": "Lili Yao, Shelby J. Haberman, and Mo Zhang. 2019b. Prediction of writing true scores in automated scor- ing of essays by best linear predictors and penalized best linear predictors. ETS Research Report RR-19- 13, ETS, Princeton, NJ.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Automatic scoring of non-native spontaneous speech in tests of spoken English", "authors": [ { "first": "Klaus", "middle": [], "last": "Zechner", "suffix": "" }, { "first": "Derrick", "middle": [], "last": "Higgins", "suffix": "" }, { "first": "Xiaoming", "middle": [], "last": "Xi", "suffix": "" }, { "first": "David", "middle": [ "M" ], "last": "Williamson", "suffix": "" } ], "year": 2009, "venue": "Speech Communication", "volume": "51", "issue": "10", "pages": "883--895", "other_ids": { "DOI": [ "10.1016/j.specom.2009.04.009" ] }, "num": null, "urls": [], "raw_text": "Klaus Zechner, Derrick Higgins, Xiaoming Xi, and David M. Williamson. 2009. Automatic scoring of non-native spontaneous speech in tests of spoken En- glish. Speech Communication, 51(10):883-895.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "; Haberman et al. (2015); Haberman and Yao (2015); Yao et al. 
(2019a,b); Zhang et al.", "uris": null, "type_str": "figure" }, "FIGREF1": { "num": null, "text": "The ranking of systems from different categories when evaluated against randomly selected pairs of raters with different human-human agreement levels. The X axis shows the known ranking of the simulated systems in terms of their performance measured against the gold-standard scores. The red dots show the ranking when the systems are evaluated against the same pair of raters.", "uris": null, "type_str": "figure" }, "FIGREF2": { "num": null, "text": "The effect of number of raters on several common metrics. Each plot shows a different metric computed for a randomly chosen system in our dataset against an increasing number of human raters. The red line indicates the metric value computed against the gold-standard scores & different colors indicate different rater categories.", "uris": null, "type_str": "figure" }, "TABREF0": { "text": "", "html": null, "content": "
shows the correlations between the simulated human rater scores within each category.
Category   # raters   HH-corr   mean   std
Low        50         0.40      3.83   1.14
Moderate   50         0.55      3.83   0.99
Average    50         0.65      3.83   0.91
High       50         0.80      3.83   0.83
", "type_str": "table", "num": null }, "TABREF2": { "text": "values above 1 indicate that the doublescored sample is too small. \u2022 PRMSE should be used in combination with other metrics of human-machine agreement. Torsten Zesch, Michael Heilman, and Aoife Cahill. 2015. Reducing Annotation Efforts in Supervised Short Answer Scoring. In Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 124-132, Denver, Colorado.", "html": null, "content": "
Mo Zhang, Lili Yao, Shelby J. Haberman, and Neil J. Dorans. 2019. Assessing scoring accuracy and assessment accuracy for spoken responses. In Automated Speaking Assessment, pages 32-58. Routledge.
\u2022 Both sets of human ratings used to estimate PRMSE should have similar means and variances and similar agreement with the system scores.
\u2022 Responses selected for double-scoring must be a random sample of all responses.
\u2022 We recommend a total of at least 1,000 double-scored responses to reliably estimate the human error; for human-human correlations > 0.65, a smaller sample (such as 500) might suffice (a minimal sketch of the PRMSE computation follows this list).
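The guidelines above assume that PRMSE can be computed from nothing more than the double-scored human ratings and the system scores. For the simple case in which every response in the evaluation set has exactly two human ratings, Equations (7) and (8) reduce to a few lines of array arithmetic. The sketch below is our own reading of those formulas; the variable names are ours, and in practice one would use the PRMSE implementation in RSMTool referenced in the footnotes rather than this re-derivation.

```python
import numpy as np

def prmse_two_raters(h1, h2, machine):
    """PRMSE for the special case where every response has exactly two human
    ratings (our simplification of Equations (7)-(8) in the appendix)."""
    h1, h2, machine = (np.asarray(a, dtype=float) for a in (h1, h2, machine))
    h_mean = (h1 + h2) / 2.0
    # Rater error variance: half the mean squared rater-rater difference.
    var_err = np.mean((h1 - h2) ** 2) / 2.0
    # True-score variance: variance of the averaged ratings minus the error
    # variance of a two-rating average (var_err / 2).
    var_true = np.var(h_mean) - var_err / 2.0
    # Equation (7) with c_i = 2 for every response: the machine's MSE against
    # the averaged ratings, corrected for the rater error in that average.
    mse_true = np.mean((h_mean - machine) ** 2) - var_err / 2.0
    # Equation (8).
    return 1.0 - mse_true / var_true
```

Unlike a correlation or kappa against a single rater, the returned value estimates agreement with the latent true score, so in expectation it should not change when the same system is evaluated against a noisier or a cleaner pair of raters.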
", "type_str": "table", "num": null }, "TABREF3": { "text": "shows the range of PRMSE values we observed for different number of double-scored responses and human-human agreement.", "html": null, "content": "
         Human-human agreement
N        Low    Moderate   Average   High
100      1.01   0.41       0.26      0.12
250      0.46   0.30       0.15      0.09
500      0.33   0.17       0.12      0.07
1,000    0.24   0.13       0.08      0.06
2,500    0.18   0.09       0.07      0.03
5,000    0.08   0.07       0.04      0.02
10,000   0.06   0.03       0.02      0.02
", "type_str": "table", "num": null }, "TABREF4": { "text": "The range of observed PRMSE values for different number double-scored responses and different levels of human-human agreement.", "html": null, "content": "", "type_str": "table", "num": null } } } }