|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:38:34.797398Z" |
|
}, |
|
"title": "Probabilistic Extension of Precision, Recall, and F1 Score for More Thorough Evaluation of Classification Models", |
|
"authors": [ |
|
{ |
|
"first": "Reda", |
|
"middle": [], |
|
"last": "Yacouby", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "redaya@amazon.com" |
|
}, |
|
{ |
|
"first": "Dustin", |
|
"middle": [], |
|
"last": "Axman", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In pursuit of the perfect supervised NLP classifier, razor thin margins and low-resource testsets can make modeling decisions difficult. Popular metrics such as Accuracy, Precision, and Recall are often insufficient as they fail to give a complete picture of the model's behavior. We present a probabilistic extension of Precision, Recall, and F1 score, which we refer to as confidence-Precision (cPrecision), confidence-Recall (cRecall), and confidence-F1 (cF1) respectively. The proposed metrics address some of the challenges faced when evaluating large-scale NLP systems, specifically when the model's confidence score assignments have an impact on the system's behavior. We describe four key benefits of our proposed metrics as compared to their threshold-based counterparts. Two of these benefits, which we refer to as robustness to missing values and sensitivity to model confidence score assignments are self-evident from the metrics' definitions; the remaining benefits, generalization, and functional consistency are demonstrated empirically.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In pursuit of the perfect supervised NLP classifier, razor thin margins and low-resource testsets can make modeling decisions difficult. Popular metrics such as Accuracy, Precision, and Recall are often insufficient as they fail to give a complete picture of the model's behavior. We present a probabilistic extension of Precision, Recall, and F1 score, which we refer to as confidence-Precision (cPrecision), confidence-Recall (cRecall), and confidence-F1 (cF1) respectively. The proposed metrics address some of the challenges faced when evaluating large-scale NLP systems, specifically when the model's confidence score assignments have an impact on the system's behavior. We describe four key benefits of our proposed metrics as compared to their threshold-based counterparts. Two of these benefits, which we refer to as robustness to missing values and sensitivity to model confidence score assignments are self-evident from the metrics' definitions; the remaining benefits, generalization, and functional consistency are demonstrated empirically.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Supervised machine learning classifiers are typically trained to minimize error. This error is evaluated using one or multiple metrics, the choice of which has been a continuous debate in research and industry for multiple decades (Dinga et al., 2019; Brier, 1950) . Many criteria need to be considered when choosing a metric, including but not limited to: interpretability, computational cost, differentiability, and popularity in a specific field. As an example, a typical workflow of model development is to use a loss function such as cross-entropy or hinge loss during training for weight optimization, then use an easily interpretable metric such as Accuracy, Precision, or Recall when testing the model against a holdout sample of examples. This is because the mentioned loss functions are differentiable convex functions, enabling optimization algorithms such as gradient descent to find minima with reasonable computational cost. In contrast, the test-set evaluation metrics are often required to be easy to relate to the real-world problem the classifier is designed to help solve, in order to give a concrete idea of performance or success to the stakeholders.", |
|
"cite_spans": [ |
|
{ |
|
"start": 231, |
|
"end": 251, |
|
"text": "(Dinga et al., 2019;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 252, |
|
"end": 264, |
|
"text": "Brier, 1950)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Essentially, all the criteria mentioned serve the same underlying purpose of driving modeling decisions. The heterogeneous nature of model evaluation illustrates how there could be no universal criteria for driving model decisions, or so-called \"best metric\", as each criterion could be advantageous under specific operating conditions (Hern\u00e1ndez-Orallo et al., 2012) , or even preferred by stakeholders for reasons that do not need to be scientifically driven (such as interpretability and business purposes).", |
|
"cite_spans": [ |
|
{ |
|
"start": 336, |
|
"end": 367, |
|
"text": "(Hern\u00e1ndez-Orallo et al., 2012)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the Natural Language Processing (NLP) industry, new challenges have risen in the past few years in terms of performance evaluation, due to the complexity and scalable design of modern NLP systems such as those powering Google Assistant, Amazon Alexa, or Apple's Siri (Sarikaya, 2017) . Such systems are built to support devices with a potentially limitless number of functionalities, as reflected by the Alexa Skill Developer Toolkit and Google Actions, allowing external developers to add additional functionality to the NLP system, supporting new phrases and therefore increasing the number of choices the system needs to disambiguate between.", |
|
"cite_spans": [ |
|
{ |
|
"start": 270, |
|
"end": 286, |
|
"text": "(Sarikaya, 2017)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This sharp rise in scale and complexity has made the most commonly used metrics (Accuracy, Precision, Recall, F1 score) insufficient in depicting a comprehensive picture of the impact introduced by changes in these systems. A key reason behind this gap is that classification models typically output an n-best list of model predictions, each associated with a confidence score (or probability score), and while simple systems (and most academic use-cases) only consider the highest-score prediction, more elaborate systems tend to leverage further information from the n-best to drive decisions. Metrics such as Accuracy, Precision and Recall simply compare the highest-score prediction with the test reference, ignoring the rest of the n-best output, while this ignored information often does impact the behavior of the NLP system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We provide 2 cases that exemplify the case of an NLP system being impacted by changes in the n-best output which are ignored by popular metrics: 1. Arbitration: Some specific criteria could be used to arbitrate between the n-best predictions rather than always choosing the highestscore prediction. For example, if the top prediction is un-actionable by the system (e.g. results in an error) and the second-best prediction meets some defined criteria, the system could fall-back to that prediction. In the case of a vocal assistant an example would be asking a TV to \"play frozen\", and the NLP model recognizes it as a request to play song called \"frozen\" as its top prediction, while the homonymous movie is the second-best prediction. The system could arbitrate and decide to use the second prediction specifically because the request was spoken to a TV, rather than a music player.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "It is common for largescale NLP systems to be multi-step, having domain-specialized models receive the output of an upstream NLP model as input, then attempting to correct potential mistakes. As an example, Named Entity Recognition (NER) in the Shopping domain is challenging for general purpose NLP models due to the large size of product catalogs and potentially ambiguous product names. A downstream Shoppingspecific NLP model can be applied on the upstream model's n-best for error correction, potentially re-ranking the n-best and adjusting confidence scores.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Many evaluation metrics capable of measuring changes in confidence score assignments already exist. In this document we will use the taxonomy introduced by Ferri et al. (2009) , classifying metrics into 3 categories:", |
|
"cite_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 175, |
|
"text": "Ferri et al. (2009)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "\u2022 Threshold-based metrics, using a qualitative understanding of error, such as Accuracy, Precision, and Recall.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "\u2022 Rank-based metrics, which evaluate how well the models ranks the examples. The Area Under the ROC Curve (AUC) is the most widely used in this category.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "\u2022 Probabilistic metrics, using a probabilistic understanding of error, as they consider the confidence scores assigned by the models in their measurements. Among these are Brier-score and Cross-entropy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Among these categories, probabilistic metrics have the potential to fill the evaluation gaps we described. In this paper we are proposing a probabilistic extension of threshold-based metrics. The goal is to introduce advantages of probabilistic metrics while retaining the relatability of thresholdbased metrics to the real-world operating cost function of the models, allowing for decision making that is both scientifically reliable and tied to the stakeholder's interests. We describe in Section 3 why other probabilistic metrics are not sufficient to fill the evaluation gaps we are addressing with the newly proposed metrics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "One of the primary benefits of probabilistic metrics is their ability to function more consistently in test-data sparse scenarios. It was demonstrated empirically by Wang et al. (2013) and Dinga et al. (2019) that probabilistic metrics are more reliable in discriminating between models, since they leverage the most information from the model's output. This does not necessarily make them better metrics, as we stated earlier how modeling decisions are closely tied to operating conditions, but allows them to be more data-efficient (require less data to reach statistically significant results). Recent developments in Transfer Learning (Pan and Yang, 2010; Conneau et al., 2020) demonstrated impressive ability to learn from small training sets (often referred to as Few-Shot Learning), showing a wide NLP community interest in improving data-efficiency during model training, but we have not found any publication related to data-efficient model testing. Usually the lack of training data would also imply a lack of test data, as they would be caused by the same underlying factor (expensive data collection and/or labelling, low-resource language), which highlights the value in developing ways to compare models with minimal test data requirements. As part of our investigation in this subject, we empirically show that our proposed metrics are more data efficient than their threshold-based counterparts, as they allow for modeling decisions with smaller test-sets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 166, |
|
"end": 184, |
|
"text": "Wang et al. (2013)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 189, |
|
"end": 208, |
|
"text": "Dinga et al. (2019)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 639, |
|
"end": 659, |
|
"text": "(Pan and Yang, 2010;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 660, |
|
"end": 681, |
|
"text": "Conneau et al., 2020)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "On a side note, some of the cases in which model confidence assignments are used in production require the scores to be probabilistically calibrated, as they are interpreted by the users as probabilities of events happening (e.g. disease prevention, weather forecasts). Probabilistic calibration refers to the reliability of the scores in reflecting the true probability of the predictions being correct (e.g. if a calibrated model predicted in n cases that event X will happen with probability p, then event X should happen in approximately p*n of those cases). The proposed metrics do not evaluate for probabilistic calibration. For such use-cases we suggest the combined usage of a probability calibration measure (e.g. the Expected Calibration Error (Guo et al., 2017) , the reliability component of Brierscore (Murphy, 1973) ) along with the proposed metrics, for a thorough evaluation of both performance and calibration.", |
|
"cite_spans": [ |
|
{ |
|
"start": 754, |
|
"end": 772, |
|
"text": "(Guo et al., 2017)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 815, |
|
"end": 829, |
|
"text": "(Murphy, 1973)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "In this document, we describe four benefits of the proposed metrics. In comparison with their threshold-based counterparts, our metrics:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "A. Have an equal or lower likelihood of being NaN (Robustness to NaN values).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "B. Are sensitive to changes in the model's confidence scores across the model's full n-best output (Sensitivity to model confidence score assignments).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "C. Have lower variance, making their point estimates more generalizable to unseen data, and allowing for better discriminancy between models (Generalization hypothesis) D. Provide the same ranking of performance of candidate models as their threshold-based counterpart's population value in the majority of cases (Functional Consistency hypothesis).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The first two are easily deduced from the metrics' definitions. The third and fourth benefits are demonstrated empirically in Section 6.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "2 Definitions cPrecision, cRecall, and cF1-Score have the same mathematical formulations as Precision, Recall, and F1-Score, respectively, with the only difference being the usage of continuous (as opposed to binary) definitions of Positives and Negatives, based on the confidence score (or probability assignment) a classification model yields for each label. Let's start by defining some terminology to establish a formal definition. Consider:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "1. A dataset S : (x 1 , y 1 ), ..., (x n , y n ) \u2208 R p \u00d7 {C 1 , ..., C m }, where \u2022 x i is a vector of p features corresponding to sample i \u2022 y i is the class corresponding to sample i \u2022 {C 1 , ..., C m } is the set of possible classes 2. A classification model M : R p \u2192 {C 1 , .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": ".., C m } trained to predict label assignment given an input vector x i . The model assigns a confidence score (or probability if the model is probabilistically calibrated) to each possible class C j for any given input vector x i , signifying the model's confidence that C j is the true class for the given input vector (which can also be expressed as C j = y i ). Let's call this confidence score M (x i , C j ). The class with the highest confidence score will be the model's predicted class y i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "We have, for any sample i \u2208 {1, ..., n}:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "m j=1 M (x i , C j ) = 1 (1) y i = arg max j (M (x i , C j ))", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "By applying the model M on the full dataset S, we obtain a confidence score M (x i , C j ) for each i \u2208 {1, ..., n} and j \u2208 {1, ..., m}. Suppose S j denotes the set of samples with true class C j . We can build a probabilistic confusion matrix pCM as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "pCM (j ref , j hyp ) = i\u2208S j ref M (x i , C j hyp ) (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Intuitively, each cell (j ref , j hyp ) of the confusion matrix corresponds to the total confidence score assigned by the model to hypothesis j hyp for samples for which the true class is j ref . It is very similar to the usual definition of a confusion matrix, apart from the fact that we leverage all confidence scores as quantitative values as rather than just the highest-scoring class as a qualitative value. From this probabilistic confusion matrix, cRecall and cPrecision are calculated in the same way that Recall and Precision are from the non-probabilistic (regular) confusion matrix.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "We can also formulate it without using a confusion matrix, by using indicator functions. The commonly used definition of true positive for class C j is any model prediction for which y i = y i = C j . We can formalize it as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "T P C j = I y i =C j * I C j =y i (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "where: I X = 1 if X is true 0 if X is false We propose a continuous generalization as the confidence true positive:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "cT P C j = M (x i , C j ) * I C j =y i (5)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "As shown in Equation 5, we're simply replacing the binary I y i =C j from Equation 4 by the continuous M (x i , C j ). We can similarly define the confidence False Positive as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "cF P C j = M (x i , C j ) * I C j =y i", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Now that we formalized cT P and cF P , we can define cP recision and cRecall: cP recision = cT P cT P + cF P (7)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "cRecall = cT P T P + F N", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Note the asymmetry between cPrecision and cRecall, as the denominator of cRecall is the same as the denominator of Recall (does not use the probabilistic extensions of F P and F N ). This is because T P + F N simply refers to the total number of samples labelled as the class being evaluated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error correction:", |
|
"sec_num": "2." |
|
}, |
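
{

"text": "A minimal sketch of the definitions above, assuming the model output is available as an n-by-m matrix of confidence scores whose rows sum to 1, together with integer true labels; the function and variable names below are illustrative and not part of the paper:\n\nimport numpy as np\n\ndef probabilistic_confusion_matrix(scores, y_true, m):\n    # scores: (n, m) array of confidence scores, each row sums to 1\n    # y_true: (n,) array of integer true class indices in {0, ..., m-1}\n    pcm = np.zeros((m, m))\n    for j_ref in range(m):\n        # total confidence mass assigned to each hypothesis class over the\n        # samples whose true class is j_ref (Equation 3)\n        pcm[j_ref] = scores[y_true == j_ref].sum(axis=0)\n    return pcm\n\ndef c_precision_recall_f1(scores, y_true, m):\n    pcm = probabilistic_confusion_matrix(scores, y_true, m)\n    ctp = np.diag(pcm)                          # confidence true positives per class\n    cfp = pcm.sum(axis=0) - ctp                 # confidence false positives per class\n    support = np.bincount(y_true, minlength=m)  # TP + FN: reference count per class\n    c_precision = ctp / (ctp + cfp)             # Equation 7\n    c_recall = ctp / support                    # Equation 8 (NaN only if a class has no reference samples)\n    c_f1 = 2 * c_precision * c_recall / (c_precision + c_recall)\n    return c_precision, c_recall, c_f1\n\nAs in the threshold-based case, cF1 is simply the harmonic mean of cPrecision and cRecall.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Error correction:",

"sec_num": "2."

},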
|
{ |
|
"text": "In the publication Employ Decision Values for Soft-Classifier Evaluation with Crispy References, Zhu et al. (2018) have come to a similar formulation of probabilistic confusion matrix in the pursuit of a different goal. Zhu considered the use-case of soft-classification, where \"the classifier outputs not only a crispy prediction about its class label, but decision values which indicate to what extent does it belong to the all the classes as well\", while we're considering hard classification, where the hypothesis probabilities output by the classifier indicate a confidence score that the hypothesis is correct, rather than a measure of class membership. From the resulting confusion matrix, Zhu also formulated and empirically experimented with a probabilistic version of Precision and Recall, but only for binary classification. In our paper we dive deeper into the properties and potential of these metrics in multiclass hard classification when the model hypothesis confidence scores are impactful to the use case, especially in large-scale NLP systems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 114, |
|
"text": "Zhu et al. (2018)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Many publications (Dinga et al., 2019; Hossin M, 2015) have shed light on pitfalls of commonly used evaluation metrics, and introduced alternatives and best practices to avoid those pitfalls. However, the criticized metrics have maintained their status as the standard in most industries and in academia. Ling et al. (2003) and Vanderlooy and H\u00fcllermeier (2008) have proposed methodologies to evaluate metrics against each other. We decided however to approach this problem from a different perspective. We will only compare a metric to its proposed extended counterpart (e.g. F1 vs cF1), and will not claim our proposed metrics to be objectively better, but simply demonstrate advantages they introduce, and in which situations those advantages are useful. In many use-cases it might still be preferable to use the regular Precision and Recall.", |
|
"cite_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 38, |
|
"text": "(Dinga et al., 2019;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 39, |
|
"end": 54, |
|
"text": "Hossin M, 2015)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 323, |
|
"text": "Ling et al. (2003)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 328, |
|
"end": 361, |
|
"text": "Vanderlooy and H\u00fcllermeier (2008)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "There are many existing metrics that leverage the model's probability assignments over classes. Brier score (Brier, 1950) is an example, and is widely accepted as a standard in probabilistic weather forecasting. It is a strictly proper scoring rule, meaning it is uniquely optimized by reporting probabilistically calibrated model predictions. Using our earlier defined methodology, Brier-score can be calculated as:", |
|
"cite_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 121, |
|
"text": "(Brier, 1950)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "BS = 1 n n i=1 m j=1 (M (x i , C j ) \u2212 I C j =y i ) 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "3" |
|
}, |
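
{

"text": "Under the same assumed score-matrix representation as the sketch in Section 2, the Brier-score above can be computed as follows (illustrative code, not from the paper):\n\nimport numpy as np\n\ndef brier_score(scores, y_true, m):\n    # one-hot encode the true labels to obtain the target distribution\n    onehot = np.eye(m)[y_true]\n    # mean over samples of the squared error between the predicted\n    # probability distribution and the true distribution\n    return np.mean(np.sum((scores - onehot) ** 2, axis=1))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Related Work",

"sec_num": "3"

},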
|
{ |
|
"text": "which can be interpreted as a sum of squared errors between the predicted probability distribution and the true distribution. Brier-score is effective at giving a big picture of model performance beyond the top model hypothesis, along with an evaluation of probabilistic calibration.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "However, Brier-score is not appropriate for largescale NLP systems such as those described in Section 1, for two main reasons. The first one is that the data distribution is often imbalanced, as basic commands such as \"stop\" or \"play\" are dominating as compared to more niche features such as \"open halo on my xbox\", while the importance of a class is not reflected in its distribution (e.g. calling for emergency). This shows that these use cases require class-based measures, rather than aggregated ones like Brier-score. Secondly, it is important to be able to evaluate each class independently to understand the class tradeoffs (False Accepts and False Rejections from/towards competing classes), as different stakeholders are responsible for different functionalities. Additionally, Brier-score can be difficult to interpret and explain to non-technical stakeholders, as compared to other common metrics such as Precision and Recall. The concerns presented in this paragraph also hold for other probabilistic metrics we found in literature, such as the Probabilistic Confusion Entropy (Wang et al., 2013) and metrics usually used as loss functions during training.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1090, |
|
"end": 1109, |
|
"text": "(Wang et al., 2013)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Another widely popular metric is the Area Under the Receiver Operating Characteristic Curve (AUC), which is originally for binary classifiers, but has been generalized to handle multi-class (Hand and Till, 2001 ). AUC is taxonomized as a rank-based metric rather than a probabilistic metric, as it is only sensitive to changes in confidence scores when those changes cause a difference in the ranking of test samples. Multiple extensions of AUC have been proposed to allow it to better leverage probability score assignments, such as the pAUC (Ferri et al., 2004) and soft-AUC (Calders and Jaroszewicz, 2007) but these extensions were only defined and analyzed in the binary classification case, and were also questioned by Vanderlooy and H\u00fcllermeier (2008) through empirical experiments indicating that the variants fail to be more effective than the original AUC.", |
|
"cite_spans": [ |
|
{ |
|
"start": 190, |
|
"end": 210, |
|
"text": "(Hand and Till, 2001", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 543, |
|
"end": 563, |
|
"text": "(Ferri et al., 2004)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 577, |
|
"end": 608, |
|
"text": "(Calders and Jaroszewicz, 2007)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 724, |
|
"end": 757, |
|
"text": "Vanderlooy and H\u00fcllermeier (2008)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Precision, Recall, and F1-Score have a denominator which in some cases can be equal to zero, making it impossible to calculate an estimate of the metric. In the case of Recall, this denominator (TP + FN) would be equal to zero for any label that is not present in the test-set (no sample in the test-set is assigned this label as its ground truth). As Recall and cRecall have the same denominator, this situation would also cause cRecall to be NaN for those labels. Precision would be NaN for any label that is not hypothesized by the model when making predictions in the test-set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Benefit A: Robustness to NaN values", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Notably, cPrecision does not fall victim to the same issue (except in the extreme case when the model never assigns any confidence to that label). This is due to the fact that cPrecision considers that the model is making soft predictions (confidence score assignments) for all labels for each test sample. This quality makes cPrecision more robust than Precision. 5 Benefit B: Model confidence score sensitivity", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Benefit A: Robustness to NaN values", |
|
"sec_num": "4" |
|
}, |
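
{

"text": "A small illustration of this robustness, reusing the representation assumed in the earlier sketch (the numbers below are made up for illustration): for a class that never wins the argmax, Precision is 0/0 and therefore NaN, while cPrecision remains defined as long as the model assigns the class any confidence at all.\n\nimport numpy as np\n\n# three samples, three classes; class 2 is never the top prediction\nscores = np.array([[0.6, 0.3, 0.1],\n                   [0.3, 0.6, 0.1],\n                   [0.2, 0.7, 0.1]])\ny_true = np.array([0, 1, 2])\n\ntop = scores.argmax(axis=1)               # [0, 1, 1]: class 2 never hypothesized\ntp2 = np.sum((top == 2) & (y_true == 2))  # 0\nfp2 = np.sum((top == 2) & (y_true != 2))  # 0\n# Precision for class 2 would be 0/0, i.e. NaN\n\nctp2 = scores[y_true == 2, 2].sum()       # 0.1\ncfp2 = scores[y_true != 2, 2].sum()       # 0.2\nc_precision2 = ctp2 / (ctp2 + cfp2)       # about 0.33, well defined",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Benefit A: Robustness to NaN values",

"sec_num": "4"

},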
|
{ |
|
"text": "Intuitively, it appears as a good characteristic for a model to be more confident when being correct and less confident when being wrong. Even though in some use cases this might not result in better outcomes for the stakeholders since only the highestscore prediction is actually used in the final application. Nonetheless, selecting a better model can yield benefits in the long term, or when noise/outof-sample data is introduced, as the score assignments reflect how well the model understands the underlying data. We did not deem it necessary to perform an empirical demonstration of this quality, as it is a central aspect in the definition of the proposed metrics. In order to concretely illustrate the practical use of Benefit B in an NLP production environment, consider a case where a company is building a semiautomated text labelling pipeline, where a model automatically labels text samples when the top prediction's confidence score is higher than a chosen threshold (e.g. 0.99), and sends the remaining samples to humans for manual labelling (e.g. Amazon SageMaker Ground Truth as an NLP pipeline). In cases where the label space is large (common in large-scale NLP) human annotators cannot be familiar with all annotations. To address this, the pipeline presents the human annotator with the model's n-best output predictions as suggestions, to improve efficiency and reduce their burden. Evaluating this NLP model with threshold-based metrics would not be appropriate, as the full n-best output is used to influence the human annotator's decisions. Metrics such as Brier-score and Log-loss would be a step forward, but would not allow for a balanced class-based evaluation, and would not give visibility over the tradeoffs between classes. The latter metrics are also difficult to interpret, as the stakeholders are likely to be interested in measures they can easily relate to, and a potential break-down of which labels are more difficult to identify. In such case, cPrecision, cRecall, and cF1 would be appropriate, as they would bring the thorough evaluation of probabilistic metrics combined with the interpretability and robustness to class imbalance of Precision and Recall.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Benefit A: Robustness to NaN values", |
|
"sec_num": "4" |
|
}, |
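
{

"text": "A sketch of the routing rule described above, with the 0.99 threshold taken from the example; the function name and the shape of the n-best output are assumptions for illustration, not part of any specific labelling product.\n\ndef route_sample(nbest, threshold=0.99, k=5):\n    # nbest: list of (label, confidence) pairs sorted by descending confidence\n    top_label, top_score = nbest[0]\n    if top_score >= threshold:\n        return ('auto_label', top_label)\n    # otherwise send the sample to a human annotator together with the top-k\n    # suggestions, so the full n-best (not just the argmax) influences the label\n    return ('human_review', nbest[:k])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Benefit A: Robustness to NaN values",

"sec_num": "4"

},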
|
{ |
|
"text": "In our experiments we directly compare each of the 3 threshold-based metrics with their probabilistic counterpart. Our goal is to support the claimed benefits of generalization and functional consistency. We present results on the SNLI dataset (Bowman et al., 2015), a dataset of paired sentences (560152 training samples and 10000 test samples) rated by annotators as either \"Neutral\", \"Entailment\", or \"Contradiction\" depending on how the sentences relate to each other. Please note that as SNLI does not suffer from data imbalance and only has 3 classes, this experiment is not intended to illustrate all of the advantages of the proposed metrics in large-scale commercial NLP pipelines. Instead, we simply use SNLI to experimentally support the generalization and functional consistency benefits.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We trained Sep-CNN (Denk, 2018) models (structure shown in Figure 7 in Appendix B) with different sets of randomly chosen hyperparameters and training samples. Sep-CNN was used here for its training and evaluation speed that eased speed of experimentation. More replication information can be found in Appendix A.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 67, |
|
"text": "Figure 7", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Testing Hypothesis C: Generalization", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Generalization is directly tied to the variance of the metrics. Metrics with lower variance will have tighter confidence bounds, which implies that their point estimates are closer to the true population values. This means that a point estimate calculated from a small dataset is more likely to be generalizable to unseen data. Lower variance also implies that less data is required to reach statistically significant results in discriminating models. For this hypothesis we simply need to demonstrate that our proposed metrics have lower variance than their threshold-based counterpart. We set-up our experi-ments also to show that our proposed metrics are better able to discriminate between models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Testing Hypothesis C: Generalization", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "We introduce differences in the models by varying the percentage of the training dataset selected during subsampling for model training, and injecting noise by changing a certain percentage of the labels on that data to a random alternative label. Model 1 used 100% of the training data, Model 2 used 66.7% of the training data, 10% of which is altered to introduce noise, and finally Model 3 used 33.3% of the training data, 20% of which is altered to introduce noise. The goal of changing these two parameters is to create enough differentiation in performance between the models in order to have preliminary expectations of which models will perform best. Model 1 is expected to perform better than Model 2, which is expected to perform better than Model 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Testing Hypothesis C: Generalization", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "We then ran predictions on test-set downsamplings of 1.0, 0.5, 0.2, 0.1, 0.05, 0.02, and 0.01 ratios for each of these models and used the bootstrap method (Efron, 1979) with 1000 resamplings, to calculate the mean and 95% confidence intervals for the F1 and cF1 scores for each class, for each model, on each test data down-sampling. The plots show two key elements: variances get smaller as the test-set size increases, and the variance of the probabilistic metrics is always lower than the variance of their threshold-based counterpart. We also used a F-test of equality of variance, Bartlett's test, and Levene's test to reject the null hypothesis that the variance of the thresholdbased metric and its probabilistic counterpart are equal, and obtained statistically significant results (pvalues < 0.05) in all cases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 169, |
|
"text": "(Efron, 1979)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Testing Hypothesis C: Generalization", |
|
"sec_num": "6.1" |
|
}, |
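
{

"text": "The bootstrap procedure used here can be sketched as follows, assuming a metric function such as the hypothetical c_precision_recall_f1 above reduced to a single scalar (e.g. the cF1 of one class); the resampling count and the 95% interval match the description in the text, while the function itself is illustrative.\n\nimport numpy as np\n\ndef bootstrap_ci(metric_fn, scores, y_true, n_resamples=1000, alpha=0.05, seed=0):\n    # metric_fn maps (scores, y_true) to a scalar metric value\n    rng = np.random.default_rng(seed)\n    n = len(y_true)\n    values = []\n    for _ in range(n_resamples):\n        idx = rng.integers(0, n, size=n)   # resample the test set with replacement\n        values.append(metric_fn(scores[idx], y_true[idx]))\n    values = np.asarray(values)\n    lo = np.quantile(values, alpha / 2)\n    hi = np.quantile(values, 1 - alpha / 2)\n    return float(np.mean(values)), (float(lo), float(hi))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Testing Hypothesis C: Generalization",

"sec_num": "6.1"

},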
|
{ |
|
"text": "In Figure 2 we compare F1 and cF1's abilities to discriminate between models, at different test-set sizes. The shaded region for each line represents the 95% confidence interval. The x-axis represents down-sampling ratios of the test-set used for each metric evaluation. We see the confidence intervals being further away from each other for cF1 as compared to F1, across all test-set sampling sizes, allowing for a statistically significant identification of which models have a better understanding of the underlying data. Figures 3 and 4 in Appendix B show the same but for Precision against cPrecision and Recall against cRecall respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 11, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 525, |
|
"end": 540, |
|
"text": "Figures 3 and 4", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Testing Hypothesis C: Generalization", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "We hypothesize that for most pragmatic and realworld modeling decisions, the ranking of performance of candidate models by each metric (when compared to their threshold-based counterpart) is the same when tested on a test-set similar enough to the population distribution of test data. Figure 2 illustrates it, by showing consistency in model rankings for each metric at the 1.0 subsampling (full test-set size). In order to more empirically demonstrate this observation, we ran an experiment where we randomly generated 100 models by sampling from a selection of different possible hyperparameters. We then compared all models against each other, resulting in 4950 pairwise comparisons, using the 6 metrics considered (Precision, cPrecision, Recall, cRecall, F1, cF1) . From these results, we extracted all the cases in which both the probabilistic and the thresholded metric showed a statistically significant difference between the two models being compared (t-test with p=0.01). Among the latter cases, we counted the percentage of agreement (cases where both metrics agree on which model is better). The results for each class are demonstrated in ", |
|
"cite_spans": [ |
|
{ |
|
"start": 720, |
|
"end": 731, |
|
"text": "(Precision,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 732, |
|
"end": 743, |
|
"text": "cPrecision,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 744, |
|
"end": 751, |
|
"text": "Recall,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 752, |
|
"end": 760, |
|
"text": "cRecall,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 761, |
|
"end": 764, |
|
"text": "F1,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 765, |
|
"end": 769, |
|
"text": "cF1)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 286, |
|
"end": 295, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Testing Hypothesis D: Functional Consistency", |
|
"sec_num": "6.2" |
|
}, |
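
{

"text": "The agreement computation can be sketched as follows, assuming per-model samples of each metric (e.g. bootstrap values) are available as arrays; this mirrors the described procedure (significance test on each model pair, then counting how often the threshold-based metric and its probabilistic counterpart prefer the same model) but is not the authors' exact code.\n\nfrom itertools import combinations\nfrom scipy.stats import ttest_ind\n\ndef agreement_rate(metric_samples, c_metric_samples, alpha=0.01):\n    # metric_samples, c_metric_samples: dicts mapping model id to an array of\n    # values of the threshold-based metric and of its c-counterpart\n    agree, total = 0, 0\n    for a, b in combinations(metric_samples.keys(), 2):\n        p1 = ttest_ind(metric_samples[a], metric_samples[b]).pvalue\n        p2 = ttest_ind(c_metric_samples[a], c_metric_samples[b]).pvalue\n        if p1 < alpha and p2 < alpha:      # both differences are significant\n            total += 1\n            pref1 = metric_samples[a].mean() > metric_samples[b].mean()\n            pref2 = c_metric_samples[a].mean() > c_metric_samples[b].mean()\n            agree += int(pref1 == pref2)\n    return agree / total if total else float('nan')",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Testing Hypothesis D: Functional Consistency",

"sec_num": "6.2"

},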
|
{ |
|
"text": "As mentioned in the introduction, the proposed metrics do not evaluate for probabilistic calibration. There are many cases in which model probability scores are used in production with the expectation of reflecting reliable probabilities. In such cases, probabilistic calibration would have to be evaluated separately using a strictly proper scoring rule (Gneiting and Raftery, 2007) . Another aspect to consider is interpretability. The proposed extensions lack the degree of direct interpretability afforded by their threshold-based counterparts. We believe that they still have a high degree of interpretability when compared to other metrics such as Brier Score and Cross-Entropy. We believe that these downsides do not necessarily pose a problem as long as they are known to the users of the metrics, so they can take appropriate measures in cases when it is required.", |
|
"cite_spans": [ |
|
{ |
|
"start": 355, |
|
"end": 383, |
|
"text": "(Gneiting and Raftery, 2007)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Potential Shortcomings to Consider", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In this paper we only focused on classification, and not named entity recognition (NER), while NLP often requires both. Many NER specific metrics like SER (Makhoul et al., 1999) consider the possibility of having slot insertions or deletions, making them more appropriate for evaluating NER. In the future we hope to extend metrics like SER to gain these benefits.", |
|
"cite_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 177, |
|
"text": "(Makhoul et al., 1999)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "We also hope to investigate the feasibility of altering these proposed metrics to be Strictly Proper Scoring Rules (Gneiting and Raftery, 2007) allowing for a dual assessment of probabilistic calibration and performance. Strict Proper Scoring will aid us as we plan to study the potential use of these metrics as a differentiable model loss for training.", |
|
"cite_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 143, |
|
"text": "(Gneiting and Raftery, 2007)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Finally, we hope to soon address the question of how to deal with output quantization where discrete confidence bins (HIGH, MED, LOW) rather than the continuous values are used by downstream tasks or customers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "We introduced probabilistic extensions of widely used threshold-based metrics, and four benefits they provide as compared to their original counterparts. These benefits motivate the use of our proposed metrics in real-world problems where data is scarce and/or where the model confidence score assignments over its predictions are leveraged in production. We hope these metrics will allow for more reliable modeling decision-making in such cases. We hope this research will pave the way for further investigation into the challenge of model evaluation with under-representative test-sets. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "9" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The authors would like to thank Sreekar Bhaviripudi, Jack FitzGerald, Spyros Matsoukas, and Cedric Warny for reviewing this work and providing valuable feedback. The authors would also like to thank the anonymous reviewers for their insightful comments and suggestions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "All sentences were lowercased, periods were stripped from the ends of sentences. When found in the middle of sentences the periods are spaceseparated so that they are separate tokens. Sentence pairs were separated with the \"[SEP]\" token. Tokens were given indices up to the 20000th token, after which tokens are assigned to a reserved index indicating OOV. A max sequence length of 42 was chosen for speed, based on the distribution of lengths in the SNLI dataset. Samples longer than 42 were truncated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.1 Preprocessing", |
|
"sec_num": null |
|
}, |
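
{

"text": "A sketch of the preprocessing described above; the exact tokenizer, vocabulary construction, and OOV index are not specified in the paper, so the helper below (including the assumed vocab dictionary and oov_index value) is illustrative only.\n\ndef preprocess_pair(premise, hypothesis, vocab, max_len=42, oov_index=20000):\n    def clean(sentence):\n        s = sentence.lower().rstrip('.')   # lowercase, strip sentence-final periods\n        return s.replace('.', ' . ')       # space-separate mid-sentence periods\n    tokens = (clean(premise) + ' [SEP] ' + clean(hypothesis)).split()\n    ids = [vocab.get(tok, oov_index) for tok in tokens]\n    return ids[:max_len]                   # truncate to the maximum sequence length",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "A.1 Preprocessing",

"sec_num": null

},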
|
{ |
|
"text": "All models were trained with:\u2022 150 epochs\u2022 Early stopping on validation loss with 8 epochs of patience\u2022 Randomized validation split with 9:1 train to validation ratio\u2022 Batch size 128\u2022 Learning rate 1e-3\u2022 Adam optimizer 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 Embedding Dim. 64, 128, 256 Dropout rate 0.0, 0.1, 0.2, 0.3, 0.4, 0.5 When generating and training the 100 models for the testing of Hypothesis D, we randomly drew hyperparameters from the following distributions, shown in Table 2 , with no two models sharing the same hyperparameters (checked for redundancy).", |
|
"cite_spans": [ |
|
{ |
|
"start": 219, |
|
"end": 221, |
|
"text": "2,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 222, |
|
"end": 224, |
|
"text": "3,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 225, |
|
"end": 227, |
|
"text": "4,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 228, |
|
"end": 230, |
|
"text": "5,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 231, |
|
"end": 233, |
|
"text": "6,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 234, |
|
"end": 236, |
|
"text": "7,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 237, |
|
"end": 239, |
|
"text": "8,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 240, |
|
"end": 242, |
|
"text": "9,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 243, |
|
"end": 246, |
|
"text": "10,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 247, |
|
"end": 250, |
|
"text": "11,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 251, |
|
"end": 254, |
|
"text": "12,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 255, |
|
"end": 258, |
|
"text": "13,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 259, |
|
"end": 262, |
|
"text": "14,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 263, |
|
"end": 266, |
|
"text": "15,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 267, |
|
"end": 288, |
|
"text": "16 Embedding Dim. 64,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 289, |
|
"end": 293, |
|
"text": "128,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 294, |
|
"end": 297, |
|
"text": "256", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 493, |
|
"end": 500, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A.2 Model Training", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "A large annotated corpus for learning natural language inference", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Samuel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gabor", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Angeli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Potts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "632--642", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D15-1075" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Verification of forecasts expressed in terms of probability", |
|
"authors": [ |
|
{ |
|
"first": "Glenn", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Brier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1950, |
|
"venue": "Monthly Weather Review", |
|
"volume": "78", |
|
"issue": "1", |
|
"pages": "1--3", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1175/1520-0493(1950)078<0001:VOFEIT>2.0.CO;2" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Glenn W. Brier. 1950. Verification of forecasts ex- pressed in terms of probability. Monthly Weather Review, 78(1):1-3.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Efficient auc optimization for classification", |
|
"authors": [ |
|
{ |
|
"first": "Toon", |
|
"middle": [], |
|
"last": "Calders", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Szymon", |
|
"middle": [], |
|
"last": "Jaroszewicz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Knowledge Discovery in Databases: PKDD 2007", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "42--53", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/978-3-540-74976-9_8" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Toon Calders and Szymon Jaroszewicz. 2007. Efficient auc optimization for classification. In Knowledge Discovery in Databases: PKDD 2007, pages 42-53, Berlin, Heidelberg. Springer Berlin Heidelberg.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Unsupervised cross-lingual representation learning at scale", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kartikay", |
|
"middle": [], |
|
"last": "Khandelwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vishrav", |
|
"middle": [], |
|
"last": "Chaudhary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Wenzek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Guzm\u00e1n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8440--8451", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.747" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Text classification with separable convolutional neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Timo", |
|
"middle": [], |
|
"last": "Denk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.13140/RG.2.2.22080.07683" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timo Denk. 2018. Text classification with separable convolutional neural networks.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Beyond accuracy: Measures for assessing machine learning models, pitfalls and guidelines", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Dinga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"J H" |
|
], |
|
"last": "Brenda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dick", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Penninx", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lianne", |
|
"middle": [], |
|
"last": "Veltman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andre", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Schmaal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Marquand", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1101/743138" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Dinga, Brenda W.J.H. Penninx, Dick J. Velt- man, Lianne Schmaal, and Andre F. Marquand. 2019. Beyond accuracy: Measures for assessing machine learning models, pitfalls and guidelines. bioRxiv.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Bootstrap methods: Another look at the jackknife", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Efron", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1979, |
|
"venue": "Ann. Statist", |
|
"volume": "7", |
|
"issue": "1", |
|
"pages": "1--26", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1214/aos/1176344552" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. Efron. 1979. Bootstrap methods: Another look at the jackknife. Ann. Statist., 7(1):1-26.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "An experimental comparison of performance measures for classification", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Ferri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Hern\u00e1ndez-Orallo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Modroiu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Pattern Recogn. Lett", |
|
"volume": "30", |
|
"issue": "1", |
|
"pages": "27--38", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.patrec.2008.08.010" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Ferri, J. Hern\u00e1ndez-Orallo, and R. Modroiu. 2009. An experimental comparison of performance mea- sures for classification. Pattern Recogn. Lett., 30(1):27-38.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Modifying roc curves to incorporate predicted probabilities", |
|
"authors": [ |
|
{ |
|
"first": "C\u00e8sar", |
|
"middle": [], |
|
"last": "Ferri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Flach", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Second Workshop on ROC Analysis in ML", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C\u00e8sar Ferri, Peter Flach, Jos\u00e9 Hern\u00e1ndez-orallo, and Athmane Senad. 2004. Modifying roc curves to incorporate predicted probabilities. In In Second Workshop on ROC Analysis in ML.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Strictly proper scoring rules, prediction, and estimation", |
|
"authors": [ |
|
{ |
|
"first": "Tilmann", |
|
"middle": [], |
|
"last": "Gneiting", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adrian", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Raftery", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Journal of the American Statistical Association", |
|
"volume": "102", |
|
"issue": "477", |
|
"pages": "359--378", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1198/016214506000001437" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tilmann Gneiting and Adrian E Raftery. 2007. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359-378.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "On calibration of modern neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Chuan", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoff", |
|
"middle": [], |
|
"last": "Pleiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kilian", |
|
"middle": [ |
|
"Q" |
|
], |
|
"last": "Weinberger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 34th International Conference on Machine Learning", |
|
"volume": "70", |
|
"issue": "", |
|
"pages": "1321--1330", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.5555/3305381.3305518" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neu- ral networks. In Proceedings of the 34th Interna- tional Conference on Machine Learning -Volume 70, ICML'17, page 1321-1330. JMLR.org.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "A simple generalisation of the area under the roc curve for multiple class classification problems", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Hand", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Till", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Machine Learning", |
|
"volume": "45", |
|
"issue": "", |
|
"pages": "171--186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1023/A:1010920819831" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David J. Hand and Robert J. Till. 2001. A simple gen- eralisation of the area under the roc curve for multi- ple class classification problems. Machine Learning, 45(2):171-186.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "A unified view of performance metrics: Translating threshold choice into expected classification loss", |
|
"authors": [ |
|
{ |
|
"first": "Jos\u00e9", |
|
"middle": [], |
|
"last": "Hern\u00e1ndez-Orallo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Flach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C\u00e8sar", |
|
"middle": [], |
|
"last": "Ferri", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "J. Mach. Learn. Res", |
|
"volume": "13", |
|
"issue": "1", |
|
"pages": "2813--2869", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jos\u00e9 Hern\u00e1ndez-Orallo, Peter Flach, and C\u00e8sar Ferri. 2012. A unified view of performance metrics: Trans- lating threshold choice into expected classification loss. J. Mach. Learn. Res., 13(1):2813-2869.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A review on evaluation metrics for data classification evaluations", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Hossin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Sulaiman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "International Journal of Data Mining & Knowledge Management Process", |
|
"volume": "5", |
|
"issue": "2", |
|
"pages": "1--11", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.5121/ijdkp.2015.5201" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sulaiman M.N Hossin M. 2015. A review on evalua- tion metrics for data classification evaluations. In- ternational Journal of Data Mining & Knowledge Management Process, 5(2):1-11.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Auc: A statistically consistent and more discriminating measure than accuracy", |
|
"authors": [ |
|
{ |
|
"first": "Charles", |
|
"middle": [ |
|
"X" |
|
], |
|
"last": "Ling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jin", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Harry", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 18th International Joint Conference on Artificial Intelligence, IJCAI'03", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "519--524", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Charles X. Ling, Jin Huang, and Harry Zhang. 2003. Auc: A statistically consistent and more discriminat- ing measure than accuracy. In Proceedings of the 18th International Joint Conference on Artificial In- telligence, IJCAI'03, page 519-524, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Performance measures for information extraction", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Makhoul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francis", |
|
"middle": [], |
|
"last": "Kubala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Weischedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of DARPA Broadcast News Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "249--252", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Makhoul, Francis Kubala, Richard Schwartz, and Ralph Weischedel. 1999. Performance measures for information extraction. In In Proceedings of DARPA Broadcast News Workshop, pages 249-252.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "A new vector partition of the probability score", |
|
"authors": [ |
|
{ |
|
"first": "Allan", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Murphy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1973, |
|
"venue": "Journal of Applied Meteorology", |
|
"volume": "12", |
|
"issue": "4", |
|
"pages": "595--600", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1175/1520-0450(1973)012<0595:ANVPOT>2.0.CO;2" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Allan H. Murphy. 1973. A new vector partition of the probability score. Journal of Applied Meteorology, 12(4):595-600.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "A survey on transfer learning", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Pan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Q", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "IEEE Transactions on Knowledge and Data Engineering", |
|
"volume": "22", |
|
"issue": "10", |
|
"pages": "1345--1359", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/TKDE.2009.191" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. J. Pan and Q. Yang. 2010. A survey on transfer learn- ing. IEEE Transactions on Knowledge and Data En- gineering, 22(10):1345-1359.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "The technology behind personal digital assistants: An overview of the system architecture and key components. IEEE Signal Processing Magazine", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Sarikaya", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "34", |
|
"issue": "", |
|
"pages": "67--81", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/MSP.2016.2617341" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Sarikaya. 2017. The technology behind personal digital assistants: An overview of the system archi- tecture and key components. IEEE Signal Process- ing Magazine, 34(1):67-81.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "A critical analysis of variants of the auc", |
|
"authors": [ |
|
{ |
|
"first": "Stijn", |
|
"middle": [], |
|
"last": "Vanderlooy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eyke", |
|
"middle": [], |
|
"last": "H\u00fcllermeier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Machine Learning", |
|
"volume": "72", |
|
"issue": "3", |
|
"pages": "247--262", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/s10994-008-5070-x" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stijn Vanderlooy and Eyke H\u00fcllermeier. 2008. A criti- cal analysis of variants of the auc. Machine Learn- ing, 72(3):247-262.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Probabilistic confusion entropy for evaluating classifiers", |
|
"authors": [ |
|
{ |
|
"first": "Xiao-Ning", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jin-Mao", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Han", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gang", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hai-Wei", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Entropy", |
|
"volume": "15", |
|
"issue": "12", |
|
"pages": "4969--4992", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3390/e15114969" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiao-Ning Wang, Jin-Mao Wei, Han Jin, Gang Yu, and Hai-Wei Zhang. 2013. Probabilistic confu- sion entropy for evaluating classifiers. Entropy, 15(12):4969-4992.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Employ decision values for softclassifier evaluation with crispy references", |
|
"authors": [ |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Ban", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Takeshi", |
|
"middle": [], |
|
"last": "Takahashi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daisuke", |
|
"middle": [], |
|
"last": "Inoue", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Neural Information Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "392--402", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/978-3-030-04212-7_34" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lei Zhu, Tao Ban, Takeshi Takahashi, and Daisuke Inoue. 2018. Employ decision values for soft- classifier evaluation with crispy references. In Neu- ral Information Processing, pages 392-402, Cham. Springer International Publishing.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "B Additional Figures Figure 3: Empirical comparison between Precision and cPrecision scores, across different levels of train-set sampling and noise, and different levels of test-set sampling. The y-axis represents the Precision and confidence-Precision values", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B Additional Figures Figure 3: Empirical comparison between Precision and cPrecision scores, across different levels of train-set sam- pling and noise, and different levels of test-set sampling. The y-axis represents the Precision and confidence- Precision values.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "below presents a comparison of the resulting variances of cF1 and F1. Figures 5 and 6 (Appendix B) show the same comparison but for cPrecision against Precision, and cRecall against Recall respectively.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "Empirical comparison of the variances of F1 and cF1, across different test-set sizes.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"text": "Empirical comparison between F1 and cF1 scores, across different levels of train-set sampling and noise, and different levels of test-set sampling. The y-axis represents the F1 and confidence-F1 values.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"text": "Empirical comparison between Recall and cRecall scores, across different levels of train-set sampling and noise, and different levels of test-set sampling. The y-axis represents the Recall and confidence-Recall values.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"text": "Empirical comparison of the variances of Precision and cPrecision, across different test-set sizesFigure 6: Empirical comparison of the variances of Recall and cRecall, across different test-set sizes", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF5": { |
|
"text": "Sep-CNN model architecture used for experimentation on the NLP dataset", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"text": "These results indicate that comparable metrics (i.e. Precision and cPrecision, Recall and cRecall, F1 and cF1) agree the majority of the time.", |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"2\">Metric Type Class</td><td>% Sig.</td><td>% Agree</td></tr><tr><td>(c)F1</td><td>entailment</td><td colspan=\"2\">93.29 89.19</td></tr><tr><td>(c)F1</td><td colspan=\"3\">contradiction 93.61 93.75</td></tr><tr><td>(c)F1</td><td>neutral</td><td colspan=\"2\">90.32 84.94</td></tr><tr><td colspan=\"2\">(c)Precision entailment</td><td colspan=\"2\">92.97 94.52</td></tr><tr><td colspan=\"4\">(c)Precision contradiction 94.20 92.62</td></tr><tr><td colspan=\"2\">(c)Precision neutral</td><td colspan=\"2\">89.78 80.11</td></tr><tr><td>(c)Recall</td><td>entailment</td><td colspan=\"2\">95.84 75.81</td></tr><tr><td>(c)Recall</td><td colspan=\"3\">contradiction 95.75 84.99</td></tr><tr><td>(c)Recall</td><td>neutral</td><td colspan=\"2\">91.71 81.13</td></tr></table>", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"text": "Percent of statistically significant model comparisons that agree between each pair of comparable metrics.", |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |