{ "paper_id": "U03-1011", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:11:36.846046Z" }, "title": "Performance Metrics for Word Sense Disambiguation", "authors": [ { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Melbourne", "location": { "postCode": "3010", "settlement": "VIC", "country": "Australia" } }, "email": "tacohn@cs.mu.oz.au" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents the area under the Receiver Operating Characteristics (ROC) curve as an alternative metric for evaluating word sense disambiguation performance. The current metrics-accuracy, precision and recallwhile suitable for two-way classification, are shown to be inadequate when disambiguating between three or more senses. Specifically, these measures do not facilitate comparison with baseline performance nor are they sensitive to non-uniform misclassification costs. Both of these issues can be addressed using ROC analysis.", "pdf_parse": { "paper_id": "U03-1011", "_pdf_hash": "", "abstract": [ { "text": "This paper presents the area under the Receiver Operating Characteristics (ROC) curve as an alternative metric for evaluating word sense disambiguation performance. The current metrics-accuracy, precision and recallwhile suitable for two-way classification, are shown to be inadequate when disambiguating between three or more senses. Specifically, these measures do not facilitate comparison with baseline performance nor are they sensitive to non-uniform misclassification costs. Both of these issues can be addressed using ROC analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Word sense disambiguation (WSD) is one of the large open problems in the field of natural language processing, and in recent years has attracted considerable research interest (Ide and Veronis, 1998) . The increasing availability of large corpora along with electronic sense inventories (such as WordNet; Fellbaum (1998)) has permitted the application of a raft of machine learning techniques to the task and provided an empirical means of performance evaluation. Until recently, most performance evaluation was conducted on disparate data sets, with only the line and interest corpora being used in a significant number of studies (Leacock et al., 1993; Bruce and Wiebe, 1994) . SENSEVAL, a global evaluation performed in 1998 (Kilgarriff, 1998) and again in 2001 (Edmonds and Cotton, 2001) , provided a common set of disambiguation tasks and performance evaluation criteria, allowing an objective comparison between competing methods. These workshops included the tasks of disambiguating all words in a given text (the all-words task), and disambiguating each occurrence of a given word when it appears with a short context of a few surrounding sentences (the lexical sample task). Performance in the two tasks was measured in terms of precision and recall. Precision was defined as the proportion of classified instances that were correctly classified, and recall as the proportion of instances classified correctly -these allow for the possibility of an algorithm choosing not to classify a given instance. This evaluation criterion is insensitive to both the type of misclassification (is the predicted sense more closely related to the correct sense than other possible senses?) 
and the confidence with which the classifier has made the prediction (is the correct sense allocated a high probability despite not being given the highest value by the classifier?).", "cite_spans": [ { "start": 176, "end": 199, "text": "(Ide and Veronis, 1998)", "ref_id": "BIBREF8" }, { "start": 632, "end": 654, "text": "(Leacock et al., 1993;", "ref_id": "BIBREF12" }, { "start": 655, "end": 677, "text": "Bruce and Wiebe, 1994)", "ref_id": "BIBREF2" }, { "start": 728, "end": 746, "text": "(Kilgarriff, 1998)", "ref_id": "BIBREF10" }, { "start": 765, "end": 791, "text": "(Edmonds and Cotton, 2001)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "These problems led Resnik and Yarowsky (1999) to suggest an evaluation metric that provides partial credit for incorrectly classified instances. They penalise probability mass assigned to incorrect senses, weighted by what they term the communicative/semantic distance between the predicted sense and the correct sense. Using such measures, systems that confuse homographs would be penalised most heavily, while those that confuse fine-grained senses would only attract a minor penalty. The score assigned to a particular algorithm is highly reliant on the distances between senses; altering the relative penalties may well promote a previously non-optimal classifier to be the best performing classifier.", "cite_spans": [ { "start": 19, "end": 45, "text": "Resnik and Yarowsky (1999)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In order to highlight the problems in the existing evaluation methods, it is worth clarifying the qualities such a method should possess. Ideally, the evaluation metric should provide the following features:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) allow comparison of the performance of two or more classifiers on the same problem, ranking them in order of quality of prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) penalise incorrectly classified instances based on the distance, or confusability, between the predicted and correct senses, when disambiguating between three or more senses. These penalties are henceforth referred to as (non-uniform) misclassification costs. (3) allow comparison to baseline performance -that of the classifier which always predicts the a priori majority sense. (4) provide a readily interpretable measure of performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper analyses the metrics that have been used in assessing WSD performance in light of the above criteria. An alternative metric, Receiver Operating Characteristics (ROC), is proposed and shown to have favourable properties with respect to the criteria. Section 2 describes the shortcomings of the current metrics. Section 3 shows how ROC analysis can be applied to WSD evaluation. Section 4 provides a discussion in the context of empirical studies, and I conclude in section 5 with thoughts for future study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Many comparisons of WSD performance use predictive accuracy as the sole means of comparison. 
Accuracy is defined as the proportion of instances that were disambiguated correctly, and is often compared to a baseline -the performance of the classifier that predicts the majority sense for every instance. Baseline performance varies greatly between words: from lower than 10% to greater than 90%. Without some form of normalisation, comparison of the results of different classifiers on different problems is impossible. The kappa statistic (Carletta, 1996) may be used to normalise accuracy, adjusting the result for the expected agreement with the perfect classifier by chance, thus satisfying criterion (3).", "cite_spans": [ { "start": 539, "end": 555, "text": "(Carletta, 1996)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "2" }, { "text": "Implicit in the use of accuracy is the assumption that misclassification costs are equal (or equivalently, the set of senses are all equally similar to one another). Dictionary definitions and indeed, linguistic intuitions, tell us that some sense pairs are more closely related than others. A number of dictionaries present sense hierarchies for words based on their similarities. The guidelines used by lexicographers to determine what constitutes a homograph or sense vary considerably between dictionaries. Even individual lexicographers differ in their systematic preferences as to whether they conflate similar senses into one ('lumpers') or present them as a disparate set ('splitters') (Kilgarriff, 1997; Landau, 2001 ). Depending on the dictionary's purpose, factors such as frequency of occurrence, semantic and syntactic similarity, pronunciation and etymology of a given word are considered (with differing priority) when identifying word's senses. Accordingly, sense definitions are rarely compatible between different dictionaries (or thesauri), presenting issues for WSD tasks using only a single source as the sense inventory. For a binary disambiguation task, misclassification costs should be uniform -we would not expect the cost of misclassifying an instance of sense a as sense b to be any different to the cost of misclassifying an instance of sense b as sense a . 1 However, most words have many more than two senses; Zipf (1945) found the most commonly used words tend to have a much greater degree of polysemy than infrequently used words. While accuracy provides a good measure for comparison (satisfying criterion 1) and is simple to comprehend (4), it does not account for non-uniform classification costs (2), meaning that the ranking given will often not reflect the real costs of errors.", "cite_spans": [ { "start": 694, "end": 712, "text": "(Kilgarriff, 1997;", "ref_id": "BIBREF9" }, { "start": 713, "end": 725, "text": "Landau, 2001", "ref_id": "BIBREF11" }, { "start": 1441, "end": 1452, "text": "Zipf (1945)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "2" }, { "text": "These problems with accuracy led to the adoption of precision and recall instead of (or in addition to) accuracy for performance measurement. The combination of precision and recall have been used as the primary means of performance evaluation in the SENSEVAL exercises.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Precision and recall", "sec_num": "2.1" }, { "text": "Precision and recall are commonly used metrics in information retrieval (IR) (Baeza-Yates and Ribeiro-Neto, 1999) . 
The retrieval task often involves finding a small number of relevant documents from a large data repository. Algorithms are ranked based on their precision/recall tradeoff; an algorithm can be said to be better than another if it has higher precision (recall) for the same or higher recall (precision). This provides only a loose ranking capacity (criterion 1).", "cite_spans": [ { "start": 77, "end": 113, "text": "(Baeza-Yates and Ribeiro-Neto, 1999)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Precision and recall", "sec_num": "2.1" }, { "text": "Precision by itself is not a highly relevant measure in WSD as it focuses solely on the positive classifications, treating the negative instances as junk. Unlike in IR, when disambiguating two senses of an ambiguous word the set of positives is just as important as the set of negatives, since each corresponds to a distinct sense. The classification question could just as easily be phrased in the negative -this should not affect the performance measure. While high recall on its own would constitute a passable WSD method (in that the set of positive instances is largely correctly classified), high precision alone does not say much about the performance of the method. Simply selecting a single correct positive instance will yield the best possible precision; however, this method will perform woefully. 2 Similarly, classifying all instances as positive will achieve a recall of 1.0 and a precision of Pr(P) -the proportion of positive instances. As with predictive accuracy, the precision would need to be interpreted with respect to the baseline performance to allow comparisons between different tasks (hence having issues with criterion 3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Precision and recall", "sec_num": "2.1" }, { "text": "When extended to classification of three or more senses, these measures falter. In the case of SENSEVAL, precision is redefined as the proportion of correctly predicted senses within the set of instances for which the algorithm hazarded a prediction, and recall as the proportion of correctly predicted senses over all instances. This implicitly allows classifiers to opt not to classify every instance. However, non-exhaustive classifiers are of limited use, given that they must be combined with other classifiers in order to fully disambiguate a given text. Many tasks in which WSD forms a sub-task, such as machine translation (MT), require the word to be fully disambiguated -an unknown value is unacceptable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Precision and recall", "sec_num": "2.1" }, { "text": "Plotting the precision-recall curves (Manning and Schutze, 2000) allows for better performance ranking by optimising precision for a given level of recall. This goes some way towards addressing criterion (1), however the problem remains of what recall limit is acceptable -there is no theoretical justification for choosing a specific value, and modifying the value may well alter the rankings of the classifiers. The F-measure (a weighted harmonic mean of precision and recall) may be used for simpler ranking, providing a single number for comparison (4). 
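To make these definitions concrete, the following is a minimal Python sketch (not the SENSEVAL scoring software) of precision, recall and a weighted F-measure for a classifier that is allowed to abstain; the abstention marker None and the weighting parameter alpha are illustrative assumptions:

```python
def precision_recall_f(gold, predicted, alpha=0.5):
    """SENSEVAL-style scores for a classifier that may decline to answer.

    gold      -- correct sense labels
    predicted -- predicted labels, with None marking an unclassified instance
    alpha     -- weight given to precision in the harmonic mean (0 < alpha < 1)
    """
    attempted = [(g, p) for g, p in zip(gold, predicted) if p is not None]
    correct = sum(1 for g, p in attempted if g == p)
    # Precision is undefined when nothing is classified (division by zero);
    # it is reported as 0.0 here purely for convenience.
    precision = correct / len(attempted) if attempted else 0.0
    recall = correct / len(gold)
    if precision > 0.0 and recall > 0.0:
        f = 1.0 / (alpha / precision + (1.0 - alpha) / recall)
    else:
        f = 0.0
    return precision, recall, f
```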
However, the weighting assigned to precision and recall in the calculation of the mean needs to be chosen and, again, theory does not suggest what values to use.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Precision and recall", "sec_num": "2.1" }, { "text": "Criterion (2) is not satisfied by this evaluation metric. The precision and recall values for disambiguation tasks involving three or more senses are based on the number of correct responses, ignoring the types of misclassification. Hence this method suffers from the same problems as predictive accuracy in this regard. Combining precision and recall measured for a number of binary disambiguation tasks for a single word (either between every pairing of senses or between each sense and all other senses) may go some way to satisfying (2) while remaining sensitive to the misclassification costs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Precision and recall", "sec_num": "2.1" }, { "text": "Due to the insensitivity of accuracy, precision and recall to non-uniform misclassification costs, Resnik and Yarowsky (1999) proposed a metric incorporating the costs by weighting misclassification penalties by the distances between the predicted and correct senses. In such a manner, misclassifications between fine-grained senses (e.g., polysemy) will be penalised less harshly than those between coarser sense distinctions (e.g., homonymy). They describe a sense hierarchy for the word bank derived from a single or multiple dictionaries, from which they derive a matrix of semantic distances between the senses.", "cite_spans": [ { "start": 102, "end": 128, "text": "Resnik and Yarowsky (1999)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic/communicative distance", "sec_num": "2.2" }, { "text": "The definition of a sense is a contentious issue within the field. The required granularity of sense distinctions varies with the task in which WSD is used. IR and speech synthesis require only coarse sense distinctions, whereas MT and full text understanding require much finer distinctions -often finer than those offered by monolingual dictionaries. This means that the set of senses, and the misclassification costs between senses as approximated by the semantic distance, will be task dependent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic/communicative distance", "sec_num": "2.2" }, { "text": "In most sense-tagged corpora, sense definitions have been taken from dictionary meanings or thesaurus categories. Granularity aside, these definitions have been criticised for the level of disagreement between lexicographers themselves (Kilgarriff, 1997). These disagreements result in markedly different descriptions of senses in different dictionaries, with no one dictionary offering a definitive set of sense descriptions or a more formal representation than the others. There is no reliable method of combining dictionary senses to reflect the level of granularity required by the task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic/communicative distance", "sec_num": "2.2" }, { "text": "Resnik and Yarowsky went on to analyse the translation of different senses of a sample of ambiguous English words into 12 target languages. From this they estimated the probability of the senses being lexicalised differently in the translation into the target language. 
They found that between 52% (fine-grained polysemy) and 95% (homonymy) of senses were lexicalised differently on average in the target languages. They used these statistics to generate semantic distances between senses, reflecting the likelihood that each sense will have a different translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic/communicative distance", "sec_num": "2.2" }, { "text": "In such a scoring model the ranking of classifiers is highly sensitive to the sense hierarchy definition and its use in creating the distance matrix. If either of these were to change -and given the widespread disagreement between lexicographers with regard to sense definitions, this is highly possible -the set of classifiers would need to be reranked. Even when using the translation-based measure of semantic distance, the use of a different set of target languages would be likely to affect the scoring. This has the potential to cause previously non-optimal classifiers to be re-ranked as optimal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic/communicative distance", "sec_num": "2.2" }, { "text": "The semantic/communicative distance measure improves on the accuracy measure in that it accounts for non-uniform misclassification costs (2), while still providing a ranking measure (1). Translation-based semantic distance measures sidestep a number of the issues involved with the use of dictionary sense inventories but are not without problems. The method still requires normalisation with the baseline performance (3), although the kappa statistic could also be used here. What is lost is simplicity (4) -the score assigned is not readily interpretable, as it is based on the distance matrix, an artificial construct based on unfounded assumptions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic/communicative distance", "sec_num": "2.2" }, { "text": "Receiver Operating Characteristic (ROC) graphs are an evaluation technique, born in the field of signal detection, which has become de rigueur in machine learning in recent years (Provost and Fawcett, 1997; Provost and Fawcett, 2001). A ROC graph plots the tradeoff between true positive rate and false positive rate in a binary classifier as a threshold value is modified. The true positive rate (TPR, or recall) is defined as the proportion of positive instances predicted as positive. The false positive rate (FPR, or fallout) is defined as the proportion of negative instances predicted as positive. The rationale behind graphing the relationship between these two factors for a given classifier is that various uses of the classifier may demand different optimisation criteria -such as maximising the TPR given a maximum acceptable FPR, or finding the optimal classifier given the costs of errors and the class distribution.", "cite_spans": [ { "start": 191, "end": 205, "text": "Fawcett, 1997;", "ref_id": "BIBREF16" }, { "start": 206, "end": 232, "text": "Provost and Fawcett, 2001)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "ROC, an alternative metric", "sec_num": "3" }, { "text": "Provost and Fawcett described an algorithm for creating a ROC curve for a binary classifier and introduced the ROC convex hull (ROCCH), a method for determining the set of potentially optimal classifiers regardless of the misclassification costs and class distributions. 
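As a concrete illustration, the following minimal sketch (assuming binary 0/1 sense labels and real-valued confidence scores, with both classes present and ties in the scores ignored) traces the (FPR, TPR) points obtained by sweeping the decision threshold, and computes the area under the resulting curve by the trapezoidal rule:

```python
def roc_points(labels, scores):
    """Trace a binary ROC curve as a list of (FPR, TPR) points.

    labels -- true classes (1 = positive sense, 0 = negative sense)
    scores -- classifier confidence that each instance is positive
    """
    pos = sum(labels)
    neg = len(labels) - pos
    # Rank instances by decreasing confidence; each prefix of the ranking
    # corresponds to one setting of the decision threshold.
    ranked = sorted(zip(scores, labels), reverse=True)
    points, tp, fp = [(0.0, 0.0)], 0, 0
    for _, label in ranked:
        if label == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Area under the ROC curve by the trapezoidal rule."""
    return sum((x1 - x0) * (y0 + y1) / 2.0
               for (x0, y0), (x1, y1) in zip(points, points[1:]))
```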
Srinivasan (1999) extended ROC analysis to deal with non-binary classifiers, representing the rate by which each class is traded off for another class as each axis of ROC space. This leads to a (c^2 \u2212 c)-dimensional ROC space, where c is the number of classes. The ROCCH can be calculated in O(n^c) time, where n is the number of points in ROC space.", "cite_spans": [ { "start": 270, "end": 287, "text": "Srinivasan (1999)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "ROC, an alternative metric", "sec_num": "3" }, { "text": "The sheer difficulty of visualising such a high-dimensional space prompted Fawcett to develop an alternative approach. The area under the ROC curve (AUC) represents the probability that a binary classifier will rank a randomly chosen positive instance higher than a randomly chosen negative instance. This assigns a high score to those classifiers which form the majority of the ROCCH, or are consistently close to the hull. Fawcett (2001) extended AUC to cater for multiple classes by treating a c-dimensional classifier as c binary classifiers (each performing a one-vs-all classification), giving:", "cite_spans": [ { "start": 423, "end": 437, "text": "Fawcett (2001)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "ROC, an alternative metric", "sec_num": "3" }, { "text": "AUC_total = \u03a3_i AUC(c_i) \u2022 Pr(c_i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ROC, an alternative metric", "sec_num": "3" }, { "text": "where Pr(c_i) is the prior probability of the i-th sense.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ROC, an alternative metric", "sec_num": "3" }, { "text": "WSD performance can be measured by the AUC metric, or by comparing a number of classifiers' performance curves in ROC space. Where the misclassification costs are known, the optimal classifier can be found simply by finding the point on the ROCCH with the lowest cost. The cost is simply the sum of the penalties assigned to incorrect classifications, which may be calculated from the semantic/communicative distances between senses as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ROC, an alternative metric", "sec_num": "3" }, { "text": "\u03a3_i Pr(c_i) \u03a3_j r_ij d_ij", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ROC, an alternative metric", "sec_num": "3" }, { "text": "where r_ij is the proportion of instances of sense i classified as sense j, and d_ij is the distance between senses i and j, which is zero when i = j.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ROC, an alternative metric", "sec_num": "3" }, { "text": "Where the misclassification costs are unknown or are not known precisely (as would be the case if Resnik and Yarowsky's distance matrix were supplemented with confidence ranges for each cost), the ROCCH allows performance comparison between the different classifiers. The optimal sub-surface of the ROCCH can be found using the misclassification cost ranges, meaning that only classifiers forming part of this sub-surface can be optimal. When the sub-surface is sufficiently small (i.e. the misclassification costs are known to a high degree of confidence) this should provide a good ranking of classifiers, as only a small number will form part of the optimal surface. 
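To illustrate the two quantities defined above, the following sketch computes the prior-weighted multi-class AUC and the expected misclassification cost from a hypothetical confusion matrix and distance matrix; the per-sense one-vs-all AUC values are assumed to have been computed separately, for instance with the binary ROC sketch given earlier:

```python
def total_auc(per_sense_auc, priors):
    """Fawcett's multi-class AUC: the prior-weighted sum of one-vs-all AUCs."""
    return sum(a * p for a, p in zip(per_sense_auc, priors))

def expected_cost(confusion, distance, priors):
    """Expected misclassification cost: sum_i Pr(c_i) sum_j r_ij d_ij.

    confusion[i][j] -- proportion of instances of sense i classified as sense j
    distance[i][j]  -- semantic/communicative distance between senses i and j
                       (zero on the diagonal)
    priors[i]       -- prior probability Pr(c_i) of sense i
    """
    return sum(priors[i] * sum(r_ij * d_ij
                               for r_ij, d_ij in zip(confusion[i], distance[i]))
               for i in range(len(priors)))
```

In practice the cost calculation would be applied to each candidate operating point stored on the ROCCH, rather than to a single confusion matrix. 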
This allows optimisation of learning methods that cannot incorporate non-uniform misclassification costs, as well as allowing optimisation where these costs are only known approximately and thus cannot be easily incorporated into classifier training. Storing the ROCCH allows this approach to be repeated if misclassification costs were to change.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ROC, an alternative metric", "sec_num": "3" }, { "text": "When the sub-surface is quite large (i.e. when misclassification costs are not known precisely), it is likely that a number of classifiers will lie on the optimal surface. The AUC could then be used to discriminate between these classifiers, ranking those classifiers which are consistently closer to the ROCCH higher than those which are not. While the AUC doesn't strictly indicate optimality, it does provide a reasonable approximation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ROC, an alternative metric", "sec_num": "3" }, { "text": "This method allows comparison and loose ranking of classifiers (criterion 1), in that a number of classifiers can be discarded. Given precise misclassification costs (2), the classifiers (and indeed combinations of classifiers) can be readily ranked. The baseline performance is implicitly used in the analysis: only those classifiers which achieve better results than (weighted) random combinations of the trivial classifiers will be considered (3). This method has the added benefit of being robust in the face of changing or imprecise misclassification costs. While it does not provide a readily interpretable measure (4), especially when considering the convex hull in high dimensional space, the AUC can provide such a measure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ROC, an alternative metric", "sec_num": "3" }, { "text": "I have implemented three supervised WSD methods and analysed their performance using the three measures described above. All development was performed in the Natural Language Toolkit (Loper and Bird, 2002) and the source code is available as part of the toolkit. I implemented Yarowsky's (1994) decision list method, which he used for accent restoration in French and Spanish text (roughly similar to homograph disambiguation). This method uses the single most reliable piece of evidence in predicting the sense. I also implemented Brown et al.'s (1991) method, which was used for MT between French and English using decision trees to resolve the correct translation of each ambiguous word. Training uses the flipflop algorithm (Nadas et al., 1991) to determine which feature will maximise the mutual information between a binary division of the values for that feature and the set of most probable senses given the feature takes one of those values. Both of these methods used collocates in a small window around the word as features. Lastly, I created a naive Bayes classifier (Manning and Schutze, 2000), using the unordered bag of words around the ambiguous word as the feature space. Words occurring fewer than five times in the corpora were ignored. The three algorithms were compared on the interest corpus (Bruce and Wiebe, 1994) . The word interest has six senses in the corpus with differing degrees of similarity to each other. 
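Before turning to the experiments, the following is a minimal sketch of a bag-of-words naive Bayes sense classifier of the kind described above; it is illustrative only, not the NLTK implementation used here, and the add-one smoothing is an assumption made for the sketch:

```python
from collections import Counter, defaultdict
from math import log

class NaiveBayesWSD:
    """Bag-of-words naive Bayes word sense classifier (illustrative sketch)."""

    def __init__(self, train_data, min_count=5):
        # train_data is a list of (context_words, sense) pairs; words occurring
        # fewer than min_count times are dropped from the feature space.
        word_freq = Counter(w for words, _ in train_data for w in words)
        self.vocab = {w for w, n in word_freq.items() if n >= min_count}
        self.sense_count = Counter(sense for _, sense in train_data)
        self.word_count = defaultdict(Counter)
        for words, sense in train_data:
            self.word_count[sense].update(w for w in words if w in self.vocab)
        self.total = sum(self.sense_count.values())

    def classify(self, words):
        # Return the sense maximising log Pr(sense) + sum log Pr(word | sense),
        # with add-one smoothing over the retained vocabulary.
        best, best_score = None, float("-inf")
        for sense, n in self.sense_count.items():
            score = log(n / self.total)
            denom = sum(self.word_count[sense].values()) + len(self.vocab)
            for w in words:
                if w in self.vocab:
                    score += log((self.word_count[sense][w] + 1) / denom)
            if score > best_score:
                best, best_score = sense, score
        return best
```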
Four experiments were performed; the first involved disambiguating between a pair of fine-grained senses, which were reported to be difficult for human annotators (Bruce and Wiebe, 1998), and the second and third involved pairs of more distinct senses. The last test involved disambiguating between all six senses. Table 1 shows the gloss for each sense and the senses used for each test.", "cite_spans": [ { "start": 183, "end": 205, "text": "(Loper and Bird, 2002)", "ref_id": null }, { "start": 277, "end": 294, "text": "Yarowsky's (1994)", "ref_id": "BIBREF20" }, { "start": 728, "end": 748, "text": "(Nadas et al., 1991)", "ref_id": "BIBREF15" }, { "start": 1314, "end": 1337, "text": "(Bruce and Wiebe, 1994)", "ref_id": "BIBREF2" }, { "start": 1590, "end": 1613, "text": "(Bruce and Wiebe, 1998)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 1742, "end": 1749, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Empirical results and discussion", "sec_num": "4" }, { "text": "The learning curve, shown in Figure 1, was constructed (in the same vein as Mooney's (1996) performance survey), showing the accuracy of each method on test 4 when trained with increasing amounts of data. It shows all three methods improving, with only the decision tree method showing signs of over-fitting. The accuracy, precision, recall and AUC values were measured and are shown in Table 2. Each test was performed using 10-fold cross validation. The precision, recall and AUC values were calculated with respect to the minority sense for tests 1-3. In test 4 both precision and recall are equal to the accuracy, as all three classifiers predict a sense for every instance. ROC curves were generated by ranking each instance (and predicted classification) in order of confidence, using the method described by Provost and Fawcett (2001), from which the AUC measures were calculated. The ROC curves for tests 1-3 are shown in Figure 2.", "cite_spans": [ { "start": 76, "end": 91, "text": "Mooney's (1996)", "ref_id": "BIBREF14" }, { "start": 817, "end": 843, "text": "Provost and Fawcett (2001)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 28, "end": 36, "text": "Figure 1", "ref_id": "FIGREF1" }, { "start": 387, "end": 394, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 934, "end": 942, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Empirical results and discussion", "sec_num": "4" }, { "text": "The decision list classifier is shown to be significantly more accurate than the other classifiers, exceeding the baselines for all tests, and performing extremely well for test 3. The results for test 1 are interesting in that the decision list method manages to outperform the baseline performance of 97%. With so few instances no solid conclusions may be drawn; however, the high AUC for the decision tree method suggests that it would perform better (in terms of predictive accuracy) by adjusting its threshold. This would allow it to operate at a more suitable point on its ROC curve, rather than at the origin.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical results and discussion", "sec_num": "4" }, { "text": "The increase in performance of all methods from test 2 to 3 is most likely due to the increase in data. There are roughly three times as many instances in test 3, providing more training examples. Otherwise, the problems are quite similar, with similar ratios between the two senses. 
The AUC values support these conclusions, with the decision list and decision tree consistently outperforming naive Bayes for the first three tests. This can also be seen in the ROC curves (Figure 2), where these two classifiers largely dominate naive Bayes. Naive Bayes has quite a low AUC on all of the tests, while still being greater than the benchmark of 0.5. This is reflected in its lower accuracy in each test; however, in test 4, it outperforms the decision tree method despite having a much lower AUC. This suggests that the naive Bayes classifier is operating closer to the point which maximises accuracy on its ROC surface, whereas the decision tree is not. As noted earlier, this result suggests that the decision tree classifier should be operating with a lower threshold to achieve a higher accuracy. This is also evident in Figure 2, where the curve for the decision tree method, while largely dominated by the decision list curve, is still quite close to the ROCCH.", "cite_spans": [], "ref_spans": [ { "start": 473, "end": 483, "text": "(Figure 2)", "ref_id": "FIGREF3" }, { "start": 1119, "end": 1127, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Empirical results and discussion", "sec_num": "4" }, { "text": "The highest accuracy classifier would fall on the ROC convex hull at a very steep gradient, due to the minority sense being treated as positive (m = TPR/FPR = Pr(s_b)/Pr(s_a), where s_a and s_b are the minority and majority senses respectively). If misclassification costs were biased in favour of the minority sense, the difference in performance between the decision list and decision tree methods would be likely to be reduced, as can be seen from the proximity of their ROC curves at low gradients. The decision list classifier is shown to be superior to the other two, with higher AUC values on most tests, and can be seen to be largely dominating the ROCCH for test 2 and test 3. If the misclassification costs are known at the time of training, a number of learning methods (e.g., naive Bayes) can incorporate them into the training phase, optimising the classifier with respect to these costs. However, this is not possible for all classifiers, requiring the use of ROC analysis to select the optimal classifier. While the accuracy, precision and recall measures are relatively useful for analysing tests 1-3 (assuming uniform misclassification costs), they are not very useful in test 4. The manner in which they aggregate the set of incorrect classifications together loses a great deal of information about the classifier performance. The additional effort required in performing ROC analysis is well rewarded, with much more informative measures of performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical results and discussion", "sec_num": "4" }, { "text": "The nebulous nature of the word sense, along with differing lexicographic practices, means that the task of WSD is ill-defined. Both dictionary-based and corpus-based definitions of word senses, while not always agreeing on sets of senses for a given word, do concur that some sense pairs are more closely related than others. These relationships have been quantified in deriving the semantic/communicative distance matrix. ROC analysis proves to be a viable method for analysing performance, addressing a number of shortcomings with the existing measures. It has been shown to be of particular value in measuring performance when disambiguating between three or more senses. 
It satisfies the objectives of ease of comparison (1), taking misclassification costs into account (2) and implicitly incorporates baseline performance (3), while providing a simple and understandable measure (4) through the AUC. It has the added benefit of being flexible in the face of changing or imprecise misclassification costs. This is of particular significance in WSD given the vigour of the debate over what constitutes a sense, and as to how senses relate to each other. However, ROC analysis suffers from complexity in the form of high dimensional ROC space and computational demands in finding the convex hull.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "SENSEVAL, and indeed the whole WSD field, stand to benefit from using ROC analysis as a performance metric. Further research into ROC analysis and its application to WSD and other natural language processing tasks can only help the field mature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "This may not be true for all WSD tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note also that selecting nothing will not yield a precision value at all, due to a division by zero.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Modern Information Retrieval", "authors": [ { "first": "Ricardo", "middle": [], "last": "Baeza", "suffix": "" }, { "first": "-", "middle": [], "last": "Yates", "suffix": "" }, { "first": "Berthier", "middle": [], "last": "Ribeiro-Neto", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ricardo Baeza-Yates and Berthier Ribeiro-Neto. 1999. Modern Information Retrieval. Addison Wesley.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Word-sense disambiguation using statistical methods", "authors": [ { "first": "F", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Stephen", "middle": [ "Della" ], "last": "Brown", "suffix": "" }, { "first": "Vincent", "middle": [ "J" ], "last": "Pietra", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Della Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the 29th Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "264--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter F. Brown, Stephen Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1991. Word-sense disambigua- tion using statistical methods. In Proceedings of the 29th Meeting of the Association for Computational Linguis- tics, pages 264-270.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Word sense disambiguation using decomposable models", "authors": [ { "first": "Rebecca", "middle": [], "last": "Bruce", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "139--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rebecca Bruce and Janyce Wiebe. 1994. Word sense disam- biguation using decomposable models. 
In Proceedings of the 32nd Annual Meeting of the Association for Compu- tational Linguistics, pages 139-145, Las Cruces, US.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Word sense distinguishability and inter-coder agreement", "authors": [ { "first": "Rebecca", "middle": [], "last": "Bruce", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 3rd Conference on Empirical Methods in Natural Language Processing (EMNLP-98)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rebecca Bruce and Janyce Wiebe. 1998. Word sense dis- tinguishability and inter-coder agreement. In Proceed- ings of the 3rd Conference on Empirical Methods in Nat- ural Language Processing (EMNLP-98), Granada, Spain, June. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Assessing agreement on classification tasks: The kappa statistic", "authors": [ { "first": "Jean", "middle": [], "last": "Carletta", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "2", "pages": "249--254", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jean Carletta. 1996. Assessing agreement on classification tasks: The kappa statistic. Computational Linguistics, 22(2):249-254.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "SENSEVAL-2: An overview", "authors": [ { "first": "Philip", "middle": [], "last": "Edmonds", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Cotton", "suffix": "" } ], "year": 2001, "venue": "Proceedings of SENSEVAL-2: Second International Workshop on Evaluating Word Sense Disambiguation Systems", "volume": "", "issue": "", "pages": "1--5", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Edmonds and Scott Cotton. 2001. SENSEVAL-2: An overview. In Proceedings of SENSEVAL-2: Second International Workshop on Evaluating Word Sense Dis- ambiguation Systems, pages 1-5.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Using rule sets to maximize ROC performance", "authors": [ { "first": "Tom", "middle": [], "last": "Fawcett", "suffix": "" } ], "year": 2001, "venue": "2001 IEEE International Conference on Data Mining", "volume": "", "issue": "", "pages": "131--138", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Fawcett. 2001. Using rule sets to maximize ROC per- formance. In 2001 IEEE International Conference on Data Mining, pages 131-138.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "WordNet: An Electronic Lexical Database", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Introduction to the special issue on word sense disambiguation: The state of the art", "authors": [ { "first": "Nancy", "middle": [], "last": "Ide", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Veronis", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nancy Ide and Jean Veronis. 1998. Introduction to the spe- cial issue on word sense disambiguation: The state of the art. 
Computational Linguistics, 24(1):140.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "I don't believe in word senses", "authors": [ { "first": "Adam", "middle": [], "last": "Kilgarriff", "suffix": "" } ], "year": 1997, "venue": "Computers and the Humanities", "volume": "31", "issue": "2", "pages": "91--113", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Kilgarriff. 1997. I don't believe in word senses. Computers and the Humanities, 31(2):91-113.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "SENSEVAL: An exercise in evaluating word sense disambiguation programs", "authors": [ { "first": "Adam", "middle": [], "last": "Kilgarriff", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the International Conference on Language Resources and Evaluation (LREC)", "volume": "", "issue": "", "pages": "581--588", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Kilgarriff. 1998. SENSEVAL: An exercise in evaluating word sense disambiguation programs. In Proceedings of the International Conference on Lan- guage Resources and Evaluation (LREC), pages 581- 588, Granada, Spain.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Dictionaries: The Art and Craft of Lexicography", "authors": [ { "first": "Sidney", "middle": [ "I" ], "last": "Landau", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sidney I. Landau. 2001. Dictionaries: The Art and Craft of Lexicography. Cambridge University Press, second edi- tion.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Towards building contextual representations of word senses using statistical models", "authors": [ { "first": "Claudia", "middle": [], "last": "Leacock", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Towell", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Voorhees", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics", "volume": "", "issue": "", "pages": "63--70", "other_ids": {}, "num": null, "urls": [], "raw_text": "Claudia Leacock, Geoffrey Towell, and Ellen Voorhees. 1993. Towards building contextual representations of word senses using statistical models. In SIGLEX work- shop: Acquisition of Lexical Knowledge from Text, ACL. Edward Loper and Steven Bird. 2002. NLTK: The nat- ural language toolkit. In Proceedings of the Workshop on Effective Tools and Methodologies for Teaching Natu- ral Language Processing and Computational Linguistics, pages 63-70, Philadelphia, July. Association for Compu- tational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Foundations of Statistical Natural Language Processing", "authors": [ { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Manning", "suffix": "" }, { "first": "", "middle": [], "last": "Schutze", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning and Hinrich Schutze. 2000. Foun- dations of Statistical Natural Language Processing. 
MIT Press.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Comparative experiments on disambiguating word senses: An illustration of the role of bias in machine learning", "authors": [ { "first": "Raymond", "middle": [ "J" ], "last": "Mooney", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "82--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raymond J. Mooney. 1996. Comparative experiments on disambiguating word senses: An illustration of the role of bias in machine learning. In Eric Brill and Kenneth Church, editors, Proceedings of the Conference on Em- pirical Methods in Natural Language Processing, pages 82-91. Association for Computational Linguistics, Som- erset, New Jersey.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "An iterative flip-flop approximation of the most informative split in the construction of decision trees", "authors": [ { "first": "Arthur", "middle": [], "last": "Nadas", "suffix": "" }, { "first": "David", "middle": [], "last": "Nahamoo", "suffix": "" }, { "first": "Michael", "middle": [ "A" ], "last": "Picheny", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Powell", "suffix": "" } ], "year": 1991, "venue": "International Conference on Acoustics, Speech, and Signal Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arthur Nadas, David Nahamoo, Michael A. Picheny, and Jeffrey Powell. 1991. An iterative flip-flop approxima- tion of the most informative split in the construction of decision trees. In International Conference on Acoustics, Speech, and Signal Processing, New York.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Analysis and visualization of classifier performance: Comparison under imprecise class and cost distributions", "authors": [ { "first": "J", "middle": [], "last": "Foster", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Provost", "suffix": "" }, { "first": "", "middle": [], "last": "Fawcett", "suffix": "" } ], "year": 1997, "venue": "Third International Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "43--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Foster J. Provost and Tom Fawcett. 1997. Analysis and visualization of classifier performance: Comparison un- der imprecise class and cost distributions. In Third Inter- national Conference on Knowledge Discovery and Data Mining, pages 43-48.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Robust classification for imprecise environments", "authors": [ { "first": "J", "middle": [], "last": "Foster", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Provost", "suffix": "" }, { "first": "", "middle": [], "last": "Fawcett", "suffix": "" } ], "year": 2001, "venue": "Machine Learning", "volume": "42", "issue": "3", "pages": "203--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Foster J. Provost and Tom Fawcett. 2001. Robust classi- fication for imprecise environments. 
Machine Learning, 42(3):203-231.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Distinguishing systems and distinguishing senses: new evaluation methods for word sense disambiguation", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1999, "venue": "Natural Language Engineering", "volume": "5", "issue": "2", "pages": "113--134", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik and David Yarowsky. 1999. Distinguishing systems and distinguishing senses: new evaluation meth- ods for word sense disambiguation. Natural Language Engineering, 5(2):113-134.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Note on the location of optimal classifiers in n-dimensional ROC space", "authors": [ { "first": "Ashwin", "middle": [], "last": "Srinivasan", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashwin Srinivasan. 1999. Note on the location of optimal classifiers in n-dimensional ROC space. Technical Re- port PRG-TR-2-99, Oxford University Computing Labo- ratory, Oxford.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Decision lists for lexical ambiguity resolution: Application to accent restoration in Spanish and French", "authors": [ { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "88--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Yarowsky. 1994. Decision lists for lexical ambiguity resolution: Application to accent restoration in Spanish and French. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, pages 88-95.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The meaning-frequency relationship of words", "authors": [ { "first": "George", "middle": [], "last": "Zipf", "suffix": "" } ], "year": 1945, "venue": "In Journal of General Psychology", "volume": "3", "issue": "", "pages": "251--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Zipf. 1945. The meaning-frequency relationship of words. In Journal of General Psychology, volume 3, pages 251-256.", "links": null } }, "ref_entries": { "FIGREF1": { "type_str": "figure", "uris": null, "text": "Learning curves", "num": null }, "FIGREF3": { "type_str": "figure", "uris": null, "text": "ROC curves for tests 1 -3", "num": null }, "TABREF1": { "num": null, "content": "", "text": "Test descriptions and baselines.", "type_str": "table", "html": null }, "TABREF3": { "num": null, "content": "
", "text": "Results expressed as percentages.", "type_str": "table", "html": null } } } }