{
"paper_id": "Y04-1011",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:35:15.039329Z"
},
"title": "High WSD accuracy using Naive Bayesian classifier with rich features",
"authors": [
{
"first": "Cuong Anh",
"middle": [],
"last": "Le",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Japan Advanced Institute of Science and Technology (JAIST)",
"location": {
"addrLine": "1-1 Asahidai",
"postCode": "923-1292",
"settlement": "Tatsunokuchi",
"region": "Ishikawa",
"country": "Japan"
}
},
"email": "cuonganh@jaist.ac.jp"
},
{
"first": "Akira",
"middle": [],
"last": "Shimazu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Japan Advanced Institute of Science and Technology (JAIST)",
"location": {
"addrLine": "1-1 Asahidai",
"postCode": "923-1292",
"settlement": "Tatsunokuchi",
"region": "Ishikawa",
"country": "Japan"
}
},
"email": "shimazu@jaist.ac.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Word Sense Disambiguation (WSD) is the task of choosing the right sense of an ambiguous word given a context. Using Naive Bayesian (NB) classifiers is known as one of the best methods for supervised approaches for WSD (Mooney, 1996; Pedersen, 2000), and this model usually uses only a topic context represented by unordered words in a large context. In this paper, we show that by adding more rich knowledge, represented by ordered words in a local context and collocations, the NB classifier can achieve higher accuracy in comparison with the best previously published results. The features were chosen using a forward sequential selection algorithm. Our experiments obtained 92.3% accuracy for four common test words (interest, line, hard, serve). We also tested on a large dataset, the DSO corpus, and obtained accuracies of 66.4% for verbs and 72.7% for nouns.",
"pdf_parse": {
"paper_id": "Y04-1011",
"_pdf_hash": "",
"abstract": [
{
"text": "Word Sense Disambiguation (WSD) is the task of choosing the right sense of an ambiguous word given a context. Using Naive Bayesian (NB) classifiers is known as one of the best methods for supervised approaches for WSD (Mooney, 1996; Pedersen, 2000), and this model usually uses only a topic context represented by unordered words in a large context. In this paper, we show that by adding more rich knowledge, represented by ordered words in a local context and collocations, the NB classifier can achieve higher accuracy in comparison with the best previously published results. The features were chosen using a forward sequential selection algorithm. Our experiments obtained 92.3% accuracy for four common test words (interest, line, hard, serve). We also tested on a large dataset, the DSO corpus, and obtained accuracies of 66.4% for verbs and 72.7% for nouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "WSD is always a difficult and important task in natural language processing. Its task is to determine the most appropriate sense for an ambiguous word given a context. Approaches for this work include supervised learning, unsupervised learning, and combinations of them. Except for the expense involved in building labeled datasets, supervised based methods generally give results with higher precision. Many supervised learning algorithms have been applied, such as Bayesian learning, Exemplar-Based learning, Decision Trees, Decision Lists, and Neural Networks. Despite their simplicity, NB methods are still effective when applied to WSD (Mooney, 1996; Pedersen, 2000) .",
"cite_spans": [
{
"start": 641,
"end": 655,
"text": "(Mooney, 1996;",
"ref_id": "BIBREF6"
},
{
"start": 656,
"end": 671,
"text": "Pedersen, 2000)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Before presenting the previous related studies and describing our approach, we need to define some terms that are used throughout in this paper. These are topic context, local context, and collocation. The first kind of information, which is always used for determining the senses of a word, is the topic context represented by a bag of surrounding words in a large context of the ambiguous word. The other informative resource is collocation. There are various definitions of collocation, and for our approach we define collocation as a sequence of words including the ambiguous word. Several studies, such as Leacock and Chodorow (1998) , used local context for disambiguating word senses. Like them, we define local context as the words (or tags of words) assigned with their position in relation to the ambiguous word in a local context. For example, suppose that we have a context of the ambiguous word interest as follows:",
"cite_spans": [
{
"start": 611,
"end": 638,
"text": "Leacock and Chodorow (1998)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "\"yields on money-market mutual funds continued to slide, amid signs that portfolio managers expect further declines in interest rates.\" Then the topic context includes the words: yields, money-market, mutual, funds, continued, . . .; Collocations include the expressions: interest rates, declines in interest, in interest rates, further declines in interest rate ,. . .; Local context is represented by the pairs: (declines,-2), (in,-1), (rates,1), (further, -3), . . .",
"cite_spans": [
{
"start": 187,
"end": 233,
"text": "money-market, mutual, funds, continued, . . .;",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Note that words in collocations and local contexts can be replaced by their part-of-speech tags, and then we will have new features. We also use other terms in the same meaning: unordered words as surrounding words, and ordered words as the words assigned with their positions. Mooney (1996) compared six supervised algorithms including NB, Perceptron, Decision-Tree, k Nearest-Neighbor classifier, logic-based DNF (disjunctive normal form), and CNF (conjunctive normal form), and concluded that NB and Perceptron are the best methods for WSD. He used only the words surrounding the ambiguous word as features for the classifiers. Pedersen (2000) proposed a simple but effective approach using Ensembles of NB classifiers. He showed that WSD accuracy can be improved by combining a number of simple classifiers into an ensemble. He built nine different NB classifiers based on using nine different sizes of the left and the right windows of context: 0, 1, 2, 3, 4, 5, 10, 20 and 50. His method was tested on two datasets of the words interest and line and achieved 89% and 88% accuracy, respectively. He also used only topic context for making decisions.",
"cite_spans": [
{
"start": 278,
"end": 291,
"text": "Mooney (1996)",
"ref_id": "BIBREF6"
},
{
"start": 631,
"end": 646,
"text": "Pedersen (2000)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Only a few papers have considered information other than topic context when using the NB model. Leacock and Chodorow (1998) used an NB classifier, and indicated that by combining topic context and local context they could achieve higher accuracy. In comparing NB methods with Exemplar-Based methods, Escudero (2000a) utilized most of the features used in Ng and Lee (1996) , and showed that exemplar-based algorithm outperforms the NB algorithm. However, these papers did not mention how to select appropriate features, so the features used in their papers do not contain enough information and some information, such as part-of-speech, may be redundant.",
"cite_spans": [
{
"start": 96,
"end": 123,
"text": "Leacock and Chodorow (1998)",
"ref_id": "BIBREF5"
},
{
"start": 300,
"end": 316,
"text": "Escudero (2000a)",
"ref_id": "BIBREF2"
},
{
"start": 355,
"end": 372,
"text": "Ng and Lee (1996)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In many WSD studies, authors use NB as a baseline method for comparison, but many of them use NB with only topic context while adding other information to their own methods. In this paper, we focus on two problems: The first is to determine whether a WSD system using NB will improve the accuracy of its prediction if more kinds of information than usual are used. The second is to discover which kinds of information will be useful for determining the senses of an ambiguous word. We first discuss which kinds of information will be most useful for sense determination, then use a forward sequential selection algorithm to extract the best subset of features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The experiments on some datasets widely used in WSD show that the accuracies will be much improved by combining three kinds of information: topic context, local context, and collocation. One more difference from previous studies is that we do not need to use information, such as part-of-speech tags, other than the words themselves in the context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The rest of this paper is organized as follows: Section 2 briefly presents the NB classifier. Section 3 discusses choosing features for word sense disambiguation and shows the algorithm for feature selection. Section 4 shows our experiments and compares the results to those of the best previous studies when testing on four words: interest, line, serve, and hard. Section 5 shows our results and comparison with the others on the DSO corpus. Section 6 discusses the obtained results, and finally our conclusions are presented in section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Na\u00efve Bayes methods have been used in most classification work and were first used for WSD by Gale et al. (1992) . NB classifiers work on the assumption that all the feature variables representing a problem are conditionally independent given the classes. For word sense disambiguation, the context in which an ambiguous word occurs is represented by a vector of feature variables F=(f 1 , f 2 , . . . , f n ) and the sense of the ambiguous word is represented by classification variables (s 1 , s 2 , . . ., s k ). Choosing the right sense of the ambiguous word is finding the sense s i that maximizes the conditional probability P(w=s i |F).",
"cite_spans": [
{
"start": 94,
"end": 112,
"text": "Gale et al. (1992)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Bayesian Classifier",
"sec_num": "2."
},
{
"text": "Suppose C is the context of the target word w, and F=(f 1 , f 2 , . . . , f n ) is the set of features extracted from context C, to find the right sense s' of w given context C, we have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Bayesian Classifier",
"sec_num": "2."
},
{
"text": ") ( ) | ( max arg ) ( ) ( ) | ( max arg ) | ( max arg ' i i s i i s i s s w P s w F P s w P F P s w F P F s w P s i i i = = = = = = = =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Bayesian Classifier",
"sec_num": "2."
},
{
"text": "The NB classifier works with the assumption that the features are conditional independent, so that we have\uff1a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Bayesian Classifier",
"sec_num": "2."
},
{
"text": ")] ( log ) ) | ( log( [ max arg ) ( ) | ( max arg ' i C f i j s i C f i j s s w P s w f P s w P s w f P s j i j i = + = = = = = \u2211 \u220f \u2208 \u2208",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Bayesian Classifier",
"sec_num": "2."
},
{
"text": "The features for WSD using a NB algorithm are terms such as words, collocations, and words assigned with their positions which are extracted from the context of the ambiguous word. The probability of sense s i , P(s i ), and the conditional probability of feature f j with observation of sense s i , , P(f j |s i ), are computed via Maximum-Likelihood Estimation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Bayesian Classifier",
"sec_num": "2."
},
{
"text": ") ( / ) , ( ) | ( / ) ( ) ( i i i i j i i s C s f C s w f P N s C s P = = = Where C(f j ,s i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Bayesian Classifier",
"sec_num": "2."
},
{
"text": "is the number of occurrences of f j in a context of sense s j in the training corpus, C(s i ) is the number of occurrences of s i in the training corpus, and N is the total number of occurrences of the ambiguous word w or the size of the training dataset. To avoid the effects of zero counts when estimating the conditional probabilities of the model, when meeting a new feature f j in a context of the test dataset, for each sense s i we set P(f j |w=s i ) equal 1/N.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Bayesian Classifier",
"sec_num": "2."
},
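{
"text": "To make the estimation and decision rules above concrete, here is a minimal sketch (our illustration, not code from the paper; the class name and the (features, sense) instance format are assumptions) of an NB word-sense classifier implementing the Maximum-Likelihood estimates and the 1/N fallback for unseen features:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesWSD:
    # Sketch of the Section 2 classifier. Each training instance is a
    # (features, sense) pair; feature extraction happens elsewhere.
    def fit(self, instances):
        self.N = len(instances)                 # total occurrences of w
        self.sense_count = Counter()            # C(s_i)
        self.feat_count = defaultdict(Counter)  # C(f_j, s_i)
        for features, sense in instances:
            self.sense_count[sense] += 1
            for f in features:
                self.feat_count[sense][f] += 1

    def predict(self, features):
        best_sense, best_score = None, float('-inf')
        for sense, c_s in self.sense_count.items():
            score = math.log(c_s / self.N)      # log P(s_i) = log C(s_i)/N
            for f in features:
                c_fs = self.feat_count[sense][f]
                # MLE: P(f_j | w = s_i) = C(f_j, s_i) / C(s_i);
                # a feature unseen with sense s_i falls back to 1/N.
                score += math.log(c_fs / c_s if c_fs else 1.0 / self.N)
            if score > best_score:
                best_sense, best_score = sense, score
        return best_sense
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Bayesian Classifier",
"sec_num": "2."
},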
{
"text": "Two of the most important kinds of information for determining the senses of an ambiguous word are the topic of the context and relational information representing the structural relations between the target word and the surrounding words in a local context. A bag of unordered words in the context can determine the topic of the context and collocation can determine grammatical information. Ordered words in a local context are also an important resource for relational information. We did not use syntactical relations such as verb-object, which are used in Ng and Lee (1996) , because this information can be found in collocation features and a syntactic parser does not always output a correct result.",
"cite_spans": [
{
"start": 561,
"end": 578,
"text": "Ng and Lee (1996)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3."
},
{
"text": "Let w i be the word at position i in the context of the ambiguous word w and p i be the part-of-speech tag of w i . Note that word w appears precisely at position 0 and i will be negative (positive) if w i appears on the left (right) of w. We select the following features for the feature selection algorithm:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3."
},
{
"text": "F1 is a set of unordered words in the large context,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3."
},
{
"text": "F1= {\u2026, w -2 , w -1 , w 1 , w 2 , . . .} F2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3."
},
{
"text": "is a set of words assigned with their positions in the local context,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3."
},
{
"text": "F2 = {. . ., (w -2 ,-2), (w -1 ,-1), (w 1 ,1), (w 2 ,2), . . .}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3."
},
{
"text": "F3 is a set of part-of-speech tags assigned with their positions in the local context, {. . .,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3."
},
{
"text": "(p -2 ,-2), (p -1 ,-1), (p 1 ,1), (p 2 ,2), . . .} F4 is a set of collocations of words, F4 = {. . ., w -1 w, w -2 w -1 w, ww 1 , ww 1 w 2 , . . . .} F5 is a set of collocations of part-of-speech tags, F5 = {. . ., p -1 w, p -2 p -1 w, wp 1 , wp 1 p 2 , .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3."
},
{
"text": ". . .} For example, suppose that we have a context of the ambiguous word line, in which each word is assigned with its part-of-speech, as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3."
},
{
"text": "coil <NNS> up<IN> the<DT> dry<JJ> line<NN> and<CC> stand<VB> midstream<NN> ,<,> rod<NN> in<IN> instant<NN> readiness <NN> .<.> Suppose that we use F2 and F3 with the same window size 2, collocation with maximum length (the length does not include the ambiguous word) 2, and F1 does not include stopped words. Then we have the features as follows: F1 = {coil, dry, stand, midstream, rod, instant, readiness} F2 = {(dry, -1), (the, -2), (and, 1), (stand, 2)} F3 = {(JJ, -1), (DT, -2), (CC, 1), (VB, 2)} F4 = {the dry line, dry line, dry line and, line and, line and stand} F5 = {DT JJ line, JJ line, JJ line CC, line CC, line CC VB} In our method, the feature selection algorithm has two steps: First, we must determine the appropriate sizes for the above kinds of features. For topic context we chose 50 as the left and right window size, similar to many other WSD studies. For local context and collocation features, we used the NB classifier itself as an evaluation function to find the most appropriate sizes for the windows of features in local context and for collocation lengths. Second, from the initially selected features, we used the Forward Sequential Selection (FSS) algorithm presented in Domingos (1997) for extracting the best subset of features. In FSS, the searching process starts with an empty set. First, feature subsets with only one feature are evaluated and the best feature (f*) is selected. Then, two feature combinations of f* with the other features are tested and the best subset is selected. The search goes on by adding one more feature to the subset at each step until we do not get any more performance improvement for the system.",
"cite_spans": [
{
"start": 1201,
"end": 1216,
"text": "Domingos (1997)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3."
},
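{
"text": "As an illustration of how these feature kinds can be extracted, here is a sketch under explicit assumptions: the stop list is a tiny stand-in for the tag-based filtering described later, and the 'F1='/'F2='/'F4=' prefixes are our own encoding for keeping feature kinds distinct. With local_win=2 and colloc_len=2 it reproduces the F1, F2, and F4 sets of the dry line example above:

```python
STOP_WORDS = {'up', 'the', 'and', 'in', ',', '.'}  # illustrative stand-in

def extract_features(tokens, target, topic_win=50, local_win=3, colloc_len=3):
    feats = []
    # F1: unordered content words within +/- topic_win of the target word
    lo = max(0, target - topic_win)
    hi = min(len(tokens), target + topic_win + 1)
    for i in range(lo, hi):
        if i != target and tokens[i].lower() not in STOP_WORDS:
            feats.append('F1=' + tokens[i])
    # F2: words paired with their position relative to the target word
    for d in range(-local_win, local_win + 1):
        i = target + d
        if d != 0 and 0 <= i < len(tokens):
            feats.append('F2=(%s,%d)' % (tokens[i], d))
    # F4: collocations of up to colloc_len words including the target word
    # (the length count excludes the ambiguous word itself)
    for left in range(colloc_len + 1):
        for right in range(colloc_len - left + 1):
            if left + right >= 1 and target - left >= 0 \
                    and target + right < len(tokens):
                span = tokens[target - left:target + right + 1]
                feats.append('F4=' + ' '.join(span))
    return feats

sent = ('coil up the dry line and stand midstream , '
        'rod in instant readiness .').split()
print(extract_features(sent, sent.index('line'), local_win=2, colloc_len=2))
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3."
},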
{
"text": "Note that we do not use the feature selection on the whole features because of the big set of features (some thousands of features). We prefer the objective of selecting subsets based upon the kinds of features to that of extracting the best features from the whole. We followed the wrapper approach and used the NB classifier itself as the evaluation function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3."
},
{
"text": "Therefore, feature selection was divided into two steps as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3."
},
{
"text": "Step 1: Set 4 as the maximum size for both local context and collocation length. Based on the results obtained by testing on the four words using a 10-fold cross validation, find the most appropriate sizes for local context and collocation length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3."
},
{
"text": "Step At the first step of the feature selection algorithm, we used the feature set F2 as test data to get the best local context window size, and used set F4 to get the best collocation size. We implemented the algorithm with the maximum sizes of both local context and collocation runs from 1 to 4, and obtained the results shown in Table 1 From those results, we can see that there are no significant differences in obtained accuracies between using size 4 and size 3 for both local context and collocation. For sizes 1 and 2, the accuracies are much lower. Therefore, we chose 3 as the most appropriate size for both local context window and collocation length.",
"cite_spans": [],
"ref_spans": [
{
"start": 334,
"end": 341,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3."
},
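{
"text": "A sketch of this step-1 size search, assuming a caller-supplied eval_fn(size) that returns the average 10-fold cross-validation accuracy over the four test words when only the feature kind under test (F2 or F4) is used at that size; the tolerance used to read 'no significant difference' is an illustrative choice of ours:

```python
def best_size(eval_fn, max_size=4, tolerance=0.005):
    # Evaluate sizes 1..max_size with the NB classifier itself as the
    # evaluation function, then prefer the smallest size whose accuracy
    # is within `tolerance` of the best (the paper settles on size 3).
    scores = {s: eval_fn(s) for s in range(1, max_size + 1)}
    best = max(scores.values())
    return min(s for s, acc in scores.items() if acc >= best - tolerance)
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3."
},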
{
"text": "At the second step of the algorithm, the average of results obtained from testing on the four words using a 10-fold cross validation is used as the evaluation function Eval(SF) for the feature set SF.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3."
},
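{
"text": "The second step can then be written as the following sketch of Forward Sequential Selection over the five feature kinds, with eval_fn standing in for the Eval(SF) function just described (an assumption of this sketch, supplied by the caller):

```python
def forward_sequential_selection(kinds, eval_fn):
    # Greedy FSS: start from the empty set, at each step add the feature
    # kind that most improves Eval(SF), and stop when nothing improves.
    selected, best_acc = set(), 0.0
    remaining = set(kinds)          # e.g. {'F1', 'F2', 'F3', 'F4', 'F5'}
    while remaining:
        gains = {k: eval_fn(selected | {k}) for k in remaining}
        k_best = max(gains, key=gains.get)
        if gains[k_best] <= best_acc:
            break
        selected.add(k_best)
        remaining.remove(k_best)
        best_acc = gains[k_best]
    return selected  # on the four-words data the paper obtains {F1, F2, F4}
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3."
},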
{
"text": "In the algorithm, we used only the content words in topic context. This means that we removed the words with tags including determiners, articles, pronouns, auxiliary verbs, prepositions, adverbs, and numbers. Unlike some other studies, we used all terms (unordered words, ordered words, collocations) without requiring that their frequencies be greater than a determined threshold. This was because from our experimental results, we found that the NB classifier will perform better if it combines evidence from all of the features rather than making a decision by testing only a subset of features with highly frequencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3."
},
{
"text": "The results obtained in step 2 of the algorithm are shown in the tables below. Table 2 shows the results achieved at the first and second iterated steps; at the first step, F2 is proved to be the best information for determining word senses, and the combination of F2 and F1 is proved to be the best at the second iteration. Table 3 shows the results of the third and the fourth iterations, and we learn that the combination of three features sets, F2, F1, and F4, will give the highest accuracy, and the next iteration decreases the accuracy. Table 2 . Results at the first and second iterated steps Table 3 . Results at the third and fourth iterated steps",
"cite_spans": [],
"ref_spans": [
{
"start": 79,
"end": 86,
"text": "Table 2",
"ref_id": null
},
{
"start": 325,
"end": 332,
"text": "Table 3",
"ref_id": null
},
{
"start": 544,
"end": 551,
"text": "Table 2",
"ref_id": null
},
{
"start": 601,
"end": 608,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3."
},
{
"text": "In summary, after running this function, we achieved {F1, F2, F4} as the best subset of features. In comparison with other studies, Leacock and Chodorow (1998) lacked collocations, Ng and Lee (1996) lacked local context, and Escudero (2000a, 200b) used local context and collocations with smaller sizes. In addition, all of them used part-of-speech information, and Ng and Lee (1996) added syntactical information to their features. Figure 1 shows intuitively the results of the feature selection algorithm at step 2. First, feature F2 is selected, next feature F1 is selected, then feature F4 is selected, and at the final iteration, no more features should be selected. ",
"cite_spans": [
{
"start": 132,
"end": 159,
"text": "Leacock and Chodorow (1998)",
"ref_id": "BIBREF5"
},
{
"start": 181,
"end": 198,
"text": "Ng and Lee (1996)",
"ref_id": "BIBREF7"
},
{
"start": 225,
"end": 247,
"text": "Escudero (2000a, 200b)",
"ref_id": null
},
{
"start": 366,
"end": 383,
"text": "Ng and Lee (1996)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 433,
"end": 441,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "F1",
"sec_num": null
},
{
"text": "In order to widely compare this method to others, we tested on four words which are used in numerous comparative studies of word sense disambiguation methodologies such as Pedersen (2000) , Ng and Lee (1996) , Bruce & Wiebe (1994) , and Leacock and Chodorow (1998) . These words include interest, line, serve, and hard. We obtained those data from Pedersen's homepage (1) . There are 2369 instances of interest with 6 senses, 4143 instances of line with 6 senses, 4378 instances of serve with 4 senses, and 4342 instances of hard with 3 senses. Note, however, that some of these studies did not use all four words in their experiments. We used a 10-fold cross validation for our experiment. Table 4 shows our results are much more accurate than the previous results.",
"cite_spans": [
{
"start": 172,
"end": 187,
"text": "Pedersen (2000)",
"ref_id": "BIBREF8"
},
{
"start": 190,
"end": 207,
"text": "Ng and Lee (1996)",
"ref_id": "BIBREF7"
},
{
"start": 210,
"end": 230,
"text": "Bruce & Wiebe (1994)",
"ref_id": "BIBREF0"
},
{
"start": 237,
"end": 264,
"text": "Leacock and Chodorow (1998)",
"ref_id": "BIBREF5"
},
{
"start": 368,
"end": 371,
"text": "(1)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 691,
"end": 698,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4."
},
{
"text": "Table 4. Accuracy (%) on the four words, compared with Bruce & Wiebe (1994), Mooney (1996), Ng & Lee (1996), Leacock & Chodorow (1998), and Pedersen (2000).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4."
},
{
"text": "For evaluating on a large dataset, we tested the DSO corpus published in Ng and Lee (1996) , which contains 192,800 semantically annotated occurrences of 121 nouns and 70 verbs corresponding to most frequently used and ambiguous English words. This corpus is now available in the Linguistic Data Consortium (LDC) 2 . It contains sentences without part-of-speech tags, and in each sentence the ambiguous word is labeled with a sense. We did not use a part-of-speech tagger for this corpus and so for topic context we used only some stopped words including articles, determiners, pronouns, and auxiliary verbs. The obtained accuracies are 66.4% for verbs and 72.7% for nouns. We also experimented on DSO corpus using only topic context (feature F1) for comparison and achieved an average accuracy of 63.1%. Ng and Lee (1996) Table 5 . Results on DSO data Table 5 shows our experimental result along with results of Ng and Lee (1996) using Exemplar-based method and results of Escudero et al. (2000b) using a type of AdaBoost.MH boosting algorithm called LazyBoosting on the same dataset (DSO corpus). We and Escudero et al. used a 10-fold cross validation, but Ng and Lee used two different datasets, BC50 and WSJ6, for testing (see their paper for details). On average, our result is better than the best result of Ng and Lee, and also better than the result of Escudero et al. In another experiment we compared our results with Escudero et al. (2000b) when he separately tested on a group of 15 most frequent words in DSO corpus using an AdaBoost.MH boosting algorithm. Our average result is 71.7% while his is 68.6% (see Table 6 for the detailed comparison).",
"cite_spans": [
{
"start": 73,
"end": 90,
"text": "Ng and Lee (1996)",
"ref_id": "BIBREF7"
},
{
"start": 805,
"end": 822,
"text": "Ng and Lee (1996)",
"ref_id": "BIBREF7"
},
{
"start": 913,
"end": 930,
"text": "Ng and Lee (1996)",
"ref_id": "BIBREF7"
},
{
"start": 974,
"end": 997,
"text": "Escudero et al. (2000b)",
"ref_id": "BIBREF3"
},
{
"start": 1428,
"end": 1451,
"text": "Escudero et al. (2000b)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 823,
"end": 830,
"text": "Table 5",
"ref_id": null
},
{
"start": 853,
"end": 860,
"text": "Table 5",
"ref_id": null
},
{
"start": 1622,
"end": 1629,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Test on large data",
"sec_num": "5."
},
{
"text": "In this section, we will discuss the results obtained when using more information than the topic context for disambiguating word senses with a NB classifier. In evaluating the importance of different kinds of information for WSD, Table 1 shows that the words themselves in a context are more important than their part-of-speech tags. It also shows that, in terms of their usefulness for WSD, local context provides the most informative cues, followed by collocation, and third, by topic context. We can conclude that: by combining three kinds of information, topic context, local context, and collocation, the accuracy of WSD tasks can be improved. This conclusion is confirmed by the results in Table 2 and Table 3 , which show that when all three kinds of information are used, instead of using only topic context, the accuracy increases up to about 11.1% for the four words. For DSO corpus, the accuracy increases about 7.3% (see Table 5 ). These high increases indicate that our approach, which uses more information, can produce better results. The problem here is why there is a difference between the two improvements reported above. We can see that there are more examples in the four words data than in the DSO data. The high data density may be the reason why we achieved a high accuracy, and in this case, the information about part-of-speech may be redundant. That may also be the reason why the accuracy of testing on the four words is higher than its on the DSO corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 230,
"end": 237,
"text": "Table 1",
"ref_id": null
},
{
"start": 696,
"end": 715,
"text": "Table 2 and Table 3",
"ref_id": null
},
{
"start": 933,
"end": 940,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6."
},
{
"text": "Among some WSD studies using NB with multiple kinds of information, Leacock and Chodorow (1998) did not use collocation and they only used about 200 examples for training, therefore their result is much lower than ours (less than about 9%). Escudero et al. (2000b) use all kinds of information as in our experiment, but their results were lower than ours by about 3% because they used local context and collocation but with smaller sizes. One more reason why their results are lower than ours may be that both of them used part-of-speech information.",
"cite_spans": [
{
"start": 68,
"end": 95,
"text": "Leacock and Chodorow (1998)",
"ref_id": "BIBREF5"
},
{
"start": 241,
"end": 264,
"text": "Escudero et al. (2000b)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6."
},
{
"text": "In summary, the most important point here is that WSD using NB with more useful information than usual will give better results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6."
},
{
"text": "In this paper, we described our work on a WSD task using a NB classifier with multiple kinds of features. First, we selected the most informative features, and then used a forward sequential selection algorithm to choose the best set of features which include: unordered words in a large context, ordered words in a local context, and collocations. These features do not contain information which needs complicated analysis, such as a syntactic or even a part-of-speech parser. Then, we tested our method on some common words and the large DSO dataset, and obtained results that were better than the best previously published results. Thus, our work shows that WSD using Naive Bayesian classifier with richer features can obtain high accuracies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "In our future research, information about part-of-speech will be checked to determine whether it is useful in the case when we do not have full enough training data. Other important problem which also needs to be considered is how to remove redundant features as a whole, without having to consider the kinds of features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "http://www.ldc.upenn.edu/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research is partly conducted as a program for the \"Fostering Talent in Emergent Research Fields\" in Special Coordination Funds for Promoting Science and Technology by the Japanese Ministry of Education, Culture, Sports, Science and Technology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Word-Sense Disambiguation using Decomposable Models",
"authors": [
{
"first": "R",
"middle": [],
"last": "Bruce",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "139--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bruce, R. and Wiebe, J. 1994. Word-Sense Disambiguation using Decomposable Models. Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics (ACL), pp. 139-145.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Context-sensitive feature selection for lazy learners",
"authors": [
{
"first": "P",
"middle": [],
"last": "Domingos",
"suffix": ""
}
],
"year": 1997,
"venue": "Artificial Intelligence Review",
"volume": "",
"issue": "11",
"pages": "227--253",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Domingos, P. 1997. Context-sensitive feature selection for lazy learners, Artificial Intelligence Review, (11):227-253, 1997.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Naive Bayes and Exemplar-Based Approaches to Word Sense Disambiguation Revisited",
"authors": [
{
"first": "G",
"middle": [],
"last": "Escudero",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Marquez",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Rigau",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 14th European Conference on Artificial Intelligence (ECAI)",
"volume": "",
"issue": "",
"pages": "421--425",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Escudero G., Marquez L., and Rigau G. 2000a. Naive Bayes and Exemplar-Based Approaches to Word Sense Disambiguation Revisited. Proceedings of the 14th European Conference on Artificial Intelligence (ECAI), pp. 421-425.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Boosting Applied to Word Sense Disambiguation",
"authors": [
{
"first": "G",
"middle": [],
"last": "Escudero",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Rigau",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 11th European Conference on Machine Learning (ECML)",
"volume": "",
"issue": "",
"pages": "129--141",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Escudero G., M\u00e0rquez L. and Rigau G. 2000b. Boosting Applied to Word Sense Disambiguation. Proceedings of the 11th European Conference on Machine Learning (ECML), pp. 129-141.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Method for Disambiguation Word Sense in a Large Corpus",
"authors": [
{
"first": "W",
"middle": [],
"last": "Gale",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "Yarowsky",
"middle": [
"D"
],
"last": "",
"suffix": ""
}
],
"year": 1992,
"venue": "Computers and Humanities",
"volume": "26",
"issue": "",
"pages": "415--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gale W., Church K., and Yarowsky D. 1992. A Method for Disambiguation Word Sense in a Large Corpus. Computers and Humanities, vol. 26, pp. 415-439.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Using Corpus Statistics and WordNet Relations for Sense Identification",
"authors": [
{
"first": "C",
"middle": [],
"last": "Leacock",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Chodorow",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1998,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "147--165",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leacock, C. and Chodorow, M. and Miller, G. 1998. Using Corpus Statistics and WordNet Relations for Sense Identification. Computational Linguistics, pages 147-165.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Comparative Experiments on Disambiguating Word Senses: An illustration of the role of bias in machine learning",
"authors": [
{
"first": "R",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "82--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mooney, R. J. 1996. Comparative Experiments on Disambiguating Word Senses: An illustration of the role of bias in machine learning. Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 82-91.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Integrating Multiple Knowledge Sources to Disambiguate Word Sense: An Exemplar-Based Approach",
"authors": [
{
"first": "H",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
},
{
"first": "H",
"middle": [
"B"
],
"last": "Lee",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 34th Annual Meeting of the Society for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "40--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ng, H.T. and Lee, H.B. 1996. Integrating Multiple Knowledge Sources to Disambiguate Word Sense: An Exemplar-Based Approach. Proceedings of the 34th Annual Meeting of the Society for Computational Linguistics (ACL), pp. 40-47.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Simple Approach to Building Ensembles of Naive Bayesian Classifiers for Word Sense Disambiguation",
"authors": [
{
"first": "T",
"middle": [],
"last": "Pedersen",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL)",
"volume": "",
"issue": "",
"pages": "63--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pedersen, T. 2000. A Simple Approach to Building Ensembles of Naive Bayesian Classifiers for Word Sense Disambiguation. Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL), pp. 63-69.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "1 http://www.d.umn.edu/~tpederse/data.The results of feature selection algorithm at step 2"
}
}
}
}