{ "paper_id": "Y02-1016", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:43:39.257240Z" }, "title": "A Korean Homonym Disambiguation System Based on a Statistical Model Using Weights", "authors": [ { "first": "Jun-Su", "middle": [], "last": "Kim", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Ulsan", "location": { "addrLine": "San29, Mugeo-dong", "postCode": "680-749", "settlement": "Nam-gu Ulsan", "country": "Korea" } }, "email": "jskim@cic.ulsan.ac.kr" }, { "first": "Wang-Woo", "middle": [], "last": "Lee", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Ulsan", "location": { "addrLine": "San29, Mugeo-dong", "postCode": "680-749", "settlement": "Nam-gu Ulsan", "country": "Korea" } }, "email": "wwlee@cic.ulsan.ac.kr" }, { "first": "Chang-Hwan", "middle": [], "last": "Kim", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Ulsan", "location": { "addrLine": "San29, Mugeo-dong", "postCode": "680-749", "settlement": "Nam-gu Ulsan", "country": "Korea" } }, "email": "" }, { "first": "Cheol-Young", "middle": [], "last": "Ock", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Ulsan", "location": { "addrLine": "San29, Mugeo-dong", "postCode": "680-749", "settlement": "Nam-gu Ulsan", "country": "Korea" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "A homonym can be disambiguated by other words in the context, such as the nouns and predicates used with it. This paper uses semantic information (co-occurrence data) obtained from the definitions of the part-of-speech (POS) tagged UMRD-S 1). In this research, we have analyzed the results of an experiment on a homonym disambiguation system based on a statistical model to which Bayes' theorem is applied, and suggested a model that incorporates the weight of sense rate and the weight of distance to adjacent words to improve accuracy. 
Applying the homonym disambiguation system using semantic information to the homonyms appearing in dictionary definition sentences showed an average accuracy of 98.32% for the 200 most frequent homonyms. We selected 49 (31 substantives and 18 predicates) of the 200 homonyms used in the experiment, and performed an experiment on 50,703 sentences, extracted from the Sejong Project tagged corpus (i.e. a corpus of morphologically analyzed words) of 3.5 million words, that include one of the 49 homonyms. Assigning the weight of sense rate (prior probability) and the weight of distance for the 5 words preceding and following the homonym to be disambiguated showed accuracy 2.93% higher than disambiguation systems based on existing statistical models.", "pdf_parse": { "paper_id": "Y02-1016", "_pdf_hash": "", "abstract": [ { "text": "A homonym can be disambiguated by other words in the context, such as the nouns and predicates used with it. This paper uses semantic information (co-occurrence data) obtained from the definitions of the part-of-speech (POS) tagged UMRD-S 1). In this research, we have analyzed the results of an experiment on a homonym disambiguation system based on a statistical model to which Bayes' theorem is applied, and suggested a model that incorporates the weight of sense rate and the weight of distance to adjacent words to improve accuracy. Applying the homonym disambiguation system using semantic information to the homonyms appearing in dictionary definition sentences showed an average accuracy of 98.32% for the 200 most frequent homonyms. We selected 49 (31 substantives and 18 predicates) of the 200 homonyms used in the experiment, and performed an experiment on 50,703 sentences extracted from the Sejong Project tagged corpus (i.e. 
a corpus of morphologically analyzed words) of 3.5 million words that include one of the 49 homonyms. Assigning the weight of sense rate (prior probability) and the weight of distance for the 5 words preceding and following the homonym to be disambiguated showed accuracy 2.93% higher than disambiguation systems based on existing statistical models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Ambiguity, the most difficult problem in natural language processing (NLP), arises inevitably in every analysis stage, including morphological and syntactic analysis. Ambiguity problems in some of these stages have been resolved to a degree. As studies on semantic and discourse analysis become more active, more effort is being devoted to word sense disambiguation (WSD). WSD means determining which sense of a word is contextually suitable for a sentence when the word is used with two or more different meanings. [1, 3, 4, 5, 8] Studies on resolving ambiguity are largely grouped, by the pattern of the learning data, into methods using dictionaries and methods using a corpus. 
In terms of methodology, the approaches divide largely into methods using rules, methods using probability and statistics, and methods using a semantic hierarchy.", "cite_spans": [ { "start": 576, "end": 579, "text": "[1,", "ref_id": "BIBREF0" }, { "start": 580, "end": 582, "text": "3,", "ref_id": "BIBREF2" }, { "start": 583, "end": 585, "text": "4,", "ref_id": "BIBREF3" }, { "start": 586, "end": 588, "text": "5,", "ref_id": "BIBREF4" }, { "start": 589, "end": 591, "text": "8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A method using dictionaries has the disadvantage that it is difficult to reflect the dynamic characteristics of a language [10, 11] , but the advantage that detailed information about word senses can be extracted [2, 3] . To disambiguate using a corpus, a large semantic-tagged corpus is required; however, a high-quality corpus is hard to find, and costly and time-consuming to build. On the other hand, this method does reflect the dynamic characteristics of a language.", "cite_spans": [ { "start": 123, "end": 127, "text": "[10,", "ref_id": "BIBREF9" }, { "start": 128, "end": 131, "text": "11]", "ref_id": "BIBREF10" }, { "start": 217, "end": 220, "text": "[2,", "ref_id": "BIBREF1" }, { "start": 221, "end": 223, "text": "3]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we extract semantic information from a dictionary definition corpus based on the method suggested in J. Huh (2000) [3] . We study how to utilize this semantic information in a homonym disambiguation model based on Bayes' theorem. The semantic information is extracted from the definitions of the part-of-speech (POS) tagged UMRD. Before extracting semantic information, we must classify definitions and title words according to their meanings. The structures of definitions vary, and are classified into 11 types by Cho(1999) [2] . 
The most frequent type is one in which the definition is headed by a head-word (hypernym); the title word is also included in the semantic information. The semantic information is classified into two types. [2, 3] The first holds a hyponym-hypernym relation between the title word and the head-word (homonym) in the definition. The other is extracted from definitions in which the homonym is used to define other words; in other words, the homonym is located in the middle of the definition, and the title words of this second type are included in the semantic information. The two types are merged into the semantic information. In formula (3), Hsk is the k-th sense of homonym H, and w_j, appearing in sentence C, is a word associated with the semantic information of Hsk, carrying its frequency information. In addition, w_j may appear in the semantic information of other senses with different frequencies [2] . Formula (2) represents the sum, over the appearing words, of the probabilities from formula (3) of being identified as sense Hsk. Formula (1) disambiguates the sense of homonym H in sentence C by taking the maximum of the per-sense sums calculated in formula (2). We tested the statistical basic model (NB: Na\u00efve Bayes model) on 31 nouns and 18 predicates selected among the homonyms frequently appearing in the dictionary definitions, against the 3.5 million word POS-tagged corpus of the Sejong Project. When applied to all the words of the 50,703 sentences containing the selected 49 homonyms, the accuracy averaged 77.67% for nouns and 61.73% for predicates. 
When applying only to the 5 words preceding and following the homonyms, the accuracy was 72.87% for nouns and 43.79% for predicates.", "cite_spans": [ { "start": 131, "end": 134, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 532, "end": 541, "text": "Cho(1999)", "ref_id": "BIBREF1" }, { "start": 542, "end": 545, "text": "[2]", "ref_id": "BIBREF1" }, { "start": 708, "end": 711, "text": "[2,", "ref_id": "BIBREF1" }, { "start": 712, "end": 714, "text": "3]", "ref_id": "BIBREF2" }, { "start": 1326, "end": 1329, "text": "[2]", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we examine the cases of erroneous analysis in the statistical basic model (NB) and look for a method to resolve them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Patterns in the Basic Model", "sec_num": "3.2" }, { "text": "The result of applying the extracted words and their frequencies to formulas (3) and (2) is given in [ Table 3 ]. Consequently the basic model selects '[bae]_4 (fruit)', and so fails to disambiguate.", "cite_spans": [], "ref_spans": [ { "start": 98, "end": 105, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Error Patterns in the Basic Model", "sec_num": "3.2" }, { "text": "[Example sentence 1: Korean sentence not recoverable from the scan]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Patterns in the Basic Model", "sec_num": "3.2" }, { "text": "The major reasons for disambiguation failure are, first, that the probability calculation for the frequency of the semantic information used in disambiguation does not consider the use frequency of the homonym. For example, the semantic information of '[deul-da]' extracted from dictionary definitions in [ may not be determinant in disambiguating the corresponding homonym. 
Accordingly, in this paper, we extracted semantic information from the 5 words preceding and following a homonym. [1, 4] Yet in this case, the semantic relevance may differ according to the distance from the homonym. Therefore, we should consider the location (the distance from the homonym) at which the semantic information is found.", "cite_spans": [ { "start": 499, "end": 502, "text": "[1,", "ref_id": "BIBREF0" }, { "start": 503, "end": 505, "text": "4]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Table 3. Probability drawn from NB Method", "sec_num": null }, { "text": "In this paper, we suggest a method to resolve these two problems. The dictionary definition sense information used as the prior probability varies greatly in word types and frequencies according to the appearance frequencies of the senses (Hs1, Hs2, ..., Hsn) of a homonym.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 3. Probability drawn from NB Method", "sec_num": null }, { "text": "When a word w_j (∈ Hs1 ∩ Hs2 ∩ ... ∩ Hsk) appearing in sentence C occurs commonly in several semantic information sets, a sense whose frequency sum is small is highly likely to be selected, because the word receives a high probability for it through formula (3). According to [Table 4 ], if the same word appears with '[bae]_1 (a body part)' and '[bae]_4 (fruit)', its frequency with '[bae]_1 (a body part)' would have to be 15 times higher than with '[bae]_4 (fruit)' for '[bae]_1 (a body part)' to be selected. However, 15 is quite a high appearance frequency, and is enough by itself to disambiguate the homonym. Accordingly, we need a method that takes into account both the number of words in the semantic information and their frequencies in the Bayes theorem of the basic statistical model, considering the peculiar features of vocabularies.", "cite_spans": [], "ref_spans": [ { "start": 261, "end": 269, "text": "[Table 4", "ref_id": null } ], "eq_spans": [], "section": "Table 4. 
Sampled Number of words and Sums of Frequencies in Semantic information Extracted from Dictionary Definitions", "sec_num": null }, { "text": "In this paper, we assume that the words in the semantic information provide the clue, and we use them for disambiguation. Using the number of noun and predicate words belonging to the senses of a homonym (Hs1, Hs2, ..., Hsn), we obtain the weight SR (Sense Rate) as formula (4) : SR(Hs_k) = (number of words in Hs_k) / Σ_{i=1}^{n} (number of words in Hs_i)", "cite_spans": [ { "start": 295, "end": 298, "text": "(4)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Table 4. Sampled Number of words and Sums of Frequencies in Semantic information Extracted from Dictionary Definitions", "sec_num": null }, { "text": "Words in sense Hsk have the prior probability P(w_j ∩ Hs_k). By multiplying the existing probability by the weight of sense rate SR(Hs_k), we obtain a new probability. A statistical model (SR: the statistical model with Sense Rate) that considers the weight of sense use frequency is completed by applying the weighted probability to formula (1) and formula (2) , which yields formula (5).", "cite_spans": [ { "start": 379, "end": 382, "text": "(2)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "J=1", "sec_num": null }, { "text": "P_SR(Hs_k | w_j) = ( P(w_j ∩ Hs_k) × SR(Hs_k) ) / ( Σ_{i=1}^{n} P(w_j ∩ Hs_i) × SR(Hs_i) ) (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PSR k I )40=", "sec_num": null }, { "text": "To [example sentence 1], on which the statistical basic model failed, we applied the statistical model (SR) reflecting the weight of sense use frequency to the words extracted in [ Table 2 ], and found that the model disambiguated correctly, as shown in [ Table 5 ]. 
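As an illustration added here (ours, not the authors' code), the sense-rate weighting of formulas (4) and (5) can be sketched in Python; the `sense_info` layout and the toy counts are assumptions, and raw definition frequencies stand in for the joint probabilities P(w_j ∩ Hs_k), since the common normalizer cancels in formula (5):

```python
# Hedged sketch of the SR model (formulas (4)-(5)); not the authors' code.
# sense_info maps each sense Hs_k to {word: frequency} drawn from definitions.

def sense_rate(sense, sense_info):
    """Formula (4): SR(Hs_k) = |words in Hs_k| / sum_i |words in Hs_i|."""
    total = sum(len(words) for words in sense_info.values())
    return len(sense_info[sense]) / total

def p_sr(word, sense, sense_info):
    """Formula (5): sense-rate-weighted conditional probability."""
    num = sense_info[sense].get(word, 0) * sense_rate(sense, sense_info)
    den = sum(words.get(word, 0) * sense_rate(s, sense_info)
              for s, words in sense_info.items())
    return num / den if den else 0.0

# Toy data: the word "a" is shared by a 2-word sense and an 8-word sense.
sense_info = {
    "Hs1": {"a": 1, "b": 1},
    "Hs2": {"a": 1, "c": 1, "d": 1, "e": 1, "f": 1, "g": 1, "h": 1, "i": 1},
}
print(round(p_sr("a", "Hs2", sense_info), 2))  # -> 0.8
```

With equal raw frequencies, SR shifts a shared word toward the sense whose definitions contribute more distinct words, normalizing for vocabulary size as argued above.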
", "cite_spans": [], "ref_spans": [ { "start": 189, "end": 196, "text": "Table 2", "ref_id": "TABREF4" }, { "start": 263, "end": 270, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "PSR k I )40=", "sec_num": null }, { "text": "If we can utilize syntactic structure in disambiguating homonyms appearing in sentences, we may reduce unnecessary factors by selecting good-quality semantic information for the disambiguation. The present disambiguation model is based on a simple statistical model utilizing dictionary semantic information; thus we attempt to resolve the problem by using the information about the adjacent words efficiently. In this paper, the disambiguation accuracy using the 5 words preceding and following the homonym does not differ significantly from that using all the words, because the semantic information used is largely found in the adjacent words. In particular, it is clear that, among those 5 words, a word closer to the homonym is more influential in disambiguation. Accordingly, we apply the weight of distance. Considering the absolute distance d(H)-d(w_j) between the homonym and a word used as semantic information, we have derived the weight of formula (6) . By applying the weight of distance Dis(H, w_j) to formula (5) , which gives a new weight alongside the weight of sense rate, we reflect the distance from the homonym. The longer the distance, the less influential the word is in the disambiguation. Accordingly, a word found near the homonym records a high probability, and a word found far from the homonym records a low probability. Table 6 . 
The result of applying weights of distance to probabilities with a deviation of 20% after applying weights of SR", "cite_spans": [ { "start": 877, "end": 880, "text": "(6)", "ref_id": "BIBREF5" }, { "start": 940, "end": 943, "text": "(5)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 1283, "end": 1290, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Consideration of the Distance between Words", "sec_num": "4.2" }, { "text": "To improve the efficiency of the method, we experimented with a method in which the weights of distance are applied only when the difference in disambiguation probability is insignificant (the deviation is within 20%) under the statistical model (SR) considering the weights of sense rate. According to [ Table 5 ], the highest probability is 36.17%, followed by 23.83%, and most are less than 20%. Thus, when the weights of distance are also considered [ Table 6 ], we find that the disambiguation is correct. After extracting the 200 homonyms appearing in dictionary definitions, we selected 49 words (31 nouns and 18 predicates) whose senses are used in balance, and applied them to the disambiguation model [ Table 7 ]. The analysis result by homonym is given in [Appendix 1]", "cite_spans": [], "ref_spans": [ { "start": 290, "end": 297, "text": "Table 5", "ref_id": "TABREF6" }, { "start": 433, "end": 440, "text": "Table 6", "ref_id": null }, { "start": 690, "end": 697, "text": "Table 7", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Consideration of the Distance between Words", "sec_num": "4.2" }, { "text": "We extracted 50,700 sentences that include the selected homonyms from the Sejong Project tagged corpus (around 3.5 million words), and performed automatic disambiguation using the statistical basic model (NB). After correcting the homonym senses by post-processing the automatically disambiguated sentences, we compared the accuracy rates. The average accuracy rate of the basic statistical model is given in [ Table 8 ]. 
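To make the two-stage procedure concrete, here is a hedged sketch (ours, not the paper's implementation): the exact form of Dis(H, w_j) in formula (6) is not fully legible in the scan, so an inverse-distance weight is assumed for illustration, and `p_sr_stub` supplies invented SR-weighted probabilities:

```python
# Hedged sketch of the combined model: distance weights (formula (7)) are
# applied only when the first SR-weighted pass is within a 20% deviation.
# p_sr(word, sense) is the SR-weighted probability of formula (5).

def dis(h_pos, w_pos):
    """Assumed distance weight: words closer to the homonym count more."""
    return 1.0 / (1 + abs(h_pos - w_pos))

def score_with_distance(sense, context, h_pos, p_sr):
    """Formula (7): P(Hs_k, C) = sum_j P_SR(Hs_k | w_j) x Dis(H, w_j)."""
    return sum(p_sr(w, sense) * dis(h_pos, j) for j, w in enumerate(context))

def disambiguate(context, h_pos, senses, p_sr, deviation=0.20):
    # First pass: SR-weighted sums only (assumes at least two senses).
    base = {s: sum(p_sr(w, s) for w in context) for s in senses}
    ranked = sorted(base, key=base.get, reverse=True)
    total = sum(base.values()) or 1.0
    # Second pass: distance weights, only when the top senses are close.
    if (base[ranked[0]] - base[ranked[1]]) / total < deviation:
        dist = {s: score_with_distance(s, context, h_pos, p_sr) for s in senses}
        return max(dist, key=dist.get)
    return ranked[0]

def p_sr_stub(w, s):
    # Hypothetical SR-weighted probabilities for two senses A and B.
    return {("x", "A"): 0.5, ("x", "B"): 0.5,
            ("y", "A"): 0.3, ("y", "B"): 0.25}.get((w, s), 0.0)

print(disambiguate(["y", "x"], 2, ["A", "B"], p_sr_stub))  # -> A
```

Because the two base scores here differ by far less than 20% of their sum, the distance-weighted second pass decides the sense, mirroring the experiment described above.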
In the experiment, we attempted to disambiguate using all the words and using the 5 words preceding and following the homonyms. For nouns under the existing statistical model (NB), the difference in accuracy between disambiguating with all the words and with the 5 words is 4.8%. This indicates that much of the significant information lies in the adjacent words and that, even in a long sentence, the words adjacent to a homonym give enough semantic information to disambiguate it. In addition, using adjacent words may exclude unnecessary information that can arise when extracting information from all the words. For predicates, the semantic information extracted from dictionary definitions is often insufficient for disambiguating homonyms. For example, '[but_da]' is closely associated with '[ssot_da] (pour)', but '[ssot_da]' is not included in the sense information. As a result of adding it to the semantic information and re-analyzing, the accuracy rate increased by 6%. The reason is that dictionary definition conventions limit the words that may be used. Accordingly, a more efficient method to add the necessary semantic information should be researched further in the future. Table 9 . The model considering the weights of SR [Table 9 ] gives the analysis result using the model that obtains new probabilities considering the weights of sense rate (SR). As a result of considering the weights, the accuracy rate increased by 1.7% for analysis on all the words, and by 2.4% for analysis on the five words. When analyzing all the words, accuracy rates increased for 29 homonyms, and when analyzing the five words, the rates increased for 31 homonyms. The result shows that applying the weights of sense rate to the basic statistical model makes disambiguation more efficient. 
In addition, the weights are applied more effectively for the 5 words preceding and following the homonyms.", "cite_spans": [], "ref_spans": [ { "start": 418, "end": 425, "text": "Table 8", "ref_id": "TABREF9" }, { "start": 1619, "end": 1626, "text": "Table 9", "ref_id": null }, { "start": 1669, "end": 1677, "text": "[Table 9", "ref_id": null } ], "eq_spans": [], "section": "Number of", "sec_num": null }, { "text": "The result in [ Table 10 ] comes from applying the weights of distance to the 5 words preceding and following the homonyms when the deviation of probability is within 20% after the first disambiguation considering the weights of sense rate. The accuracy rate for all the words increased by only 0.61% over that of the basic statistical model (NB), an increase smaller than that from applying the weights of sense rate. For the five words preceding and following the homonyms, the analysis shows the highest accuracy rate. Accordingly, a model that combines the two weights suggested in this paper is most efficient.", "cite_spans": [], "ref_spans": [ { "start": 16, "end": 24, "text": "Table 10", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Table 10. 
The model applying the weights of distance to probabilities with a deviation of 20% after applying weights of SR", "sec_num": null }, { "text": "According to the analysis of the cases in which accuracy falls in the model reflecting the weights of sense use frequency and the distance between words, the most significant cause appears to be a lack of semantic information. It is assumed that dictionaries restrain the use of extensive vocabularies and define the meanings of a word using a limited number of words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Patterns", "sec_num": "5.3" }, { "text": "First, according to the results of experimenting with the statistical model reflecting the weights of sense rate and of distance between words suggested in this paper, we conclude that appropriate weights support disambiguation and that further determinant weights should be explored. Second, further research is required to refine and expand the semantic information extracted from dictionary definitions. For refinement, we should examine how nouns ( . . ), which are highly frequent because of the peculiar characteristics of dictionary definitions, affect disambiguation, and prepare a method to exclude unnecessary semantic information appropriately. 
It is also necessary to study methods to extract semantic information and to expand it using semantic networks, along with building a large semantic-tagged corpus by creating a semantic tagging program for expanding semantic information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Further Research", "sec_num": null }, { "text": ") Senses of the homonym '[deul-da]': _1 (stay, permeate), _4 (lift up, suggest {a fact or an example}), _5 (receive the action represented by the preceding noun)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "[Appendix: per-homonym analysis results; table not recoverable from the scan]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Word-Sense Disambiguation Using Statistical Models of Roget's Categories Trained on Large Corpora", "authors": [ { "first": "D", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Yarowsky(1992), \"Word-Sense Disambiguation Using Statistical Models of Roget's Categories Trained on Large Corpora\", COLING-92", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A Korean Noun Semantic Hierarchy based on Semantic Features", "authors": [ { "first": "P", "middle": [ "O" ], "last": "Cho", "suffix": "" }, { "first": "C", "middle": [ "Y" ], "last": "Ock", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 18th ICCPOL", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P.O. Cho and C.Y. 
Ock(1999), \"A Korean Noun Semantic Hierarchy based on Semantic Features\", Proceedings of the 18th ICCPOL, Vol. 1.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A Homonym Disambiguation System based on Semantic Information extracted from Definitions in dictionary", "authors": [ { "first": "J", "middle": [], "last": "Hur", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Hur(2001), \"A Homonym Disambiguation System based on Semantic Information extracted from Definitions in dictionary\", ICCPOL-2001", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Na\u00efve Bayes and Exemplar-based Approaches to Word Sense Disambiguation Revisited", "authors": [ { "first": "G", "middle": [], "last": "Rigau", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Rigau(2000), \"Na\u00efve Bayes and Exemplar-based Approaches to Word Sense Disambiguation Revisited\", ECAI", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Machine Learning and Natural Language Processing", "authors": [ { "first": "L", "middle": [], "last": "Marquez", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. 
Marquez(2000), \"Machine Learning and Natural Language Processing\"", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Statistical Sense Disambiguation with Relatively Small Corpora Using Dictionary Definitions", "authors": [ { "first": "Alpha", "middle": [ "K" ], "last": "Luk", "suffix": "" } ], "year": 1995, "venue": "33rd Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alpha K. Luk(1995), \"Statistical Sense Disambiguation with Relatively Small Corpora Using Dictionary Definitions\", 33rd Annual Meeting of the ACL", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Word Sense Disambiguation Using Decomposable Models", "authors": [ { "first": "R", "middle": [], "last": "Bruce", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "139--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Bruce(1994), \"Word Sense Disambiguation Using Decomposable Models\", 32nd Annual Meeting of the ACL, pp. 139-145", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Word sense disambiguation using statistical methods", "authors": [ { "first": "P", "middle": [], "last": "Brown", "suffix": "" }, { "first": "V", "middle": [ "Della" ], "last": "Pietra", "suffix": "" }, { "first": "S", "middle": [ "Della" ], "last": "Pietra", "suffix": "" }, { "first": "R", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the 29th Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "264--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Brown, V. Della Pietra, S. Della Pietra and R. Mercer(1991) Word sense disambiguation using statistical methods. 
In Proceedings of the 29th Annual Meeting of the ACL, pp. 264-270", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A Connectionist Approach to Word Sense Disambiguation", "authors": [ { "first": "G", "middle": [], "last": "Cottrell", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Cottrell(1989) A Connectionist Approach to Word Sense Disambiguation. Pitman, London", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A Corpus-Based Approach to Language Learning", "authors": [ { "first": "E", "middle": [], "last": "Brill", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Brill(1993) A Corpus-Based Approach to Language Learning. Ph.D. thesis, Computer and Information Science, University of Pennsylvania", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Word Sense Disambiguation From Unlabeled Data", "authors": [ { "first": "S", "middle": [], "last": "Park", "suffix": "" }, { "first": "B", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Y", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "330--332", "other_ids": {}, "num": null, "urls": [], "raw_text": "S.B. Park, B.T. Zhang, Y.H. Kim(2000) \"Word Sense Disambiguation From Unlabeled Data\", KISS '2000 Spring B', pp. 330-332", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The use of thesaurus for disambiguating verbs and its limitation", "authors": [ { "first": "Young-Bin", "middle": [], "last": "Song", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" }, { "first": "", "middle": [], "last": "Gi-Sun", "suffix": "" } ], "year": 2000, "venue": "Treatise collection of the 12th Korean Alphabet and Korean Language Information Processing Conference", "volume": "", "issue": "", "pages": "255--261", "other_ids": {}, 
"num": null, "urls": [], "raw_text": "Song, Young-bin, Choi, Gi-sun (2000) \"The use of thesaurus for disambiguating verbs and its limitation\", Treatise collection of the 12th Korean Alphabet and Korean Language Information Processing Conference, pp. 255-261", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "WordNet automatic mapping using disambiguation", "authors": [ { "first": "Chang-Gi", "middle": [], "last": "Lee", "suffix": "" }, { "first": "", "middle": [], "last": "Lee", "suffix": "" }, { "first": "", "middle": [], "last": "Kun-Bae", "suffix": "" } ], "year": 2000, "venue": "Treatise collection of the 12th Korean Alphabet and Korean Language Information Processing Conference", "volume": "", "issue": "", "pages": "262--268", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee, Chang-gi, Lee, Kun-bae (2000) \"WordNet automatic mapping using disambiguation\", Treatise collection of the 12th Korean Alphabet and Korean Language Information Processing Conference, pp. 262-268", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Disambiguating verbs using corpus and dictionaries", "authors": [ { "first": "Jung-Mi", "middle": [], "last": "Cho", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cho, Jung-mi(1998) \"Disambiguating verbs using corpus and dictionaries\", Ph.D. thesis, Korea Advanced Institute of Science and Technology", "links": null } }, "ref_entries": { "FIGREF0": { "text": "The Process to extract Semantic Information", "type_str": "figure", "uris": null, "num": null }, "FIGREF2": { "text": "[Figure: POS-tagged phrase examples; characters lost in extraction] 
Distance of Phrases", "type_str": "figure", "uris": null, "num": null }, "FIGREF3": { "text": "Distance between a homonym and a word of semantic information. Considering the absolute distance d(H)-d(w_j) between a homonym and a word used as semantic", "type_str": "figure", "uris": null, "num": null }, "FIGREF4": { "text": "P(Hs_k, C) = Σ_{j=1}^{n} P_SR(Hs_k | w_j) × Dis(H, w_j) (7)", "type_str": "figure", "uris": null, "num": null }, "TABREF2": { "text": "Types of definitions for extracting semantic information", "type_str": "table", "num": null, "content": "
3 Word Sense Disambiguation Model Based on Statistics
3.1 Statistical Model Based on Bayes' Theorem
In the WSD model that uses the semantic information of homonym senses extracted from dictionary
definitions as the prior probability in Bayes' theorem, a homonym H appearing in an arbitrary sentence
C is disambiguated as one of the senses Hs1, Hs2, ..., Hsn:
W(H, C) = argmax_{Hs_k} P(Hs_k, C)    (1)
P(Hs_k, C) = Σ_{j=1}^{n} P(Hs_k | w_j)    (2)
P(Hs_k | w_j) = P(w_j ∩ Hs_k) / Σ_{i=1}^{n} P(w_j ∩ Hs_i)    (3)
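As an illustration added here (not part of the original paper), formulas (1)-(3) can be computed directly from per-sense word frequencies; the `sense_info` layout and the toy counts below are invented:

```python
# Minimal sketch of the basic statistical model (NB) in formulas (1)-(3).
# sense_info maps each sense Hs_k of a homonym to {word: frequency} taken
# from dictionary definitions.

def p_sense_given_word(word, sense, sense_info):
    """Formula (3): P(Hs_k | w_j) = P(w_j ∩ Hs_k) / sum_i P(w_j ∩ Hs_i)."""
    total = sum(words.get(word, 0) for words in sense_info.values())
    return sense_info[sense].get(word, 0) / total if total else 0.0

def disambiguate(context_words, sense_info):
    """Formulas (1)-(2): pick the sense maximizing the summed probabilities."""
    scores = {sense: sum(p_sense_given_word(w, sense, sense_info)
                         for w in context_words)
              for sense in sense_info}
    return max(scores, key=scores.get)

# Invented counts for the homonym [bae]: sense 1 'a body part', sense 4 'fruit'.
sense_info = {
    "bae_1": {"stomach": 3, "pain": 2},
    "bae_4": {"tree": 4, "sweet": 1, "pain": 1},
}
print(disambiguate(["pain", "stomach"], sense_info))  # -> bae_1
```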
", "html": null }, "TABREF4": { "text": ", the word appears once for each of '[bae]_3 (vessel)' and '[bae]_4 (fruit)', but the numbers of words used in the definitions of '[bae]_3 (vessel)' and '[bae]_4 (fruit)' are 24 and 513 respectively. Therefore, as '[deul-da]' is extracted once from 24 words and once from 513 words, the frequency should be normalized. Secondly, the present NB model does not analyze syntactic structure, and disambiguates a homonym simply according to what semantic information the sentence containing the homonym has. This is based on the assumption that, if the homonym is not used metaphorically or idiomatically, it is used with words semantically related to the corresponding sense, and that, if the homonym is used in a simple sentence, it can almost always be disambiguated without analyzing the syntactic structure. In complex or compound sentences, however, the extracted semantic information", "type_str": "table", "num": null, "content": "
word | sense | number of words (Noun / Pred.) | frequency sum (Noun / Pred.)
16393072,3231,313
36682831,5931,114
tli1413067164102
4636382676178
ul-_16624661,6081,304
-a-_45152261,013516
841514619
", "html": null }, "TABREF6": { "text": "", "type_str": "table", "num": null, "content": "", "html": null }, "TABREF7": { "text": "Selection if Ambiguous Homonyms and Extraction of Test Sentences", "type_str": "table", "num": null, "content": "
5 Experiment and Analysis
5.1
| Sentences | All words | 5 words | Accuracy rate (all words) | Accuracy rate (5 words)
Noun | 30,451 | 23,652 | 22,189 | 77.67% | 72.87%
Predicate | 20,252 | 12,506 | 8,868 | 61.75% | 43.79%
", "html": null }, "TABREF8": { "text": "Homonyms used in disambiguation experiments Comparison of the Basic Model and the Model based on Weights of Sense Rate", "type_str": "table", "num": null, "content": "
Nouns (31): [geo-ri], [gyeol-jeong], [gyeon-ggi], [gulc], [gi-gu], [gi-won], [nal], [nun], [dae], [dok], [deung], [mot], [bae], [bu-jeong], [bi], [sang], [seong], [ui-sa], [ui-ji], [i-sang], [jang-gi], [jang-su], [jeol], [ju-j...], [jung], [ji-do], [cha], [chang], [cheol], [pan], [pyo]
Predicates (18): [gal-da], [go-reu-da], [goe-da], [kici-da], [dal-da], [deul-da], [mal-da], [mat-da], [mut-da], [but-da], [swi-da], [ssa-da], [ta-da], [sseu-da], [i-reu-da], [cha-da], [kyeo-da], [ji-da]
(Hangul forms lost in extraction; romanizations kept as printed.)
5.2
", "html": null }, "TABREF9": { "text": "", "type_str": "table", "num": null, "content": "", "html": null }, "TABREF11": { "text": "nouns ([sa-ram], [il], [ttae], ...), verbs ([ha-da], [dae-da], [it-da], ...) and adverbs ([eop-da], [it-da], [keu-da], [jak-da], [gat-da], [da-reu-da], ...); hangul forms lost in extraction.", "type_str": "table", "num": null, "content": "
", "html": null } } } }