{ "paper_id": "U03-1013", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:11:46.797070Z" }, "title": "Resolving Sense Ambiguity of Korean Nouns Based on Concept Co-occurrence Information", "authors": [ { "first": "You-Jin", "middle": [], "last": "Chung", "suffix": "", "affiliation": { "laboratory": "", "institution": "Pohang University of Science and Technology (POSTECH) and Advanced Information Technology Research Center(AlTrc)", "location": { "addrLine": "San 31 Hyoja-dong, Nam-gu", "postCode": "790-784", "settlement": "Pohang", "country": "R. of Korea" } }, "email": "" }, { "first": "Jong-Hyeok", "middle": [], "last": "Lee", "suffix": "", "affiliation": { "laboratory": "", "institution": "Pohang University of Science and Technology (POSTECH) and Advanced Information Technology Research Center(AlTrc)", "location": { "addrLine": "San 31 Hyoja-dong, Nam-gu", "postCode": "790-784", "settlement": "Pohang", "country": "R. of Korea" } }, "email": "jhlee@postech.ac.kr" }, { "first": "", "middle": [], "last": "Cobalt-J/K", "suffix": "", "affiliation": { "laboratory": "", "institution": "Pohang University of Science and Technology (POSTECH) and Advanced Information Technology Research Center(AlTrc)", "location": { "addrLine": "San 31 Hyoja-dong, Nam-gu", "postCode": "790-784", "settlement": "Pohang", "country": "R. of Korea" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "From the view point of the linguistic typology, Korean and Japanese have many grammatical similarities which enable it to easily construct a sense-tagged Korean corpus through an existing high-quality Japanese-to-Korean machine translation system. The sense-tagged corpus may serve as a knowledge source to extract useful clues for word sense disambiguation (WSD). This paper addresses a disambiguation model for Korean nouns, whose execution is based on the concept codes extracted from the sense-tagged corpus and the semantic similarity values over a thesaurus hierarchy. By the help of the automatically constructed sensetagged corpus, we overcome the knowledge acquisition bottleneck. Also, we show that the performance of word sense disambiguation can be improved by combining several base classifiers. In an experimental evaluation, the proposed model using a majority voting achieved an average precision of 77.75% with an improvement over the baseline by 15.00%, which is very promising for real world MT systems.", "pdf_parse": { "paper_id": "U03-1013", "_pdf_hash": "", "abstract": [ { "text": "From the view point of the linguistic typology, Korean and Japanese have many grammatical similarities which enable it to easily construct a sense-tagged Korean corpus through an existing high-quality Japanese-to-Korean machine translation system. The sense-tagged corpus may serve as a knowledge source to extract useful clues for word sense disambiguation (WSD). This paper addresses a disambiguation model for Korean nouns, whose execution is based on the concept codes extracted from the sense-tagged corpus and the semantic similarity values over a thesaurus hierarchy. By the help of the automatically constructed sensetagged corpus, we overcome the knowledge acquisition bottleneck. Also, we show that the performance of word sense disambiguation can be improved by combining several base classifiers. 
In an experimental evaluation, the proposed model using majority voting achieved an average precision of 77.75%, a 15.00% improvement over the baseline, which is very promising for real-world MT systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Generally, a Korean homograph may be translated into a different Japanese equivalent depending on which sense is used in a given context. Thus, noun sense disambiguation is essential to the selection of an appropriate Japanese target word in Korean-to-Japanese translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Much research on word sense disambiguation has revealed that several different types of information can contribute to the resolution of lexical ambiguity. These include surrounding words (an unordered set of words surrounding a target word), local collocations (a short sequence of words near a target word, taking word order into account), syntactic relations (selectional restrictions), parts of speech, morphological forms, semantic context, etc. (McRoy, 1992; Yarowsky, 1992; Ng and Zelle, 1997).", "cite_spans": [ { "start": 449, "end": 461, "text": "(McRoy, 1992;", "ref_id": "BIBREF3" }, { "start": 462, "end": 478, "text": "Yarowsky, 1992;", "ref_id": "BIBREF7" }, { "start": 479, "end": 499, "text": "Ng and Zelle, 1997)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To extract such information, various types of knowledge sources have been utilized, such as machine-readable dictionaries (MRDs), thesauri, and computational lexicons. Since most MRDs and thesauri were created for human use and display inconsistencies, these resources have clear limitations. Sense-tagged corpora have been used as the most useful knowledge source for WSD. However, despite their value, two major obstacles impede the acquisition of lexical knowledge from corpora: the difficulty of manually sense-tagging a training corpus, and data sparseness (Ide and Veronis, 1998). Manual sense-tagging of a corpus is extremely costly, and at present very few sense-tagged corpora are available.", "cite_spans": [ { "start": 584, "end": 607, "text": "(Ide and Veronis, 1998)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In our WSD approach, we construct a sense-tagged corpus automatically by using a method based on the similarities between Korean and Japanese. Our disambiguation model builds on the work of Li et al. (2000), focusing especially on the practicality of the method for application to real-world MT systems. We alleviate the data sparseness problem by adopting concept-based processing, and we reduce the number of features to a practical size by refinement processing.", "cite_spans": [ { "start": 187, "end": 202, "text": "Li et al. (2000)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper is organized as follows. Section 2 presents the overall system architecture. Section 3 explains the automatic construction of a sense-tagged Korean corpus and the extraction of refined features for word sense disambiguation. Section 4 describes the construction of the feature set and the learning of the disambiguation models. 
In Section 5, the experimental results are given, showing that the proposed method may be useful for WSD in real texts. In this paper, Yale Romanization is used to represent Korean expressions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our disambiguation method consists of two phases: the first is the extraction of features for WSD, and the second is the construction of disambiguation models (see Figure 1).", "cite_spans": [], "ref_spans": [ { "start": 176, "end": 184, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "System Architecture", "sec_num": null }, { "text": "For practical reasons, a reasonably small number of features is essential to the design of disambiguation models. To construct a feature set of a reasonable size, we adopt the method of Li et al. (2000), which is based on concept co-occurrence information (CCI). CCI are the concept codes of words that co-occur with the target word in a specific syntactic relation.", "cite_spans": [ { "start": 172, "end": 190, "text": "Li et al. (2000)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "System Architecture", "sec_num": null }, { "text": "In accordance with Li's method, we automatically extract CCI by first constructing a sense-tagged Korean corpus. To accomplish this, we apply a Japanese-to-Korean MT system. Next, we extract CCI from the constructed corpus through partial parsing and scanning. To eliminate noise and to reduce the number of CCI, refinement processing is applied to the extracted raw CCI set. After refinement processing, we use the remaining CCI as features for disambiguation. The obtained feature set and the trained disambiguation models are stored in a dictionary for the MT system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2. Concept hierarchy of the Kadokawa thesaurus", "sec_num": null }, { "text": "Japanese and Korean are very similar in word order and lexical properties. They also have many nouns in common derived from Chinese characters. Because almost all Japanese common nouns represented by Chinese characters are monosemous, little transfer ambiguity is exhibited in the Japanese-to-Korean translation of nouns, and we can obtain a sense-tagged Korean corpus of good quality by exploiting these linguistic similarities between Korean and Japanese.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of Features for WSD Automatic Construction of Sense-tagged Corpus", "sec_num": null }, { "text": "For automatic construction of the sense-tagged corpus, we used a Japanese-to-Korean MT system called COBALT-J/K 1 . In the transfer dictionary of COBALT-J/K, nominal and verbal words are annotated with concept codes of the Kadokawa thesaurus (Ohno and Hamanishi, 1981), which has a 4-level hierarchy of about 1,100 semantic classes, as shown in Figure 2. Concept nodes at levels L 1 , L 2 and L 3 are each further divided into 10 subclasses.", "cite_spans": [ { "start": 242, "end": 268, "text": "(Ohno and Hamanishi, 1981)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 346, "end": 354, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Extraction of Features for WSD Automatic Construction of Sense-tagged Corpus", "sec_num": null }
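Since later sections compute similarities over this 4-level hierarchy, a minimal sketch of how Kadokawa-style concept codes can be handled follows. The digit-string representation and the helper names (`level`, `ancestors`, `msca`) are our own illustrative assumptions; they are not part of COBALT-J/K or the thesaurus itself.

```python
# A Kadokawa concept code is modeled here as a digit string: "5" (level 2),
# "50" (level 3), "503" (level 4); the empty string stands for the root (level 1).

def level(code: str) -> int:
    """Depth of a code in the 4-level Kadokawa hierarchy (root = level 1)."""
    return len(code) + 1

def ancestors(code: str) -> list[str]:
    """Proper ancestors of a code, nearest first, e.g. '503' -> ['50', '5', '']."""
    return [code[:i] for i in range(len(code) - 1, -1, -1)]

def msca(c1: str, c2: str) -> str:
    """Most specific common ancestor: the longest shared digit prefix."""
    prefix = ""
    for a, b in zip(c1, c2):
        if a != b:
            break
        prefix += a
    return prefix

# '503' sits under '50' (level 3); '274' and '26' share only '2' (level 2).
assert msca("503", "50") == "50" and level("503") == 4
assert msca("274", "26") == "2"
```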
, { "text": "We made a slight modification to COBALT-J/K to enable it to produce Korean translations from Japanese text with all content words tagged with specific concept codes at level L 4 of the Kadokawa thesaurus. As a result, a sense-tagged Korean corpus of 1,060,000 sentences was obtained from a Japanese corpus (Asahi Shinbun, Japanese Newspaper of Economics, etc.).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of Features for WSD Automatic Construction of Sense-tagged Corpus", "sec_num": null }, { "text": "The quality of the constructed sense-tagged corpus is a critical issue. To evaluate it, we collected 1,658 sample sentences (29,420 eojeols 2 ) from the corpus and checked their precision. The total number of errors was 789, including errors in morphological analysis, sense ambiguity resolution, and unknown words. This corresponds to an accuracy of 97.3% (28,631 / 29,420 eojeols). The number of sense ambiguity resolution errors was 202, accounting for only 0.69% of the overall corpus (202 / 29,420 eojeols). Considering that the overall accuracy of the constructed corpus exceeds 97% and that only a few sense ambiguity resolution errors were found in the Japanese-to-Korean translation of nouns, we regard the generated sense-tagged corpus as highly reliable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of Features for WSD Automatic Construction of Sense-tagged Corpus", "sec_num": null }, { "text": "1 COBALT-J/K (Collocation-Based Language Translator from Japanese to Korean) is a high-quality practical MT system developed by POSTECH.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.3", "sec_num": null }, { "text": "2 An eojeol is a Korean syntactic unit consisting of a content word and one or more function words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.3", "sec_num": null }, { "text": "Unlike English, Korean has almost no syntactic constraints on word order as long as the verb appears in the final position. The variable word order often results in discontinuous constituents. Instead of using local collocations based on word order, Li et al. (2000) defined 13 patterns of CCI for homographs using syntactically related words in a sentence. Because we are concerned only with noun homographs, we adopt 11 of these patterns, excluding the verb patterns, as shown in Table 1. The words in bold indicate the target homograph and the words in italics indicate Korean particles.", "cite_spans": [ { "start": 244, "end": 260, "text": "Li et al. (2000)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 472, "end": 479, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Extraction of Raw CCI", "sec_num": null }
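To make the pattern scanning concrete, here is a small sketch of how <CCI type, concept code> pairs might be collected from one sense-tagged sentence. The eojeol tuples, the particle-to-type table, and the adjacency test are simplified assumptions for illustration; the actual system uses partial parsing over the full pattern set of Table 1.

```python
# Hypothetical sense-tagged eojeols: (lemma, POS, particle, Kadokawa code).
SENTENCE = [
    ("nwunmwul", "noun", "i",  "078"),  # tears
    ("katuk-ha", "verb", None, "274"),  # be filled
    ("kunye",    "noun", "uy", "503"),  # her
    ("nwun",     "noun", "ul", None),   # target homograph
    ("po",       "verb", None, "331"),  # see
]

# Particle-to-CCI-type mapping condensed from Table 1.
PARTICLE_TYPE = {"uy": 2, "lo": 4, "ulo": 4, "ey": 5, "eygey": 6,
                 "eyse": 7, "ul": 8, "lul": 8, "i": 9, "ka": 9}

def extract_cci(sentence, target="nwun"):
    """Collect <type, code> pairs around the target homograph; words without
    a syntactic relation to it fall back to type 0 (unordered co-occurrence)."""
    t = next(i for i, e in enumerate(sentence) if e[0] == target)
    pairs = []
    for i, (lemma, pos, particle, code) in enumerate(sentence):
        if i == t or code is None:
            continue
        if i == t - 1 and pos == "noun" and particle in PARTICLE_TYPE:
            pairs.append((PARTICLE_TYPE[particle], code))  # e.g. kunye-uy nwun
        elif i == t + 1 and pos == "verb" and sentence[t][2] in ("ul", "lul"):
            pairs.append((8, code))                        # nwun-ul po: type 8
        else:
            pairs.append((0, code))                        # no syntactic relation
    return pairs

print(extract_cci(SENTENCE))  # [(0, '078'), (0, '274'), (2, '503'), (8, '331')]
```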
, { "text": "For a homograph W, concept frequency patterns (CFPs), i.e., ({<C 1 , f 1 >, <C 2 , f 2 >, ..., <C k , f k >}, type i , W(S i )), are extracted from the sense-tagged training corpus for each CCI type i by partial parsing and pattern scanning, where k is the number of concept codes in type i , f i is the frequency of concept code C i appearing in the corpus, type i is a CCI type, and W(S i ) is a homograph W with sense S i . All concepts in CFPs are three-digit concept codes at level L 4 of the Kadokawa thesaurus. Table 2 shows an example of the concept codes that can co-occur with the homograph 'nwun(eye)' in the form of CCI type 2, together with their frequencies.", "cite_spans": [], "ref_spans": [ { "start": 396, "end": 403, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Extraction of Raw CCI", "sec_num": null }, { "text": "The extracted CCI set is too large and too noisy to be used in a practical system, and must be further filtered. To eliminate noise and to reduce the number of CCI to a practical size, we apply refinement processing to the extracted CCI set. CCI refinement processing is composed of two processes: concept code discrimination and concept code generalization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCI Refinement Processing", "sec_num": null }, { "text": "In the extracted CCI set, the same concept code may appear as evidence for different meanings of a homograph. To select the most probable concept codes, i.e., those that frequently co-occur with the target sense of a homograph, Li defined the discrimination value of a concept code using Shannon's entropy. A concept code with low entropy has a large discrimination value. If the discrimination value of a concept code is larger than a threshold, the concept code is selected as useful information for deciding the word sense. Otherwise, the concept code is discarded.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept Code Discrimination", "sec_num": "3.3.1" }, { "text": "After concept discrimination, the co-occurring concept codes in each CCI type must be further selected and generalized. To perform code generalization, Li adopted Smadja's work (Smadja, 1993) and defined the strength of a code using its frequency and standard deviation at each level of the concept hierarchy. The generalization filter selects the concept codes whose strength is larger than a threshold. We perform this generalization processing at levels L 4 and L 3 of the Kadokawa thesaurus. After processing, the system stores the refined conceptual patterns ({C 1 , C 2 , C 3 , ...}, type i , W(S i )) as a knowledge source for WSD of real texts. These refined CCI are used as features for the disambiguation models. A more detailed description of the CCI extraction is given in Li et al. (2000).", "cite_spans": [ { "start": 182, "end": 196, "text": "(Smadja, 1993)", "ref_id": "BIBREF6" }, { "start": 788, "end": 797, "text": "Li et al. (2000)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Concept Code Generalization", "sec_num": "3.3.2" }
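As a sketch of the discrimination step: the paper does not give the exact entropy formula or threshold, so the score below (one minus the normalized Shannon entropy of a code's frequency distribution over the senses of a homograph) is an assumption in the spirit of Li et al. (2000); only the qualitative behavior, low entropy implying high discrimination, is taken from the text.

```python
import math

def discrimination_value(freq_by_sense: dict[str, int]) -> float:
    """Assumed score: 1 - normalized Shannon entropy of a concept code's
    frequency distribution over a homograph's senses (low entropy -> high value)."""
    total = sum(freq_by_sense.values())
    probs = [f / total for f in freq_by_sense.values() if f > 0]
    entropy = -sum(p * math.log2(p) for p in probs)
    max_entropy = math.log2(len(freq_by_sense)) or 1.0  # guard the one-sense case
    return 1.0 - entropy / max_entropy

THRESHOLD = 0.5  # hypothetical; the paper does not report its threshold value

# A code seen almost only with nwun(eye) is kept; an evenly spread one is dropped.
print(discrimination_value({"nwun(eye)": 10, "nwun(snow)": 1}))  # ~0.56 -> keep
print(discrimination_value({"nwun(eye)": 5, "nwun(snow)": 5}))   # 0.0  -> discard
```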
, { "text": "The feature set is constructed by integrating the extracted CCI into a single vector. Figure 3 demonstrates the construction of the feature set for the homograph 'nwun' with the senses 'snow' and 'eye'. The left side shows the extracted CCI for each sense after refinement processing. We construct the feature set for 'nwun' by simply merging the concept codes in the CCI sets of both senses. The resulting feature set is partitioned into several subgroups depending on the CCI types, i.e., type 0, type 1, type 2 and type 8. Since the extracted CCI sets differ from word to word, each homograph has a feature set of its own.", "cite_spans": [], "ref_spans": [ { "start": 86, "end": 94, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Feature Set Construction", "sec_num": null }, { "text": "After constructing the feature set for WSD, we extract training patterns for each homograph from the previously constructed sense-tagged corpus. The training patterns are constructed in the following two steps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of Training Patterns", "sec_num": null }, { "text": "Step 1. Extract CCI from the context of the target homograph. The window size of the context is a single sentence. Consider, for example, the sentence in Figure 4, which has the meaning of \"Seeing her eyes filled with tears, ...\". The target homograph is the word 'nwun'. We extract its CCI from the sentence by partial parsing and pattern scanning. In Figure 4, the words 'nwun' and 'kunye(her)', the latter with the concept code 503, stand in the relation <noun + uy + noun>, which corresponds to 'CCI type 2' in Table 1. There is no syntactic relation between the words 'nwun' and 'nwunmul(tears)' with the concept code 078, so we assign 'CCI type 0' to the concept code 078.", "cite_spans": [], "ref_spans": [ { "start": 154, "end": 162, "text": "Figure 4", "ref_id": "FIGREF1" }, { "start": 350, "end": 358, "text": "Figure 4", "ref_id": "FIGREF1" }, { "start": 499, "end": 506, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Extraction of Training Patterns", "sec_num": null }, { "text": "Similarly, we can obtain all pairs of CCI types and their concept codes appearing in the context. All the extracted pairs are as follows: {<type 0, {078, 274}>, <type 2, {503}>, <type 8, {331}>}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of Training Patterns", "sec_num": null }
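For concreteness, the result of Step 1 can be kept in a small mapping from CCI type to the concept codes observed in the context, alongside the feature set of Figure 3. The dict layout is only an illustrative choice; these two structures are consumed by the Equation 1/2 sketch that follows Step 2 below.

```python
# Context CCI set extracted from the Figure 4 sentence, keyed by CCI type.
context_cci = {0: ["078", "274"], 2: ["503"], 8: ["331"]}

# Feature set for 'nwun' (cf. Figure 3), partitioned into the same CCI types.
feature_set = {
    0: ["26", "74", "022", "078"],
    1: ["080", "696"],          # no type-1 clue appears in this context
    2: ["50", "028", "419"],
    8: ["23", "38", "239", "323"],
}
```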
, { "text": "Step 2. Obtain the training pattern by calculating concept similarities between the concept codes in the context CCI set and those in the feature set. Concept similarity calculation is performed only between concept codes with the same CCI type. The resulting score represents how strongly each feature node relates to the clues appearing in the target context. The calculated concept similarity score is assigned to each feature node as its activation strength.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of Training Patterns", "sec_num": null }, { "text": "Csim(C i , P j ) in Equation 1 is used to calculate the concept similarity between C i and P j , where MSCA(C i , P j ) is the most specific common ancestor of concept codes C i and P j , and weight is a weighting factor reflecting that C i being a descendant of P j is preferable to other cases. That is, if C i is a descendant of P j , we set weight to 1. Otherwise, we set weight to 0.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of Training Patterns", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Csim(C_i, P_j) = \\frac{2 \\times level(MSCA(C_i, P_j))}{level(C_i) + level(P_j)} \\times weight", "eq_num": "(1)" } ], "section": "Extraction of Training Patterns", "sec_num": null }, { "text": "The similarity values between the target concept C i and each P j in the Kadokawa thesaurus hierarchy are shown in Figure 5. These similarity values are computed using Equation 1. For example, in the 'CCI type 0' part of the calculation, the relation between the concept codes 274 and 26 corresponds to the relation between C i and P 4 in Figure 5, so we assign the similarity 0.285 to the feature node labeled 26. As another example, the concept codes 503 and 50 stand in the relation between C i and P 2 , and we obtain the similarity 0.857. If more than two concept codes exist in one CCI type, such as <type 0, {078, 274}>, the maximum similarity value among them is assigned to the input node, as in Equation 2, where C i is a concept code of the feature set and the P j are the concept codes in the context pair that has the same CCI type as C i .", "cite_spans": [], "ref_spans": [ { "start": 115, "end": 123, "text": "Figure 5", "ref_id": null }, { "start": 329, "end": 337, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Extraction of Training Patterns", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "InputVal(C_i) = \\max_{P_j} (Csim(C_i, P_j))", "eq_num": "(2)" } ], "section": "Extraction of Training Patterns", "sec_num": null }, { "text": "The use of the concept similarity scheme gives another advantage: broad coverage. If we used an exact matching scheme instead of concept similarity, we might obtain only a few concept codes matched with the features; consequently, sense disambiguation would often fail because of the absence of clues.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of Training Patterns", "sec_num": null }
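The following sketch implements Equations 1 and 2 on top of the `level`/`msca` helpers and the `context_cci`/`feature_set` structures introduced earlier. The two worked examples from the text serve as a check: 0.857 for codes 503 vs. 50, and 2x2/(4+3)x0.5, which rounds to 0.286 here and is printed as 0.285 in the paper.

```python
def csim(c: str, p: str) -> float:
    """Equation 1: Wu-Palmer-style similarity on the Kadokawa hierarchy,
    halved when the context code c is not a descendant of the feature code p."""
    weight = 1.0 if c.startswith(p) else 0.5
    return 2 * level(msca(c, p)) / (level(c) + level(p)) * weight

def input_val(context_codes: list[str], feature_code: str) -> float:
    """Equation 2: a feature node's activation is the maximum similarity over
    the context codes of the same CCI type (0.0 if that type is absent)."""
    if not context_codes:
        return 0.0
    return max(csim(c, feature_code) for c in context_codes)

print(round(csim("503", "50"), 3))  # 0.857: 503 is a descendant of 50
print(round(csim("274", "26"), 3))  # 0.286: MSCA '2', weight 0.5 (0.285 in the paper)

# The full training pattern assigns one activation value per feature node.
pattern = {t: [input_val(context_cci.get(t, []), f) for f in feats]
           for t, feats in feature_set.items()}
print(pattern[0])  # activations for the type-0 feature nodes [26, 74, 022, 078]
```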
, { "text": "Using the obtained feature set and training patterns, we trained four types of disambiguation models: a neural network (NN), a decision tree (DT), a support vector machine (SVM), and a majority voting system. Neural networks and decision trees have been used in many pattern recognition problems because of their strong classification capability. Recently, support vector machines have attracted great interest in the machine learning community due to their excellent generalization performance on a wide variety of learning problems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning of Disambiguation Models", "sec_num": "4.3" }, { "text": "From a statistical point of view, if the sample size is small, generating different classifiers from the sample and combining them may result in more accurate prediction of new patterns. From a computational point of view, even if the sample is large enough, the nature of the learning algorithm may cause it to get stuck in local optima. A classifier combination is therefore a way to expand the hypothesis space to better represent the true function (Ardeshir, 2002). In our experiment, we adopted a majority voting system for combining the base classifiers. Majority voting selects the sense of the test pattern that receives more than half of the base classifiers' votes.", "cite_spans": [ { "start": 457, "end": 473, "text": "(Ardeshir, 2002)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Learning of Disambiguation Models", "sec_num": "4.3" }, { "text": "To find the best parameters for the decision tree and the support vector machine, we evaluated the performance of each classifier with various parameters. For this evaluation, we used 942 test samples extracted from the KIBS (Korean Information Base System) corpus. Tables 3 and 4 show the evaluation results for the decision tree and the support vector machine, respectively, and we selected the parameters that showed the best performance. The parameter settings for our system are listed below. Our WSD approach is a hybrid method, which combines the advantages of knowledge-based and corpus-based methods. Figure 6 shows our overall WSD algorithm. For a given homograph, sense disambiguation is performed as follows. First, we search a collocation dictionary. The Korean-to-Japanese translation system COBALT-K/J has an MWTU (Multi-Word Translation Units) dictionary, which contains idioms, compound words, collocations, etc. If a collocation containing the target word exists in the MWTU dictionary, we simply assign the sense found in the dictionary to the target word. This method is based on the idea of 'one sense per collocation'. Next, we check the selectional restrictions of verbs described in the dictionary. If we cannot find any matching patterns for the selectional restrictions, we apply the machine learning classifiers. If all the previous stages fail, we assign the most frequently appearing sense in the training corpus to the target word. For an experimental evaluation, 15 Korean noun homographs were selected, along with a total of 1,200 test sentences in which one homograph appears (2 senses: 12 words, 3 senses: 2 words, 4 senses: 1 word). The test sentences were randomly selected from the KIBS corpus.", "cite_spans": [], "ref_spans": [ { "start": 248, "end": 255, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 576, "end": 584, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Learning of Disambiguation Models", "sec_num": "4.3" }
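The four-stage back-off of Figure 6 can be summarized in code. The dictionary objects and the build_training_pattern helper are hypothetical placeholders standing in for COBALT-K/J's MWTU dictionary, its selectional restrictions, and the pattern construction sketched in Section 4.2; the strict-majority rule (a sense needs more than half of the three base classifiers' votes, otherwise the stage abstains) follows the description above.

```python
from collections import Counter

def majority_vote(classifiers, pattern):
    """Return the sense backed by more than half of the base classifiers,
    or None when no sense reaches a strict majority (the ML stage abstains)."""
    votes = Counter(clf(pattern) for clf in classifiers)
    sense, count = votes.most_common(1)[0]
    return sense if count > len(classifiers) / 2 else None

def disambiguate(word, sentence, mwtu_dict, sel_restr, classifiers, mfs):
    # Stage 1 (COL): 'one sense per collocation' via the MWTU dictionary.
    sense = mwtu_dict.lookup(word, sentence)      # hypothetical API
    if sense is None:
        # Stage 2 (SR): selectional restrictions of the governing verb.
        sense = sel_restr.match(word, sentence)   # hypothetical API
    if sense is None:
        # Stage 3 (ML): majority voting over the trained DT, NN and SVM.
        pattern = build_training_pattern(word, sentence)  # as in Section 4.2
        sense = majority_vote(classifiers, pattern)
    if sense is None:
        # Stage 4 (MFS): most frequent sense in the training corpus.
        sense = mfs[word]
    return sense
```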
, { "text": "The baseline results are shown in Table 5, where result A is the case in which the most frequent sense was taken as the answer, and result B is the case in which the COL and SR stages were applied first. The symbols COL, SR, ML and MFS in Tables 5 and 6 indicate the four stages of our method. Table 6 compares the results of the four machine learning classifiers. To compare the models under the same conditions, we controlled the number of test samples to which each model was applied to about 700. As shown in the table, the majority voting system outperformed all the single classifiers and exceeded baseline A by 15.00%. Even without the help of the collocation information and the selectional restrictions described in the dictionary, we achieved an improvement of 7.58% over baseline B. This result is very promising for real-world MT systems and indicates that word sense disambiguation can be improved by classifier combination. Among the single classifiers, SVM was better than DT and NN (see the ML stage in Table 6). Interestingly, however, when followed by the MFS stage, NN overtook SVM.", "cite_spans": [], "ref_spans": [ { "start": 34, "end": 41, "text": "Table 5", "ref_id": "TABREF6" }, { "start": 236, "end": 243, "text": "Table 5", "ref_id": "TABREF6" }, { "start": 285, "end": 292, "text": "Table 6", "ref_id": "TABREF5" }, { "start": 1028, "end": 1035, "text": "Table 6", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Learning of Disambiguation Models", "sec_num": "4.3" }, { "text": "The results of the classifiers for each word are shown in Table 7. A shaded cell indicates the best classifier for the word. Although majority voting recorded the best result on only two words, it showed consistently good results on the other words. We can see that the best classifier differs from word to word: some words have the decision tree as the best classifier and some the neural network. From this observation, we conjecture that each word may have disambiguation properties of its own and may require a different machine learning method according to those properties. Thus, if we can identify the disambiguation characteristics of words, we will be able to improve the system performance by applying a different classifier to each word.", "cite_spans": [], "ref_spans": [ { "start": 54, "end": 61, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Learning of Disambiguation Models", "sec_num": "4.3" }, { "text": "To resolve sense ambiguities in Korean-to-Japanese MT, this paper has proposed a practical word sense disambiguation method using concept co-occurrence information. We showed that a sense-tagged Korean corpus can be generated easily by using a Japanese corpus and a machine translation system. In an experimental evaluation, the proposed WSD model using majority voting achieved an average precision of 77.75%, a 15.00% improvement over the baseline. This result indicates that word sense disambiguation can be improved by combining base classifiers, and that the concept co-occurrence information-based approach is very promising for real-world MT systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "We plan further research on feature selection. Compared with the surface forms of lexical words, concept codes are somewhat diluted information as clues for WSD. Thus, we should be able to improve the performance of the system by adding other features to our disambiguation model, such as the lexical words and parts of speech of surrounding words. We also plan to develop a new similarity measure to find more suitable similarity values for our system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The concept codes in Figure 3 are simplified for ease of illustration. 
In reality, there are 87 concept codes for 'nwun'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by the Korea Science and Engineering Foundation (KOSEF) through the Advanced Information Technology Research Center (AITrc).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Decision Tree Simplification for Classifier Ensembles", "authors": [ { "first": "G", "middle": [], "last": "Ardeshir", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ardeshir, G. (2002) Decision Tree Simplification for Classifier Ensembles. PhD Thesis, University of Surrey, U.K.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Word Sense Disambiguation: The State of the Art", "authors": [ { "first": "N", "middle": [], "last": "Ide", "suffix": "" }, { "first": "J", "middle": [], "last": "Veronis", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "1", "pages": "1--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ide, N. and Veronis, J. (1998) Word Sense Disambiguation: The State of the Art. Computational Linguistics, Vol.24, No.1, pp.1-40", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Lexical Transfer Ambiguity Resolution Using Automatically-Extracted Concept Co-occurrence Information", "authors": [ { "first": "H", "middle": [ "F" ], "last": "Li", "suffix": "" }, { "first": "N", "middle": [ "W" ], "last": "Heo", "suffix": "" }, { "first": "K", "middle": [ "H" ], "last": "Moon", "suffix": "" }, { "first": "J", "middle": [ "H" ], "last": "Lee", "suffix": "" }, { "first": "G", "middle": [ "B" ], "last": "Lee", "suffix": "" } ], "year": 2000, "venue": "International Journal of Computer Processing of Oriental Languages", "volume": "13", "issue": "1", "pages": "53--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, H. F., Heo, N. W., Moon, K. H., Lee, J. H. and Lee, G. B. (2000) Lexical Transfer Ambiguity Resolution Using Automatically-Extracted Concept Co-occurrence Information. International Journal of Computer Processing of Oriental Languages, Vol.13, No.1, pp.53-68", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Using Multiple Knowledge Sources for Word Sense Discrimination", "authors": [ { "first": "S", "middle": [], "last": "Mcroy", "suffix": "" } ], "year": 1992, "venue": "Computational Linguistics", "volume": "18", "issue": "1", "pages": "1--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "McRoy, S. (1992) Using Multiple Knowledge Sources for Word Sense Discrimination. Computational Linguistics, Vol.18, No.1, pp.1-30", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Corpus-Based Approaches to Semantic Interpretation in Natural Language Processing", "authors": [ { "first": "H", "middle": [ "T" ], "last": "Ng", "suffix": "" }, { "first": "J", "middle": [], "last": "Zelle", "suffix": "" } ], "year": 1997, "venue": "AI Magazine", "volume": "18", "issue": "4", "pages": "45--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ng, H. T. and Zelle, J. (1997) Corpus-Based Approaches to Semantic Interpretation in Natural Language Processing. 
AI Magazine, Vol.18, No.4, pp.45-64", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "New Synonym Dictionary", "authors": [ { "first": "S", "middle": [], "last": "Ohno", "suffix": "" }, { "first": "M", "middle": [], "last": "Hamanishi", "suffix": "" } ], "year": 1981, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ohno, S. and Hamanishi, M. (1981) New Synonym Dictionary. Kadokawa Shoten, Tokyo", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Retrieving Collocations from Text: Xtract", "authors": [ { "first": "F", "middle": [], "last": "Smadja", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "1", "pages": "143--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Smadja, F. (1993) Retrieving Collocations from Text: Xtract. Computational Linguistics, Vol.19, No.1, pp.143-177", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Word-Sense Disambiguation Using Statistical Models of Roget's Categories Trained on Large Corpora", "authors": [ { "first": "D", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1992, "venue": "Proceedings, COLING-92", "volume": "", "issue": "", "pages": "454--460", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yarowsky, D. (1992) Word-Sense Disambiguation Using Statistical Models of Roget's Categories Trained on Large Corpora. In Proceedings, COLING-92, Nantes, pp.454-460", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": "Figure 1. System Architecture", "num": null }, "FIGREF1": { "type_str": "figure", "uris": null, "text": "Figure 4. Construction of Training Pattern by Using Concept Similarity Calculation", "num": null }, "FIGREF3": { "type_str": "figure", "uris": null, "text": "Figure 5. Concept Similarity on the Kadokawa Thesaurus Hierarchy", "num": null }, "FIGREF4": { "type_str": "figure", "uris": null, "text": "Parameter settings: [SVM] Kernel: RBF (width = 0.5); [Majority Voting (MV)] Base classifiers: DT, NN, SVM", "num": null }, "FIGREF5": { "type_str": "figure", "uris": null, "text": "Figure 6. The Proposed WSD Algorithm", "num": null }, "TABREF1": { "type_str": "table", "content": "
Table 1. Structure of CCI Patterns
CCI type | Structure of pattern
type 0 | unordered co-occurrence words
type 1 | noun + noun
type 2 | noun + uy + noun
type 3 | noun + other particles + noun
type 4 | noun + lo/ulo + verb
type 5 | noun + ey + verb
type 6 | noun + eygey + verb
type 7 | noun + eyse + verb
type 8 | noun + ul/lul + verb
type 9 | noun + i/ka + verb
type 10 | verb + relativizer + noun

Table 2. Concept codes and frequencies in CFP ({<C i , f i >}, type 2 , nwun(eye))
Code Freq. | Code Freq. | Code Freq. | Code Freq.
028 19 | 107 8 | 121 7 | 126 4
143 8 | 160 5 | 179 7 | 277 4
320 8 | 331 6 | 416 7 | 429 22
433 4 | 501 13 | 503 10 | 504 11
505 6 | 507 12 | 508 27 | 513 5
530 6 | 538 11 | 552 4 | 557 7
573 5 | 709 5 | 718 5 | 719 4
733 5 | 819 4 | 834 4 | 966 4
987 9 | other* 210
* 'other' in the table means the set of concept codes with frequencies less than 4.
", "text": "Structure of CCI Patterns", "html": null, "num": null }, "TABREF2": { "type_str": "table", "content": "
[Figure 3. Construction of Feature Set for 'nwun': the refined CCI of nwun 1 (snow) and nwun 2 (eye) (e.g., nwun 2 (eye): CCI type 0 {74, 078}, CCI type 2 {50, 028, 419}, CCI type 8 {23, 323}) are merged into a single 13-node feature set partitioned by CCI type.]
[Figure 4. Construction of Training Pattern by Using Concept Similarity Calculation: for the input sentence 'nwunmwul-i katuk-han kunye-uy nwun-ul po-mye' ("Seeing her eyes filled with tears, ..."), the context CCI set {type 0: {078, 274}, type 2: {503}, type 8: {331}} is compared with the feature set, and the resulting similarity values are assigned to the feature nodes as the training pattern.]
", "text": "", "html": null, "num": null }, "TABREF3": { "type_str": "table", "content": "
Pruning Confidence Level | Precision (correct / applied)
Level = 10% | 76.26% (546/716)
Level = 15% | 77.38% (561/725)
Level = 25% | 77.30% (555/718)
Level = 40% | 76.72% (547/713)
(number of test samples: 942)
", "text": "Evaluation Results for Decision Tree with Different Pruning Levels", "html": null, "num": null }, "TABREF4": { "type_str": "table", "content": "
Kernel Function | Precision (correct / applied)
Linear | 79.89% (556/696)
Polynomial (degree=2) | 80.60% (565/701)
Polynomial (degree=3) | 79.92% (569/712)
RBF (width=0.5) | 80.80% (568/703)
RBF (width=1.0) | 80.66% (563/698)
RBF (width=2.0) | 79.71% (554/695)
(number of test samples: 942)
", "text": "Evaluation Results for Support Vector Machine with Different Kernel Functions", "html": null, "num": null }, "TABREF5": { "type_str": "table", "content": "
Table 6. Comparison Results of Classifiers
Precision (correct # / applied #)
Stage | Model 1 [DT] | Model 2 [NN] | Model 3 [SVM] | Model 4 [MV]
COL | 100% (21/21) | 100% (21/21) | 100% (21/21) | 100% (21/21)
SR | 91.14% (216/237) | 91.14% (216/237) | 91.14% (216/237) | 91.14% (216/237)
ML | 77.38% (561/725) | 78.93% (558/707) | 80.80% (568/703) | 81.84% (568/694)
MFS | 53.00% (115/217) | 57.87% (136/235) | 51.46% (123/239) | 51.61% (128/248)
Total | 76.08% (913/1200) | 77.58% (931/1200) | 77.33% (928/1200) | 77.75% (933/1200)
", "text": "", "html": null, "num": null }, "TABREF6": { "type_str": "table", "content": "
Table 5. Baseline Performance
Precision (correct # / applied #)
Stage | Baseline A | Baseline B
COL | N/A (0/0) | 100% (21/21)
SR | N/A (0/0) | 91.14% (216/237)
MFS | 62.75% (753/1200) | 64.23% (605/942)
Total | 62.75% (753/1200) | 70.17% (842/1200)

Table 7. Comparison Results of Classifiers for Each Word
Precision (correct # / applied #)
Word | DT | NN | SVM | MV
kasa | 93.24% (69/74) | 77.46% (55/71) | 77.94% (53/68) | 89.39% (59/66)
kancang | 83.93% (47/56) | 85.25% (52/61) | 88.89% (48/54) | 87.72% (50/57)
keli | 79.17% (19/24) | 57.69% (15/26) | 76.92% (30/39) | 83.33% (20/24)
kyengki | 69.39% (34/49) | 77.27% (34/44) | 70.21% (33/47) | 70.83% (34/48)
kyengpi | 84.21% (32/38) | 75.56% (34/45) | 76.19% (32/42) | 77.27% (34/44)
kwutu | 86.44% (51/59) | 90.57% (48/53) | 87.27% (48/55) | 87.72% (50/57)
nwun | 91.84% (45/49) | 93.48% (43/46) | 93.62% (44/47) | 91.67% (44/48)
tali | 52.94% (27/51) | 52.38% (22/42) | 54.29% (19/35) | 52.63% (20/38)
pwuca | 82.61% (57/69) | 86.67% (39/45) | 87.10% (54/62) | 85.94% (55/64)
swumyen | 66.67% (22/33) | 65.38% (34/52) | 83.33% (30/36) | 80.56% (29/36)
yongki | 62.07% (36/58) | 83.33% (35/42) | 73.33% (33/45) | 75.56% (34/45)
uysa | 81.82% (9/11) | 78.00% (39/50) | 78.00% (39/50) | 83.33% (35/42)
yenki (3 senses) | 52.08% (25/48) | 68.75% (22/32) | 66.67% (20/30) | 65.52% (19/29)
censin (3 senses) | 93.55% (58/62) | 93.22% (55/59) | 98.08% (51/52) | 96.49% (55/57)
cenlyek (4 senses) | 68.18% (30/44) | 79.49% (31/39) | 79.49% (31/39) | 76.92% (30/39)
", "text": "", "html": null, "num": null } } } }