{ "paper_id": "Y96-1039", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:38:25.444310Z" }, "title": "Estimating Point-of-View-based Similarity using POV Reinforcement and Similarity Propagation", "authors": [ { "first": "Kenji", "middle": [], "last": "Nagamatsu", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Tokyo", "location": {} }, "email": "" }, { "first": "Hidehiko", "middle": [], "last": "Tanaka", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Tokyo", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper. proposes a similarity measure which takes account of pointof-views (abbreviated to POV, hereafter) in the calculation of similarity values. So far many researches on similarity measures have been performed but none takes account of POVs. The similarity measure proposed in this paper is based on co-occurrence probabilities of words and this makes it possible to obtain preferable precision even if POVs are not given. This method consists of two parts of processes, POV reinforcement and similarity propagation. First, the POV reinforcement process, which affects the similarity between words, modifies the weights of links according to the relatedness between the link and the POV word. Second, the similarity propagation process propagates the weights of links and defines a similarity value for word pairs which do not actually cooccur in the corpus. Using those two processes this method becomes capable both to take POVs into consideration and to cope with the sparseness of corpora to some degree. This paper, however, focuses on the POV reinforcement and evaluates the effectiveness of the method..", "pdf_parse": { "paper_id": "Y96-1039", "_pdf_hash": "", "abstract": [ { "text": "This paper. proposes a similarity measure which takes account of pointof-views (abbreviated to POV, hereafter) in the calculation of similarity values. So far many researches on similarity measures have been performed but none takes account of POVs. The similarity measure proposed in this paper is based on co-occurrence probabilities of words and this makes it possible to obtain preferable precision even if POVs are not given. This method consists of two parts of processes, POV reinforcement and similarity propagation. First, the POV reinforcement process, which affects the similarity between words, modifies the weights of links according to the relatedness between the link and the POV word. Second, the similarity propagation process propagates the weights of links and defines a similarity value for word pairs which do not actually cooccur in the corpus. Using those two processes this method becomes capable both to take POVs into consideration and to cope with the sparseness of corpora to some degree. This paper, however, focuses on the POV reinforcement and evaluates the effectiveness of the method..", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Rapid growth of computer networks has increased the number of machine-readable texts and also made it possible for us to use various search engines to get desired documents. They are, however, keyword-based and strict. Even if some documents are related to a user's interest, he or she cannot obtain them as long as they do not contain the keywords given in advance. 
Otherwise he or she will be handed too many documents which contain just the keywords but are not necessarily related to his or her interest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To solve these problems, it is necessary to take account of the similarity between words or concepts and to make use of such a measure in search processing. However, because there are many similar words in a text, employing a similarity measure alone will simply expand the range of relatedness and produce more and more results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "When considering the meanings of words, we human beings do not consider their whole meanings at a time, but rather some interesting aspects of their concepts, just as we look at a landscape from some point-of-view. Hence the similarity of words is also required to take account of POVs. This makes it possible both to expand the range of matching in some situations and to restrict the range in others. The expectation is that employing valid POVs makes both the expansion and the restriction appropriate, so that search processing produces more suitable results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper proposes a similarity measure between words which takes account of the effect of POVs. Much research on concept (or word) similarity has been performed so far, but none handles POVs in its similarity measures. The proposed method utilizes co-occurrence probability-based similarity as a basis and extends this fundamental measure by weighting the values according to the relevance between the input words and POV words. This fundamental measure and its evaluation against some traditional similarity measures are described in 2. The main part of the method, which handles the effect of POVs, consists of two processes, POV reinforcement and similarity propagation. The explanation of these processes and some related issues is presented in 3.1 and 3.2. Finally, 4 shows the result of some experiments, which indicates the effectiveness of this method, and 5 discusses the problems of the method as well as its advantages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This section gives an overview of similarity measures (2.1) and evaluates the ability of some fundamental measures with a large number of word pairs (2.2 and 2.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fundamental Similarity Measures", "sec_num": "2" }, { "text": "Similarity of words or concepts is a fundamental measure in natural language processing because it can be used in various kinds of processing. For example, in disambiguation of word senses it can help to detect appropriate word senses by selecting the word senses most similar to those of the context words. In sentence production similarity measures can also help to keep the coherence of word sequences.
The fact that most research on similarity measures has been performed in relation to word sense disambiguation indicates the significance of similarity measures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification of Similarity Measures", "sec_num": "2.1" }, { "text": "The similarity measures researched so far are classified as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification of Similarity Measures", "sec_num": "2.1" }, { "text": "1. Similarity based on the structure of thesauruses or taxonomies (Agirre 1995) (Resnik 1995) Because thesauruses or taxonomies contain even infrequently used words (or concepts), similarity measures of this type can assign similarity values to most word pairs. The range of word pairs they can handle is thus broad. On the other hand, these measures are also class-based, and the degree of similarity they capture is rather loose (they tend to judge words or concepts in the same class as similar).", "cite_spans": [ { "start": 66, "end": 79, "text": "(Agirre 1995)", "ref_id": "BIBREF0" }, { "start": 80, "end": 93, "text": "(Resnik 1995)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Classification of Similarity Measures", "sec_num": "2.1" }, { "text": "2. Similarity based on statistical information extracted from corpora (Dagan 1994) (Iwayama 1994) (Yang 1994) (Karov 1996) The range of word pairs these measures can handle depends on the size of the corpora used to extract the statistical information. In most cases, however, the problem of data sparseness arises. The main concern in these measures is how to estimate the values of unseen word pairs (Dagan 1994).", "cite_spans": [ { "start": 74, "end": 85, "text": "(Dagan 1994", "ref_id": "BIBREF1" }, { "start": 102, "end": 112, "text": "(Yang 1994", "ref_id": "BIBREF8" }, { "start": 115, "end": 127, "text": "(Karov 1996)", "ref_id": "BIBREF4" }, { "start": 397, "end": 409, "text": "(Dagan 1994)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Classification of Similarity Measures", "sec_num": "2.1" }, { "text": "3. Similarity based on network structures (Kozima 1993) (Niwa 1994) Similarity values are defined on links in the network, and a total value along a path, possibly processed further, is interpreted as the similarity value.", "cite_spans": [ { "start": 42, "end": 55, "text": "(Kozima 1993)", "ref_id": "BIBREF5" }, { "start": 56, "end": 66, "text": "(Niwa 1994", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Classification of Similarity Measures", "sec_num": "2.1" }, { "text": "In these measures each word (or concept) has a set of features which are semantically related to the word. Those features may be actual semantic features or co-occurring words. The number of shared features is interpreted as a similarity value.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature-based similarity", "sec_num": "4." }, { "text": "Before considering the similarity measure based on POVs, it is necessary to clarify the ability of the fundamental similarity measures described in the previous section. With the result of this evaluation, the most promising measure is adopted as the base of the proposed method. In many studies the evaluation of similarity measures depends on human judgment.
In this case the measures are evaluated either by the scores subjects assign to the output values of the measures or by the correlation between the judgments of the subjects and the similarity measures. Scoring the similarity of word pairs by hand, however, is costly. In contrast to this approach, this paper adopts another type of evaluation employing the coverage and selectivity of similarity measures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selectivity of Similarity Measures", "sec_num": "2.2" }, { "text": "When some threshold of a similarity measure is determined, the measure can judge each pair of words as similar or as not similar. At this point the coverage of a word pair set by the similarity measure is defined as the proportion of the number of word pairs judged as similar to the size of the set (the total number of word pairs in the set).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selectivity of Similarity Measures", "sec_num": "2.2" }, { "text": "Employing this coverage ratio, the selectivity of a similarity measure is described as follows. First, two groups of word pairs are prepared. One group, the synonym set, contains pairs of synonyms which are similar in human judgment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selectivity of Similarity Measures", "sec_num": "2.2" }, { "text": "The other group, the non-synonym set, contains pairs of non-synonyms which are not similar to each other. In practice, however, the non-synonym set is approximated with word pairs randomly selected from a dictionary. When some threshold of a similarity measure is determined, two coverage ratios for those two sets can be computed, and the relationship between the two coverage ratios is plotted with the threshold as a parameter (see Figures 1 and 2 as examples). This plotted relationship is defined as the selectivity of the similarity measure. In the graphs, the lower a data sequence is located, the higher the selectivity of the similarity measure becomes.", "cite_spans": [], "ref_spans": [ { "start": 429, "end": 437, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Selectivity of Similarity Measures", "sec_num": "2.2" }, { "text": "In this section some fundamental similarity measures are evaluated employing the selectivity. Three measures are evaluated: depth, link#, and cooccur. These are not the latest measures, but they are commonly used and form the bases of more advanced measures. depth represents the similarity measure which uses the depth of the most specific common ancestors (MSCA). Given two concepts, the MSCA are the concepts which subsume both concepts and are located at the deepest position in a taxonomic structure. Formally,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Fundamental Similarity Measures", "sec_num": "2.3" }, { "text": "Sim_{depth}(w_1, w_2) = \\max_{\\forall c_1 \\in C(w_1), \\forall c_2 \\in C(w_2)} \\frac{d(\\mathrm{MSCA}(c_1, c_2))}{(d(c_1) + d(c_2))/2} \\quad (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Fundamental Similarity Measures", "sec_num": "2.3" }, { "text": ", where C(w) denotes the concept set of a word w and d(c) denotes the depth of a concept c in a taxonomy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Fundamental Similarity Measures", "sec_num": "2.3" },
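{ "text": "As an illustration, the following is a minimal Python sketch of equation (1). The toy taxonomy, the parent table, and all function names are our own illustrative assumptions, not part of the original method or the EDR dictionary.

# Hypothetical sketch of equation (1): depth of the most specific common
# ancestor (MSCA), normalized by the mean depth of the two concepts.
parent = {'entity': None, 'animal': 'entity', 'dog': 'animal',
          'cat': 'animal', 'poodle': 'dog'}  # invented toy taxonomy

def ancestors(c):
    # Chain of concepts from c up to the root, nearest first.
    chain = []
    while c is not None:
        chain.append(c)
        c = parent[c]
    return chain

def depth(c):
    # d(c): number of links from the root down to concept c.
    return len(ancestors(c)) - 1

def msca_depth(c1, c2):
    # Depth of the deepest concept subsuming both c1 and c2.
    return max(depth(c) for c in set(ancestors(c1)) & set(ancestors(c2)))

def sim_depth(concepts1, concepts2):
    # Equation (1): maximize over the concept sets C(w1) and C(w2).
    return max(msca_depth(c1, c2) / ((depth(c1) + depth(c2)) / 2)
               for c1 in concepts1 for c2 in concepts2)

print(sim_depth({'poodle'}, {'cat'}))  # MSCA is 'animal': 1 / 2.5 = 0.4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Fundamental Similarity Measures", "sec_num": "2.3" },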
{ "text": "link# represents the traditional edge counting method, which defines the similarity value of a word pair (w_1, w_2) by the length of the shortest path from one of the concepts of w_1 to one of the concepts of w_2. Formally,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Fundamental Similarity Measures", "sec_num": "2.3" }, { "text": "Sim_{link\\#}(w_1, w_2) = \\max_{\\forall c_1 \\in C(w_1), \\forall c_2 \\in C(w_2)} \\frac{1}{l(c_1, c_2) + 1} \\quad (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Fundamental Similarity Measures", "sec_num": "2.3" }, { "text": ", where l(c_1, c_2) denotes the shortest path length between concepts c_1 and c_2 in a taxonomy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Fundamental Similarity Measures", "sec_num": "2.3" }, { "text": "cooccur represents the similarity measure which uses co-occurrence probabilities between words. Formally,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Fundamental Similarity Measures", "sec_num": "2.3" }, { "text": "Sim_{cooccur}(w_1, w_2) = \\sum_{\\forall w \\in Co(w_1) \\cap Co(w_2)} \\frac{\\Pr(w|w_1) + \\Pr(w|w_2)}{2} \\quad (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Fundamental Similarity Measures", "sec_num": "2.3" }, { "text": ", where Co(w) denotes the set of words co-occurring with a word w and Pr(w'|w) denotes the co-occurrence probability of w' conditioned by w. This measure is named \"cooccur\" but is a hybrid of statistics-based and feature-based similarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Fundamental Similarity Measures", "sec_num": "2.3" },
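{ "text": "A minimal sketch of equation (3) follows; the co-occurrence table holds invented toy counts, and all names are our own illustrative assumptions rather than the paper's implementation.

# Hypothetical sketch of equation (3). cooc[w] maps each word co-occurring
# with w to its raw frequency f(w'|w); the counts are invented.
cooc = {'dog': {'walk': 4, 'bark': 5, 'food': 1},
        'cat': {'walk': 2, 'food': 3, 'sleep': 5}}

def pr(w_prime, w):
    # Co-occurrence probability Pr(w'|w) estimated from raw frequencies.
    return cooc[w][w_prime] / sum(cooc[w].values())

def sim_cooccur(w1, w2):
    # Equation (3): average the two conditional probabilities and sum
    # over the shared co-occurring words Co(w1) and Co(w2).
    shared = cooc[w1].keys() & cooc[w2].keys()
    return sum((pr(w, w1) + pr(w, w2)) / 2 for w in shared)

print(sim_cooccur('dog', 'cat'))  # sums over the shared words 'walk', 'food'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Fundamental Similarity Measures", "sec_num": "2.3" },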
{ "text": "Figures 1 and 2 show the result of the evaluation employing the selectivity. In this evaluation the synonym set contains 10,297 synonym pairs which were extracted from the IPAL dictionaries (IPA 1993), which have a \"synonym words\" field in their word records.", "cite_spans": [], "ref_spans": [ { "start": 240, "end": 248, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Evaluation of Fundamental Similarity Measures", "sec_num": "2.3" }, { "text": "The taxonomy-based similarity measures (depth and link#) use the EDR concept dictionary (EDR 1995) as a taxonomy. The non-synonym set used for these measures is approximated with word pairs randomly selected from the EDR word dictionary (EDR 1995), which contains the word entries corresponding to the concepts in the EDR concept dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Fundamental Similarity Measures", "sec_num": "2.3" }, { "text": "For the co-occurrence-based similarity measure (cooccur), co-occurrence data were extracted from the corpus CD-Mainichi Shimbun (newspaper) DB '94, which contains all the articles of this newspaper in 1994. These co-occurrence data contain the co-occurring words and their frequencies for each content word (nouns, verbs, adjectives, and adverbs) in the corpus. The number of sentences used for the extraction is 1,019,997 (74,793 articles). Figures 1 and 2 indicate clearly that, between the taxonomy-based similarity measures (depth and link#), the edge counting method is superior to the depth measure.", "cite_spans": [], "ref_spans": [ { "start": 446, "end": 454, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Evaluation of Fundamental Similarity Measures", "sec_num": "2.3" }, { "text": "Moreover, the corpus-based similarity, which uses co-occurrence probabilities extracted from a corpus, is superior to the taxonomy-based measures. (Resnik 1995) extended the depth-based similarity measure by employing the information content of concept classes, calculated from word frequencies in a corpus, and concluded that the method was superior to the edge counting method. On the other hand, the edge counting method has been extended to the network-based similarity model described earlier. In this way the combination with statistical information extracted from corpora produces preferable results. The POV-based similarity method presented in the next section also adopts the co-occurrence probability-based similarity as a basis.", "cite_spans": [ { "start": 145, "end": 158, "text": "(Resnik 1995)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Fundamental Similarity Measures", "sec_num": "2.3" }, { "text": "This section describes the similarity measure proposed in this paper, which can take account of the effect of point-of-views. This similarity measure consists of two phases, POV reinforcement and similarity propagation. However, this paper focuses on the POV reinforcement and omits the explanation of the similarity propagation process. Before describing the POV reinforcement process (3.2), a similarity network with POV, on which the similarity measure is defined, and the method of calculating similarity values are explained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity of Words based on POV", "sec_num": "3" }, { "text": "As described in 1, human judgment of similarity takes POVs into consideration. Two different words may not be similar in general; rather, they are similar under some aspects or POVs. Thus we consider the similarity of words as a triplet Sim(w_1, w_2; w_p), where w_1 and w_2 are called node words (similarity values are defined over them) and w_p is called a POV word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Network with POV", "sec_num": "3.1" }, { "text": "From this point of view the co-occurrence data used in 2.3 can also be used as such triplets, because the co-occurring words of a node word can be thought of as POVs of the node. But if the co-occurring words are used as POV words directly, the sparseness problem arises because the co-occurring words do not necessarily contain the POV word given to the calculation. The POV reinforcement process, therefore, employs another type of co-occurrence data. The details are described later in 3.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Network with POV", "sec_num": "3.1" }, { "text": "Even if the co-occurrence data cannot be used for the handling of POVs, these data can be used for the calculation of basic similarity values. As described in 2.3, the measure utilizing these data has higher selectivity than the taxonomy-based measures. Therefore, as a fundamental structure the similarity measure defined by equation (3) is adopted. Sim(w_1, w_2; w_p) is thus formulated as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Network with POV", "sec_num": "3.1" }, { "text": "Sim(w_1, w_2; w_p) = \\sum_{\\forall w \\in Co(w_1) \\cap Co(w_2)} \\frac{\\Pr(w|w_1; w_p) + \\Pr(w|w_2; w_p)}{2} \\quad (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Network with POV", "sec_num": "3.1" }, { "text": ", where Pr(w|w_i; w_p) denotes the co-occurrence probability of w conditioned by w_i, reinforced by a POV word w_p. This reinforcement is described in the next section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Network with POV", "sec_num": "3.1" },
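{ "text": "To show how equation (4) composes with the reinforcement, here is a small hypothetical sketch; reinforced_pr stands for Pr(w|w_i; w_p) of equation (5) and is deliberately left as a parameter (a concrete, equally hypothetical version is sketched in 3.2). All names are our own.

# Hypothetical sketch of equation (4): the same shape as equation (3), but
# every conditional probability is reinforced by the POV word wp.
def sim_pov(w1, w2, wp, cooc, reinforced_pr):
    # cooc[w] maps co-occurring words to frequencies, as in the earlier
    # sketch of equation (3); reinforced_pr(w, wi, wp) is Pr(w|wi; wp).
    shared = cooc[w1].keys() & cooc[w2].keys()
    return sum((reinforced_pr(w, w1, wp) + reinforced_pr(w, w2, wp)) / 2
               for w in shared)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Network with POV", "sec_num": "3.1" },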
{ "text": "The similarity network with POV is constructed as follows. First, the nodes of the network are the words which appear in the co-occurrence data described in 2.3. Second, every pair of nodes is connected by links which correspond to the shared co-occurring words (Co(w_1) \\cap Co(w_2)) respectively, and each link is given a pair of co-occurrence probabilities, one for w_1 and the other for w_2 (see Figure 3).", "cite_spans": [], "ref_spans": [ { "start": 596, "end": 604, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Similarity Network with POV", "sec_num": "3.1" }, { "text": "POV reinforcement is the most important process in this similarity measure and is responsible for varying the values of links according to their relatedness to a POV word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POV Reinforcement", "sec_num": "3.2" }, { "text": "As described above, the co-occurrence data used in equation (4) cannot be employed to weight the values of links according to POVs. This is because normal co-occurrence data are collected ignoring the relationship between co-occurring words; as a result, a pair of words shares various POVs in the co-occurrence data. Therefore, another type of co-occurrence data, called POV co-occurrence data, is required.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POV Reinforcement", "sec_num": "3.2" }, { "text": "To extract POV co-occurrence data from a corpus, we make two assumptions: 1) two words are similar when they occur as the same case role of the same word (verb, etc.); 2) the POV of this similarity is the verb, etc. itself. For example, in the two sentences 1) \"Tom walks.\" and 2) \"A dog walks.\", both 'Tom' and 'dog' occur as the agent of the verb 'walk'. 'Tom' and 'dog' are thus considered to be similar under the POV word 'walk'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POV Reinforcement", "sec_num": "3.2" }, { "text": "Following these assumptions, POV co-occurrence data in the form co(w_p, w_i, r_k) are extracted from a tagged corpus. This gives the co-occurrence frequency with which word w_i occurs as the case role r_k of the word w_p. Employing these data, the POV reinforcement is formulated as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POV Reinforcement", "sec_num": "3.2" }, { "text": "\\Pr(w'|w; w_p) = \\frac{\\alpha^{mic(w_p, w')} f(w'|w)}{\\alpha^{mic(w_p, w')} f(w'|w) + \\sum_{\\forall x \\in Co(w), x \\neq w'} f(x|w)} \\quad (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POV Reinforcement", "sec_num": "3.2" }, { "text": ", where f(w'|w) denotes the normal co-occurrence frequency of w' conditioned by w, and mic(w_p, w') is the mutual information content which is calculated with POV co-occurrence data. \\alpha is a constant parameter which controls how the relatedness between the two POVs w_p and w' affects the probability of the link. (The exclusion x \\neq w' in the sum keeps the value normalized, so that equation (5) reduces to the plain probability Pr(w'|w) when the reinforcement is absent.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POV Reinforcement", "sec_num": "3.2" },
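{ "text": "The following is a minimal hypothetical sketch of equation (5), assuming a mic function as approximated by equation (6) below; \\alpha, the frequency table, and all names are illustrative assumptions.

# Hypothetical sketch of equation (5). cooc[w] holds raw frequencies
# f(w'|w); mic(wp, w') is the mutual information content of equation (6),
# taken here as a given function.
def reinforced_pr(w_prime, w, wp, cooc, mic, alpha=1.5):
    # Pr(w'|w; wp): the link probability reinforced by the POV word wp.
    boosted = (alpha ** mic(wp, w_prime)) * cooc[w][w_prime]
    rest = sum(f for x, f in cooc[w].items() if x != w_prime)
    return boosted / (boosted + rest)

# When mic is identically 0 (or alpha = 1), boosted equals f(w'|w) and the
# value reduces to the plain Pr(w'|w) of equation (3), matching the remark
# at the end of this section. It can be passed to the sim_pov sketch of 3.1,
# e.g. as lambda w, wi, wp: reinforced_pr(w, wi, wp, cooc, mic).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POV Reinforcement", "sec_num": "3.2" },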
{ "text": "This mutual information content mic(w_p, w') is approximated as follows with the POV co-occurrence data co(w_p, w_i, r_k), where N denotes the total number of POV co-occurrence data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POV Reinforcement", "sec_num": "3.2" }, { "text": "MIC(w, w') = \\log \\frac{\\Pr(w, w')}{\\Pr(w)\\Pr(w')}, \\qquad mic(w, w') = \\log \\frac{N \\sum_{k} co(w, w', r_k)}{\\sum_{i,j} co(w, w_i, r_j) \\; \\sum_{i,j} co(w', w_i, r_j)} \\quad (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POV Reinforcement", "sec_num": "3.2" }, { "text": "Mutual information content (MIC) indicates the degree of co-occurrence. If MIC(w_1, w_2) \\gg 0, the relationship between w_1 and w_2 is quite meaningful. If MIC(w_1, w_2) \\approx 0, w_1 has nothing to do with w_2. And if MIC(w_1, w_2) < 0, w_1 and w_2 occur exclusively. This behavior of MIC is useful for weighting links according to the relatedness between POVs and links.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POV Reinforcement", "sec_num": "3.2" }, { "text": "When no POV word is given, equations (4) and (5) become the same as (3). This guarantees that this similarity measure has at least the ability shown in Figures 1 and 2 (cooccur).", "cite_spans": [], "ref_spans": [ { "start": 165, "end": 173, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "POV Reinforcement", "sec_num": "3.2" },
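{ "text": "As a closing illustration of this section, here is a hypothetical sketch of the approximation in equation (6). The triples are an invented toy sample, not EDR corpus data, and reading both flattened marginals as counts over the POV-word slot is our assumption about the reconstructed formula.

import math

# Hypothetical sketch of equation (6). Each triple (wp, wi, rk) records one
# POV co-occurrence datum: wi occurred as case role rk of wp.
triples = [('want', 'eat', 'object'), ('want', 'eat', 'object'),
           ('want', 'Tom', 'agent'),
           ('eat', 'food', 'object'), ('eat', 'Tom', 'agent')]

def mic(w, w_prime):
    # mic(w, w'): log of N * co(w, w', *) over the two marginal counts.
    n = len(triples)                                  # N: total data size
    joint = sum(1 for p, x, _ in triples if p == w and x == w_prime)
    marg_w = sum(1 for p, _, _ in triples if p == w)
    marg_wp = sum(1 for p, _, _ in triples if p == w_prime)
    if joint == 0 or marg_w == 0 or marg_wp == 0:
        return 0.0    # assumption: treat unseen pairs as unrelated (MIC ~ 0)
    return math.log(n * joint / (marg_w * marg_wp))

print(mic('want', 'eat'))  # log(5*2 / (3*2)) = 0.51... > 0: related POVs", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POV Reinforcement", "sec_num": "3.2" },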
{ "text": "For these experiments POV co-occurrence data were extracted from the EDR corpus (EDR 1995). This corpus contains 207,802 sentences, and all the sentences are already parsed into semantic frames. From these frames 1,254,851 POV co-occurrence data co are obtained. The normal co-occurrence data are the same as the data described in 2.3, which were extracted from CD-Mainichi Shimbun (newspaper) DB '94.", "cite_spans": [ { "start": 80, "end": 90, "text": "(EDR 1995)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "This experiment evaluates the effectiveness of the POV reinforcement process. Because it is difficult to control the experimental conditions when POV words are given explicitly, this experiment evaluated the case where explicit POV words are not given. As described earlier, even if no explicit POV words are specified, the words in an input pair are used as implicit POV words. Figures 4 and 5 show the result obtained in the same way as in 2.3. In the figures multiple versions of the POV-based similarity measure are plotted, at \\alpha = 1.2, \\alpha = 1.5 and \\alpha = 2.0; \\alpha is the parameter of equation (5). Table 1 contains the coverage of non-synonym pairs at some typical coverage levels of synonym pairs.", "cite_spans": [], "ref_spans": [ { "start": 384, "end": 392, "text": "Figure 4", "ref_id": "FIGREF3" }, { "start": 601, "end": 608, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Selectivity of the Measure with the POV reinforcement", "sec_num": "4.1" }, { "text": "The evaluation by the selectivity of the similarity measures is comprehensive but only suggests an overall tendency. Therefore, another experiment has been performed. This experiment compares the similarity values computed by the similarity measures with the scores given by subjects. The number of subjects was 14; they were all members of our laboratory. They were asked to rate the similarity of each pair of words from 1 (not similar) to 5 (perfect synonymy). The number of word pairs was 50; they were randomly selected from the synonyms in the IPAL dictionaries. Table 2 shows the result: it contains the correlation factor between the values of the similarity measures and the scores given by the subjects.", "cite_spans": [], "ref_spans": [ { "start": 573, "end": 580, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Comparison with Human Judgment", "sec_num": "4.2" }, { "text": "The result of this experiment (see Figure 5) indicates that by employing the POV reinforcement the selectivity of the measure becomes higher than that of the original cooccur measure (note again that the lower a sequence is located, the higher the selectivity of the measure becomes). This rise originates in the effect of POVs alone. (Table 2: Correlation between the similarity measures and the judgment by subjects. POV \\alpha = 1.2: 0.2051; POV \\alpha = 2.0: 0.1277; cooccur: 0.1909; link#: 0.1987; depth: 0.0822.) Although the measure with the POV reinforcement becomes inferior to the original one in the area where the coverage of synonym pairs is small (around 50%), this is not a problem because similarity measures are normally used at high synonym coverage (80% to 90%). The effect of the parameter \\alpha is quite interesting. As \\alpha increases, the selectivity also rises in the neighborhood of 90% to 92%. However, in the area below 80% the selectivity conversely declines. This is also observed more clearly in Table 1. It is considered that there is an optimum value of \\alpha; however, it has not yet been found. Table 2 indicates that the judgment by the co-occurrence-based similarity measures resembles that of human beings more closely than the taxonomy-based similarity measures do. All the factors are, however, very small. (Resnik 1995) presented the result of an experiment similar to this one, where the correlation factor between human judgment and the values of an edge-counting method was 0.6645. Compared with that result, the factors in Table 2 are small; because the word pairs used in this experiment were selected from synonym pairs, it is considered that the differences among the similarity values became small.", "cite_spans": [ { "start": 1289, "end": 1302, "text": "(Resnik 1995)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 35, "end": 44, "text": "Figure 5)", "ref_id": null }, { "start": 392, "end": 399, "text": "Table 2", "ref_id": null }, { "start": 1087, "end": 1094, "text": "Table 2", "ref_id": null }, { "start": 1503, "end": 1510, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Effectiveness of the POV reinforcement", "sec_num": "5.1" }, { "text": "Moreover, no effect of the POV reinforcement on these correlation factors is recognized. Considering its effect on the selectivity, this is thought to be caused by the design of this experiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with Human Judgment", "sec_num": "5.2" }, { "text": "In either case a more thorough experiment is required.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with Human Judgment", "sec_num": "5.2" }, { "text": "This paper has presented a similarity measure which takes account of point-of-views, focusing on the POV reinforcement process. Although this method consists of two phases, POV reinforcement and similarity propagation, the POV reinforcement process is the main part, which weights the co-occurrence probabilities of links according to POV words. The result of the evaluation suggests that the POV reinforcement has a good effect on the similarity measure.
On the other hand, however, the comparison with human judgment did not produce a satisfactory result. As future work, a thorough comparison with human judgment and an evaluation of the similarity propagation process are required. In addition, it is necessary to evaluate the behavior of the results when this similarity measure is used in practical processing, for example word sense disambiguation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A Proposal for Word Sense Disambiguation using Conceptual Distance", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "German", "middle": [], "last": "Rigau", "suffix": "" } ], "year": 1995, "venue": "Proceedings of 1st International Conference on Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre and German Rigau. 1995. A Proposal for Word Sense Disambiguation using Conceptual Distance. In Proceedings of 1st International Conference on Recent Advances in Natural Language Processing.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Similarity-Based Estimation of Word Cooccurrence Probabilities", "authors": [ { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 1994, "venue": "Proceedings of ACL-94", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ido Dagan and Fernando Pereira. 1994. Similarity-Based Estimation of Word Cooccurrence Probabilities. In Proceedings of ACL-94.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "EDR Electronic Dictionary Technical Guide", "authors": [], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Japan Electronic Dictionary Research Institute Ltd. 1995. EDR Electronic Dictionary Technical Guide.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Japan Information-technology Promotion Agency. IPAL Japanese Dictionary for Computer", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Japan Information-technology Promotion Agency. IPAL Japanese Dictionary for Computer. Technical report.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning similarity-based word sense disambiguation", "authors": [ { "first": "Yael", "middle": [], "last": "Karov", "suffix": "" }, { "first": "Shimon", "middle": [], "last": "Edelman", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the Fourth Workshop on Very Large Corpora", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yael Karov and Shimon Edelman. 1996. Learning similarity-based word sense disambiguation.
In Proceedings of the Fourth Workshop on Very Large Corpora.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Similarity between Words Computed by Spreading Activation on an English Dictionary", "authors": [ { "first": "Hideki", "middle": [], "last": "Kozima", "suffix": "" }, { "first": "Teiji", "middle": [], "last": "Furugori", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the 6th Conference of the European Chapter of the Association for Computational Linguistics (EACL-93)", "volume": "", "issue": "", "pages": "232--239", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hideki Kozima and Teiji Furugori. 1993. Similarity between Words Computed by Spreading Activation on an English Dictionary. In Proceedings of the 6th Conference of the European Chapter of the Association for Computational Linguistics (EACL-93), pp. 232-239. ACL.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Co-occurrence Vectors from Corpora vs. Distance Vectors from Dictionaries", "authors": [ { "first": "Yoshiki", "middle": [], "last": "Niwa", "suffix": "" }, { "first": "Yoshihiko", "middle": [], "last": "Nitta", "suffix": "" } ], "year": 1994, "venue": "Proc. COLING 94", "volume": "1", "issue": "", "pages": "304--309", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshiki Niwa and Yoshihiko Nitta. 1994. Co-occurrence Vectors from Corpora vs. Distance Vectors from Dictionaries. In Proc. COLING 94, Vol. 1, pp. 304-309.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Using Information Content to Evaluate Semantic Similarity in a Taxonomy", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 14th International Joint Conference on Artificial Intelligence", "volume": "1", "issue": "", "pages": "448--453", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik. 1995. Using Information Content to Evaluate Semantic Similarity in a Taxonomy. In Proceedings of the 14th International Joint Conference on Artificial Intelligence, Vol. 1, pp. 448-453.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "An Example-Based Mapping Method for Text Categorization and Retrieval", "authors": [ { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Christopher", "middle": [ "G" ], "last": "Chute", "suffix": "" } ], "year": 1994, "venue": "ACM Transactions on Information Systems", "volume": "12", "issue": "3", "pages": "252--277", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yiming Yang and Christopher G. Chute. 1994. An Example-Based Mapping Method for Text Categorization and Retrieval. ACM Transactions on Information Systems, Vol. 12, No. 3, pp. 252-277.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "Figure 1: Selectivity of fundamental similarity measures. Figure 2: Magnified version of Figure 1." },
"FIGREF2": { "num": null, "uris": null, "type_str": "figure", "text": "Similarity network with POV and POV reinforcement" }, "FIGREF3": { "num": null, "uris": null, "type_str": "figure", "text": "Figure 4: Selectivity of the proposed similarity measure. Figure 5: Magnified version of Figure 4. Table 1 (columns only recovered): coverage of synonym pairs (80%, ...); POV \\alpha = 1.2; POV \\alpha = 1.5; POV \\alpha = 2.0; cooccur; link#." } } } }