{ "paper_id": "E99-1024", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:37:45.147960Z" }, "title": "Detection of Japanese Homophone Errors by a Decision List Including a Written Word as a Default Evidence", "authors": [ { "first": "Hiroyuki", "middle": [], "last": "Shinnou", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ibaraki University Dept. of Systems", "location": { "addrLine": "Engineering 4-12-1 Nakanarusawa Hitachi", "postCode": "316-8511", "settlement": "Ibaraki", "country": "JAPAN" } }, "email": "shinnou@lily@ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we propose a practical method to detect Japanese homophone errors in Japanese texts. It is very important to detect homophone errors in Japanese revision systems because Japanese texts suffer from homophone errors frequently. In order to detect homophone errors, we have only to solve the homophone problem. We can use the decision list to do it because the homophone problem is equivalent to the word sense disambiguation problem. However, the homophone problem is different from the word sense disambiguation problem because the former can use the written word but the latter cannot. In this paper, we incorporate the written word into the original decision list by obtaining the identifying strength of the written word. The improved decision list can raise the F-measure of error detection.", "pdf_parse": { "paper_id": "E99-1024", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we propose a practical method to detect Japanese homophone errors in Japanese texts. It is very important to detect homophone errors in Japanese revision systems because Japanese texts suffer from homophone errors frequently. In order to detect homophone errors, we have only to solve the homophone problem. We can use the decision list to do it because the homophone problem is equivalent to the word sense disambiguation problem. 
However, the homophone problem is different from the word sense disambiguation problem because the former can use the written word but the latter cannot. In this paper, we incorporate the written word into the original decision list by obtaining the identifying strength of the written word. The improved decision list can raise the F-measure of error detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In this paper, we propose a method of detecting Japanese homophone errors in Japanese texts. Our method is based on a decision list proposed by Yarowsky (Yarowsky, 1994; Yarowsky, 1995) . We improve the original decision list by using written words in the default evidence. The improved decision list can raise the F-measure of error detection.", "cite_spans": [ { "start": 153, "end": 169, "text": "(Yarowsky, 1994;", "ref_id": "BIBREF10" }, { "start": 170, "end": 185, "text": "Yarowsky, 1995)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most Japanese texts are written using Japanese word processors. To input a word composed of kanji characters, we first input the phonetic hiragana sequence for the word, and then convert it to the desired kanji sequence. However, multiple converted kanji sequences are generally produced, and we must then choose the correct kanji sequence. Therefore, Japanese texts suffer from homophone errors caused by incorrect choices. Carelessness in choosing is not the only cause of homophone errors; ignorance of the difference among homophone words is also serious. For example, many Japanese are not aware of the difference between '意思' and '意志', or between '直感' and '直観' 1. In this paper, we define the term homophone set as a set of words consisting of kanji characters that have the same phone 2. Then, we define the term homophone word as a word in a homophone set. 
For example, the set { 確率 (probability), 確立 (establishment) } is a homophone set because words in the set are composed of kanji characters that have the same phone 'ka-ku-ri-tu'. Thus, '確率' and '確立' are homophone words. In this paper, we name the problem of choosing the correct word from the homophone set the homophone problem. In order to detect homophone errors, we make a list of homophone sets in advance, find a homophone word in the text, and then solve the homophone problem for the homophone word.", "cite_spans": [ { "start": 628, "end": 672, "text": "'意思' and '意志', or between '直感' and '直観'", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Many methods of solving the homophone problem have been proposed (Tochinai et al., 1986; Ibuki et al., 1997; Oku and Matsuoka, 1997; Oku, 1994; Wakita and Kaneko, 1996) . However, they are restricted to the homophone problem; that is, they are heuristic methods. On the other hand, the homophone problem is equivalent to the word sense disambiguation problem if the phone of the homophone word is regarded as the word, and the homophone word as the sense. Therefore, we can solve the homophone problem by using various 1 '意思' and '意志' have the same phone 'i-shi'. The meaning of '意思' is a general will, and the meaning of '意志' is a strong positive will. '直感' and '直観' have the same phone 'cho-kkan'. The meaning of '直感' is an intuition through a feeling, and the meaning of '直観' is an intuition through a latent knowledge.", "cite_spans": [ { "start": 65, "end": 88, "text": "(Tochinai et al., 1986;", "ref_id": "BIBREF8" }, { "start": 89, "end": 108, "text": "Ibuki et al., 1997;", "ref_id": "BIBREF4" }, { "start": 109, "end": 132, "text": "Oku and Matsuoka, 1997;", "ref_id": "BIBREF5" }, { "start": 133, "end": 143, "text": "Oku, 1994;", "ref_id": "BIBREF6" }, { "start": 144, "end": 168, "text": "Wakita and Kaneko, 1996)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 We ignore the difference of accents, stresses and parts of speech. That is, the homophone set is the set of words having the same expression in hiragana characters. statistical methods proposed for the word sense disambiguation problem (Fujii, 1998) . Take the case of context-sensitive spelling error detection 3, which is equivalent to the homophone problem. For that problem, some statistical methods have been applied successfully (Golding, 1995; Golding and Schabes, 1996) . Hence, statistical methods are certainly valid for the homophone problem. In particular, the decision list is valid for the homophone problem (Shinnou, 1998) . The decision list arranges evidences to identify the word sense in the order of strength of identifying the sense. 
The word sense is judged by the evidence with the highest identifying strength that appears in the context.", "cite_spans": [ { "start": 237, "end": 250, "text": "(Fujii, 1998)", "ref_id": null }, { "start": 437, "end": 452, "text": "(Golding, 1995;", "ref_id": "BIBREF3" }, { "start": 453, "end": 479, "text": "Golding and Schabes, 1996)", "ref_id": "BIBREF2" }, { "start": 624, "end": 639, "text": "(Shinnou, 1998)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although the homophone problem is equivalent to the word sense disambiguation problem, the former has a distinct difference from the latter. In the homophone problem, almost all of the answers are given correctly, because almost all of the expressions written in the given text are correct. It is difficult to decide whether the meaning of 'crane' is 'crane the animal' or 'crane the tool'. However, it is almost certain that the correct expression of '~'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "in a text is not '~-~' but '~1~'. In the homophone problem, the choice of the written word results in high precision. We should use this information. However, the method to always choose the written word is useless for error detection because it doesn't detect errors at all. The method used for the homophone problem should be evaluated from the precision and the recall of the error detection. In this paper, we evaluate it by the F-measure to combine the precision and the recall, and use the written word to raise the F-measure of the original decision list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We use the written word as an evidence of the decision list. The problem is how much strength to give to that evidence. If the strength is high, the precision rises but the recall drops. 
On the other hand, if the strength is low, the decision list is not improved. In this paper, we calculate the strength that gives the maximum F-measure in a training corpus. As a result, our decision list can raise the F-measure of error detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we describe how to construct the decision list and how to apply it to the homophone problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Homophone disambiguation by a decision list", "sec_num": "2" }, { "text": "3 For example, confusion between 'peace' and 'piece', or between 'quiet' and 'quite' is the context-sensitive spelling error.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Homophone disambiguation by a decision list", "sec_num": "2" }, { "text": "The decision list is constructed by the following steps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construction of the decision list", "sec_num": "2.1" }, { "text": "step 1 Prepare homophone sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construction of the decision list", "sec_num": "2.1" }, { "text": "In this paper, we use the 12 homophone sets shown in Table 1 , which consist of homophone words that tend to be mis-chosen. ", "cite_spans": [], "ref_spans": [ { "start": 53, "end": 60, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Construction of the decision list", "sec_num": "2.1" }, { "text": "sa-i-ken ka-i-hou kyo-u-cho-u ji-shi-n ka-n-shi-n ta-i-ga-i { ~, ~\u00a2~ } {~, ~} { t~-~, ~ } {~,~#} { ~,~,, r~,c, } { ~, ~,~% } u-n-ko-u { 運航, 運行 } do-u-shi { NN, N\u00b1 } ka-te-i { ~_, ~..~:? } ji-kko-u { ~, ~ } syo-ku-ryo-u { ~, ~ } syo-u-ga-i { ~=-~, [~=-~ }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construction of the decision list", "sec_num": "2.1" }, { "text": "step 2 Set context information, i.e. 
evidences, to identify the homophone word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construction of the decision list", "sec_num": "2.1" }, { "text": "We use the following three kinds of evidence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construction of the decision list", "sec_num": "2.1" }, { "text": "\u2022 word (w) in front of H: Expressed as w-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construction of the decision list", "sec_num": "2.1" }, { "text": "\u2022 word (w) behind H: Expressed as w+", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construction of the decision list", "sec_num": "2.1" }, { "text": "\u2022 firitu words 4 surrounding H: We pick up the nearest three firitu words in front of and behind H respectively. We express them as w\u00b13.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construction of the decision list", "sec_num": "2.1" }, { "text": "step 3 Derive the frequency frq(wi, ej) of the collocation between the homophone word wi in the homophone set {w1, w2, ..., wn} and the evidence ej, by using a training corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construction of the decision list", "sec_num": "2.1" }, { "text": "For example, let us consider the homophone set { 運航 (running (of a ship, etc.)), 運行 (running (of a train, etc.)) } and the following two Japanese sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construction of the decision list", "sec_num": "2.1" }, { "text": "(A west wind of 3 m/s did not prevent the plane from flying.) 4 The firitu word is defined as an independent word which can form one bun-setu by itself. Nouns, verbs and adjectives are examples. From sentence 1, we can extract the following evidences for the word '~': and from sentence 2, we can extract the following evidences for the word '~': \"~#r~? 
+\", \"\u00a2) -\", \"~+~ \u00b13\", \"~@ +3\", \"@r~ +Y', \"~ +3\", \"~ +3\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence 1 r~g~)~J~;o~ ~ -b J~'~7~_", "sec_num": null }, { "text": "step 4 Define the strength est(wi, ej) of estimating that the homophone word wi is correct given the evidence ej: est(wi, ej) = log( P(wi|ej) / \u03a3k\u2260i P(wk|ej) )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence 1 r~g~)~J~;o~ ~ -b J~'~7~_", "sec_num": null }, { "text": "where P(wi|ej) is approximately calculated by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence 1 r~g~)~J~;o~ ~ -b J~'~7~_", "sec_num": null }, { "text": "P(wi|ej) = ( frq(wi, ej) + a ) / ( \u03a3k frq(wk, ej) + a ). a in the above expression is included to avoid the unsatisfactory case of frq(wi, ej) = 0 5. In this paper, we set a = 0.15. We also use the special evidence default; frq(wi, default) is defined as the frequency of wi.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence 1 r~g~)~J~;o~ ~ -b J~'~7~_", "sec_num": null }, { "text": "step 5 Pick the highest strength est(wk, ej) among 5 As in this paper, the addition of a small value is an easy and effective way to avoid the unsatisfactory case, as shown in (Yarowsky, 1994) .", "cite_spans": [ { "start": 173, "end": 189, "text": "(Yarowsky, 1994)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence 1 r~g~)~J~;o~ ~ -b J~'~7~_", "sec_num": null }, { "text": "{est(w1, ej), est(w2, ej), ..., est(wn, ej)}, and set the word wk as the answer for the evidence ej. In this case, the identifying strength is est(wk, ej).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "{est(wl, ), ea(w , e#), \u2022 \u2022 \u2022, e", "sec_num": null }, { "text": "For example, by steps 4 and 5 we can construct the list shown in Table 2. 
step 6 Fix the answer wkj for each ej and sort identifying strengths est(wkj, ej) in descending order, but remove from the list any evidence whose identifying strength is less than the identifying strength est(wkj, default) for the evidence default. This is the decision list.", "cite_spans": [], "ref_spans": [ { "start": 65, "end": 73, "text": "Table 2.", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "{est(wl, ), ea(w , e#), \u2022 \u2022 \u2022, e", "sec_num": null }, { "text": "After step 6, we obtain the decision list for the homophone set { 運航, 運行 } as shown in Table 3 . ", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 96, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "{est(wl, ), ea(w , e#), \u2022 \u2022 \u2022, e", "sec_num": null }, { "text": "In order to solve the homophone problem by the decision list, we first find the homophone word w in the given text, and then extract evidences E for the word w from the text: E = {e1, e2, ..., en}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solving by a decision list", "sec_num": "2.2" }, { "text": "Next, picking up the evidence from the decision list for the homophone set for the homophone word w in order of rank, we check whether the evidence is in the set E. If the evidence ej is in the set E, the answer wkj for ej is judged to be the correct expression for the homophone word w. 
If wkj is equal to w, w is judged to be correct, and if it is not equal, then it is shown that w may be an error for wkj.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solving by a decision list", "sec_num": "2.2" }, { "text": "Use of the written word", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3", "sec_num": null }, { "text": "In this section, we describe the use of the written word in the homophone problem and how to incorporate it into the decision list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3", "sec_num": null }, { "text": "As described in the Introduction, the written word cannot be used in the word sense disambiguation problem, but it is useful for solving homophone problems. The method used for the homophone problem is trivial if the method is evaluated by the precision of distinction using the following formula:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of error detection systems", "sec_num": "3.1" }, { "text": "number of correct discriminations / number of all discriminations", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of error detection systems", "sec_num": "3.1" }, { "text": "That is, if the expression is '運航' (or '運行'), then we should clearly choose the word '運航' (or the word '運行') from the homophone set { 運航, 運行 }. This distinction method probably has better precision than any other method for the word sense disambiguation problem. However, this method is useless because it does not detect errors at all.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of error detection systems", "sec_num": "3.1" }, { "text": "The method for the homophone problem should be evaluated from the standpoint not of error discrimination but of error detection. 
In this paper, we use the F-measure (Eq.1) to combine the precision P and the recall R of error detection: F = 2PR / (P + R) (1), where P is the ratio of correct detections to all detections, and R is the ratio of correct detections to all real errors. The distinction method to choose the written word is useless, but it has a very high precision of error discrimination. Thus, it is valid to use this method where it is difficult to use context to solve the homophone problem. The question is when to stop using the decision from context and use the written word. In this paper, we regard the written word as a kind of contextual evidence, and give it an identifying strength. Consequently we can use the written word in the decision list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of error detection systems", "sec_num": "3.1" }, { "text": "First, let z be the identifying strength of the written word. We name the set of evidences with higher identifying strength than z the set \u03b1, and the set of evidences with lower identifying strength than z the set \u03b2. Let T be the number of homophone problems for a homophone set. We solve them by the original decision list DL0. Let G (or H) be the ratio of the number of homophone problems judged by \u03b1 (or \u03b2) to T. 
Let g (or h) be the precision of \u03b1 (or \u03b2), and p be the occurrence probability of the homophone error.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculation of the identifying strength of the written word", "sec_num": "3.3" }, { "text": "The number of problems judged by \u03b1 whose written expression is correct is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculation of the identifying strength of the written word", "sec_num": "3.3" }, { "text": "GT(1 - p),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculation of the identifying strength of the written word", "sec_num": "3.3" }, { "text": "(2) and the number of problems judged by \u03b1 whose written expression is an error is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculation of the identifying strength of the written word", "sec_num": "3.3" }, { "text": "GTp.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculation of the identifying strength of the written word", "sec_num": "3.3" }, { "text": "(3) The number of problems detected as errors in Eq.2 and Eq.3 are GT(1 - p)(1 - g) and GTpg respectively. 
Thus, the number of problems detected as errors by \u03b1 is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculation of the identifying strength of the written word", "sec_num": "3.3" }, { "text": "GT((1 - p)(1 - g) + pg).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(4)", "sec_num": null }, { "text": "In the same way, the number of problems detected as errors by \u03b2 is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(4)", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "HT((1 - p)(1 - h) + ph).", "eq_num": "(5)" } ], "section": "(4)", "sec_num": null }, { "text": "Consequently the total number of problems detected as errors is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(4)", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "T(G((1 - p)(1 - g) + pg) + H((1 - p)(1 - h) + ph)).", "eq_num": "(6)" } ], "section": "(4)", "sec_num": null }, { "text": "The number of correct detections in Eq.6 is Tp(Gg + Hh). Therefore the precision P0 is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(4)", "sec_num": null }, { "text": "P0 = p(Gg + Hh) / {G((1 - p)(1 - g) + pg) + H((1 - p)(1 - h) + ph)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(4)", "sec_num": null }, { "text": "Because the number of real errors in T is Tp, the recall R0 is Gg + Hh. By using P0 and R0, we can get the F-measure F0 of DL0 by Eq.1. Next, we construct the decision list incorporating the written word into DL0. We name this decision list DL1. In DL1, we use the written word to solve problems which we cannot judge by \u03b1. 
That is, DL1 is the decision list to attach the written word as the default evidence to \u03b1 (see Fig. 1).", "cite_spans": [], "ref_spans": [ { "start": 419, "end": 424, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "(4)", "sec_num": null }, { "text": "Next, we calculate the precision and the recall of DL1. Because \u03b1 of DL1 is the same as that of DL0, the number of problems detected as errors by \u03b1 is given by Eq.4. In the case of DL1, problems judged by \u03b2 of DL0 are judged by the written word. Therefore, we detect no error from these problems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(4)", "sec_num": null }, { "text": "As a result, the number of problems detected as errors by DL1 is given by Eq.4, and the number of real errors in these detections is TGpg. Therefore, the precision P1 of DL1 is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(4)", "sec_num": null }, { "text": "P1 = pg / {(1 - p)(1 - g) + pg}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "p1 = Pg", "sec_num": null }, { "text": "Because the number of whole errors is Tp, the recall R1 of DL1 is Gg. By using P1 and R1, we can get the F-measure F1 of DL1 by Eq.1. Finally, we try to define the identifying strength z. z is the value that yields the maximum F1 under the condition F1 > F0. However, theoretical calculation alone cannot give z, because p is unknown, and functions of G, H, g, and h are also unknown.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "p1 = Pg", "sec_num": null }, { "text": "In this paper, we set p = 0.05, and get values of G, H, g, and h by using the training corpus which is the resource used to construct the original decision list DL0. Take the case of the homophone set {'運航', '運行'}. For this homophone set, we try to get values of G, H, g, and h. The training corpus has 2,890 sentences which include the word '運航' or the word '運行'. 
These 2,890 sentences are homophone problems for that homophone set. The identifying strength of DL0 for this homophone set ranges from 0.046 to 9.453 as shown in Table 3 . Next, we give z a value. For example, we set z = 2.5. In this case, the number of problems judged by \u03b1 is 1,631, and the number of correct judgments in them is 1,593. Thus, G = 1631/2890 = 0.564 and g = 1593/1631 = 0.977. In the same way, under this assumption z = 2.5, the number of problems judged by \u03b2 is 1,259, and the number of correct judgments in them is 854.", "cite_spans": [], "ref_spans": [ { "start": 533, "end": 540, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "p1 = Pg", "sec_num": null }, { "text": "Thus, H = 1259/2890 = 0.436 and h = 854/1259 = 0.678. As a result, if z = 2.5, then P0 = 0.225, R0 = 0.847, F0 = 0.356, P1 = 0.688, R1 = 0.551 and F1 = 0.612. In Fig.2, Fig.3 and Fig.4 , we show the experiment result when z varies from 0.0 to 10.0 in units of 0.1. By choosing the maximum value of F1 in Fig.4 , we can get the desired z. In this homophone set, we obtain z = 3.0.", "ref_spans": [ { "start": 162, "end": 185, "text": "Fig.2, Fig.3 and Fig.4", "ref_id": "FIGREF3" }, { "start": 305, "end": 310, "text": "Fig.4", "ref_id": "FIGREF3" } ], "cite_spans": [], "eq_spans": [], "section": "p1 = Pg", "sec_num": null }, { "text": "First, we obtain each identifying strength of the written word for the 12 homophone sets shown in Table 1 , by the above method. We show this result in Table 4 . LR0 in this table means the lowest rank of DL0. That is, LR0 is the rank of the default evidence. LR1 means the lowest rank of DL1. That is, LR1 is the rank of the evidence of the written word. Moreover, LR0 and LR1 mean the sizes of each decision list DL0 and DL1. Second, we extract sentences which include a word in the 12 homophone sets from a corpus. 
We note that this corpus is different from the training corpus; the corpus is one year's worth of Mainichi newspaper articles, and the training corpus is one year's worth of Nikkei newspaper articles. The extracted sentences are the test sentences of the experiment. We assume that these sentences have no homophone errors.", "cite_spans": [], "ref_spans": [ { "start": 98, "end": 105, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 152, "end": 159, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Last, we randomly select 5% of the test sentences, and forcibly put homophone errors into these selected sentences by changing the written word. As a result, the test sentences include 5% errors. From these test sentences, we detect homophone errors by DL0 and DL1 respectively. We conducted this experiment ten times, and got the mean of the precision, the recall and the F-measure. The result is shown in Table 5 .", "cite_spans": [], "ref_spans": [ { "start": 401, "end": 408, "text": "Table 5", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "For all homophone sets, the F-measure of our proposed DL1 is higher than the F-measure of the original decision list DL0. Therefore, it is concluded that our proposed method is effective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "The recall of DL1 is no more than the recall of DL0. Our method aims to raise the F-measure by raising the precision at the cost of the recall. We confirmed the validity of the method by experiments in sections 3 and 4. Thus, our method has little effect if the recall is weighted heavily in evaluation. However, we should note that the F-measure of DL1 is never worse than the F-measure of DL0. We set the occurrence probability of the homophone error at p = 0.05. However, each homophone set has its own p. 
We need to decide p exactly because the identifying strength of the written word depends on p. However, DL1 will produce better results than DL0 if p is smaller than 0.05, because the precision of judgment by the written word improves without lowering the recall. The recall does not fall due to smaller p because R0 and R1 are independent of p. Moreover, from the definitions of P0 and P1, we can confirm that the precision of judgments by the written word improves with smaller p. The number of elements of all homophone sets used in this paper was two, but the number of elements of real homophone sets may be more. However, the bigger this number is, the better the result produced by our method, because the precision of judgments by the default evidence of DL0 drops in this case, but that of DL1 does not. Therefore, our method is better than the original one even if the number of elements of the homophone set increases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Remarks", "sec_num": "5" }, { "text": "Our method has the advantage that the size of DL1 is smaller. The size of the decision list has no relation to the precision and the recall, but a small decision list has advantages of efficiency of calculation and maintenance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Remarks", "sec_num": "5" }, { "text": "On the other hand, our method has a problem in that it does not use the written word in the judgment from \u03b1; even the identifying strength of the evidence in \u03b1 must depend on the written word. We intend to study the use of the written word in the judgment from \u03b1. Moreover, homophone errors in our experiments are artificial. We must confirm the effectiveness of the proposed method for actual homophone errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Remarks", "sec_num": "5" }, { "text": "In this paper, we used the decision list to solve the homophone problem. 
This strategy was based on the fact that the homophone problem is equivalent to the word sense disambiguation problem. However, the homophone problem is different from the word sense disambiguation problem because the former can use the written word but the latter cannot. In this paper, we incorporated the written word into the original decision list by obtaining the identifying strength of the written word. We used 12 homophone sets in experiments. In these experiments, our proposed decision list had a higher F-measure than the original one. A future task is to further integrate context and the written word in the decision list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" } ], "back_matter": [ { "text": "We used Nikkei Shinbun CD-ROM '90 and Mainichi Shinbun CD-ROM '94 as the corpus. The Nihon Keizai Shinbun company and the Mainichi Shinbun company gave us permission to use their collections. We appreciate the assistance granted by both companies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "Sense Disambiguation", "authors": [], "year": null, "venue": "", "volume": "13", "issue": "", "pages": "904--911", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sense Disambiguation (in Japanese). 
Journal of Japanese Society for Artificial Intelligence, 13(6):904-911.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Combining Trigram-based and Feature-based Methods for Context-Sensitive Spelling Correction", "authors": [ { "first": "R", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Golding", "suffix": "" }, { "first": "", "middle": [], "last": "Schabes", "suffix": "" } ], "year": 1996, "venue": "34th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "71--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew R. Golding and Yves Schabes. 1996. Combining Trigram-based and Feature-based Methods for Context-Sensitive Spelling Correction. In 34th Annual Meeting of the Association for Computational Linguistics, pages 71-78.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A Bayesian Hybrid Method for Context-Sensitive Spelling Correction", "authors": [ { "first": "R", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "", "middle": [], "last": "Golding", "suffix": "" } ], "year": 1995, "venue": "Third Workshop on Very Large Corpora (WVLC-95)", "volume": "", "issue": "", "pages": "39--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew R. Golding. 1995. A Bayesian Hybrid Method for Context-Sensitive Spelling Correction. 
In Third Workshop on Very Large Corpora (WVLC-95), pages 39-53.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A new approach for Japanese Spelling Correction", "authors": [ { "first": "Jun", "middle": [], "last": "Ibuki", "suffix": "" }, { "first": "Guowei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Takahiro", "middle": [], "last": "Saitoh", "suffix": "" }, { "first": "Kunio", "middle": [], "last": "Matsui", "suffix": "" } ], "year": 1997, "venue": "SIG Notes", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Ibuki, Guowei Xu, Takahiro Saitoh, and Ku- nio Matsui. 1997. A new approach for Japanese Spelling Correction (in Japanese). SIG Notes NL-117-21, IPSJ.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A Method for Detecting Japanese Homophone Errors in Compound Nouns based on Character Cooccurrence and Its Evaluation", "authors": [ { "first": "Masahiro", "middle": [], "last": "Oku", "suffix": "" }, { "first": "Koji", "middle": [], "last": "Matsuoka", "suffix": "" } ], "year": 1997, "venue": "Journal of Natural Language Processing", "volume": "4", "issue": "3", "pages": "83--99", "other_ids": {}, "num": null, "urls": [], "raw_text": "Masahiro Oku and Koji Matsuoka. 1997. A Method for Detecting Japanese Homophone Errors in Compound Nouns based on Char- acter Cooccurrence and Its Evaluation (in Japanese). Journal of Natural Language Pro- cessing, 4(3):83-99.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Handling Japanese Homophone Errors in Revision Support System; RE-VISE", "authors": [ { "first": "Masahiro", "middle": [], "last": "Oku", "suffix": "" } ], "year": 1994, "venue": "4th Conference on Applied Natural Language Processing (ANLP-9$)", "volume": "", "issue": "", "pages": "156--161", "other_ids": {}, "num": null, "urls": [], "raw_text": "Masahiro Oku. 1994. Handling Japanese Homo- phone Errors in Revision Support System; RE- VISE. 
In 4th Conference on Applied Natural Language Processing (ANLP-94), pages 156-161.
SIG Notes NL-111-5, IPSJ.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Decision Lists for Lexical Ambiguity Resolution: Application to Accent Restoration in Spanish and French", "authors": [ { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1994, "venue": "32th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "88--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Yarowsky. 1994. Decision Lists for Lex- ical Ambiguity Resolution: Application to Ac- cent Restoration in Spanish and French. In 32th Annual Meeting of the Association for Compu- tational Linguistics, pages 88-95.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Unsupervised Word Sense Disambiguation Rivaling Supervised Methods", "authors": [ { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1995, "venue": "33th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "189--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Yarowsky. 1995. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. In 33th Annual Meeting of the Association for Computational Linguistics, pages 189-196.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "in the early morning and during the night were shortened.)", "num": null, "type_str": "figure" }, "FIGREF1": { "uris": null, "text": "of the identifying strength of the written word", "num": null, "type_str": "figure" }, "FIGREF2": { "uris": null, "text": "Figure 1: Construction of DL1", "num": null, "type_str": "figure" }, "FIGREF3": { "uris": null, "text": ": F-measures Fo and Ft homophone word to another homophone word.", "num": null, "type_str": "figure" }, "TABREF0": { "content": "
PhoneHomophone set
", "num": null, "type_str": "table", "text": "Homophone sets", "html": null }, "TABREF1": { "content": "
| Evidence | Freq. of ,~_~, | Freq. of ,~, | Ans. | Identifying strength |
| ~:+ (to+) | 77 | 53 | ~ | 0.538 |
| (of-) | 252 | 282 | ~ | 0.162 |
| ~T~ ±3 (plane±3) | 4 | 0 | ~ | 5.358 |
| ... | | | | |
| ~+ (hour+) | 14 | 11 | ~.~t~ | 0.345 |
| ~.~ ±3 (midnight±3) | 0 | 48 | ~ | 8.910 |
| ~K~ ±3 (shorten±3) | 0 | 4 | ~ | 5.358 |
| ... | | | | |
| default | 1468 | 1422 | | |
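The identifying strengths in the table above are consistent with an absolute base-2 log-likelihood ratio over smoothed frequencies. A minimal sketch, assuming a smoothing constant of 0.1 (this constant is inferred because it reproduces the table's values, not stated explicitly here):

```python
import math

ALPHA = 0.1  # smoothing constant (assumed; 0.1 reproduces the table's strengths)

def identifying_strength(freq1: int, freq2: int) -> float:
    """Absolute base-2 log-likelihood ratio of the two homophone frequencies."""
    return abs(math.log2((freq1 + ALPHA) / (freq2 + ALPHA)))

# Checks against rows of the table:
print(round(identifying_strength(77, 53), 3))     # 0.538  (to+)
print(round(identifying_strength(0, 48), 3))      # 8.91   (midnight±3)
print(round(identifying_strength(0, 4), 3))       # 5.358  (shorten±3)
print(round(identifying_strength(1468, 1422), 3)) # 0.046  (default)
```

The smoothing term keeps the ratio finite when one homophone never co-occurs with the evidence, which is exactly the case for the strongest evidences such as (midnight±3).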
", "num": null, "type_str": "table", "text": "Answers and identifying strength for Evid.", "html": null }, "TABREF2": { "content": "
| Rank | Evidence | Ans. | Identifying strength |
| 1 | ~lJ~ ±3 (train±3) | ~.~ | 9.453 |
| 2 | ~ ±3 (ship±3) | ~.~l~ | 9.106 |
| 3 | ~ ±3 (midnight±3) | ~.~ | 8.910 |
| 701 | ~r,~- (hour-) | ~.~ | 0.358 |
| 746 | ¢)+ (of+) | ~.~ | 0.162 |
| ... | | | |
| 760 | default | ~_~ | 0.046 |
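A decision list like the one above is applied by scanning evidences in order of decreasing identifying strength and answering with the first one that matches; the default evidence at the bottom fires when nothing else does. A minimal sketch (the rule tuples and matching predicates are illustrative, not the paper's implementation):

```python
from typing import Callable, List, Tuple

# A rule: (predicate over the context, answer homophone, identifying strength).
Rule = Tuple[Callable[[List[str]], bool], str, float]

def apply_decision_list(rules: List[Rule], context: List[str], default: str) -> str:
    """Return the answer of the strongest matching rule, else the default."""
    for matches, answer, _strength in sorted(rules, key=lambda r: -r[2]):
        if matches(context):
            return answer
    return default  # the default evidence always applies

# Illustrative rules in the spirit of the table (words are placeholders):
rules = [
    (lambda ctx: "train" in ctx, "word_A", 9.453),
    (lambda ctx: "ship" in ctx, "word_B", 9.106),
    (lambda ctx: "hour" in ctx, "word_A", 0.358),
]
print(apply_decision_list(rules, ["the", "train", "runs"], "word_A"))  # word_A
print(apply_decision_list(rules, ["no", "match"], "word_A"))           # word_A (default)
```

Because only the single strongest matching evidence decides the answer, the rank order of the list, not the absolute strength values, determines the classification.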
", "num": null, "type_str": "table", "text": "Example of decision list", "html": null }, "TABREF3": { "content": "
| Homophone set | Number of problems | DL0 (P0 / R0 / F0) | DL1 (P1 / R1 / F1) |
| { ~,t~ } | 1,254 | | |
| { ~,~-~ } | 1,938 | | |
| | 4,845 | | |
| { {r ,c, } | 3,682 | | |
| | 2,032 | | |
| | 618 | | |
| | 588 | | |
| { ~,~,~,, ~]:J= } | 1,436 | | |
| { ~,~¢ } | 1,220 | | |
| | 1,563 | | |
| | 1,074 | | |
| | 1,636 | | |
| mean | | | |
", "num": null, "type_str": "table", "text": "Result of experiments", "html": null } } } }