{ "paper_id": "P94-1010", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:19:14.802668Z" }, "title": "A STOCHASTIC FINITE-STATE WORD-SEGMENTATION ALGORITHM FOR CHINESE", "authors": [ { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "", "affiliation": { "laboratory": "AT&T Bell Laboratories", "institution": "", "location": { "addrLine": "600 Mountain Avenue, Room {2d-451,2d-453,2c-278} Murray Hill", "postCode": "07974-0636", "region": "NJ", "country": "USA" } }, "email": "" }, { "first": "Chilin", "middle": [], "last": "Shih", "suffix": "", "affiliation": { "laboratory": "AT&T Bell Laboratories", "institution": "", "location": { "addrLine": "600 Mountain Avenue, Room {2d-451,2d-453,2c-278} Murray Hill", "postCode": "07974-0636", "region": "NJ", "country": "USA" } }, "email": "" }, { "first": "William", "middle": [], "last": "Gale", "suffix": "", "affiliation": { "laboratory": "AT&T Bell Laboratories", "institution": "", "location": { "addrLine": "600 Mountain Avenue, Room {2d-451,2d-453,2c-278} Murray Hill", "postCode": "07974-0636", "region": "NJ", "country": "USA" } }, "email": "gale@research@att.com" }, { "first": "Nancy", "middle": [], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harvard University Cambridge", "location": { "postCode": "02138", "region": "MA" } }, "email": "nchang@das@edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a stochastic finite-state model for segmenting Chinese text into dictionary entries and productively derived words, and providing pronunciations for these words; the method incorporates a class-based model in its treatment of personal names. We also evaluate the system's performance, taking into account the fact that people often do not agree on a single segmentation.", "pdf_parse": { "paper_id": "P94-1010", "_pdf_hash": "", "abstract": [ { "text": "We present a stochastic finite-state model for segmenting Chinese text into dictionary entries and productively derived words, and providing pronunciations for these words; the method incorporates a class-based model in its treatment of personal names. We also evaluate the system's performance, taking into account the fact that people often do not agree on a single segmentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The initial step of any text analysis task is the tokenization of the input into words. For many writing systems, using whitespace as a delimiter for words yields reasonable results. However, for Chinese and other systems where whitespace is not used to delimit words, such trivial schemes will not work. Chinese writing is morphosyllabic (DeFrancis, 1984) , meaning that each hanzi-'Chinese character' -(nearly always) represents a single syllable that is (usually) also a single morpheme. 
Since in Chinese, as in English, words may be polysyllabic, and since hanzi are written with no intervening spaces, it is not trivial to reconstruct which hanzi to group into words.", "cite_spans": [ { "start": 339, "end": 356, "text": "(DeFrancis, 1984)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "THE PROBLEM", "sec_num": null }, { "text": "While for some applications it may be possible to bypass the word-segmentation problem and work straight from hanzi, there are several reasons why this approach will not work in a text-to-speech (TTS) system for Mandarin Chinese -- the primary intended application of our segmenter. These reasons include:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROBLEM", "sec_num": null }, { "text": "1. Many hanzi are homographs whose pronunciation depends upon word affiliation. So, 的 is pronounced de0 when it is a prenominal modification marker, but di4 in the word 目的 mu4di4 'goal'; 乾 is normally gan1 'dry', but qian2 in a person's given name.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROBLEM", "sec_num": null }, { "text": "2. Some phonological rules depend upon correct word segmentation, including Third Tone Sandhi (Shih, 1986), which changes a 3 tone into a 2 tone before another 3 tone (see footnote 1): 小老鼠 xiao3 [lao3-shu3] 'little rat' becomes xiao3 [lao2-shu3], rather than xiao2 [lao2-shu3], because the rule first applies within the word lao3-shu3, blocking its phrasal application.", "cite_spans": [ { "start": 93, "end": 105, "text": "(Shih, 1986)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "THE PROBLEM", "sec_num": null }, { "text": "1We use pinyin transliteration with numbers representing tones.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "66", "sec_num": null }, { "text": "While a minimal requirement for building a Chinese word-segmenter is a dictionary, a dictionary is insufficient since there are several classes of words that are not generally found in dictionaries. Among these are (1) morphologically derived words, such as 小將們 xiao3-jiang4-men0 (little general-plural) 'little generals'; (2) personal names, such as 周恩來 zhou1 en1-lai2 'Zhou Enlai'; and (3) transliterated foreign names, such as bu4-lang3-shi4-wei2-ke4 'Brunswick'. We present a stochastic finite-state model for segmenting Chinese text into dictionary entries and words derived via the above-mentioned productive processes; as part of the treatment of personal names, we discuss a class-based model which uses the Good-Turing method to estimate costs of previously unseen personal names. The segmenter handles the grouping of hanzi into words and outputs word pronunciations, with default pronunciations for hanzi it cannot group; we focus here primarily on the system's ability to segment text appropriately (rather than on its pronunciation abilities). We evaluate various specific aspects of the segmentation, and provide an evaluation of the overall segmentation performance: this latter evaluation compares the performance of the system with that of several human judges, since even people do not agree on a single correct way to segment a text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "66", "sec_num": null }, { "text": "There is a sizable literature on Chinese word segmentation: recent reviews include (Wang et al., 1990; Wu and Tseng, 1993).
Roughly, previous work can be classified into purely statistical approaches (Sproat and Shih, 1990), statistical approaches which incorporate lexical knowledge (Fan and Tsai, 1988; Lin et al., 1993), and approaches that include lexical knowledge combined with heuristics. Chen and Liu's (1992) algorithm matches words of an input sentence against a dictionary; in cases where various parses are possible, a set of heuristics is applied to disambiguate the analyses. Various morphological rules are then applied to allow for morphologically complex words that are not in the dictionary. Precision and recall rates of over 99% are reported, but note that this covers only words that are in the dictionary: "the... statistics do not count the mistakes [that occur] due to the existence of derived words or proper names" (Chen and Liu, 1992, page 105). Lin et al. (1993) describe a sophisticated model that includes a dictionary and a morphological analyzer. They also present a general statistical model for detecting 'unknown words' based on hanzi and part-of-speech sequences. However, their unknown word model has the disadvantage that it does not identify a sequence of hanzi as an unknown word of a particular category, but merely as an unknown word (of indeterminate category). For an application like TTS, however, it is necessary to know that a particular sequence of hanzi is of a particular category because, for example, that knowledge could affect the pronunciation. We therefore prefer to build particular models for different classes of unknown words, rather than building a single general model.", "cite_spans": [ { "start": 83, "end": 102, "text": "(Wang et al., 1990;", "ref_id": "BIBREF14" }, { "start": 103, "end": 122, "text": "Wu and Tseng, 1993)", "ref_id": "BIBREF16" }, { "start": 201, "end": 224, "text": "(Sproat and Shih, 1990)", "ref_id": "BIBREF11" }, { "start": 286, "end": 306, "text": "(Fan and Tsai, 1988;", "ref_id": "BIBREF6" }, { "start": 307, "end": 324, "text": "Lin et al., 1993)", "ref_id": "BIBREF17" }, { "start": 400, "end": 422, "text": "Chen and Liu's (1992)", "ref_id": "BIBREF3" }, { "start": 946, "end": 976, "text": "(Chen and Liu, 1992, page 105)", "ref_id": "BIBREF3" }, { "start": 979, "end": 996, "text": "Lin et al. (1993)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "PREVIOUS WORK", "sec_num": null }, { "text": "The lexicon of basic words and stems is represented as a weighted finite-state transducer (WFST) (Pereira et al., 1994). Most transitions represent mappings between hanzi and pronunciations, and are costless. Transitions between orthographic words and their parts-of-speech are represented by ε-to-category transductions and a unigram cost (negative log probability) of that word estimated from a 20M hanzi training corpus; a portion of the WFST is given in Figure 1 (see footnote 2). Besides dictionary words, the lexicon contains all hanzi in the Big 5 Chinese code, with their pronunciation(s), plus entries for other characters (e.g., Roman letters, numerals, special symbols).", "cite_spans": [ { "start": 95, "end": 117, "text": "(Pereira et al., 1994)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 457, "end": 465, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "DICTIONARY REPRESENTATION", "sec_num": null }, { "text": "Given this dictionary representation, recognizing a single Chinese word involves representing the input as a finite-state acceptor (FSA) where each arc is labeled with a single hanzi of the input.
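Before turning to the WFST formulation below, here is a minimal non-FST sketch of the same computation -- finding the least-cost division of the input into dictionary entries by dynamic programming, which has the same effect as composing the input with the transitive closure of the dictionary and taking the least-cost path. The toy lexicon, its costs, and the use of Latin letters in place of hanzi are all invented for illustration.

```python
# Toy dictionary mapping words to unigram costs (negative log probabilities).
# Latin letters stand in for hanzi; all entries and costs are invented.
import math

LEXICON = {
    'AB': 5.0,   # think of 'AB' as ri4-wen2 'Japanese'
    'A':  8.0,   # 'A'  as ri4 'Japan (abbreviation)'
    'BC': 9.0,   # 'BC' as wen2-zhang1 'essay'
    'CD': 6.0,   # 'CD' as zhang1-yu2 'octopus'
    'D':  7.5,   # 'D'  as yu2 'fish'
}

def segment(hanzi):
    # best[i] is the least cost of any parse of hanzi[:i] into dictionary
    # entries; back[i] records where the last word of that parse begins.
    n = len(hanzi)
    best = [math.inf] * (n + 1)
    back = [None] * (n + 1)
    best[0] = 0.0
    for i in range(1, n + 1):
        for j in range(i):
            word = hanzi[j:i]
            if word in LEXICON and best[j] + LEXICON[word] < best[i]:
                best[i] = best[j] + LEXICON[word]
                back[i] = j
    if back[n] is None:
        return math.inf, None          # no parse into dictionary entries
    words, i = [], n
    while i > 0:
        words.append(hanzi[back[i]:i])
        i = back[i]
    return best[n], words[::-1]

print(segment('ABCD'))  # (11.0, ['AB', 'CD']) beats ['A', 'BC', 'D'] at 24.5
```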
The left-restriction of the dictionary WFST with the input FSA contains all and only the (single) lexical entries corresponding to the input. This WFST includes the word costs on arcs transducing ε to category labels. Now, input sentences consist of one or more entries from the dictionary, and we can generalize the word recognition problem to the word segmentation problem, by left-restricting the transitive closure of the dictionary with the input. 2The costs are actually for strings rather than words: we currently lack estimates for the words themselves. We assign the string cost to lexical entries with the likeliest pronunciation, and a large cost to all other entries. Thus 將/adv, with the commonest pronunciation jiang1, has cost 5.98, whereas 將/nc, with the rarer pronunciation jiang4, is assigned a high cost. Note also that the current model is zeroth order in that it uses only unigram costs. Higher order models, e.g. bigram word models, could easily be incorporated into the present architecture if desired. The result of this left-restriction is a WFST that gives all and only the possible analyses of the input FSA into dictionary entries. In general we do not want all possible analyses but rather the best analysis. This is obtained by computing the least-cost path in the output WFST. The final stage of segmentation involves traversing the best path, collecting into words all sequences of hanzi delimited by part-of-speech-labeled arcs. Figure 2 shows an example of segmentation: the sentence 日文章魚怎麼說 'How do you say octopus in Japanese?' consists of four words, namely 日文 ri4-wen2 'Japanese', 章魚 zhang1-yu2 'octopus', 怎麼 zen3-mo 'how', and 說 shuo1 'say'. In this case, 日 ri4 is also a word (e.g. a common abbreviation for Japan), as are 文章 wen2-zhang1 'essay' and 魚 yu2 'fish', so there is (at least) one alternate analysis to be considered.", "cite_spans": [], "ref_spans": [ { "start": 1659, "end": 1667, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "DICTIONARY REPRESENTATION", "sec_num": null }, { "text": "The method just described segments dictionary words, but as noted there are several classes of words that should be handled that are not in the dictionary. One class comprises words derived by productive morphological processes, such as plural noun formation using the suffix 們 men0. The morphological analysis itself can be handled using well-known techniques from finite-state morphology (Koskenniemi, 1983; Antworth, 1990; Tzoukermann and Liberman, 1990; Karttunen et al., 1992; Sproat, 1992); so, we represent the fact that 們 attaches to nouns by allowing ε-transitions from the final states of all noun entries to the initial state of the sub-WFST representing 們. However, for our purposes it is not sufficient to represent the morphological decomposition of, say, plural nouns: we also need an estimate of the cost of the resulting word. For derived words that occur in our corpus we can estimate these costs as we would the costs for an underived dictionary entry. For those that do not, we use the Good-Turing estimate (Baayen, 1989; Church and Gale, 1991), whereby the aggregate probability of previously unseen members of a construction is estimated as N1/N, where N is the total number of observed tokens and N1 is the number of types observed only once.
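As a concrete illustration of this N1/N computation (the token list below is invented; only the counting logic is the point):

```python
# Aggregate Good-Turing probability mass for unseen types: N1/N, where N is
# the number of observed tokens and N1 the number of types seen exactly once.
from collections import Counter

def unseen_mass(tokens):
    counts = Counter(tokens)
    n = sum(counts.values())                         # N: observed tokens
    n1 = sum(1 for c in counts.values() if c == 1)   # N1: singleton types
    return n1 / n

# Invented sample of plural nouns in men0 observed in a corpus:
tokens = ['jiang4-men0', 'jiang4-men0', 'ren2-men0', 'hai2zi0-men0']
print(unseen_mass(tokens))  # 2 singleton types / 4 tokens = 0.5
```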
For 們 this gives prob(unseen(們) | 們), and to get the aggregate probability of novel 們-constructions in a corpus we multiply this by prob_text(們) to get prob_text(unseen(們)).", "cite_spans": [ { "start": 391, "end": 410, "text": "(Koskenniemi, 1983;", "ref_id": "BIBREF8" }, { "start": 411, "end": 426, "text": "Antworth, 1990;", "ref_id": "BIBREF0" }, { "start": 427, "end": 458, "text": "Tzoukermann and Liberman, 1990;", "ref_id": "BIBREF13" }, { "start": 459, "end": 482, "text": "Karttunen et al., 1992;", "ref_id": "BIBREF7" }, { "start": 483, "end": 496, "text": "Sproat, 1992)", "ref_id": "BIBREF12" }, { "start": 983, "end": 997, "text": "(Baayen, 1989;", "ref_id": "BIBREF1" }, { "start": 998, "end": 1020, "text": "Church and Gale, 1991)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "MORPHOLOGICAL ANALYSIS", "sec_num": null }, { "text": "Finally, to estimate the probability of a particular unseen word such as 小將們, we use the simple bigram backoff model prob(小將們) = prob(小將) * prob_text(unseen(們)); cost(小將們) is computed in the obvious way. Figure 3 shows how this model is implemented as part of the dictionary WFST. There is a (costless) transition between the NC node and 們. The transition from 們 to a final state transduces ε to the grammatical tag \\PL with cost cost_text(unseen(們)): cost(小將們) = cost(小將) + cost_text(unseen(們)), as desired. For the seen word 將們 'generals', there is an ε:nc transduction from 將 to the node preceding 們; this arc has cost cost(將們) - cost_text(unseen(們)), so that the cost of the whole path is the desired cost(將們). This representation gives 將們 an appropriate morphological decomposition, preserving information that would be lost by simply listing 將們 as an unanalyzed form. Note that the backoff model assumes that there is a positive correlation between the frequency of a singular noun and its plural. An analysis of nouns that occur both in the singular and the plural in our database reveals that there is indeed a slight but significant positive correlation -- R^2 = 0.20, p < 0.005. This suggests that the backoff model is as reasonable a model as we can use in the absence of further information about the expected cost of a plural form.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MORPHOLOGICAL ANALYSIS", "sec_num": null }, { "text": "Full Chinese personal names are in one respect simple: they are always of the form FAMILY+GIVEN. The FAMILY name set is restricted: there are a few hundred single-hanzi FAMILY names, and about ten double-hanzi ones. Given names are most commonly two hanzi long, occasionally one hanzi long: there are thus four possible name types. The difficulty is that GIVEN names can consist, in principle, of any hanzi or pair of hanzi, so the possible GIVEN names are limited only by the total number of hanzi, though some hanzi are certainly far more likely than others. For a sequence of hanzi that is a possible name, we wish to assign a probability to that sequence qua name. We use an estimate derived from (Chang et al., 1992). For example, given a potential name of the form F1 G1 G2, where F1 is a legal FAMILY name and G1 and G2 are each hanzi, we estimate the probability of that name as the product of the probability of finding any name in text; the probability of F1 as a FAMILY name; the probability of the first hanzi of a double GIVEN name being G1; the probability of the second hanzi of a double GIVEN name being G2; and the probability of a name of the form SINGLE-FAMILY+DOUBLE-GIVEN.
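In code, that product looks like this (a sketch only: all probabilities and pinyin keys are invented placeholders; the next sentence describes how the real estimates are obtained):

```python
# Score a candidate SINGLE-FAMILY+DOUBLE-GIVEN name F1 G1 G2 as the product
# of five probabilities, returned as a cost (negative log probability) so
# that it can sit on WFST arcs. All numbers are illustrative, not the paper's.
import math

P_NAME_IN_TEXT = 0.01            # P(finding any personal name in text)
P_FAMILY = {'zhang1': 0.05}      # P(F1 as a FAMILY name)
P_GIVEN1 = {'wei4': 0.004}       # P(first hanzi of a double GIVEN name)
P_GIVEN2 = {'guo2': 0.003}       # P(second hanzi of a double GIVEN name)
P_TYPE_SF_DG = 0.6               # P(name type is SINGLE-FAMILY+DOUBLE-GIVEN)

def name_cost(f1, g1, g2):
    p = (P_NAME_IN_TEXT * P_FAMILY[f1] * P_GIVEN1[g1]
         * P_GIVEN2[g2] * P_TYPE_SF_DG)
    return -math.log(p)          # arc costs along the name path sum to this

print(round(name_cost('zhang1', 'wei4', 'guo2'), 2))
```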
The first probability is estimated from a name count in a text database, whereas the last four probabilities are estimated from a large list of personal names (see footnote 3). This model is easily incorporated into the segmenter by building a WFST restricting the names to the four licit types, with costs on the arcs for any particular name summing to an estimate of the cost of that name. This WFST is then summed with the WFST implementing the dictionary and morphological rules, and the transitive closure of the resulting transducer is computed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CHINESE PERSONAL NAMES", "sec_num": null }, { "text": "3We have two such lists, one containing about 17,000 full names, and another containing frequencies of hanzi in the various name positions, derived from a million names.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CHINESE PERSONAL NAMES", "sec_num": null }, { "text": "There are two weaknesses in Chang et al.'s (1992) model, which we improve upon. First, the model assumes independence between the first and second hanzi of a double GIVEN name. Yet, some hanzi are far more probable in women's names than they are in men's names, and there is a similar list of male-oriented hanzi: mixing hanzi from these two lists is generally less likely than would be predicted by the independence model. As a partial solution, for pairs of hanzi that cooccur sufficiently often in our namelists, we use the estimated bigram cost, rather than the independence-based cost. The second weakness is that Chang et al. (1992) assign a uniform small cost to unseen hanzi in GIVEN names; but we know that some unseen hanzi are merely accidentally missing, whereas others are missing for a reason -- e.g., because they have a bad connotation. We can address this problem by first observing that for many hanzi, the general 'meaning' is indicated by its so-called 'semantic radical'. Hanzi that share the same 'radical' share an easily identifiable structural component: various plant names share the GRASS radical; malady names share the SICKNESS radical; and ratlike animal names share the RAT radical. Some classes are better for names than others: in our corpora, many names are picked from the GRASS class, very few from the SICKNESS class, and none from the RAT class. We can thus better predict the probability of an unseen hanzi occurring in a name by computing a within-class Good-Turing estimate for each radical class. Assuming unseen hanzi within each class are equiprobable, their probabilities are given by the Good-Turing theorem as: (1) p_{cls} \\propto E(N_1^{cls}) / (N \\cdot E(N_0^{cls}))", "cite_spans": [ { "start": 28, "end": 49, "text": "Chang et al.'s (1992)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "CHINESE PERSONAL NAMES", "sec_num": null }, { "text": "where p_{cls} is the probability of one unseen hanzi in class cls, E(N_1^{cls}) is the expected number of hanzi in cls seen once, N is the total number of hanzi, and E(N_0^{cls}) is the expected number of unseen hanzi in class cls. The use of the Good-Turing equation presumes suitable estimates of the unknown expectations it requires. In the denominator, the N_0^{cls} are well measured by counting, and we replace the expectation by the observation. In the numerator, however, the counts of N_1^{cls} are quite irregular, including several zeros (e.g. RAT, none of whose members were seen). However, there is a strong relationship between N_1^{cls} and the number of hanzi in the class. For E(N_1^{cls}), then, we substitute a smooth against the number of class elements.
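The paper does not spell out the functional form of this smooth; the sketch below uses a simple least-squares line of N_1^{cls} against class size, floored at a small positive value, purely to illustrate how such a smooth yields nonzero estimates. The class sizes and counts are invented.

```python
# Smooth N1 per radical class against class size so that the fitted E(N1) is
# never zero. The linear fit and the floor are illustrative choices only.
def smooth_n1(class_sizes, n1_counts, floor=0.5):
    n = len(class_sizes)
    mx = sum(class_sizes) / n
    my = sum(n1_counts) / n
    sxx = sum((x - mx) ** 2 for x in class_sizes)
    sxy = sum((x - mx) * (y - my) for x, y in zip(class_sizes, n1_counts))
    b = sxy / sxx                      # least-squares slope
    a = my - b * mx                    # least-squares intercept
    return [max(a + b * x, floor) for x in class_sizes]

# Invented classes: (size, observed N1); RAT-like classes can have N1 = 0.
sizes = [120, 80, 200, 30, 25, 40]
n1 = [6, 4, 11, 0, 1, 0]
print([round(s, 2) for s in smooth_n1(sizes, n1)])
```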
This smooth guarantees that there are no zeroes estimated. The final estimating equation is then:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CHINESE PERSONAL NAMES", "sec_num": null }, { "text": "(2) p_{cls} \\propto S(N_1^{cls}) / (N \\cdot N_0^{cls}). The total of all these class estimates was about 10% off from the Turing estimate N_1/N for the probability of all unseen hanzi, and we renormalized the estimates so that they would sum to N_1/N.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CHINESE PERSONAL NAMES", "sec_num": null }, { "text": "This class-based model gives reasonable results: for six radical classes, Table 1 gives the estimated cost for an unseen hanzi in the class occurring as the second hanzi in a double GIVEN name. Note that the good classes JADE, GOLD and GRASS have lower costs than the bad classes SICKNESS, DEATH and RAT, as desired.", "cite_spans": [], "ref_spans": [ { "start": 74, "end": 81, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "CHINESE PERSONAL NAMES", "sec_num": null }, { "text": "Foreign names are usually transliterated using hanzi whose sequential pronunciation mimics the source language pronunciation of the name. Since foreign names can be of any length, and since their original pronunciation is effectively unlimited, the identification of such names is tricky. Fortunately, there are only a few hundred hanzi that are particularly common in transliterations; indeed, the commonest ones, such as 巴 ba1, 爾 er3, and 阿 a1, are often clear indicators that a sequence of hanzi containing them is foreign: even a name like 夏米爾 xia4-mi3-er3 'Shamir', which is a legal Chinese personal name, retains a foreign flavor because of 爾. As a first step towards modeling transliterated names, we have collected all hanzi occurring more than once in the roughly 750 foreign names in our dictionary, and we estimate the probability of occurrence of each hanzi in a transliteration (p_TN(hanzi_i)) using the maximum likelihood estimate. As with personal names, we also derive an estimate from text of the probability of finding a transliterated name of any kind (P_TN). Finally, we model the probability of a new transliterated name as the product of P_TN and p_TN(hanzi_i) for each hanzi_i in the putative name (see footnote 4). The foreign name model is implemented as a WFST, which is then summed with the WFST implementing the dictionary, morphological rules, and personal names; the transitive closure of the resulting machine is then computed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TRANSLITERATIONS OF FOREIGN WORDS", "sec_num": null }, { "text": "In this section we present a partial evaluation of the current system in three parts. The first is an evaluation of the system's ability to mimic humans at the task of segmenting text into word-sized units; the second evaluates the proper name identification; the third measures the performance on morphological analysis. To date we have not done a separate evaluation of foreign name recognition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EVALUATION", "sec_num": null }, { "text": "Previous reports on Chinese segmentation have invariably cited performance either in terms of a single percent-correct score, or else a single precision-recall pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of the Segmentation as a Whole:", "sec_num": null }, { "text": "4The current model is too simplistic in several respects. For instance, the common 'suffixes' -nia (e.g., Virginia) and -sia are normally transliterated as ni2-ya3 and xi1-ya3, respectively.
The interdependence between ni2 or xi1, and the following ya3, is not captured by our model, but this could easily be remedied.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of the Segmentation as a Whole:", "sec_num": null }, { "text": "The problem with these styles of evaluation is that, as we shall demonstrate, even human judges do not agree perfectly on how to segment a given text. Thus, rather than give a single evaluative score, we prefer to compare the performance of our method with the judgments of several human subjects. To this end, we picked 100 sentences at random containing 4372 total hanzi from a test corpus. We asked six native speakers -- three from Taiwan (T1-T3), and three from the Mainland (M1-M3) -- to segment the corpus. Since we could not bias the subjects towards a particular segmentation and did not presume linguistic sophistication on their part, the instructions were simple: subjects were to mark all places they might plausibly pause if they were reading the text aloud. An examination of the subjects' bracketings confirmed that these instructions were satisfactory in yielding plausible word-sized units.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of the Segmentation as a Whole:", "sec_num": null }, { "text": "Various segmentation approaches were then compared with human performance: 1. A greedy algorithm, GR: proceed through the sentence, taking the longest match with a dictionary entry at each point. 2. An 'anti-greedy' algorithm, AG: instead of the longest match, take the shortest match at each point. 3. The method being described -- henceforth ST.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of the Segmentation as a Whole:", "sec_num": null }, { "text": "Two measures that can be used to compare judgments are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of the Segmentation as a Whole:", "sec_num": null }, { "text": "1. Precision. For each pair of judges consider one judge as the standard, computing the precision of the other's judgments relative to this standard.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of the Segmentation as a Whole:", "sec_num": null }, { "text": "2. Recall. For each pair of judges, consider one judge as the standard, computing the recall of the other's judgments relative to this standard.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of the Segmentation as a Whole:", "sec_num": null }, { "text": "Obviously, for judges J1 and J2, taking J1 as standard and computing the precision and recall for J2 yields the same results as taking J2 as the standard, and computing for J1, respectively, the recall and precision. We therefore used the arithmetic mean of each interjudge precision-recall pair as a single measure of interjudge similarity. Table 2 shows these similarity measures. The average agreement among the human judges is .76, and the average agreement between ST and the humans is .75, or about 99% of the inter-human agreement. (GR is .73 or 96%.) One can better visualize the precision-recall similarity matrix by producing from that matrix a distance matrix, computing a multidimensional scaling on that distance matrix, and plotting the first two most significant dimensions. The result of this is shown in Figure 4.
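A sketch of this interjudge similarity computation (the segmentations below are invented; words are compared as character spans):

```python
# Mean of precision and recall between two judges' segmentations of the same
# sentence. A word counts as agreement when both judges produce exactly the
# same character span.
def spans(words):
    out, i = set(), 0
    for w in words:
        out.add((i, i + len(w)))   # (start, end) span of this word
        i += len(w)
    return out

def similarity(judge_a, judge_b):
    a, b = spans(judge_a), spans(judge_b)
    precision = len(a & b) / len(b)   # b's words that a also has
    recall = len(a & b) / len(a)      # a's words that b found
    return (precision + recall) / 2

# The same four-character sentence segmented by two judges:
print(similarity(['AB', 'CD'], ['AB', 'C', 'D']))  # 0.4166...
```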
In addition to the automatic methods, AG, GR and ST, just discussed, we also added to the plot the values for the current algorithm using only dictionary entries (i.e., no productively derived words, or names). This is to allow for fair comparison between the statistical method and GR, which is also purely dictionary-based. As can be seen, GR and this 'pared-down' statistical method perform quite similarly, though the statistical method is still slightly better. AG clearly performs much less like humans than these methods, whereas the full statistical algorithm, including morphological derivatives and names, performs most closely to humans among the automatic methods. It can also be seen clearly in this plot that two of the Taiwan speakers cluster very closely together, and the third Taiwan speaker is also close in the most significant dimension (the x axis). Two of the Mainlanders also cluster close together but, interestingly, not particularly close to the Taiwan speakers; the third Mainlander is much more similar to the Taiwan speakers.", "cite_spans": [], "ref_spans": [ { "start": 343, "end": 350, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 822, "end": 830, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Evaluation of the Segmentation as a Whole:", "sec_num": null }, { "text": "Personal Name Identification: To evaluate personal name identification, we randomly selected 186 sentences containing 12,000 hanzi from our test corpus, and segmented the text automatically, tagging personal names; note that for names there is always a single unambiguous answer, unlike the more general question of which segmentation is correct. The performance was 80.99% recall and 61.83% precision. Interestingly, Chang et al. reported 80.67% recall and 91.87% precision on an 11,000 word corpus: seemingly, our system finds as many names as their system, but with four times as many false hits. However, we have reason to doubt Chang et al.'s performance claims. Without using the same test corpus, direct comparison is obviously difficult; fortunately Chang et al. included a list of about 60 example sentence fragments that exemplified various categories of performance for their system. The performance of our system on those sentences appeared rather better than theirs. Now, on a set of 11 sentence fragments where they reported 100% recall and precision for name identification, we had 80% precision and 73% recall. However, they listed two sets, one consisting of 28 fragments and the other of 22 fragments in which they had 0% precision and recall. On the first of these our system had 86% precision and 64% recall; on the second it had 19% precision and 33% recall. Note that it is in precision that our overall performance would appear to be poorer than that of Chang et al., yet based on their published examples, our system appears to be doing better precision-wise. Thus we have some confidence that our own performance is at least as good as that of (Chang et al., 1992) (see footnote 5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of the Segmentation as a Whole:", "sec_num": null }, { "text": "In Table 3 we present results from small test corpora for some productive affixes; as with names, the segmentation of morphologically derived words is generally either right or wrong. The first four affixes are so-called resultative affixes: they denote some property of the resultant state of a verb, as in 忘不了 wang4-bu4-liao3 (forget-not-attain) 'cannot forget'. The last affix is the nominal plural. Note that 了 in 忘不了 is normally pronounced as le0, but when part of a resultative it is liao3.
In the table are the (typical) classes of words to which the affix attaches, the number found in the test corpus by the method, the number correct (with a precision measure), and the number missed (with a recall measure).", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Evaluation of Morphological Analysis:", "sec_num": null }, { "text": "In this paper we have shown that good performance can be achieved on Chinese word segmentation by using probabilistic methods incorporated into a uniform stochastic finite-state model. We believe that the approach reported here compares favorably with other reported approaches, though obviously it is impossible to make meaningful comparisons in the absence of uniform test databases for Chinese segmentation. Perhaps the single most important difference between our work and previous work is the form of the evaluation. As we have observed, there is often no single right answer to word segmentation in Chinese. Therefore, claims to the effect that a particular algorithm gets 99% accuracy are meaningless without a clear definition of accuracy. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONCLUSIONS", "sec_num": null } ], "back_matter": [ { "text": "We thank United Informatics for providing us with our corpus of Chinese text, and BDC for the 'Behavior Chinese-English Electronic Dictionary'. We further thank Dr. J.-S. Chang of Tsinghua University for kindly providing us with the name corpora. Finally, we thank two anonymous ACL reviewers for comments. 5We were recently pointed to (Wang et al., 1992), which we had unfortunately missed in our previous literature search. We hope to compare our method with that of Wang et al. in a future version of this paper.", "cite_spans": [ { "start": 130, "end": 149, "text": "(Wang et al., 1992)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "ACKNOWLEDGEMENTS", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "PC-KIMMO: A Two-Level Processor for Morphological Analysis. Occasional Publications in Academic Computing, 16. Summer Institute of Linguistics", "authors": [ { "first": "Evan", "middle": [], "last": "Antworth", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Evan Antworth. 1990. PC-KIMMO: A Two-Level Processor for Morphological Analysis. Occasional Publications in Academic Computing, 16. Summer Institute of Linguistics, Dallas, TX.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A Corpus-Based Approach to Morphological Productivity: Statistical Analysis and Psycholinguistic Interpretation", "authors": [ { "first": "Harald", "middle": [], "last": "Baayen", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harald Baayen. 1989. A Corpus-Based Approach to Morphological Productivity: Statistical Analysis and Psycholinguistic Interpretation. Ph.D.
thesis, Free University, Amsterdam.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Large-corpus-based methods for Chinese personal name recognition", "authors": [ { "first": "Jyun-Shen", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Shun-De", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Xian-Zhong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shu-Jin", "middle": [], "last": "Ke", "suffix": "" } ], "year": 1992, "venue": "Journal of Chinese Information Processing", "volume": "6", "issue": "3", "pages": "7--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jyun-Shen Chang, Shun-De Chen, Ying Zheng, Xian-Zhong Liu, and Shu-Jin Ke. 1992. Large-corpus-based methods for Chinese personal name recognition. Journal of Chinese Information Processing, 6(3):7-15.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Word identification for Mandarin Chinese sentences", "authors": [ { "first": "Keh-Jiann", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Shing-Huan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 1992, "venue": "Proceedings of COLING-92", "volume": "", "issue": "", "pages": "101--107", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keh-Jiann Chen and Shing-Huan Liu. 1992. Word identification for Mandarin Chinese sentences. In Proceedings of COLING-92, pages 101-107. COLING.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A comparison of the enhanced Good-Turing and deleted estimation methods for estimating probabilities of English bigrams", "authors": [ { "first": "Kenneth", "middle": [ "Ward" ], "last": "Church", "suffix": "" }, { "first": "William", "middle": [], "last": "Gale", "suffix": "" } ], "year": 1991, "venue": "Computer Speech and Language", "volume": "5", "issue": "1", "pages": "19--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth Ward Church and William Gale. 1991. A comparison of the enhanced Good-Turing and deleted estimation methods for estimating probabilities of English bigrams. Computer Speech and Language, 5(1):19-54.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The Chinese Language", "authors": [ { "first": "John", "middle": [], "last": "Defrancis", "suffix": "" } ], "year": 1984, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John DeFrancis. 1984. The Chinese Language. University of Hawaii Press, Honolulu.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Automatic word identification in Chinese sentences by the relaxation technique", "authors": [ { "first": "C.-K", "middle": [], "last": "Fan", "suffix": "" }, { "first": "W.-H", "middle": [], "last": "Tsai", "suffix": "" } ], "year": 1988, "venue": "Computer Processing of Chinese and Oriental Languages", "volume": "4", "issue": "", "pages": "33--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "C.-K. Fan and W.-H. Tsai. 1988. Automatic word identification in Chinese sentences by the relaxation technique.
Computer Processing of Chinese and Oriental Languages, 4:33-56.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Two-level morphology with composition", "authors": [ { "first": "Lauri", "middle": [], "last": "Karttunen", "suffix": "" }, { "first": "Ronald", "middle": [], "last": "Kaplan", "suffix": "" }, { "first": "Annie", "middle": [], "last": "Zaenen", "suffix": "" } ], "year": 1992, "venue": "COLING-92", "volume": "", "issue": "", "pages": "141--148", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lauri Karttunen, Ronald Kaplan, and Annie Zaenen. 1992. Two-level morphology with composition. In COLING-92, pages 141-148. COLING.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Two-Level Morphology: a General Computational Model for Word-Form Recognition and Production", "authors": [ { "first": "Kimmo", "middle": [], "last": "Koskenniemi", "suffix": "" } ], "year": 1983, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kimmo Koskenniemi. 1983. Two-Level Morphology: a General Computational Model for Word-Form Recognition and Production. Ph.D. thesis, University of Helsinki, Helsinki.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Weighted rational transductions and their application to human language processing", "authors": [ { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Riley", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" } ], "year": 1994, "venue": "ARPA Workshop on Human Language Technology", "volume": "", "issue": "", "pages": "249--254", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fernando Pereira, Michael Riley, and Richard Sproat. 1994. Weighted rational transductions and their application to human language processing. In ARPA Workshop on Human Language Technology, pages 249-254. Advanced Research Projects Agency, March 8-11.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The Prosodic Domain of Tone Sandhi in Chinese", "authors": [ { "first": "Chilin", "middle": [], "last": "Shih", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chilin Shih. 1986. The Prosodic Domain of Tone Sandhi in Chinese. Ph.D. thesis, UCSD, La Jolla, CA.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A statistical method for finding word boundaries in Chinese text", "authors": [ { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" }, { "first": "Chilin", "middle": [], "last": "Shih", "suffix": "" } ], "year": 1990, "venue": "Computer Processing of Chinese and Oriental Languages", "volume": "4", "issue": "", "pages": "336--351", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Sproat and Chilin Shih. 1990. A statistical method for finding word boundaries in Chinese text. Computer Processing of Chinese and Oriental Languages, 4:336-351.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Morphology and Computation", "authors": [ { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Sproat. 1992. Morphology and Computation.
MIT Press, Cambridge, MA.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A finite-state morphological processor for Spanish", "authors": [ { "first": "Evelyne", "middle": [], "last": "Tzoukermann", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Liberman", "suffix": "" } ], "year": 1990, "venue": "COLING-90", "volume": "3", "issue": "", "pages": "277--286", "other_ids": {}, "num": null, "urls": [], "raw_text": "Evelyne Tzoukermann and Mark Liberman. 1990. A finite-state morphological processor for Spanish. In COLING-90, Volume 3, pages 277-286. COLING.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Automatic processing of Chinese words", "authors": [ { "first": "Yongheng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Haiju", "middle": [], "last": "Su", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Mo", "suffix": "" } ], "year": 1990, "venue": "Journal of Chinese Information Processing", "volume": "4", "issue": "4", "pages": "1--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yongheng Wang, Haiju Su, and Yan Mo. 1990. Automatic processing of Chinese words. Journal of Chinese Information Processing, 4(4):1-11.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Recognizing unregistered names for Mandarin word identification", "authors": [ { "first": "Liang-Jyh", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wei-Chuan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Chao-Huang", "middle": [], "last": "Chang", "suffix": "" } ], "year": 1992, "venue": "Proceedings of COLING-92", "volume": "", "issue": "", "pages": "1239--1243", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang-Jyh Wang, Wei-Chuan Li, and Chao-Huang Chang. 1992. Recognizing unregistered names for Mandarin word identification. In Proceedings of COLING-92, pages 1239-1243. COLING.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Chinese text segmentation for text retrieval: Achievements and problems", "authors": [ { "first": "Zimin", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Gwyneth", "middle": [], "last": "Tseng", "suffix": "" } ], "year": 1993, "venue": "Journal of the American Society for Information Science", "volume": "44", "issue": "9", "pages": "532--542", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zimin Wu and Gwyneth Tseng. 1993. Chinese text segmentation for text retrieval: Achievements and problems. Journal of the American Society for Information Science, 44(9):532-542.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A preliminary study on unknown word problem in Chinese word segmentation", "authors": [ { "first": "Ming-Yu", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Tung-Hui", "middle": [], "last": "Chiang", "suffix": "" }, { "first": "Keh-Yi", "middle": [], "last": "Su", "suffix": "" } ], "year": 1993, "venue": "ROCLING 6", "volume": "", "issue": "", "pages": "119--141", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ming-Yu Lin, Tung-Hui Chiang, and Keh-Yi Su. 1993. A preliminary study on unknown word problem in Chinese word segmentation. In ROCLING 6, pages 119-141. ROCLING.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "1. Morphologically Derived Words: 小將們 xiao3-jiang4-men0 (little general-plural) 'little generals'. 2. Personal Names: 周恩來 zhou1 en1-lai2 'Zhou Enlai'. 3. Transliterated Foreign Names: bu4-lang3-shi4-wei2-ke4 'Brunswick'.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "Figure 1: Partial Chinese lexicon (NC = noun; NP = proper noun), with arc labels such as guo2 : 0.0 (Republic of China). Figure 2: Input lattice (top) and two segmentations (bottom) of the sentence 'How do you say octopus in Japanese?'; a non-optimal analysis is shown with dotted lines in the bottom frame. Figure 3: An example of affixation: the plural affix.
", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "Good-Turing estimate for each radical class, as given by equation (1) in the text.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF3": { "text": "Figure 4: Classical metric multidimensional scaling of the distance matrix, showing the two most significant dimensions. The percentage scores on the axis labels represent the amount of data explained by the dimension in question.", "num": null, "uris": null, "type_str": "figure" }, "TABREF1": { "text": "Table 1: The cost as a novel GIVEN name (second position) for hanzi from various radical classes.", "html": null, "content": "
JADE GOLD GRASS SICKNESS DEATH RAT
14.98 15.5215.7616.2516.3016.42
", "type_str": "table", "num": null }, "TABREF2": { "text": "", "html": null, "content": "
Table 2: Similarity matrix for segmentation judgments
Judges | AG | GR | ST | M1 | M2 | M3 | T1 | T2 | T3
AG | | 0.70 | 0.70 | 0.43 | 0.42 | 0.60 | 0.60 | 0.62 | 0.59
GR | | | 0.99 | 0.62 | 0.64 | 0.79 | 0.82 | 0.81 | 0.72
ST | | | | 0.64 | 0.67 | 0.80 | 0.84 | 0.82 | 0.74
M1 | | | | | 0.77 | 0.69 | 0.71 | 0.69 | 0.70
M2 | | | | | | 0.72 | 0.73 | 0.71 | 0.70
M3 | | | | | | | 0.89 | 0.87 | 0.80
T1 | | | | | | | | 0.88 | 0.82
T2 | | | | | | | | | 0.78
", "type_str": "table", "num": null }, "TABREF3": { "text": "Performance on morphological analysis. Tung-Hui Chiang, and Keh-Yi Su. 1993. A preliminary study on unknown word problem in Chinese word segmentation. In ROCLING 6, pages 119-141. ROCLING.", "html": null, "content": "
Affix | Pron | Base category | N found | N correct (prec.) | N missed (rec.)
不下 | bu2-xia4 | verb | 20 | 20 (100%) | 12 (63%)
不下去 | bu2-xia4-qu4 | verb | 30 | 29 (97%) | 1 (97%)
不了 | bu4-liao3 | verb | 72 | 72 (100%) | 15 (83%)
得了 | de2-liao3 | verb | 36 | 36 (100%) | 11 (77%)
們 | men0 | noun | 141 | 139 (99%) | 6 (96%)
", "type_str": "table", "num": null } } } }