{ "paper_id": "J09-4006", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:54:13.262958Z" }, "title": "Punctuation as Implicit Annotations for Chinese Word Segmentation", "authors": [ { "first": "Zhongguo", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tsinghua University", "location": { "postCode": "100084", "settlement": "Beijing", "country": "China" } }, "email": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tsinghua University", "location": { "postCode": "100084", "settlement": "Beijing", "country": "China" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a Chinese word segmentation model learned from punctuation marks which are perfect word delimiters. The learning is aided by a manually segmented corpus. Our method is considerably more effective than previous methods in unknown word recognition. This is a step toward addressing one of the toughest problems in Chinese word segmentation.", "pdf_parse": { "paper_id": "J09-4006", "_pdf_hash": "", "abstract": [ { "text": "We present a Chinese word segmentation model learned from punctuation marks which are perfect word delimiters. The learning is aided by a manually segmented corpus. Our method is considerably more effective than previous methods in unknown word recognition. This is a step toward addressing one of the toughest problems in Chinese word segmentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Paragraphs are composed of sentences. Hence when a paragraph begins, a sentence must begin, and as a paragraph closes, some sentence must finish. This observation is the basis of the sentence boundary detection method proposed by Riley (1989) . Similarly, sentences consist of words. As a sentence begins or ends there must be word boundaries.", "cite_spans": [ { "start": 230, "end": 242, "text": "Riley (1989)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Inspired by this notion, we invent a method to learn a Chinese word segmentation model with punctuation marks in a large raw corpus. The learning is guided by a segmented corpus (Section 3.2). Section 4 demonstrates that our method improves notably the recognition of out-of-vocabulary (OOV) words with respect to approaches which use only annotated data (Xue 2003; Low, Ng, and Guo 2005) . This work has practical implications in that the OOV problem has long been a big challenge for the research community.", "cite_spans": [ { "start": 355, "end": 365, "text": "(Xue 2003;", "ref_id": "BIBREF12" }, { "start": 366, "end": 388, "text": "Low, Ng, and Guo 2005)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We call the first character of a Chinese word its left boundary L, and the last character its right boundary R. If we regard L and R as random events, then we can derive four events (or tags) from them:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Segmentation as Tagging", "sec_num": "2." }, { "text": "b = L \u2022R, m = L \u2022 R, s = L \u2022 R, e = L \u2022 R", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Segmentation as Tagging", "sec_num": "2." 
}, { "text": "Here R means not R, and thus tag b represents the left but not the right boundary of a word. The other tags can be interpreted similarly. This coding scheme was used by Borthwick (1999) and Xue (2003) , where b, m, s, and e stand for begin, middle, only member, and end of a word, respectively. We reformulate them in terms of L and R to facilitate the presentation of our method.", "cite_spans": [ { "start": 169, "end": 185, "text": "Borthwick (1999)", "ref_id": "BIBREF1" }, { "start": 190, "end": 200, "text": "Xue (2003)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Segmentation as Tagging", "sec_num": "2." }, { "text": "For a sentence S = c 1 c 2 \u2022 \u2022 \u2022 c n and a sequence T = t 1 t 2 \u2022 \u2022 \u2022 t n of b, m, s, e tags, we define", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Segmentation as Tagging", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (T|S) = n i=1 Pr(t i |context i )", "eq_num": "(1)" } ], "section": "Segmentation as Tagging", "sec_num": "2." }, { "text": "where context i is c i with up to four surrounding characters. The legal tag sequence (e.g., tag b followed by s is illegal) with highest P gives the segmentation result of S. Then from Equation (1) it is obvious that knowing the probability distribution of b, m, s, and e given context is adequate for carrying out Chinese word segmentation. The purpose of this article is to show that punctuation can play a major role in estimating this distribution. We use the maximum entropy approach to model the conditional probability Pr(y|x), which has the following parametric form according to Berger, Della Pietra, and Della Pietra (1996) :", "cite_spans": [ { "start": 589, "end": 634, "text": "Berger, Della Pietra, and Della Pietra (1996)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Segmentation as Tagging", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Pr(y|x) = 1 Z(x) exp i \u03bb i f i (x, y) (2) Z(x) = y exp i \u03bb i f i (x, y)", "eq_num": "(3)" } ], "section": "Segmentation as Tagging", "sec_num": "2." }, { "text": "For Chinese word segmentation, the binary valued functions f i are defined through the 10 features shown in Table 2 . Xue (2003) explains how these features map to the feature functions in Equations (2) and (3).", "cite_spans": [ { "start": 118, "end": 128, "text": "Xue (2003)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 108, "end": 115, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Segmentation as Tagging", "sec_num": "2." }, { "text": "Our key idea is to approximate probabilities of b, m, s, and e with those of L and R.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3." }, { "text": "To do this, we assume L and R are conditionally independent given context. Then we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3." 
}, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Pr(b |context) = Pr(L \u2022R|context) (definition of b) = Pr(L |context) \u2022 Pr(R|context) (independence) = Pr(L |context) \u2022 (1 \u2212 Pr(R |context))", "eq_num": "(4)" } ], "section": "Method", "sec_num": "3." }, { "text": "Probabilities for m, s, and e can be derived in the same way and so their derivations are not provided here. As mentioned earlier, these probabilities are sufficient for Chinese word segmentation. Now to model Pr(L |context) and Pr(R |context) with the maximum entropy technique, we must have positive and negative examples of L and R. It is here that punctuation comes into play.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3." }, { "text": "Punctuation offers directly positive examples of L and R. For instance, we can extract four training examples from the sentence in Table 1 , as listed in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 131, "end": 138, "text": "Table 1", "ref_id": null }, { "start": 154, "end": 161, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Positive Examples", "sec_num": "3.1" }, { "text": "Suppose for the moment we know the real probability distribution of tags b, m, s, and e given context. Then a character in context is itself a word and should be tagged s if", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Negative Examples", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Pr(s |context) > max y\u2208{b,m,e} Pr(y|context)", "eq_num": "(5)" } ], "section": "Negative Examples", "sec_num": "3.2" }, { "text": "Each positive example given by punctuation is subjected to the test in (5). If an example labeled L passes this test, then it is also a positive example of R because s = L \u2022 R, and failing this test gives a negative R. In a similar way we obtain negative examples of L. This process is summarized in Figure 1 . A segmented corpus is needed to estimate the probabilities in test (5) with maximum entropy modeling. Here we use the data provided by Microsoft Research in the SIGHAN 2005 Bakeoff. The trained model (the MSR model) was used in earlier work (Low, Ng, and Guo 2005) and is one of the state-of-the-art models for Chinese word segmentation.", "cite_spans": [ { "start": 552, "end": 575, "text": "(Low, Ng, and Guo 2005)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 300, "end": 308, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Negative Examples", "sec_num": "3.2" }, { "text": "With the MSR model, only the last example in Table 2 passes test (5). Hence we get the three negative examples shown in Table 3 . Examples like 1, 3, 6, and 8 are used to estimate Pr(L |context) and those like 2, 4, 5, and 7 are used to estimate Pr(R |context). Appendix A provides more details on this issue.", "cite_spans": [], "ref_spans": [ { "start": 45, "end": 52, "text": "Table 2", "ref_id": null }, { "start": 120, "end": 127, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Negative Examples", "sec_num": "3.2" }, { "text": "Illustration of word boundaries near punctuation in a simple sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1", "sec_num": null }, { "text": "-means the label is unknown with only the help of punctuation. 
Table 2 Positive training examples extracted from the sentence in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 63, "end": 70, "text": "Table 2", "ref_id": null }, { "start": 129, "end": 136, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Table 1", "sec_num": null }, { "text": "sentence 3 I 0 G \u00ce word boundary L - - R L - - R", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1", "sec_num": null }, { "text": "features of context No. label c \u22122 c \u22121 c 0 c 1 c 2 c \u22121 c 1 c \u22122 c \u22121 c \u22121 c 0 c 0 c 1 c 1 c 2 1 L 3 I 3I I 2 R I 0 0 I 0 0 3 L 0 G 0 0 G 4 R G \u00ce G G\u00ce Figure 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1", "sec_num": null }, { "text": "How to get negative examples of L and R. Test (5) is applied to all positive examples given by punctuation. Those failing this test are negative training examples. It is test (5) that invokes the need of a manually segmented corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1", "sec_num": null }, { "text": "Training examples derived from those in Table 2 . We have 1\u21925, 2\u21926, 3\u21927, and 4\u21928.", "cite_spans": [], "ref_spans": [ { "start": 40, "end": 47, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Table 3", "sec_num": null }, { "text": "features of context No. label c \u22122 c \u22121 c 0 c 1 c 2 c \u22121 c 1 c \u22122 c \u22121 c \u22121 c 0 c 0 c 1 c 1 c 2 5 R 3 I 3I I 6 L I 0 0 I 0 0 7 R 0 G 0 0 G 8 L G \u00ce G G\u00ce", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 3", "sec_num": null }, { "text": "In all, we collected 10 billion L-L and R-R examples, each from a comprehensive Web corpus. 1 To cope with so much training data, we use the partitioning method of Yamada and Matsumoto (2003) . An alternative is the Vowpal Wabbit (fast on-line learning) algorithm. 2 Such an algorithm allows incremental training as more raw texts become available.", "cite_spans": [ { "start": 164, "end": 191, "text": "Yamada and Matsumoto (2003)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.3" }, { "text": "We evaluate our method with the data and scoring script provided by the SIGHAN 2005 Bakeoff. The data sets of Academia Sinica and City University of Hong Kong, which are in Traditional Chinese, are not used here because the raw corpus is mainly in Simplified Chinese. Table 4 gives the evaluation results on the data from Microsoft Research (MSR) and Peking University (PKU). It seems our method is over 10% below state of the art in precision on the MSR data. However, we find that multiword expressions are consistently segmented into smaller words. Take the one multiword '-\u00fdz/vb-\u00fdv@' [Institute of Chinese Culture, Chinese Academy of Arts] in the standard answer of the test data as an example.", "cite_spans": [], "ref_spans": [ { "start": 268, "end": 275, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "4." }, { "text": "Our method segments it into six correct words '-\u00fd z/ vb -\u00fd v@' [China, art, academy, China, culture, institute] , all of which are considered wrong by the scoring script. 
This is arguable because the only difference is the granularity of the segmentation.", "cite_spans": [ { "start": 63, "end": 111, "text": "[China, art, academy, China, culture, institute]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4." }, { "text": "We check every error detected by the scoring script on the MSR data, and find that for our method, 15,071 errors are actually correct segmentations of 5,463 multiwords, whereas for the MSR model, the corresponding counts are 858 and 355, respectively. The gold standard contains 106,873 words. These statistics combined with Table 4 allow us to calculate the metrics as in Table 5 , if errors caused by correctly segmented multiwords are not counted.", "cite_spans": [], "ref_spans": [ { "start": 325, "end": 332, "text": "Table 4", "ref_id": null }, { "start": 373, "end": 380, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Influence of Granularity", "sec_num": "4.1" }, { "text": "We see that, when the influence of granularity is considered, our method is slightly better than the MSR model. However, as Table 4 shows, both models degrade on the PKU data due to the difference in segmentation standards. This kind of degradation was also documented by Peng, Feng, and McCallum (2004) .", "cite_spans": [ { "start": 272, "end": 303, "text": "Peng, Feng, and McCallum (2004)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 124, "end": 131, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Influence of Granularity", "sec_num": "4.1" }, { "text": "The SIGHAN data sets contain relatively few OOV words (2.6% for the MSR data). What if the rate is much higher than that? We expect our model to be less vulnerable to OOV problems because it is trained with billions of examples from a large corpus. To verify this, we generate four data sets from each of these lists of names: The generation method is: Randomly permute each list and then put the result into lines, with each line having about 30 names, and repeat this process until we get 1 million tokens for each data set. We use the MSR model and our method to segment these data sets. The results are in Table 6 . It is clear that our method performs better on these data sets. This provides evidence that it could handle situations where many OOV words turn up. Table 6 also indicates that, especially for the MSR model, recognition of Chinese personal names is easier than location names. This is reasonable because the former has more regularity than the latter. Besides, although there are no OOV words in data sets (a) and (c), many words occur very sparsely in the MSR data. Hence the MSR model doesn't do well even on these two data sets.", "cite_spans": [], "ref_spans": [ { "start": 610, "end": 617, "text": "Table 6", "ref_id": null }, { "start": 769, "end": 776, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Named Entity List Recovery", "sec_num": "4.2" }, { "text": "To further test our model's ability to recognize unknown words, we make 27,470 sentences with the pattern 'X / Y \u00ba X \" Y ' (X is a resident of Y, and X loves Y), where X and Y are the personal and location names in Section 4.2. The results on this data set are in Table 7 . Again our method outperforms the MSR model by a large margin, proving once more that it is stronger in unknown word recognition. 
For both methods, the metrics in Table 7 are better than those in Table 6 , reflecting the fact that unknown word recognition here is easier than the named entity list recovery task.", "cite_spans": [], "ref_spans": [ { "start": 264, "end": 271, "text": "Table 7", "ref_id": "TABREF1" }, { "start": 436, "end": 443, "text": "Table 7", "ref_id": "TABREF1" }, { "start": 469, "end": 476, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Unknown Words Recognition", "sec_num": "4.3" }, { "text": "Evaluation shows that when there are many new words, the improvement of our method is obvious. In addition, a model is of limited use if it fits the SIGHAN data well, but can't maintain that accuracy elsewhere. Our model has a wider coverage through mining the Web. It tends to segment long multiword expressions into their component words. This is not a disadvantage as long as the result is consistent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "4.4" }, { "text": "Punctuation gives naturally occurring unambiguous word boundaries. Gao et al. (2005) described how to remove overlapping ambiguities in an annotated corpus to train a model for resolving these ambiguities. A raw corpus doesn't play a role in that method, and the model involves no punctuation marks.", "cite_spans": [ { "start": 67, "end": 84, "text": "Gao et al. (2005)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5." }, { "text": "Chinese word segmentation based on position tagging was initiated by Xue (2003) . This method and its subsequent developments have achieved state-of-the-art performance in word segmentation (Peng, Feng, and McCallum 2004; Low, Ng, and Guo 2005; Zhao, Huang, and Li 2006 ). Yet the system degrades when there are lots of previously unknown words, whereas our method performs particular well in this case thanks to the use of a huge Web corpus.", "cite_spans": [ { "start": 69, "end": 79, "text": "Xue (2003)", "ref_id": "BIBREF12" }, { "start": 190, "end": 221, "text": "(Peng, Feng, and McCallum 2004;", "ref_id": "BIBREF7" }, { "start": 222, "end": 244, "text": "Low, Ng, and Guo 2005;", "ref_id": "BIBREF6" }, { "start": 245, "end": 269, "text": "Zhao, Huang, and Li 2006", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5." }, { "text": "In the past decade, much work has been done in unsupervised word segmentation (Sun, Shen, and Tsou 1998; Peng and Schuurmans 2001; Feng et al. 2004; Goldwater, Griffiths, and Johnson 2006; Jin and Tanaka-Ishii 2006) . These methods could also take advantage of the ever-growing amount of online text to model Chinese word segmentation, but usually are less accurate and more complicated than ours.", "cite_spans": [ { "start": 78, "end": 104, "text": "(Sun, Shen, and Tsou 1998;", "ref_id": "BIBREF11" }, { "start": 105, "end": 130, "text": "Peng and Schuurmans 2001;", "ref_id": "BIBREF8" }, { "start": 131, "end": 148, "text": "Feng et al. 2004;", "ref_id": "BIBREF2" }, { "start": 149, "end": 188, "text": "Goldwater, Griffiths, and Johnson 2006;", "ref_id": "BIBREF4" }, { "start": 189, "end": 215, "text": "Jin and Tanaka-Ishii 2006)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5." }, { "text": "With a virtually unlimited supply of raw corpus data, punctuation marks give us ample training examples and thus can be quite useful as implicit annotations for Chinese word segmentation. 
We also note that shallow parsing (Sha and Pereira 2003) is a close analogy to word segmentation. Hence our method can potentially be applied to this task as well.", "cite_spans": [ { "start": 222, "end": 244, "text": "(Sha and Pereira 2003)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "We give readers a feel for the input data used to train our probability models. First, to estimate Pr(L |context), the input to the learning algorithm for the maximum entropy models looks like this:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix A: Input to the Training Algorithm", "sec_num": null }, { "text": "+L C0=3 C1=I C2= C0C1=3I C1C2=I +L C0=0 C1= C2=G C0C1=0 C1C2=G -L C-2=I C-1= C0= C-2C-1=I C-1C0= +L C-2= C-1=G C0=\u00ce C-2C-1=G C-1C0=G\u00ce", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix A: Input to the Training Algorithm", "sec_num": null }, { "text": "Whereas to estimate Pr(R | context), the input data are something like the following +R C-2=I C-1= C0= C-2C-1=I", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix A: Input to the Training Algorithm", "sec_num": null }, { "text": "C-1C0= +R C-2= C-1=G C0=\u00ce C-2C-1=G C-1C0=G\u00ce -R C0=3 C1=I C2= C0C1=3I C1C2=I -R C0=0 C1= C2=G C0C1=0 C1C2=G", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix A: Input to the Training Algorithm", "sec_num": null }, { "text": "To save space, not all features in Table 2 are included here. From this illustration, interested readers can get a general idea of our input to the learning algorithm in Section 3.3.", "cite_spans": [], "ref_spans": [ { "start": 35, "end": 42, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Appendix A: Input to the Training Algorithm", "sec_num": null }, { "text": "Freely available for research purposes. See www.sogou.com/labs. 2 http://hunch.net/\u223cvw/. We thank one of the anonymous reviewers for telling us about this implementation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work is supported by the National Science Foundation of China under Grant No. 60621062 and 60873174, and the National 863 Project under Grant No. 2007AA01Z148. We thank our reviewers sincerely for many helpful comments and suggestions which greatly improved this article. Thanks also go to sogou.com for sharing their Web corpora and entity names. The maximum entropy modeling toolkit used here is contributed by Zhang Le of the University of Edinburgh.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A maximum entropy approach to natural language processing", "authors": [ { "first": "Adam", "middle": [ "L" ], "last": "Berger", "suffix": "" }, { "first": "J", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Stephen", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Della Pietra", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "1", "pages": "39--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Berger, Adam L., Vincent J. Della Pietra, and Stephen A. Della Pietra. 1996. A maximum entropy approach to natural language processing. 
Computational Linguistics, 22(1):39-71.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A Maximum Entropy Approach to Named Entity Recognition", "authors": [ { "first": "Andrew", "middle": [], "last": "Borthwick", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Borthwick, Andrew. 1999. A Maximum Entropy Approach to Named Entity Recognition. Ph.D. thesis, New York University.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Accessor variety criteria for Chinese word extraction", "authors": [ { "first": "Haodi", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaotie", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Weimin", "middle": [], "last": "Zheng", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics", "volume": "30", "issue": "1", "pages": "75--93", "other_ids": {}, "num": null, "urls": [], "raw_text": "Feng, Haodi, Kang Chen, Xiaotie Deng, and Weimin Zheng. 2004. Accessor variety criteria for Chinese word extraction. Computational Linguistics, 30(1):75-93.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Chinese word segmentation and named entity recognition: A pragmatic approach", "authors": [ { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Mu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Andi", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Chang-Ning", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2005, "venue": "Computational Linguistics", "volume": "31", "issue": "4", "pages": "531--574", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gao, Jianfeng, Mu Li, Andi Wu, and Chang-Ning Huang. 2005. Chinese word segmentation and named entity recognition: A pragmatic approach. Computational Linguistics, 31(4):531-574.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Contextual dependencies in unsupervised word segmentation", "authors": [ { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" }, { "first": "Thomas", "middle": [ "L" ], "last": "Griffiths", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "673--680", "other_ids": {}, "num": null, "urls": [], "raw_text": "Goldwater, Sharon, Thomas L. Griffiths, and Mark Johnson. 2006. Contextual dependencies in unsupervised word segmentation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 673-680, Sydney.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Unsupervised segmentation of Chinese text by use of branching entropy", "authors": [ { "first": "Zhihui", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Kumiko", "middle": [], "last": "Tanaka-Ishii", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the COLING/ACL on Main Conference Poster Sessions", "volume": "", "issue": "", "pages": "428--435", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jin, Zhihui and Kumiko Tanaka-Ishii. 2006. Unsupervised segmentation of Chinese text by use of branching entropy. 
In Proceedings of the COLING/ACL on Main Conference Poster Sessions, pages 428-435, Morristown, NJ.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A maximum entropy approach to Chinese word segmentation", "authors": [ { "first": "Jim", "middle": [], "last": "Low", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Kiat", "suffix": "" }, { "first": "Wenyuan", "middle": [], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "161--164", "other_ids": {}, "num": null, "urls": [], "raw_text": "Low, Jim Kiat, Hwee Tou Ng, and Wenyuan Guo. 2005. A maximum entropy approach to Chinese word segmentation. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, pages 161-164, Jeju Island.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Chinese segmentation and new word detection using conditional random fields", "authors": [ { "first": "Fuchun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Fangfang", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2004, "venue": "COLING '04: Proceedings of the 20th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "562--569", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peng, Fuchun, Fangfang Feng, and Andrew McCallum. 2004. Chinese segmentation and new word detection using conditional random fields. In COLING '04: Proceedings of the 20th International Conference on Computational Linguistics, pages 562-569, Morristown, NJ.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Self-supervised Chinese word segmentation", "authors": [ { "first": "Fuchun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Dale", "middle": [], "last": "Schuurmans", "suffix": "" } ], "year": 2001, "venue": "Lecture Notes in Computer Science", "volume": "2189", "issue": "", "pages": "238--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peng, Fuchun and Dale Schuurmans. 2001. Self-supervised Chinese word segmentation. Lecture Notes in Computer Science, 2189:238-249.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Some applications of tree-based modelling to speech and language", "authors": [ { "first": "Michael", "middle": [ "D" ], "last": "Riley", "suffix": "" } ], "year": 1989, "venue": "HLT '89: Proceedings of the Workshop on Speech and Natural Language", "volume": "", "issue": "", "pages": "339--352", "other_ids": {}, "num": null, "urls": [], "raw_text": "Riley, Michael D. 1989. Some applications of tree-based modelling to speech and language. In HLT '89: Proceedings of the Workshop on Speech and Natural Language, pages 339-352, Morristown, NJ.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Shallow parsing with conditional random fields", "authors": [ { "first": "Fei", "middle": [], "last": "Sha", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2003, "venue": "NAACL '03: Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology", "volume": "", "issue": "", "pages": "134--141", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sha, Fei and Fernando Pereira. 2003. Shallow parsing with conditional random fields. 
In NAACL '03: Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, pages 134-141, Morristown, NJ.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Chinese word segmentation without using lexicon and hand-crafted training data", "authors": [ { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Dayang", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Benjamin", "middle": [ "K" ], "last": "Tsou", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 17th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1265--1271", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sun, Maosong, Dayang Shen, and Benjamin K. Tsou. 1998. Chinese word segmentation without using lexicon and hand-crafted training data. In Proceedings of the 17th International Conference on Computational Linguistics, pages 1265-1271, Morristown, NJ.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Chinese word segmentation as character tagging", "authors": [ { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics and Chinese Language Processing", "volume": "8", "issue": "", "pages": "29--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xue, Nianwen. 2003. Chinese word segmentation as character tagging. Computational Linguistics and Chinese Language Processing, 8(1):29-48.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Statistical dependency analysis with support vector machines", "authors": [ { "first": "Hiroyasu", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 8th International Workshop on Parsing Technologies (IWPT2003)", "volume": "", "issue": "", "pages": "195--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yamada, Hiroyasu and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proceedings of the 8th International Workshop on Parsing Technologies (IWPT2003), pages 195-206, Nancy.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "An improved Chinese word segmentation system with conditional random field", "authors": [ { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Chang-Ning", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Mu", "middle": [], "last": "Li", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "162--165", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhao, Hai, Chang-Ning Huang, and Mu Li. 2006. An improved Chinese word segmentation system with conditional random field. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, pages 162-165, Sydney.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "cities and counties of China not seen in the MSR data (c) 7,470 Chinese personal names seen in the MSR data (d) 20,000 Chinese personal names not seen in the MSR data", "uris": null }, "TABREF0": { "text": "Evaluation results on SIGHAN Bakeoff 2005 data sets. Results on tasks of named entity list recovery.", "content": "
Table 4. Evaluation results on SIGHAN Bakeoff 2005 data sets.
                  our method         the MSR model
data set        P     R     F       P     R     F
MSR           84.8  91.3  87.9    96.0  95.6  95.8
PKU           84.2  86.1  85.1    85.2  82.3  83.7

Table 5. Amended evaluation results for MSR data.
                  P     R     F
our method      98.0  96.7  97.3
the MSR model   96.7  96.0  96.3
", "type_str": "table", "num": null, "html": null }, "TABREF1": { "text": "Results of unknown word recognition in 24,470 sentences.", "content": "
                  P     R     F
our method      96.2  97.9  97.1
the MSR model   88.3  84.5  86.3
", "type_str": "table", "num": null, "html": null } } } }