{ "paper_id": "Y13-1010", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:32:34.649093Z" }, "title": "A Study of the Effectiveness of Suffixes for Chinese Word Segmentation", "authors": [ { "first": "Xiaoqing", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": { "settlement": "Beijing", "country": "China" } }, "email": "xqli@nlpr.ia.ac.cn" }, { "first": "Chengqing", "middle": [], "last": "Zong", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": { "settlement": "Beijing", "country": "China" } }, "email": "cqzong@nlpr.ia.ac.cn" }, { "first": "Keh-Yih", "middle": [], "last": "Su", "suffix": "", "affiliation": { "laboratory": "", "institution": "Behavior Design Corporation", "location": { "country": "Taiwan" } }, "email": "kysu@bdc.com.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We investigate whether suffix related features can significantly improve the performance of character-based approaches for Chinese word segmentation (CWS). Since suffixes are quite productive in forming new words, and OOV is the main error source for CWS, many researchers expect that suffix information can further improve the performance. With this belief, we tried several suffix related features in both generative and discriminative approaches. However, our experiment results have shown that significant improvement can hardly be achieved by incorporating suffix related features into those widely adopted surface features, which is against the commonly believed supposition. Error analysis reveals that the main problem behind this surprising finding is the conflict between the degree of reliability and the coverage rate of suffix related features.", "pdf_parse": { "paper_id": "Y13-1010", "_pdf_hash": "", "abstract": [ { "text": "We investigate whether suffix related features can significantly improve the performance of character-based approaches for Chinese word segmentation (CWS). Since suffixes are quite productive in forming new words, and OOV is the main error source for CWS, many researchers expect that suffix information can further improve the performance. With this belief, we tried several suffix related features in both generative and discriminative approaches. However, our experiment results have shown that significant improvement can hardly be achieved by incorporating suffix related features into those widely adopted surface features, which is against the commonly believed supposition. Error analysis reveals that the main problem behind this surprising finding is the conflict between the degree of reliability and the coverage rate of suffix related features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "As words are the basic units for text analysis, Chinese word segmentation (CWS) is critical for many Chinese NLP tasks such as parsing and machine translation. Although steady improvements have been observed in previous CWS researches (Xue, 2003; Zhang and Clark, 2007; Sun et al., 2012) , their performances are only acceptable for invocabulary (IV) words and are still far from satisfactory for those out-of-vocabulary (OOV) words. 
According to the Zipf's law (Zipf, 1949) , which states that the frequency of a word is inversely proportional to its rank in the frequency table for a given corpus, it is unlikely to cover all the words of a language in the training corpus. OOV words are thus inevitable in real applications.", "cite_spans": [ { "start": 235, "end": 246, "text": "(Xue, 2003;", "ref_id": "BIBREF15" }, { "start": 247, "end": 269, "text": "Zhang and Clark, 2007;", "ref_id": "BIBREF24" }, { "start": 270, "end": 287, "text": "Sun et al., 2012)", "ref_id": "BIBREF22" }, { "start": 462, "end": 474, "text": "(Zipf, 1949)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To further improve the performance for OOV words, various approaches have been proposed. Most of them aim to add additional resources, such as external dictionaries (Low et al., 2005; or unlabeled data (Zhao and Kit, 2008; Sun and Xu, 2011) . However, additional resources are not always available and their coverage for OOV words is still limited. Researchers, especially linguists (Dong et al., 2010) , thus seek to further improve the performance of OOV words by characterizing the word formation process (Li, 2011) .", "cite_spans": [ { "start": 165, "end": 183, "text": "(Low et al., 2005;", "ref_id": "BIBREF11" }, { "start": 202, "end": 222, "text": "(Zhao and Kit, 2008;", "ref_id": "BIBREF6" }, { "start": 223, "end": 240, "text": "Sun and Xu, 2011)", "ref_id": "BIBREF20" }, { "start": 383, "end": 402, "text": "(Dong et al., 2010)", "ref_id": "BIBREF25" }, { "start": 508, "end": 518, "text": "(Li, 2011)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "According to the internal structures of OOV words, they can be divided into three categories:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) character-type related OOV, which consists of Arabic digits and foreign characters, and usually denotes time, date, number, English word, URL, etc. This kind of OOV can be well handled by rules or character-type features if the character-type information can be utilized (Low et al., 2005; ; (2) morpheme related OOV, which mainly refers to a compound word with prefix/suffix or reduplication (e.g. \"\u9ad8 \u9ad8\u5174\u5174\" (happily)). According to , the errors related with suffix are the major type (more than 80%) within this category; (3) others (such as named entities, idioms, terminology, abbreviations, new words, etc.), which are usually irregular in structure and are difficult to handle without additional resources. Since extra knowledge about character-type and additional resources are forbidden in the SIGHAN closed test (Emerson, 2005) , which is widely adopted for performance comparison, we will focus on the second category to investigate how to use suffix related features in this paper.", "cite_spans": [ { "start": 275, "end": 293, "text": "(Low et al., 2005;", "ref_id": "BIBREF11" }, { "start": 804, "end": 838, "text": "SIGHAN closed test (Emerson, 2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Generally speaking, Chinese suffixes are very productive and many words can be formed in this way. For example, the word \"\u65c5\u884c\u8005\" (traveler) is composed of a stem (\"\u65c5\u884c\", travel) and a suffix (\" \u8005\", -er). 
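In the character-based approaches examined below, such words are handled by assigning each character a position tag ('B', 'M', 'E' or 'S': the beginning, middle or end of a multi-character word, or a single-character word) (Xue, 2003). As a minimal illustration of this tagging scheme (the helper function and its name are ours, for exposition only):

def words_to_tags(words):
    # Map a segmented sentence to per-character position tags under
    # the B/M/E/S scheme of Xue (2003).
    tags = []
    for w in words:
        if len(w) == 1:
            tags.append('S')
        else:
            tags.extend(['B'] + ['M'] * (len(w) - 2) + ['E'])
    return tags

print(words_to_tags(['旅行者']))           # ['B', 'M', 'E']
print(words_to_tags(['在', '湖', '中间']))  # ['S', 'S', 'B', 'E']

Note that under this scheme the suffix character '者' simply receives the ordinary tag 'E'; whether such an 'E' can be predicted reliably for unseen words is exactly the question studied in this paper. 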
Although the character and character co-occurrence features (adopted in most current approaches) are able to partially characterize the internal structure of words (Sun, 2010) , and some OOV words are indeed correctly handled when compared to pure wordbased approaches (Zhang et al., 2003; Gao et al., 2005) , suffix related errors still remain as an important type of errors. Therefore, it is natural to expect that suffixes can be explicitly utilized to provide further help.", "cite_spans": [ { "start": 365, "end": 376, "text": "(Sun, 2010)", "ref_id": "BIBREF19" }, { "start": 470, "end": 490, "text": "(Zhang et al., 2003;", "ref_id": "BIBREF8" }, { "start": 491, "end": 508, "text": "Gao et al., 2005)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Furthermore, prefix/suffix related features were claimed to be useful for CWS in some previous works (Tseng et al., 2005; Zhang et al., 2006) . However, in their works, the prefix/suffix features are just a part of adopted features. The performances before and after adopting prefix/suffix features are never directly compared. So we could not know how much improvement actually results from those prefix/suffix related features. Besides, those features have only been adopted under discriminative approaches (Xue, 2003; Peng, 2004) . We would also like to know whether the suffix related features would be effective for the generative approach (Wang et al., 2009; Wang et al., 2010) .", "cite_spans": [ { "start": 101, "end": 121, "text": "(Tseng et al., 2005;", "ref_id": "BIBREF9" }, { "start": 122, "end": 141, "text": "Zhang et al., 2006)", "ref_id": "BIBREF16" }, { "start": 509, "end": 520, "text": "(Xue, 2003;", "ref_id": "BIBREF15" }, { "start": 521, "end": 532, "text": "Peng, 2004)", "ref_id": "BIBREF3" }, { "start": 645, "end": 664, "text": "(Wang et al., 2009;", "ref_id": "BIBREF12" }, { "start": 665, "end": 683, "text": "Wang et al., 2010)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In comparison with the discriminative model, the generative model has the drawback that it cannot utilize trailing context in selecting the position tag (i.e. Beginning, Middle, End and Single) (Xue, 2003) of the current character. Therefore, incorporating suffix information of the next character is supposed to be a promising supplement for the generative approach. So the real benefit of using suffixes is checked for the generative model first.", "cite_spans": [ { "start": 194, "end": 205, "text": "(Xue, 2003)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To make use of the suffix information more completely, a novel quantitative tagging bias feature is first proposed to replace the contextindependent suffix list feature adopted in the literature. Compared with the original suffix-list feature, the proposed tagging bias feature takes the context into consideration and results less modeling error. A new generative model is then derived to incorporate the suffix related feature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, experimental results have shown that the performance cannot be considerably improved by adding suffix information, as what we expected. 
Furthermore, no improvement can be achieved with the suffix list when we reimplemented the discriminative approach of (Tseng et al., 2005; Zhang et al., 2006) . This negative conclusion casts significant doubt on the above commonly believed supposition that suffix information can further improve the performance of CWS via incorporating it into surface features. The reasons for this surprising finding are thus studied and presented in this paper.", "cite_spans": [ { "start": 263, "end": 283, "text": "(Tseng et al., 2005;", "ref_id": "BIBREF9" }, { "start": 284, "end": 303, "text": "Zhang et al., 2006)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In linguistic definition 1 , a suffix is a morpheme that can be placed after a stem to form a new word. Also, a suffix cannot stand alone as a word. According to this definition, only a few characters can be regarded as suffixes, such as '\u8005' (-er), '\u5316' (-ize), '\u7387' (rate), etc. However, the character '\u6e56' (lake) in the words \"\u6606\u660e\u6e56\" (Kunming Lake) and \"\u672a\u540d\u6e56\" (Weiming Lake) can help recognize those OOV words, although it can also appear as an independent word in the phrase \"\u5728/\u6e56/\u4e2d\u95f4\" (in the middle of the lake). We thus loosen the constraint that a suffix cannot stand alone as a word in this paper to cover more such characters. That is, if a character tends to locate at the end of various words, it is regarded as if it plays the role of a suffix in those words. In this way, many named entities (such as the two location names mentioned above) will be also classified as suffix related words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting suffix information", "sec_num": "2" }, { "text": "Nonetheless, we cannot distinguish suffixes from those non-suffixes by just checking each character because whether a character is a suffix highly depends on the context. For example, the character '\u5316' is a suffix in the word \"\u521d\u59cb\u5316\" (initial-ize). However, it becomes a prefix when it comes to the word \"\u5316\u7ea4\" (chemical-fibre). Also, whether a character is a suffix varies with different annotation standards adopted by various corpora. For example, the character ' \u5382 ' (factory) is a suffix in words such as \"\u670d\u88c5\u5382\" (clothing-factory) in the PKU corpus provided by the SIGHAN 2005 Bakeoff (Emerson, 2005 . Nevertheless, it is regarded as a single-character word in similar occasions in the MSR corpus. For these two reasons, suffixes cannot be directly recognized by simply locating some prespecified characters prepared by the linguist.", "cite_spans": [ { "start": 565, "end": 576, "text": "SIGHAN 2005", "ref_id": null }, { "start": 577, "end": 599, "text": "Bakeoff (Emerson, 2005", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Difficulties in recognizing suffixes", "sec_num": "2.1" }, { "text": "Due to the difficulty in recognizing real suffixes, previous works (Tseng et al., 2005; Zhang et al., 2006 ) extract a suffix-like list beforehand from each corpus in context-free manner. Specifically, Tseng et al. (2005) considers characters that frequently appear at the end of those rare words as potential suffixes. In their approach, words that the numbers of occurrences in the training set are less than a given threshold are selected first, and then their ending characters are sorted according to their occurrences in those rare words. 
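The whole extraction procedure can be sketched as follows (an illustrative sketch: whether "occurrences" counts tokens or word types is our reading, and the rare-word threshold of 7 and list size of 100 are the values fixed later in Section 4.1):

from collections import Counter

def rare_word_suffix_list(word_counts, rare_threshold=7, list_size=100):
    # Collect the ending characters of rare multi-character words and
    # keep the most frequent ones (in the spirit of Tseng et al., 2005).
    ending = Counter()
    for word, count in word_counts.items():
        if count < rare_threshold and len(word) > 1:
            ending[word[-1]] += count   # token-level reading of 'occurrences'
    return [ch for ch, _ in ending.most_common(list_size)]
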
Afterwards, the suffix-like list is formed with those high-frequency characters. Zhang et al. (2006) constructs the list in a similar way, but without pre-extracting rare words.", "cite_spans": [ { "start": 67, "end": 87, "text": "(Tseng et al., 2005;", "ref_id": "BIBREF9" }, { "start": 88, "end": 106, "text": "Zhang et al., 2006", "ref_id": "BIBREF16" }, { "start": 202, "end": 221, "text": "Tseng et al. (2005)", "ref_id": "BIBREF9" }, { "start": 626, "end": 645, "text": "Zhang et al. (2006)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Extracting a suffix-like list", "sec_num": "2.2" }, { "text": "In order to reduce the number of suffix errors resulted from the above primitive extraction procedure, we propose to obtain and use the suffix-list in a more prudent manner as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting a suffix-like list", "sec_num": "2.2" }, { "text": "\uf0b7 Having considered that suffix is supposed to be combined with different stems to form new words, we propose to use the suffix productivity as the criteria for extracting suffix list, which is defined as the size of the set", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting a suffix-like list", "sec_num": "2.2" }, { "text": "{ | ,[ ] } w w IV w sc IV \uf0ce \uf02b \uf0ce", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting a suffix-like list", "sec_num": "2.2" }, { "text": ", where w is a word in the training set, sc is a specific character to be decided if it should be extracted as a suffix character, and IV denotes in-vocabulary words. The cardinality of this set counts how many different IV words can be formed by concatenating the given suffix character to an IV word. Therefore, larger suffix productivity means that the given suffix character can be combined with more different stems to form new words, and is thus more likely to be a suffix. \uf0b7 According to our investigation, most OOV with suffix are composed of a multi-character IV and a suffix, such as \"\u65c5\u884c\u8005\" (i.e., \"\u65c5 \u884c\" + \"\u8005\"). So we set the suffix status for a given character to be true only when that character is in the suffix list and its previous character is the end of a multi-character IV word. In this way we can avoid many overgeneralized errors (thus improve the precision for OOV with suffixes) and it only has little harm for the recall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting a suffix-like list", "sec_num": "2.2" }, { "text": "There are two drawbacks to adopt the above suffix-like list: (1) The associated context that is required to decide whether a character should be regarded as a suffix is either completely not taken into account (in previous approaches) or treated too coarsely (in the above proposed approach). (2) The probability value (a finer information) that a given character acts as a suffix is not utilized; only a hard-decision flag (in or outside the list) is assigned to each character.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adopting tagging bias information", "sec_num": "2.3" }, { "text": "To overcome these two drawbacks, we introduce the context-dependent tagging bias level, which reflects the likelihood that the next character tends to be the beginning of a new word (or be a single-character word) based on the local context. 
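As a rough sketch, the context-free variant of this quantity (the qf feature used later) could be estimated and quantized as follows (the number of quantization levels shown is an assumption for illustration only):

from collections import Counter

def context_free_bias(tagged_corpus, n_levels=4):
    # For each character, estimate how often it begins a word (tag 'B'
    # or 'S') and quantize that estimate into discrete bias levels.
    # n_levels is illustrative; the paper does not fix this number here.
    begins, totals = Counter(), Counter()
    for sentence in tagged_corpus:             # [(char, tag), ...]
        for ch, tag in sentence:
            totals[ch] += 1
            if tag in ('B', 'S'):
                begins[ch] += 1
    return {ch: min(int(begins[ch] / totals[ch] * n_levels), n_levels - 1)
            for ch in totals}
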
This is motivated by the following observation: if the trailing character is biased towards 'S' or 'B', then the current character will prefer to be tagged as 'S' or 'E'; on the contrary, if the trailing character is biased towards 'M' or 'E', then the current character will prefer to be tagged as 'B' or 'M'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adopting tagging bias information", "sec_num": "2.3" }, { "text": "Having considered that the surrounding context might be unseen for the testing instances, we introduce four different kinds of tagging bias probabilities, which are trained in parallel for each character in the training set: qs, conditioned on both the left and the right neighboring characters; ql, conditioned on the left neighboring character only; qr, conditioned on the right neighboring character only; and qf, which is context-free. Quantization into tagging bias levels is the same for all four kinds.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adopting tagging bias information", "sec_num": "2.3" }, { "text": "3 Incorporating Suffix Information", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Suffix Information", "sec_num": "3" }, { "text": "Wang et al. (2009) proposed a character-based generative model for CWS as follows: \\hat{t}_1^n \\equiv \\arg\\max_{t_1^n} \\prod_{i=1}^{n} P([c,t]_i \\mid [c,t]_{i-2}^{i-1}) \\quad (1), where [c,t]_1^n is the associated character-tag-pair sequence for the given character sequence c_1^n. To overcome the drawback that it cannot utilize trailing context, we propose to incorporate the suffix information of the next character (denoted by q_i), which can be either the suffix-list binary indicator or the above tagging bias level, into the model, and reformulate it as follows: \\hat{t}_1^n = \\arg\\max_{t_1^n} P(t_1^n \\mid c_1^n, q_1^n) = \\arg\\max_{t_1^n} P(t_1^n, c_1^n, q_1^n). P(t_1^n, c_1^n, q_1^n) is then approximated by \\prod_{i=1}^{n} P([t,c,q]_i \\mid [t,c,q]_{i-2}^{i-1}), and its associated factor is further derived as below: P([t,c,q]_i \\mid [t,c,q]_{i-2}^{i-1}) = P(q_i \\mid [t,c]_i, [t,c,q]_{i-2}^{i-1}) \\times P([t,c]_i \\mid [t,c,q]_{i-2}^{i-1}) \\approx P(m_i \\mid t_i, c_{i-1}^{i}) \\times P([c,t]_i \\mid [c,t]_{i-2}^{i-1}) \\approx P_{[tq]_i}(m_i \\mid t_i, c_{i-1}^{i}) \\times P([c,t]_i \\mid [c,t]_{i-2}^{i-1}) \\quad (2), where m_i indicates whether t_i matches the suffix information of c_{i+1} or not, and [tq]_i specifies the corresponding type of probability factor to be adopted (i.e., the factor P_{[tq]_i}(m_i \\mid t_i, c_{i-1}^{i}) will be adopted). It is reasonable to expect that the two factors in Equation 2 should be weighted differently in different cases. Besides, the second character-tag trigram factor is expected to be more reliable when c_{i-1}^{i} is seen in the training corpus. Therefore, these two factors are combined via log-linear interpolation.", "cite_spans": [ { "start": 0, "end": 18, "text": "Wang et al. (2009)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "For the generative model", "sec_num": "3.1" }, { "text": "For the suffix-list feature, the scoring function will be: Score(t_i) = \\alpha_k \\times \\log P([c,t]_i \\mid [c,t]_{i-2}^{i-1}) + (1 - \\alpha_k) \\times \\log P(m_i \\mid t_i, c_{i-1}^{i}); \\; 1 \\le k \\le 2 \\quad (3), where \\alpha_k is selected according to whether c_{i-1}^{i} is seen. The values of \\alpha_k will be automatically decided on the development set via the MERT (Och, 2003) procedure.", "cite_spans": [ { "start": 141, "end": 157, "text": "MERT (Och, 2003)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "For the generative model", "sec_num": "3.1" }, { "text": "For the tagging bias feature, the scoring function will be: Score(t_i) = \\alpha_{tq,k} \\times \\log P([c,t]_i \\mid [c,t]_{i-2}^{i-1}) + (1 - \\alpha_{tq,k}) \\times \\log P_{tq}(m_i \\mid t_i, c_{i-1}^{i}); \\; 1 \\le tq \\le 4, \\, 1 \\le k \\le 2 \\quad (4), where \\alpha_{tq,k} is selected according to which tagging bias probability factor is used and whether c_{i-1}^{i} is seen. Therefore, we will have eight different \\alpha_{tq,k} in this case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For the generative model", "sec_num": "3.1" }, { "text": "We adopt the following feature templates under the maximum entropy approach that are widely adopted in previous works (Xue, 2003; Low et al., 2005): (a) C_n (n = -2, -1, 0, 1, 2); (b) C_n C_{n+1} (n = -2, -1, 0, 1); (c) C_{-1} C_1, where C represents a character and n denotes the relative position to the current character of concern.", "cite_spans": [ { "start": 118, "end": 129, "text": "(Xue, 2003;", "ref_id": "BIBREF15" }, { "start": 130, "end": 147, "text": "Low et al., 2005)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "For the discriminative model", "sec_num": "3.2" }, { "text": "To further utilize the suffix information, (Tseng et al., 2005) proposed a suffix-like list based feature: (d) s(C_0), a binary feature indicating whether the current character of concern is in the list. 
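For concreteness, templates (a)-(d) can be instantiated as in the following sketch (illustrative only; the feature-string encoding and the padding of out-of-range positions are our assumptions):

def extract_features(chars, i, suffix_list):
    # Instantiate templates (a)-(c) and the suffix-list feature (d)
    # for the character at position i.
    c = lambda n: chars[i + n] if 0 <= i + n < len(chars) else '<PAD>'
    feats = ['a%d=%s' % (n, c(n)) for n in (-2, -1, 0, 1, 2)]            # (a) C_n
    feats += ['b%d=%s%s' % (n, c(n), c(n + 1)) for n in (-2, -1, 0, 1)]  # (b) C_n C_{n+1}
    feats.append('c=%s%s' % (c(-1), c(1)))                               # (c) C_{-1} C_1
    feats.append('d=%d' % (c(0) in suffix_list))                         # (d) suffix-list flag
    return feats
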
In our modified approach, the suffix status will be true when the character C_0 is in the suffix-list and also C_{-1} is the end of a multi-character IV word.", "cite_spans": [ { "start": 43, "end": 63, "text": "(Tseng et al., 2005)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "For the discriminative model", "sec_num": "3.2" }, { "text": "Besides the above feature, (Zhang, 2006) also utilized some combinational features as follows: (e) C_0 S_{-1}, C_0 S_1, C_{-1} S_0, C_{-2} S_0, where C denotes a character and S denotes the above suffix-like list feature.", "cite_spans": [ { "start": 27, "end": 40, "text": "(Zhang, 2006)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "For the discriminative model", "sec_num": "3.2" }, { "text": "In addition, we also tested the case of context-free tagging bias (proposed in Section 2.3) under this discriminative framework, by adding the following template: (f) qf, the context-free tagging bias level. Please note that qs (also ql and qr) is not adopted because it will always be seen in the training-set (and thus will be over-fitted). Therefore, only qf is adopted to make the training and testing conditions consistent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For the discriminative model", "sec_num": "3.2" }, { "text": "All the experiments are conducted on the corpora provided by SIGHAN Bakeoff 2005 (Emerson, 2005), which include Academia Sinica (AS), City University of Hong Kong (CITYU), Peking University (PKU) and Microsoft Research (MSR). For tuning the weights in Equation 3 and Equation 4, we randomly select 1% of the sentences from the training corpus as the development set.", "cite_spans": [ { "start": 61, "end": 80, "text": "SIGHAN Bakeoff 2005", "ref_id": null }, { "start": 81, "end": 96, "text": "(Emerson, 2005)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Setting", "sec_num": "4.1" }, { "text": "For the generative approaches, the SRI Language Model Toolkit (Stolcke, 2002) is used to train P([c,t]_i \\mid [c,t]_{i-2}^{i-1}) with the modified Kneser-Ney smoothing method (Chen and Goodman, 1996). The Factored Language Model in SRILM is adopted to train P(m_i \\mid t_i, c_{i-1}^{i}), and it will sequentially back off to P(m_i \\mid t_i). For the discriminative approach, the ME Package provided by Zhang Le 2 is adopted to train the model, and trainings are conducted with a Gaussian prior of 1.0 and 300 iterations. In addition, the size of the suffix-like list in all approaches is set to 100 3 , and the occurrences threshold for rare words in (Tseng et al., 2005) is set to 7. 
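To make the decoding side of this setting concrete, the search implied by Equation 1 can be sketched as follows (a skeletal sketch: logprob is assumed to wrap the trained trigram estimate together with its smoothing and back-off):

TAGS = 'BMES'
# Admissible tag bigrams under the B/M/E/S scheme.
VALID = {('B', 'M'), ('B', 'E'), ('M', 'M'), ('M', 'E'),
         ('E', 'B'), ('E', 'S'), ('S', 'B'), ('S', 'S')}

def viterbi(chars, logprob):
    # Search for argmax_t prod_i P([c,t]_i | [c,t]_{i-2}^{i-1}) (Equation 1).
    # logprob(c, t, hist) is assumed to return the smoothed trigram
    # log-probability, with hist the previous two (char, tag) pairs.
    best = {(): 0.0}                   # history -> best log score so far
    back = []                          # back-pointers, one dict per position
    for c in chars:
        nxt, ptr = {}, {}
        for hist, score in best.items():
            for t in TAGS:
                if not hist and t in 'ME':
                    continue           # a sentence cannot start mid-word
                if hist and (hist[-1][1], t) not in VALID:
                    continue
                s = score + logprob(c, t, hist)
                h = (hist + ((c, t),))[-2:]
                if h not in nxt or s > nxt[h]:
                    nxt[h], ptr[h] = s, (hist, t)
        best = nxt
        back.append(ptr)
    # the final tag must close a word ('E' or 'S'); then follow back-pointers
    hist = max((h for h in best if h[-1][1] in 'ES'), key=best.get)
    tags = []
    for ptr in reversed(back):
        hist, t = ptr[hist]
        tags.append(t)
    return tags[::-1]
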
The typical word-level F-score is adopted as the metric to evaluate the results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setting", "sec_num": "4.1" }, { "text": "The segmentation results of the different generative models proposed in Section 3.1 are shown in Table 1 . \"Baseline\" in the table denotes the basic generative model corresponding to Equation 1; \"With Suffix-Like List\" denotes the model that adopts the suffix-like list related features, corresponding to Equation 3; each sub-row to its right indicates the method used to extract the list. \"With Tagging Bias\" denotes the model that adopts tagging bias related features, corresponding to Equation 4. Bold entries indicate that they are statistically significantly different from the corresponding entries of the baseline model. Table 1 shows that the improvement brought by the tagging bias approach over the original model is statistically significant 4 for three out of four corpora; however, the differences are small. Also, for the suffix-like list approaches, the performance can only be slightly improved when the suffix-list is extracted and used in our proposed way. To inspect whether the quality of the suffix-list affects the performance, we manually removed those characters which should not be regarded as suffixes in each list (such as Arabic numbers, and characters like \"斯\" and \"尔\", which always appear at the end of transliterations). However, the performances are almost the same even with those cleaned lists (thus not shown in the table). The reasons are investigated and explained in Section 5. Table 2 shows the segmentation results for various discriminative approaches. 'Baseline' in the table denotes the discriminative model that adopts features (a)-(c) described in Section 3.2; 'Tseng' denotes the model with additional feature (d); and 'Tseng+' adopts the same feature set as 'Tseng', but the suffix-like list is obtained and used in our proposed way; similarly, the same interpretation goes for 'Zhang' and 'Zhang+'. Last, 'with qf' denotes the model with additional feature (f), instead of features (d) and (e). Please note that qs (also ql and qr) is not adopted (explained above in Section 3.2).", "cite_spans": [], "ref_spans": [ { "start": 99, "end": 106, "text": "Table 1", "ref_id": null }, { "start": 630, "end": 637, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Results of generative approaches", "sec_num": "4.2" }, { "text": "The results in Table 2 show that neither the suffix-like list related feature nor the context-free tagging bias feature provides any help for the discriminative approach. Similar to the generative approach, no significant benefit is obtained even if the list is further cleaned manually. 
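For reference, the F-scores in Tables 1 and 2 are the standard word-level scores, computed as in this minimal sketch:

def f_score(gold_words, sys_words):
    # A system word is correct only if both of its boundaries coincide
    # with a gold word covering the same character span.
    def spans(words):
        out, start = set(), 0
        for w in words:
            out.add((start, start + len(w)))
            start += len(w)
        return out
    g, s = spans(gold_words), spans(sys_words)
    correct = len(g & s)
    p, r = correct / len(s), correct / len(g)
    return 2 * p * r / (p + r) if p + r else 0.0

# e.g. gold '冠军/奖碟' vs. the over-generalized output '冠军奖/碟'
print(f_score(['冠军', '奖碟'], ['冠军奖', '碟']))   # 0.0
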
This seems contradictory to the claims given at (Tseng et al., 2005; Zhang et al., 2006) and will be studied in the next section.", "cite_spans": [ { "start": 349, "end": 369, "text": "(Tseng et al., 2005;", "ref_id": "BIBREF9" }, { "start": 370, "end": 389, "text": "Zhang et al., 2006)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 15, "end": 22, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results of discriminative approaches", "sec_num": "4.3" }, { "text": "Whether a character can act as a suffix is highly context dependent. Although context has been taken into consideration in our proposed suffixlist approach and tagging bias approach, the preference implied by the suffix list or tagging bias level becomes unreliable when the context is unfamiliar. Table 3 shows the percentage that the preference of different tagging bias factors matches the real tag in the training set. It can be seen that the matching rate (or the influence power) is higher with broader seen context. When no context is available (the last column; the suffix-list approach), it drops dramatically. As a result, many over-generalized words are produced when qf must be adopted. For example, two single-character words \"\u8be5/\u5c40\" (this bureau) are wrongly merged into a pseudo OOV \"\u8be5\u5c40\". As another example, the first three characters in the sequence \" \u51a0\u519b/ \u5956\u789f\" (championship award-tray) are wrongly merged into a pseudo OOV \"\u51a0\u519b\u5956\" (championshipaward). Because the related context \"\u5956\u789f\" is never seen for the character ' \u5956 ', it is thus regarded as a suffix in this case (as it is indeed a suffix in many other cases such as \"\u533b\u5b66\u5956\" (medicine-prize) and \"\u4e00\u7b49\u5956\" (first-prize)). However, according to the empirical study of , the OOV rate can be linearly reduced only with an exponential increasing of corpus size, roughly due to Zipf's law; and ngram is expected to also follow this pattern (Marco, 2009) . Therefore, the sparseness problem gets more serious for the n-gram with a larger \"n\" (i.e., with wider context) because its number of possible distinct types would become much greater. As a consequence, there will be much more unseen bigrams than unseen unigrams in the testing set (Of course, unseen trigrams will be even more). Table 4 shows the unseen ratios for qs, ql, qr and qf in the testing set. It is observed that the unseen ratio for qs is much larger than that for qf. However, according to the discussion in the previous subsection, the preference of tagging bias level is not reliable for qf. Therefore, more reliable a suffix-feature is, less likely it can be utilized in the testing-set. As the result, no significant improvement can be brought in by using suffix related features.", "cite_spans": [ { "start": 1398, "end": 1411, "text": "(Marco, 2009)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 298, "end": 305, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 1744, "end": 1751, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Suffix information is unreliable when associated context is not seen", "sec_num": "5.1" }, { "text": "Since suffixes are quite productive in forming new words, and OOV is the main error source for all state-of-the-art CWS approaches, it is intuitive to expect that utilizing suffix information will further improve the performance. 
Some papers even claim that suffix-like list is useful in their discriminative models, though without presenting direct evidence. Against the above intuition, the empirical study of this paper reveals that when suffix related features are incorporated into those widely adopted surface features, they cannot considerably improve the performance of character-based generative and discriminative models, even if the context is taken into consideration. Error analysis reveals that the main problem behind this surprising finding is the conflict between the reliability and the coverage of those suffix related features. This conclusion is valuable for those relevant researchers in preventing them from wasting time on similar attempts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Last, the reason that humans can distinguish suffixes correctly is largely due to their ability in utilizing associated syntactic and semantic knowledge of the plain text. We still believe suffix information can help for CWS if such knowledge can be effectively incorporated into the model. And this will be our future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "http://zh.wikipedia.org/wiki/%E8%A9%9E%E7%B6%B4 PACLIC-27", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "PACLIC-27", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://homepages.inf.ed.ac.uk/lzhang10/maxent_toolkit.html 3 This size is not explicitly given in their papers; so we tried several different values and find that it only makes little difference on the results. So is the threshold for rare words.4 The statistical significance test is done by the bootstrapping technique(Zhang et al., 2004), with sampling size of 2000 and confidence interval of 95%.PACLIC-27", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The research work has been partially funded by the Natural Science Foundation of China under ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "SRILM -an extensible language modeling toolkit", "authors": [ { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the International Conference on Spoken Language Processing", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Stolcke. 2002. SRILM -an extensible language modeling toolkit. In Proceedings of the International Conference on Spoken Language Processing, pages 311-318.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Corpus linguistics: An international handbook", "authors": [ { "first": "Baroni", "middle": [], "last": "Marco", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baroni Marco. 2009. Distributions in text. In A. L\u00fcdeling and M. Kyt\u00f6 (eds.), Corpus linguistics: An international handbook. 
Mouton de Gruyter, Berlin, Germany.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Minimum error rate training in statistical machine translation", "authors": [ { "first": "Josef", "middle": [], "last": "Franze", "suffix": "" }, { "first": "", "middle": [], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franze Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics, pages 160-167, Sapporo, Japan.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Chinese segmentation and new word detection using conditional random fields", "authors": [ { "first": "Fuchun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Fangfang", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2004, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "562--568", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fuchun Peng, Fangfang Feng and Andrew McCallum. 2004. Chinese segmentation and new word detection using conditional random fields. In Proceedings of COLING, pages 562-568.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Human Behavior and the Principle of Least Effort: An Introduction to Human Ecology", "authors": [ { "first": "George", "middle": [], "last": "Kingsley", "suffix": "" }, { "first": "Zipf", "middle": [], "last": "", "suffix": "" } ], "year": 1949, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Kingsley Zipf. 1949. Human Behavior and the Principle of Least Effort: An Introduction to Human Ecology. Addison-Wisley. Oxford, UK.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A Unified Character-Based Tagging Framework for Chinese Word Segmentation", "authors": [ { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Chang-Ning", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Mu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Bao-Liang", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2010, "venue": "ACM Transactions on Asian Language Information Processing (TALIP)", "volume": "9", "issue": "2", "pages": "1--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hai Zhao, Chang-Ning Huang, Mu Li and Bao-Liang Lu. 2010. A Unified Character-Based Tagging Framework for Chinese Word Segmentation. ACM Transactions on Asian Language Information Processing (TALIP), 9 (2). pages 1-32.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Unsupervised Segmentation Helps Supervised Learning of Character Tagging for Word Segmentation and Named Entity Recognition", "authors": [ { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Chunyu", "middle": [], "last": "Kit", "suffix": "" } ], "year": 2008, "venue": "Sixth SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hai Zhao and Chunyu Kit. 2008. Unsupervised Segmentation Helps Supervised Learning of Character Tagging for Word Segmentation and Named Entity Recognition. 
In Sixth SIGHAN Workshop on Chinese Language Processing.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "How Large a Corpus do We Need : Statistical Method vs. Rulebased Method", "authors": [ { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Song", "suffix": "" }, { "first": "Chunyu", "middle": [], "last": "Kit", "suffix": "" } ], "year": 2010, "venue": "Proceedings of LREC-2010", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hai Zhao, Yan Song and Chunyu Kit. 2010. How Large a Corpus do We Need : Statistical Method vs. Rulebased Method. In Proceedings of LREC-2010. Malta.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "HMM-based Chinese lexical analyzer ICTCLAS", "authors": [ { "first": "Huaping", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Hongkui", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Deyi", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Second SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "184--187", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huaping Zhang, Hongkui Yu, Deyi Xiong and Qun Liu. 2003. HMM-based Chinese lexical analyzer ICTCLAS. In Proceedings of the Second SIGHAN Workshop on Chinese Language Processing, pages 184-187.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A Conditional Random Field Word Segmenter for Sighan Bakeoff", "authors": [ { "first": "Huihsin", "middle": [], "last": "Tseng", "suffix": "" }, { "first": "Pichuan", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Galen", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "168--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky and Christopher Manning. 2005. A Conditional Random Field Word Segmenter for Sighan Bakeoff 2005. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, pages 168-171.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Chinese word segmentation and PACLIC-27 named entity recogni-tion: a pragmatic approach", "authors": [ { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Mu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Andi", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Chang-Ning", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2005, "venue": "Computational Linguistics", "volume": "31", "issue": "4", "pages": "531--574", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jianfeng Gao, Mu Li, Andi Wu and Chang-Ning Huang. 2005. Chinese word segmentation and PACLIC-27 named entity recogni-tion: a pragmatic approach. 
Computational Linguistics, 31(4), pages 531-574.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A Maximum Entropy Approach to Chinese Word Segmentation", "authors": [ { "first": "Jin", "middle": [ "Kiat" ], "last": "Low", "suffix": "" }, { "first": "Tou", "middle": [], "last": "Hwee", "suffix": "" }, { "first": "Wenyuan", "middle": [], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "161--164", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jin Kiat Low, Hwee Tou Ng and Wenyuan Guo. 2005. A Maximum Entropy Approach to Chinese Word Segmentation. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, pages. 161-164, Jeju Island, Korea.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Which is more suitable for Chinese word segmentation, the generative model or the discriminative one?", "authors": [ { "first": "Kun", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chengqing", "middle": [], "last": "Zong", "suffix": "" }, { "first": "Keh-Yih", "middle": [], "last": "Su", "suffix": "" } ], "year": 2009, "venue": "Proceedings of PACLIC", "volume": "", "issue": "", "pages": "827--834", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kun Wang, Chengqing Zong and Keh-Yih Su. 2009. Which is more suitable for Chinese word segmentation, the generative model or the discriminative one? In Proceedings of PACLIC, pages 827-834, Hong Kong, China.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A Character-Based Joint Model for Chinese Word Segmentation", "authors": [ { "first": "Kun", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chengqing", "middle": [], "last": "Zong", "suffix": "" }, { "first": "Keh-Yih", "middle": [], "last": "Su", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1173--1181", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kun Wang, Chengqing Zong and Keh-Yih Su. 2010. A Character-Based Joint Model for Chinese Word Segmentation. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 1173-1181, Beijing, China.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Integrating Generative and Discriminative Character-Based Models for Chinese Word Segmentation", "authors": [ { "first": "Kun", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chengqing", "middle": [], "last": "Zong", "suffix": "" }, { "first": "Keh-Yih", "middle": [], "last": "Su", "suffix": "" } ], "year": 2012, "venue": "ACM Transactions on Asian Language Information Processing", "volume": "11", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kun Wang, Chengqing Zong and Keh-Yih Su. 2012. Integrating Generative and Discriminative Character-Based Models for Chinese Word Segmentation. ACM Transactions on Asian Language Information Processing, Vol.11, No.2, June 2012, pages 7:1-7:41.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Chinese Word Segmentation as Character Tagging. 
Computational Linguistics and Chinese Language Processing", "authors": [ { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" } ], "year": 2003, "venue": "", "volume": "8", "issue": "", "pages": "29--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nianwen Xue. 2003. Chinese Word Segmentation as Character Tagging. Computational Linguistics and Chinese Language Processing, 8 (1). pages 29-48.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Subword-based Tagging for Confidencedependent Chinese Word Segmentation", "authors": [ { "first": "Ruiqiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Genichiro", "middle": [], "last": "Kikui", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the COLING/ACL", "volume": "", "issue": "", "pages": "961--968", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruiqiang Zhang, Genichiro Kikui and Eiichiro Sumita. 2006. Subword-based Tagging for Confidence- dependent Chinese Word Segmentation. In Proceedings of the COLING/ACL, pages 961-968, Sydney, Australia.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "An empirical study of smoothing techniques for language modeling", "authors": [ { "first": "F", "middle": [], "last": "Stanley", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Chen", "suffix": "" }, { "first": "", "middle": [], "last": "Goodman", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stanley F. Chen and Joshua Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Harvard University Center for Research in Computing Technology.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The second international Chinese word segmentation bakeoff", "authors": [ { "first": "Thomas", "middle": [ "Emerson" ], "last": "", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "123--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Emerson. 2005. The second international Chinese word segmentation bakeoff. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, pages 123-133.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Word-based and character-based word segmentation models: Comparison and combination", "authors": [ { "first": "Weiwei", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1211--1219", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weiwei Sun. 2010. Word-based and character-based word segmentation models: Comparison and combination. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 1211-1219, Beijing, China.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Enhancing Chinese Word Segmentation Using Unlabeled Data", "authors": [ { "first": "Weiwei", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Jia", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "970--979", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weiwei Sun and Jia Xu. 
2011. Enhancing Chinese Word Segmentation Using Unlabeled Data. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 970-979, Edinburgh, Scotland, UK.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Integrating Surface and Abstract Features for Robust Cross-Domain Chinese Word Segmentation", "authors": [ { "first": "Xiaoqing", "middle": [], "last": "Li", "suffix": "" }, { "first": "Kun", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chengqing", "middle": [], "last": "Zong", "suffix": "" }, { "first": "Keh-Yih", "middle": [], "last": "Su", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 24th International Conference on Computational Linguistics (COLING). Pages 1653-1669", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoqing Li, Kun Wang, Chengqing Zong and Keh- Yih Su. 2012. Integrating Surface and Abstract Features for Robust Cross-Domain Chinese Word Segmentation. In Proceedings of the 24th International Conference on Computational Linguistics (COLING). Pages 1653-1669, Mumbai, India.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Fast Online Training with Frequency-Adaptive Learning Rates for Chinese Word Segmentation and New Word Detection", "authors": [ { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Houfeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wenjie", "middle": [], "last": "Li", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "253--262", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu Sun, Houfeng Wang and Wenjie Li. 2012. Fast Online Training with Frequency-Adaptive Learning Rates for Chinese Word Segmentation and New Word Detection. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 253-262, Jeju Island, Korea.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Interpreting BLEU/NIST scores: How much improvement do we need to have a better system", "authors": [ { "first": "Ying", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2004, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "2051--2054", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ying Zhang, Stephan Vogel and Alex Waibel, 2004. Interpreting BLEU/NIST scores: How much improvement do we need to have a better system. In Proceedings of LREC, pages 2051-2054.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Chinese Segmentation with a Word-Based Perceptron Algorithm", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "840--847", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang and Stephen Clark, 2007. Chinese Segmentation with a Word-Based Perceptron Algorithm. 
In Proceedings of ACL, pages 840-847, Prague, Czech Republic.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Word segmentation needs change -from a linguist's view", "authors": [ { "first": "Zhengdong", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Changling", "middle": [], "last": "Hao", "suffix": "" } ], "year": 2010, "venue": "Proceedings of CIPS-SIGHAN Joint Conference on Chinese Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhengdong Dong, Qiang Dong and Changling Hao. 2010. Word segmentation needs change -from a linguist's view. In Proceedings of CIPS-SIGHAN Joint Conference on Chinese Language Processing. Beijing, China.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Parsing the Internal Structure of Words: A New Paradigm for Chinese Word Segmentation", "authors": [ { "first": "Zhongguo", "middle": [], "last": "Li", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1405--1414", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhongguo Li. 2011. Parsing the Internal Structure of Words: A New Paradigm for Chinese Word Segmentation. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1405--1414, Portland, Oregon, USA.", "links": null } }, "ref_entries": { "FIGREF1": { "num": null, "uris": null, "text": "those three different suffix features (previous suffix-list, proposed suffix-list, and proposed tagging bias), i m will be decided as follows: \uf0b7 For the previous suffix-list feature, no matter what position tag is assigned to i t . \uf0b7 For the proposed suffix-list feature, i m will also be a member of {Match, Violate, Neutral}. If 1 i c \uf02b is in the suffix list and i c is the end of a multi-character IV word, when i t is assigned position tag 'M', i m will be", "type_str": "figure" }, "TABREF3": { "html": null, "num": null, "type_str": "table", "content": "
Table 2: Segmentation results for the discriminative approaches in F-score
", "text": "" }, "TABREF5": { "html": null, "num": null, "type_str": "table", "content": "
Corpus   qs      ql      qr      qf
PKU      0.457   0.135   0.135   0.002
AS       0.374   0.083   0.082   0.004
CITYU    0.515   0.148   0.149   0.008
MSR      0.299   0.060   0.060   0.0003
", "text": "The matching rates of various tagging bias factors in the training set" }, "TABREF6": { "html": null, "num": null, "type_str": "table", "content": "
Table 4: Unseen ratios for qs, ql, qr and qf in the testing set
5.2 Required context is frequently unobserved for testing instances
", "text": "" } } } }