{
"paper_id": "Y04-1014",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:35:17.659828Z"
},
"title": "Pruning False Unknown Words to Improve Chinese Word Segmentation",
"authors": [
{
"first": "Chooi-Ling",
"middle": [],
"last": "Goh",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Masayuki",
"middle": [],
"last": "Asahara",
"suffix": "",
"affiliation": {},
"email": "masayu-a@is.naist.jp"
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "During the process of unknown word detection in Chinese word segmentation, many detected word candidates are invalid. These false unknown word candidates deteriorate the overall segmentation accuracy, as it will affect the segmentation accuracy of known words. Therefore, we propose to eliminate as many invalid word candidates as possible by a pruning process. Our experiments show that by cutting down the invalid unknown word candidates, we improve the segmentation accuracy of known words and hence that of the overall segmentation accuracy.",
"pdf_parse": {
"paper_id": "Y04-1014",
"_pdf_hash": "",
"abstract": [
{
"text": "During the process of unknown word detection in Chinese word segmentation, many detected word candidates are invalid. These false unknown word candidates deteriorate the overall segmentation accuracy, as it will affect the segmentation accuracy of known words. Therefore, we propose to eliminate as many invalid word candidates as possible by a pruning process. Our experiments show that by cutting down the invalid unknown word candidates, we improve the segmentation accuracy of known words and hence that of the overall segmentation accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Since written Chinese texts do not use any markers such as spaces to indicate the word boundaries, word segmentation has become an essential task prior to any other Chinese language processing. There are two main problems in this task, segmentation ambiguities and unknown word occurrences. We can either solve these two problems in one single process or make them as two separate processes. Both approaches have pros and cons. If we make them as two separate processes, then we can focus on each problem independently. However, if two problems are solved in one single process, then the processing time may be shortened, but the computation will become more complicated. It is difficult to judge which approach is better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we would like to compare the results by using only a single model for word segmentation (Goh et al., 2004) and by joining two models (unknown word model and disambiguation model). The extracted unknown words go through a pruning step, to eliminate those unlikely to be word candidates. Our experiments show that two processes for word segmentation perform better than only with a single process.",
"cite_spans": [
{
"start": 103,
"end": 121,
"text": "(Goh et al., 2004)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Chinese word segmentation has been a research topic since long time ago. At first, people believed in rule based method, and tried to apply rules for word segmentation. However, as new words and new patterns can always be created, it became more and more complicated to maintain such a rule based system. Later, with the evolution of segmented corpora, it has become possible to apply machine learning based method to train a model for word segmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "In year 2003, a competition for Chinese word segmentation was carried out in SIGHAN 1 workshop to compare the accuracy of various methods (Sproat and Emerson, 2003) . It used to be difficult to compare the accuracy of various systems because the experiments had been done on different corpora. Therefore, this bakeoff intended to standardize the training and testing corpora, so that a fair evaluation could be made. Along the history, some researches proposed solving ambiguity problem and detecting unknown word in one process (Xue and Converse, 2002; Asahara et al., 2003; Fu and Luke, 2004; Goh et al., 2004) and some split these processes into multiple steps (Zhang et al., 2003; Ma and Chen, 2003) . Furthermore, people are combining multiple statistical models so that one model could cover the weakness of the other models. (Xue and Converse, 2002) first use a maximum entropy model to segment the text, then apply a transformational-based model to correct the output of the first model. (Asahara et al., 2003) use a hidden Markov model to segment the text for known words, then apply a support vector machine-based chunker for unknown word detection. (Fu and Luke, 2004) combine a word juncture model with word formation pattern, while (Goh et al., 2004) use maximum matching algorithm with support vector machines. From these work, we realise that by combining models of different approaches, we are able to obtain better results than only using a single model.",
"cite_spans": [
{
"start": 138,
"end": 164,
"text": "(Sproat and Emerson, 2003)",
"ref_id": "BIBREF9"
},
{
"start": 529,
"end": 553,
"text": "(Xue and Converse, 2002;",
"ref_id": "BIBREF10"
},
{
"start": 554,
"end": 575,
"text": "Asahara et al., 2003;",
"ref_id": "BIBREF0"
},
{
"start": 576,
"end": 594,
"text": "Fu and Luke, 2004;",
"ref_id": "BIBREF2"
},
{
"start": 595,
"end": 612,
"text": "Goh et al., 2004)",
"ref_id": "BIBREF3"
},
{
"start": 664,
"end": 684,
"text": "(Zhang et al., 2003;",
"ref_id": "BIBREF11"
},
{
"start": 685,
"end": 703,
"text": "Ma and Chen, 2003)",
"ref_id": "BIBREF4"
},
{
"start": 832,
"end": 856,
"text": "(Xue and Converse, 2002)",
"ref_id": "BIBREF10"
},
{
"start": 996,
"end": 1018,
"text": "(Asahara et al., 2003)",
"ref_id": "BIBREF0"
},
{
"start": 1160,
"end": 1179,
"text": "(Fu and Luke, 2004)",
"ref_id": "BIBREF2"
},
{
"start": 1245,
"end": 1263,
"text": "(Goh et al., 2004)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "The SIGHAN bakeoff results show that combining word segmentation and unknown word detection in one process produces reasonable result. Without unknown word detection, we get worse result if there are a lot of unknown words in the text. However, while the recall for unknown words increases, the recall for known words decreases. This is because those invalid detected unknown words are the cause of errors in known word segmentation. Our idea relies on the following findings. Introducing one valid unknown word creates one correct word. However introducing one invalid unknown word will possibly make (at least) two words incorrect (one unknown and one known). On the other hand, deleting one valid unknown word makes one word incorrect but deleting one invalid unknown word will possibly make two known words correct. If we can delete as many invalid words as possible, we will be able to increase the accuracy of known words and the overall segmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Specification",
"sec_num": "3"
},
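To make the counting argument concrete, here is a small illustrative sketch (not from the paper; the toy "sentence" and segmentations are invented). It scores a segmentation by exact word spans and shows how a single invalid unknown word turns two otherwise correct known words into errors.

```python
# Illustrative sketch: word-level scoring of a toy segmentation, showing how
# one spurious "unknown word" can make two gold words incorrect at once.

def to_spans(words):
    """Convert a list of words into a set of (start, end) character spans."""
    spans, pos = [], 0
    for w in words:
        spans.append((pos, pos + len(w)))
        pos += len(w)
    return set(spans)

def prf(gold_words, sys_words):
    gold, sys = to_spans(gold_words), to_spans(sys_words)
    correct = len(gold & sys)
    recall = correct / len(gold)
    precision = correct / len(sys)
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f

gold = ["AB", "CD", "EF"]          # three known words
good = ["AB", "CD", "EF"]          # correct segmentation
bad  = ["AB", "CDE", "F"]          # one invalid unknown word "CDE"

print(prf(gold, good))  # (1.0, 1.0, 1.0)
print(prf(gold, bad))   # the single invalid word costs two gold words
```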
{
"text": "Furthermore, the same unknown word found in one context may be missed out at another context. Therefore, after unknown word detection, we could rerun the overall segmentation again to include those missing unknown words. In short, our approach is to separate the word segmentation (disambiguation) and unknown word detection into two independent processes, so that we could focus on each problem more thoroughly and more specifically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Specification",
"sec_num": "3"
},
{
"text": "Our method is based on the report by (Goh et al., 2004) , where a maximum matching algorithm (MM) combining with support vector machines (SVM) model is proposed to solve the ambiguity problem and unknown word detection at the same time. In their report, if the model focuses on solving ambiguity problem, then the accuracy for known words is higher; and on the contrary if it focuses on unknown word detection, then the recall for unknown words is higher but the accuracy for known words drops. Although there is a balance point for both problems, it is quite difficult to further improve on the accuracy. Two problems are observed in (Goh et al., 2004) . First, since only half of the words from the training data are used in the dictionary, some of the known words cannot be segmented correctly as they are not found in the dictionary. Second, only part of the words in the training data are used for the unknown word detection training. In other words, the training of word patterns are not thorough too. Our method intends to make full use of the training data for both problems, so that we can increase the recall for unknown words while at the same time maintains the accuracy for known words. Figure 1 shows the flow of our process. We refer to our two models as the unknown word model and the disambiguation model. First, we use the unknown word model to extract unknown word candidates from the input text and apply a pruning process to them. Next, the new words are registered to the disambiguition model's dictionary and the final segmentation is done with the new dictionary. We will describe each step in more detail. ",
"cite_spans": [
{
"start": 37,
"end": 55,
"text": "(Goh et al., 2004)",
"ref_id": "BIBREF3"
},
{
"start": 635,
"end": 653,
"text": "(Goh et al., 2004)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 1200,
"end": 1208,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "4"
},
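A minimal sketch of the two-stage flow in Figure 1. The three callables are stand-ins for the components described in this paper (the unknown word model, the pruning step and the disambiguation model); their names and signatures are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the two-stage segmentation flow (cf. Figure 1). The component
# functions are supplied by the caller; only the data flow is shown here.

def segment(text, dictionary, detect_unknown_words, prune_candidates, disambiguate):
    # Stage 1: the unknown word model proposes out-of-dictionary candidates.
    candidates = detect_unknown_words(text, dictionary)
    # Pruning removes candidates that are unlikely to be valid words.
    new_words = prune_candidates(candidates, dictionary)
    # The surviving new words are registered in the dictionary.
    extended_dictionary = set(dictionary) | set(new_words)
    # Stage 2: the disambiguation model produces the final segmentation.
    return disambiguate(text, extended_dictionary)
```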
{
"text": "The unknown word processing consists of two steps. First, we extract unknown word candidates with the unknown word model. Since not all extracted unknown words are valid, we then apply the second step to eliminate those invalid unknown words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unknown Word Processing",
"sec_num": "4.1"
},
{
"text": "In fact, the unknown word model itself is a complete word segmentation model. It could handle both disambiguation and unknown word detection in one single process. However, while the recall for unknown word increases, the accuracy for known words is affected. Since this model can get optimal result for unknown word detection, we would like to extract the unknown words in this model, meaning those words not found in the dictionary 2 . We then apply a pruning process to the unknown word candidates before registering the new words to the dictionary used in the disambiguition model for final segmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unknown Word Model",
"sec_num": "4.1.1"
},
{
"text": "The probability model used is the maximum entropy (ME) model. The ME model is similar to the one described in (Xue and Converse, 2002) with different feature templates. Lets \u00a2 \u00a4 \u00a3 be the current character that we want to tag and \u00a5 stands for the focus position. We use characters (represented by",
"cite_spans": [
{
"start": 110,
"end": 134,
"text": "(Xue and Converse, 2002)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unknown Word Model",
"sec_num": "4.1.1"
},
{
"text": "\u00a2 \u00a3 \u00a7 \u00a6 \u00a9 , \u00a2 \u00a3 \u00a6 , \u00a2 \u00a3 , \u00a2 \u00a3 , \u00a2 \u00a3 ), character types (represented by \u00a3 \u00a7 \u00a6 \u00a9 , \u00a3 \u00a7 \u00a6 , \u00a3 , \u00a3 , \u00a3 ) and previously estimated tags (represented by \u00a3 \u00a7 \u00a6 \u00a9 , \u00a3 \u00a7 \u00a6 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unknown Word Model",
"sec_num": "4.1.1"
},
{
"text": "as the feature templates. We define four character types in our model, digits, alphabets, symbols (including punctuation marks) and hanzi (other chinese characters). The task is to estimate the tag \u00a3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unknown Word Model",
"sec_num": "4.1.1"
},
{
"text": "\u00a3 \u00a7 \u00a6 \u00a9 , \u00a2 \u00a3 \u00a7 \u00a6 , \u00a2 \u00a3 , \u00a2 \u00a3 , \u00a2 \u00a3 ). Bigram (\u00a2 \u00a3 \u00a7 \u00a6 \u00a9 \u00a2 \u00a3 \u00a7 \u00a6 , \u00a2 \u00a3 \u00a7 \u00a6 \u00a2 \u00a3 , \u00a2 \u00a3 \u00a7 \u00a6 \u00a2 \u00a3 , \u00a2 \u00a3 \u00a2 \u00a3 , \u00a2 \u00a3 \u00a2 \u00a3 ). 2. Character types. Unigram ( \u00a3 \u00a7 \u00a6 \u00a9 , \u00a3 \u00a6 , \u00a3 , \u00a3 , \u00a3 ). Bigram ( \u00a3 \u00a7 \u00a6 \u00a9 \u00a3 \u00a7 \u00a6 , \u00a3 \u00a6 \u00a3 , \u00a3 \u00a7 \u00a6 \u00a3 , \u00a3 \u00a3 , \u00a3 \u00a3 ). 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characters. Unigram (\u00a2",
"sec_num": "1."
},
{
"text": "The initial dictionary contains all words from the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characters. Unigram (\u00a2",
"sec_num": "1."
},
{
"text": "3. Previously estimated tags. ( ! \u00a3 \u00a6 \u00a9 , \" \u00a3 \u00a7 \u00a6 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characters. Unigram (\u00a2",
"sec_num": "1."
},
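A minimal sketch (not the authors' code) of the character-level feature extraction listed above; the feature-name strings, the boundary padding symbols and the character-type test are illustrative assumptions.

```python
# Sketch of feature extraction for the character-level ME tagger, following
# the templates above: character uni/bigrams, character-type uni/bigrams and
# the two previously estimated tags.
import unicodedata

def char_type(ch):
    """Classify a character into one of the four types used in the paper."""
    if ch.isdigit():
        return "digit"
    if "CJK" in unicodedata.name(ch, ""):
        return "hanzi"
    if ch.isalpha():
        return "alpha"
    return "symbol"

def features(chars, i, prev_tags):
    """Feature strings for position i; '<s>'/'</s>' pad the sentence boundary."""
    def c(j):
        if j < 0:
            return "<s>"
        if j >= len(chars):
            return "</s>"
        return chars[j]
    def t(j):
        ch = c(j)
        return ch if ch in ("<s>", "</s>") else char_type(ch)
    offs = [-2, -1, 0, 1, 2]
    pairs = [(-2, -1), (-1, 0), (-1, 1), (0, 1), (1, 2)]
    feats = ["C%d=%s" % (o, c(i + o)) for o in offs]                         # character unigrams
    feats += ["C%d,%d=%s%s" % (a, b, c(i + a), c(i + b)) for a, b in pairs]  # character bigrams
    feats += ["T%d=%s" % (o, t(i + o)) for o in offs]                        # type unigrams
    feats += ["T%d,%d=%s%s" % (a, b, t(i + a), t(i + b)) for a, b in pairs]  # type bigrams
    feats += ["TAG-2=%s" % prev_tags[-2], "TAG-1=%s" % prev_tags[-1]]        # previous tags
    return feats

# Example: features for the third character, given previous tags "B" and "E".
print(features(list("abc12"), 2, ["B", "E"]))
```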
{
"text": "We also regard the problem as a tagging problem on characters. The ME model will tag each character into one of the 4 possible tags as shown in Table 1 The outputs of ME model are then converted back to word segments based on the position tags. The conversion becomes complicated when there exists inconsistency in consecutive tags. For example, it is possible that ME model assigns \"SE\" to two continuous characters, which is logically not allowed. Therefore, we made a slight correction to the output tags as shown in Table 2 . We look at the current tag or the next tag to decide whether to make a change on previous tag or current tag. The correction does not cover all possible mistakes but only those that are seen in the outputs. The intuition behind is quite simple. We assume that when there is an \"I\", then is must end with an \"E\". Alternatively, we may trust the next coming tag, and try to change the former tag. After the correction of inconsistency tags, we convert the characters back to words. We put a word separater (a blank space) in every place that begins with either \"B\" or \"S\".",
"cite_spans": [],
"ref_spans": [
{
"start": 144,
"end": 151,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 520,
"end": 527,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Characters. Unigram (\u00a2",
"sec_num": "1."
},
{
"text": "Correction prevtag = \"I\" and curtag = \"S\" curtag = \"E\" prevtag = \"B\" and curtag = \"S\" prevtag = \"S\" prevtag = \"S\" and curtag = \"E\" prevtag = \"B\" prevtag = \"S\" and curtag = \"I\" prevtag = \"B\" prevtag = \"I\" and curtag = \"B\" and nexttag = \"B\" curtag = \"E\" prevtag = \"B\" and curtag = \"B\" and nexttag = \"E\" prevtag = \"S\" prevtag = \"I\" and curtag = \"B\" and nexttag = \"S\" curtag = \"E\" prevtag = \"B\" and curtag = \"B\" and nexttag = \"B\" curtag = \"E\" prevtag = \"B\" and curtag = \"E\" and nexttag = \"E\" curtag = \"I\" prevtag: previous tag, curtag: current tag, nexttag: next tag From the output word segmentation, those words that are not in the dictionary will be treated as unknown word candidates, which will go through the pruning process as decribed below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Condition",
"sec_num": null
},
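Below is a sketch of the tag correction and the conversion back to words. The rules follow the condition/correction pairs reproduced above; whether each rule should see the already-corrected previous tag is not stated in this excerpt, so the single left-to-right pass here is an assumption.

```python
# Sketch of the output-tag correction (Table 2) and the conversion of
# S/B/I/E position tags back into words.

def correct_tags(tags):
    """Apply the Table 2 corrections; tags is a list of 'S','B','I','E'."""
    tags = list(tags)
    for i in range(len(tags)):
        prev = tags[i - 1] if i > 0 else None
        cur = tags[i]
        nxt = tags[i + 1] if i + 1 < len(tags) else None
        if prev == "I" and cur == "S":
            tags[i] = "E"
        elif prev == "B" and cur == "S":
            tags[i - 1] = "S"
        elif prev == "S" and cur == "E":
            tags[i - 1] = "B"
        elif prev == "S" and cur == "I":
            tags[i - 1] = "B"
        elif prev == "I" and cur == "B" and nxt == "B":
            tags[i] = "E"
        elif prev == "B" and cur == "B" and nxt == "E":
            tags[i - 1] = "S"
        elif prev == "I" and cur == "B" and nxt == "S":
            tags[i] = "E"
        elif prev == "B" and cur == "B" and nxt == "B":
            tags[i] = "E"
        elif prev == "B" and cur == "E" and nxt == "E":
            tags[i] = "I"
    return tags

def tags_to_words(chars, tags):
    """Start a new word before every character tagged 'B' or 'S'."""
    words = []
    for ch, tag in zip(chars, tags):
        if tag in ("B", "S") or not words:
            words.append(ch)
        else:
            words[-1] += ch
    return words

# Example: the inconsistent sequence "S E B E" is corrected to "B E B E".
print(tags_to_words(list("ABCD"), correct_tags(["S", "E", "B", "E"])))  # ['AB', 'CD']
```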
{
"text": "We apply two levels of pruning for the detected unknown word candidates. First, pruning by using adjacent words and internal components. Second, pruning by using word formation power.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning of Invalid Unknown Words",
"sec_num": "4.1.2"
},
{
"text": "The first level of pruning is by using adjacent words and internal components. Let # $ \u00a3 \u00a7 \u00a6 , # % \u00a3 , # % \u00a3 be three continuous words in the text where # \u00a3 is an unknown word candidate and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning of Invalid Unknown Words",
"sec_num": "4.1.2"
},
{
"text": "# \u00a3 ' & ) ( 0 \u00a3 \u00a7 1 2 3 ( 0 \u00a3 \u00a7 1\u00a8 \u00a4 4 4 4 5 ( 0 \u00a3 \u00a7 16 where ( 0 \u00a3 \u00a7 17",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning of Invalid Unknown Words",
"sec_num": "4.1.2"
},
{
"text": "is a character and 8 is the length of the word. We assume that if the unknown word forms a known word with adjacent characters or words, then it is not a valid unknown word. Therefore, if any one of the following words exists in the dictionary, then we delete the unknown word from the list: (the last four characters) exists in the dictionary (except those words that are numbers, alphabets or symbols), then we delete the unknown word candidate from the list.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning of Invalid Unknown Words",
"sec_num": "4.1.2"
},
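A heavily hedged sketch of this first-level pruning. The exact list of combined strings checked against the dictionary is not fully recoverable from this excerpt, so most of the checks below are assumptions; only the "last four characters" check is explicit in the text, and the exclusion of numbers, alphabets and symbols is omitted for brevity.

```python
# Assumed sketch of first-level pruning by adjacent words and internal
# components. The exact candidate combinations are assumptions.

def prune_by_adjacency(prev_word, cand, next_word, dictionary):
    """Return True if the unknown word candidate `cand` should be deleted."""
    checks = [
        prev_word + cand[0],   # assumed: previous word + first character
        cand[-1] + next_word,  # assumed: last character + next word
        prev_word + cand,      # assumed: previous word + whole candidate
        cand + next_word,      # assumed: whole candidate + next word
        cand[-4:],             # internal component: the last four characters
    ]
    return any(len(s) > 1 and s in dictionary for s in checks)
```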
{
"text": "The second level of pruning is by using word formation power (Nie et al., 1995; Fu and Luke, 2003) . We define the word formation power (WFP) as below, where the I Q P H \" ( @ R 8 is either S, B, I or E, introduced in Table 1 .",
"cite_spans": [
{
"start": 61,
"end": 79,
"text": "(Nie et al., 1995;",
"ref_id": "BIBREF6"
},
{
"start": 80,
"end": 98,
"text": "Fu and Luke, 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 218,
"end": 225,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Pruning of Invalid Unknown Words",
"sec_num": "4.1.2"
},
{
"text": "I Q P H S ( 0 R 8 U T ( W V U & \u00a2 Y X \u1e80 Q 8 Y T a I b P c \" ( @ R 8 U T ( W V 9 V \u00a2 Y X \u1e80 Q 8 Y T ( d V e g f $ h T \u00a7 # V i & q p T ( \u00a4 D V 6 H \u00a6 r \u00a3 s u t T ( @ \u00a3 v V \" w T ( 0 6 x V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning of Invalid Unknown Words",
"sec_num": "4.1.2"
},
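A sketch, following the definition above, of how the position probabilities posn(c) and WFP(w) could be estimated from a segmented training corpus; the names and the treatment of unseen characters are assumptions.

```python
# Sketch: estimating posn(c) = count(c tagged as posn) / count(c) from a
# segmented corpus, then computing WFP(w) = B(c1) * I(c2) * ... * E(cn).
from collections import Counter, defaultdict

def position_probs(segmented_sentences):
    """segmented_sentences: iterable of word lists from the training corpus."""
    pos_counts = defaultdict(Counter)   # char -> Counter over {S, B, I, E}
    for words in segmented_sentences:
        for w in words:
            if len(w) == 1:
                pos_counts[w]["S"] += 1
            else:
                pos_counts[w[0]]["B"] += 1
                for ch in w[1:-1]:
                    pos_counts[ch]["I"] += 1
                pos_counts[w[-1]]["E"] += 1
    probs = {}
    for ch, counts in pos_counts.items():
        total = sum(counts.values())
        probs[ch] = {p: counts[p] / total for p in "SBIE"}
    return probs

def wfp(word, probs, default=0.0):
    """Word formation power of a candidate word under the position probabilities."""
    p = lambda ch, tag: probs.get(ch, {}).get(tag, default)
    if len(word) == 1:
        return p(word, "S")   # assumed convention for one-character candidates
    value = p(word[0], "B") * p(word[-1], "E")
    for ch in word[1:-1]:
        value *= p(ch, "I")
    return value
```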
{
"text": "Previous researches use a predefined threshold to eliminate the unknown words but we generate the threshold from the training corpus. The threshold is defined as the minimum WFP of words of the same length with the unknown word. Therefore, if the WFP falls in any one of the conditions below, then the unknown word candidate is deleted. However, we will accept any unknown word of one character.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning of Invalid Unknown Words",
"sec_num": "4.1.2"
},
{
"text": "1. The WFP of the candidate is less than the minimum WFP of the training words of the same length: WFP(w) < min WFP(w') where len(w') = len(w).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1.",
"sec_num": null
},
{
"text": "2. The WFP is less than the product of the single-character probabilities of every character in the word: WFP(w) < S(c_1) x S(c_2) x ... x S(c_n).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1.",
"sec_num": null
},
{
"text": "3. There is a high probability of a single character inside the word. Currently we apply this check only to words where len(w) = 4: WFP(w) < S(c_1) x S(c_2) x B(c_3) x E(c_4) or WFP(w) < B(c_1) x E(c_2) x S(c_3) x S(c_4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1.",
"sec_num": null
},
{
"text": "4. Any one of the characters in the word appears only as a single-character word, i.e., S(c_j) = 1 for some j.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1.",
"sec_num": null
},
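A sketch that applies the four conditions above, reusing the position_probs/wfp helpers from the previous sketch; min_wfp_by_len is assumed to be precomputed as the minimum WFP of training words of each length.

```python
# Sketch: applying the four WFP-based pruning conditions. Assumes wfp() and
# the position probabilities from the previous sketch.
from math import prod

def keep_candidate(word, probs, min_wfp_by_len):
    if len(word) == 1:                     # one-character candidates are kept
        return True
    p = lambda ch, tag: probs.get(ch, {}).get(tag, 0.0)
    value = wfp(word, probs)
    # Condition 1: below the minimum WFP of training words of the same length.
    if value < min_wfp_by_len.get(len(word), 0.0):
        return False
    # Condition 2: below the product of single-character probabilities.
    if value < prod(p(ch, "S") for ch in word):
        return False
    # Condition 3 (four-character candidates only): a likely split involving
    # single characters scores higher than the candidate itself.
    if len(word) == 4:
        c1, c2, c3, c4 = word
        if value < p(c1, "S") * p(c2, "S") * p(c3, "B") * p(c4, "E"):
            return False
        if value < p(c1, "B") * p(c2, "E") * p(c3, "S") * p(c4, "S"):
            return False
    # Condition 4: some character appears only as a single-character word.
    if any(p(ch, "S") == 1.0 for ch in word):
        return False
    return True
```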
{
"text": "After the two level pruning, the unknown word candidates are registered in the dictionary for used in the disambiguation model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1.",
"sec_num": null
},
{
"text": "We assume that there is no unknown words in the disambiguition model. If all word candidates can be found in the dictionary, we just need to solve the ambiguity problem here. Similar to (Goh et al., 2004) , we use maximum matching algorithm to first segment the text forwards (FMM) and backwards (BMM), but instead of using SVM, we apply maximum entropy (ME) models for classification of characters. This is because SVM requires more computational power. Since we need to create two models, it is better if we can apply a model which can give reasonble results with lower computational power. PACLIC 18, December 8th-10th, 2004, Waseda University, Tokyo During the training of ME model, the dictionary used in the MM models consists of all words from the training data only. While during testing phase, the dictionary is added with the unknown words extracted from the unknown word processing phase. After the initial segmentation by using FMM and BMM models, we will convert the output words of the MMs into characters, where each character is assigned with a position tag. These tags show the character position in a word, as described in Table 1 . The output of MMs will be used as features in ME models. For example, for the sentence \" d f e g h i \" (At the New Year gathering party), FMM has the position tags as \"BEBESBE\" and BMM has \"SBEBEBE\". The feature templates are as the following. Output of FMM is represented by",
"cite_spans": [
{
"start": 186,
"end": 204,
"text": "(Goh et al., 2004)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 1141,
"end": 1148,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Segmentation Ambiguity Resolution",
"sec_num": "4.2"
},
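A sketch of forward and backward maximum matching over the dictionary, plus the conversion of the resulting words into per-character position tags; the max_len cap and the single-character fallback are assumptions.

```python
# Sketch of forward (FMM) and backward (BMM) maximum matching against a
# dictionary; max_len is an assumed upper bound on dictionary word length.

def fmm(text, dictionary, max_len=8):
    words, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + max_len), i + 1, -1):
            if text[i:j] in dictionary:
                words.append(text[i:j])
                i = j
                break
        else:                      # no dictionary word: emit a single character
            words.append(text[i])
            i += 1
    return words

def bmm(text, dictionary, max_len=8):
    words, j = [], len(text)
    while j > 0:
        for i in range(max(0, j - max_len), j - 1):   # longest candidate first
            if text[i:j] in dictionary:
                words.append(text[i:j])
                j = i
                break
        else:
            words.append(text[j - 1])
            j -= 1
    return list(reversed(words))

def position_tags(words):
    """Convert a word list into per-character S/B/I/E position tags."""
    tags = []
    for w in words:
        tags += ["S"] if len(w) == 1 else ["B"] + ["I"] * (len(w) - 2) + ["E"]
    return tags

# Example: FMM and BMM can disagree on the same string.
d = {"AB", "CD", "BCD"}
print(fmm("ABCD", d), bmm("ABCD", d))   # ['AB', 'CD'] ['A', 'BCD']
```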
{
"text": "j C \u00a3 \u00a7 \u00a6 \u00a9 W k 3 j d \u00a3 \u00a7 \u00a6 0 k 3 j d \u00a3 \" k 3 j d \u00a3 0 k 3 j d \u00a3 and output of BMM is represented by l \u00a3 \u00a7 \u00a6 \u00a9 k 3 l \u00a3 \u00a7 \u00a6 k 3 l \u00a3 k 3 l \u00a3 k 3 l \u00a3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation Ambiguity Resolution",
"sec_num": "4.2"
},
{
"text": ". Each character will be tagged by the ME model based on these features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation Ambiguity Resolution",
"sec_num": "4.2"
},
{
"text": "\u00a3 \u00a7 \u00a6 \u00a9 , \u00a2 \u00a3 \u00a7 \u00a6 , \u00a2 \u00a3 , \u00a2 \u00a3 , \u00a2 \u00a3 ). Bigram (\u00a2 \u00a3 \u00a7 \u00a6 \u00a9 \u00a2 \u00a3 \u00a7 \u00a6 , \u00a2 \u00a3 \u00a7 \u00a6 \u00a2 \u00a3 , \u00a2 \u00a3 \u00a7 \u00a6 \u00a2 \u00a3 , \u00a2 \u00a3 \u00a2 \u00a3 , \u00a2 \u00a3 3 \u00a2 \u00a3 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characters. Unigram (\u00a2",
"sec_num": "1."
},
{
"text": "2. Output of FMM and BMM. (j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characters. Unigram (\u00a2",
"sec_num": "1."
},
{
"text": "\u00a3 \u00a7 \u00a6 \u00a9 l \u00a3 \u00a7 \u00a6 \u00a9 , j \u00a3 \u00a6 l \u00a3 \u00a7 \u00a6 , j \u00a3 l \u00a3 , j \u00a3 l \u00a3 , j \u00a3 l \u00a3 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characters. Unigram (\u00a2",
"sec_num": "1."
},
{
"text": "3. Previously estimated tags. (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characters. Unigram (\u00a2",
"sec_num": "1."
},
{
"text": "! \u00a3 \u00a6 \u00a9 , \" \u00a3 \u00a7 \u00a6 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characters. Unigram (\u00a2",
"sec_num": "1."
},
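A small sketch of how the FMM/BMM outputs could be paired into features for the second ME model; the feature-name strings and the boundary padding symbol are illustrative.

```python
# Sketch: building the paired FMM/BMM tag features listed above for position i,
# given per-character tag lists from fmm/bmm (see the earlier sketch).

def mm_features(fmm_tags, bmm_tags, i):
    def at(tags, j):
        return tags[j] if 0 <= j < len(tags) else "#"   # '#' pads the boundary
    return ["FB%d=%s%s" % (o, at(fmm_tags, i + o), at(bmm_tags, i + o))
            for o in (-2, -1, 0, 1, 2)]

# Example from the text: FMM tags "BEBESBE", BMM tags "SBEBEBE".
print(mm_features(list("BEBESBE"), list("SBEBEBE"), 0))
# ['FB-2=##', 'FB-1=##', 'FB0=BS', 'FB1=EB', 'FB2=BE']
```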
{
"text": "After the character tagging, we apply the same rules for inconsistency tagging (Table 2) , and finally convert the characters back to words.",
"cite_spans": [],
"ref_spans": [
{
"start": 79,
"end": 88,
"text": "(Table 2)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Characters. Unigram (\u00a2",
"sec_num": "1."
},
{
"text": "We have run our experiments on SIGHAN Bakeoff data. There are 4 datasets provided by different instituitions. The details of the datasets are shown in Table 3 . The unknown word rates vary by datasets, CHTB has the most unknown words whereas AS has the least. The size of training data is different too, AS has as big as 5.8M but HK has only 240K words. These are the main factors that affect the accuracy of the overall segmentation. ",
"cite_spans": [],
"ref_spans": [
{
"start": 151,
"end": 158,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "In Section 4.1, we have extracted the unknown words from the testing data. Table 4 shows the accuracy of the unknown word extraction. We show only the results on distinct words. ",
"cite_spans": [],
"ref_spans": [
{
"start": 75,
"end": 82,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Unknown Word Extraction",
"sec_num": "5.1"
},
{
"text": "We can see from this table that after the pruning, the recalls of unknown words drop, but the precisions increase. However, the balance F-measures have increased after pruning. As we shall see in the next section, although the recalls of unknown words drop, the overall segmentation by this pruning step improves. Table 4 : Accuracy of Unknown Word Extraction (distinct words only)",
"cite_spans": [],
"ref_spans": [
{
"start": 314,
"end": 321,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Unknown Word Extraction",
"sec_num": "5.1"
},
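A small sketch of how the recall, precision and balanced F-measure over distinct extracted unknown words could be computed; the word sets in the example are placeholders, not the paper's data.

```python
# Sketch: scoring distinct extracted unknown words against the gold unknown
# words (recall, precision and balanced F-measure), as reported in Table 4.

def unknown_word_scores(gold_unknowns, extracted):
    gold, extracted = set(gold_unknowns), set(extracted)
    correct = len(gold & extracted)
    recall = correct / len(gold) if gold else 0.0
    precision = correct / len(extracted) if extracted else 0.0
    f = (2 * precision * recall / (precision + recall)) if correct else 0.0
    return recall, precision, f

# Placeholder example: pruning trades recall for precision, and F increases.
gold = {"w1", "w2", "w3", "w4"}
before_pruning = {"w1", "w2", "w3", "x1", "x2", "x3"}
after_pruning = {"w1", "w2"}
print(unknown_word_scores(gold, before_pruning))  # (0.75, 0.5, 0.6)
print(unknown_word_scores(gold, after_pruning))   # (0.5, 1.0, 0.666...)
```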
{
"text": "The evaluation of word segmentation is done by using the tool provided in SIGHAN Bakeoff (Sproat and Emerson, 2003) . We evaluate the performance in recall, precision and F-measure for overall segmentation, and recall for unknown words and known words. Figure 2 compares our results with the bakeoff results. Overall, we have out-performed almost all the participants except for CHTB dataset. In addition, our method has the highest recall for unknown words compared with others. Table 5 shows the detail results of our system 3 . We compare the performance on with or without unknown word detection, and with or without pruning. Apparently, we need unknown word detection to improve the overall segmentation. However, while the accuracy of unknown word increases (as in the row 'With unkword detection'), the accuracy of known words drops. In the next row, we have shown that re-segmentaion using the disambiguation model improves the results, as those missing words (found in one context but not the other) can be corrected. Finally, by applying the pruning step, we have again improved on the overall segmentation accuracy because some of the invalid unknown words have been eliminated. However, if the unknown word rate is low, such as AS corpus, it would be better if all the detected words are used for re-segmentation because the pruning steps eliminate too many valid unknown words (5%) relatively. We have also compared our results with some recent works. As mentioned in the earlier section of this paper, our method is based on the report by (Goh et al., 2004) . They use a combination of maximum matching algorithm and the state-of-the-art classifier, support vector machines, for segmentation. Our method has done a lot better than theirs as we can cover better the problem of known words and unknown words. The most recent works on segmentation are reported by (Nakagawa, 2004) and (Peng et al., 2004) . (Nakagawa, 2004) uses word-level and character-level information for segmentation which is similar to our method. He uses a markov model for word-level probability, and maximum entropy model for character-level probability. Then he builds a lattice based on both probabilities and solves the problem by using Viterbi algorithm. Both word-level and character-level are used at the same time, and both known",
"cite_spans": [
{
"start": 89,
"end": 115,
"text": "(Sproat and Emerson, 2003)",
"ref_id": "BIBREF9"
},
{
"start": 1553,
"end": 1571,
"text": "(Goh et al., 2004)",
"ref_id": "BIBREF3"
},
{
"start": 1875,
"end": 1891,
"text": "(Nakagawa, 2004)",
"ref_id": "BIBREF5"
},
{
"start": 1896,
"end": 1915,
"text": "(Peng et al., 2004)",
"ref_id": "BIBREF7"
},
{
"start": 1918,
"end": 1934,
"text": "(Nakagawa, 2004)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 253,
"end": 261,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 480,
"end": 487,
"text": "Table 5",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Segmentation Result",
"sec_num": "5.2"
},
{
"text": "Note that we have converted some ascii characters (such as numbers and alphabets) to GB or Big5 code before processing. This step will automatically make some unknown words become known words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "word and unknown word segmentation are conducted simulteneously. His method has achieved better results than ours. The way that he uses the word-level (HMM) and character-level (ME) information in the lattice is much more efficient than our method. (Peng et al., 2004) use conditional random fields (CRF) for word segmentation. CRFs consider richer domain knowledge and are discriminatively-trained, which are often more accurate. However, in their experiment, the results shown do not out-perform our method. This could be because it is just a first trial on using CRFs for word segmentation and further survey on the feature sets is probably needed.",
"cite_spans": [
{
"start": 249,
"end": 268,
"text": "(Peng et al., 2004)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "We proposed two steps of pruning in this paper. The first one is by looking at the adjacent words and the internal components. Although this method eliminates a lot of invalid unknown words but it has also eliminated some valid unknown words. For example, in the phrase \"z f { } | \" (how knowledgeable), it has been segmented as \"z / { / | / /\" where \" \" (knowledge) is marked as an unknown word. However, since the adjacent word \"| \" (big) forms a known word \"| \" (university) with part of the unknown word, it has been deleted from the unknown word list.The second step is by using word formation power. This method can delete those words that have very low probability to be words, but low probability does not mean invalid words most of the time. For example, the WFP of \" \" (well-developed together) is 0.211, \" } \" (celebrate together) is 0.186 and \"\" (discuss together ) is 0.143. Although \" \" has the smallest WFP, it is the only valid word amongst the three. Therefore, we still need to survey a better way for word pruning.As we said before, the same unknown word may be found in one context but not the other. Furthermore, the same unknown word is detected differently in different context. Let's take the person name \" \" (Sun yanzi) as an example. It has been segmented differently in our unknown word model. Let's consider the three phrases below.1. \" / / / / /\" (Let Sun yanzi be different from others)2. \" / / / /\" (Sun yanzi is clean and influential)3. \" $ / / / /\" (after Sun yanzi became famous).Our method has correctly detected the person name in the first phrase only. If we can determine that the first one is actually a correct one, then we would be able to delete those that are invalid such as in the second and third phrase. This will help to detect the words that occur frequently in the text, such as person name, place name and etc. Previous research has been done by (Shen et al., 1998) using local statistics.",
"cite_spans": [
{
"start": 1897,
"end": 1916,
"text": "(Shen et al., 1998)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis and Discussion",
"sec_num": "6"
},
{
"text": "In this paper, we have shown that a post-processing of unknown word detection is necessary in order to improve the accuracy of segmentation. In previous work, unknown word detection has been part of the segmentation process, but while the accuracy of unknown word increases, it has caused more errors on known words. We introduce a pruning step for eliminating invalid unknown words, so that we could increase the accuracy of known words. Although our pruning method may not be so effective in selecting valid unknown words (only around 60% precision), it has proved that by having a pruning step, it really helps in improving the overall segmentation results.As a conclusion, detecting unknown word during the segmentation process may cause more errors on known words. A separate process for unknown word detection could help to increase the lexicon and improve on the segmentation accuracy. However, more survey and research are needed to select the correct unknown words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Combining Segmenter and Chunker for Chinese Word Segmentation",
"authors": [
{
"first": "Masayuki",
"middle": [],
"last": "Asahara",
"suffix": ""
},
{
"first": "Chooi-Ling",
"middle": [],
"last": "Goh",
"suffix": ""
},
{
"first": "Xiaojie",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of Second SIGHAN Workshop",
"volume": "",
"issue": "",
"pages": "144--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Asahara, Masayuki, Chooi-Ling Goh, Xiaojie Wang, and Yuji Matsumoto. 2003. Combining Segmenter and Chunker for Chinese Word Segmentation. In Proceedings of Second SIGHAN Workshop, pages 144-147.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An Integrated Approach for Chinese Word Segmentation",
"authors": [
{
"first": "Guohong",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "K",
"middle": [
"K"
],
"last": "Luke",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of PACLIC 17",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fu, Guohong and K.K. Luke. 2003. An Integrated Approach for Chinese Word Segmentation. In Proceedings of PACLIC 17.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Chinese Unknown Word Identification Using Class-based LM",
"authors": [
{
"first": "Guohong",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "K",
"middle": [
"K"
],
"last": "Luke",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of IJCNLP",
"volume": "",
"issue": "",
"pages": "262--269",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fu, Guohong and K.K. Luke. 2004. Chinese Unknown Word Identification Using Class-based LM. In Proceed- ings of IJCNLP, pages 262-269.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Chinese Word Segmentation by Classification of Characters",
"authors": [
{
"first": "Chooi-Ling",
"middle": [],
"last": "Goh",
"suffix": ""
},
{
"first": "Masayuki",
"middle": [],
"last": "Asahara",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of Third SIGHAN Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goh, Chooi-Ling, Masayuki Asahara, and Yuji Matsumoto. 2004. Chinese Word Segmentation by Classification of Characters. In Proceedings of Third SIGHAN Workshop.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Bottom-up Merging Algorithm for Chinese Unknown Word Extraction",
"authors": [
{
"first": "Wei-Yun",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Keh-Jiann",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of Second SIGHAN Workshop",
"volume": "",
"issue": "",
"pages": "31--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ma, Wei-Yun and Keh-Jiann Chen. 2003. A Bottom-up Merging Algorithm for Chinese Unknown Word Extrac- tion. In Proceedings of Second SIGHAN Workshop, pages 31-38.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Chinese and Japanese Word Segmentation Using Word-Level and Character-Level Information",
"authors": [
{
"first": "Tetsuji",
"middle": [],
"last": "Nakagawa",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "466--472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nakagawa, Tetsuji. 2004. Chinese and Japanese Word Segmentation Using Word-Level and Character-Level Information. In Proceedings of COLING, pages 466-472.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Unknown Word Detection and Segmentation of Chinese Using Statistical and Heuristic Knowledge",
"authors": [
{
"first": "Jian-Yun",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Marie-Louise",
"middle": [],
"last": "Hannan",
"suffix": ""
},
{
"first": "Wanying",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of COLIPS",
"volume": "5",
"issue": "",
"pages": "47--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nie, Jian-Yun, Marie-Louise Hannan, and Wanying Jin. 1995. Unknown Word Detection and Segmentation of Chinese Using Statistical and Heuristic Knowledge. Communications of COLIPS, Vol.5:47-57.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Chinese Segmentation and New Word Detection using Conditional Random Felds",
"authors": [
{
"first": "Fuchun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Fangfang",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "562--568",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng, Fuchun, Fangfang Feng, and Andrew McCallum. 2004. Chinese Segmentation and New Word Detection using Conditional Random Felds. In Proceedings of COLING, pages 562-568.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The application & implementation of local statistics in Chinese unknown word identification",
"authors": [
{
"first": "Dayang",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Changning",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 1998,
"venue": "Communications of COLIPS",
"volume": "8",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shen, Dayang, Maosong Sun, and Changning Huang. 1998. The application & implementation of local statistics in Chinese unknown word identification. Communications of COLIPS, Vol.8.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The First International Chinese Word Segmentation Bakeoff",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Sproat",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Emerson",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of Second SIGHAN Workshop",
"volume": "",
"issue": "",
"pages": "133--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sproat, Richard and Thomas Emerson. 2003. The First International Chinese Word Segmentation Bakeoff. In Proceedings of Second SIGHAN Workshop, pages 133-143.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Combining Classifiers for Chinese Word Segmentation",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"P"
],
"last": "Converse",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of First SIGHAN Workshop",
"volume": "",
"issue": "",
"pages": "57--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xue, Nianwen and Susan P. Converse. 2002. Combining Classifiers for Chinese Word Segmentation. In Proceed- ings of First SIGHAN Workshop, pages 57-63.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "HHMM-based Chinese Lexical Analyzer ICTCLAS",
"authors": [
{
"first": "Hua-Ping",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hong-Kui",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "De-Yi",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of Second SIGHAN Workshop",
"volume": "",
"issue": "",
"pages": "184--187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, Hua-Ping, Hong-Kui Yu, De-Yi Xiong, and Qun Liu. 2003. HHMM-based Chinese Lexical Analyzer ICTCLAS. In Proceedings of Second SIGHAN Workshop, pages 184-187.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Figure 1: Segmentation Flow",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "Compare with Bakeoff Results (Overall F-measure and Unknown Word Recall)",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"text": "based on these feature templates. The B[egin], I[ntermediate], E[nd], S[ingle] tags are called LL (left boundary), MM (middle), RR (right boundary) and LR (single-character word) in(Xue and Converse, 2002).",
"html": null,
"content": "<table><tr><td colspan=\"2\">Tag Description</td></tr><tr><td>S</td><td>one-character word</td></tr><tr><td>B</td><td>first character in a multi-character word</td></tr><tr><td>I</td><td>intermediate character in a multi-character word (for words longer than two characters)</td></tr><tr><td>E</td><td>last character in a multi-character word</td></tr></table>",
"type_str": "table",
"num": null
},
"TABREF1": {
"text": "Position tags in a word",
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF2": {
"text": "Correction on output tags",
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF5": {
"text": "Bakeoff Data",
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF10": {
"text": "Segmentation Results of Joint Method",
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null
}
}
}
}