{ "paper_id": "P03-1038", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:13:45.104948Z" }, "title": "Self-Organizing Markov Models and Their Application to Part-of-Speech Tagging", "authors": [ { "first": "Jin-Dong", "middle": [], "last": "Kim", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Tokyo", "location": {} }, "email": "jdkim@is.s.u-tokyo.ac.jp" }, { "first": "Hae-Chang", "middle": [], "last": "Rim", "suffix": "", "affiliation": { "laboratory": "", "institution": "Korea University", "location": {} }, "email": "rim@nlp.korea.ac.kr" }, { "first": "Jun", "middle": [ "'" ], "last": "Ich Tsujii", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Tokyo", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents a method to develop a class of variable memory Markov models that have higher memory capacity than traditional (uniform memory) Markov models. The structure of the variable memory models is induced from a manually annotated corpus through a decision tree learning algorithm. A series of comparative experiments show the resulting models outperform uniform memory Markov models in a part-of-speech tagging task.", "pdf_parse": { "paper_id": "P03-1038", "_pdf_hash": "", "abstract": [ { "text": "This paper presents a method to develop a class of variable memory Markov models that have higher memory capacity than traditional (uniform memory) Markov models. The structure of the variable memory models is induced from a manually annotated corpus through a decision tree learning algorithm. A series of comparative experiments show the resulting models outperform uniform memory Markov models in a part-of-speech tagging task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Many major NLP tasks can be regarded as problems of finding an optimal valuation for random processes. For example, for a given word sequence, part-of-speech (POS) tagging involves finding an optimal sequence of syntactic classes, and NP chunking involves finding IOB tag sequences (each of which represents the inside, outside and beginning of noun phrases respectively).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Many machine learning techniques have been developed to tackle such random process tasks, which include Hidden Markov Models (HMMs) (Rabiner, 1989) , Maximum Entropy Models (MEs) (Ratnaparkhi, 1996) , Support Vector Machines (SVMs) (Vapnik, 1998) , etc.", "cite_spans": [ { "start": 132, "end": 147, "text": "(Rabiner, 1989)", "ref_id": "BIBREF0" }, { "start": 179, "end": 198, "text": "(Ratnaparkhi, 1996)", "ref_id": "BIBREF1" }, { "start": 232, "end": 246, "text": "(Vapnik, 1998)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Among them, SVMs have high memory capacity and show high performance, especially when the target classification requires the consideration of various features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "On the other hand, HMMs have low memory capacity but they work very well, especially when the target task involves a series of classifications that are tightly related to each other and requires global optimization of them. 
As for POS tagging, recent comparisons (Brants, 2000; Schr\u00f6der, 2001) show that HMMs work better than other models when they are combined with good smoothing techniques and with handling of unknown words.", "cite_spans": [ { "start": 263, "end": 277, "text": "(Brants, 2000;", "ref_id": "BIBREF5" }, { "start": 278, "end": 293, "text": "Schr\u00f6der, 2001)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While global optimization is the strong point of HMMs, developers often complain that it is difficult to make HMMs incorporate various features and to improve them beyond given performances. For example, we often find that in some cases a certain lexical context can improve the performance of an HMM-based POS tagger, but incorporating such additional features is not easy and it may even degrade the overall performance. Because Markov models have the structure of tightly coupled states, an arbitrary change without elaborate consideration can spoil the overall structure. This paper presents a way of utilizing statistical decision trees to systematically raise the memory capacity of Markov models and effectively to make Markov models be able to accommodate various features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The tagging model is probabilistically defined as finding the most probable tag sequence when a word sequence is given (equation (1)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Underlying Model", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "T (w 1,k ) = arg max t 1,k P (t 1,k |w 1,k )", "eq_num": "(1)" } ], "section": "Underlying Model", "sec_num": "2" }, { "text": "= arg max", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Underlying Model", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "t 1,k P (t 1,k )P (w 1,k |t 1,k )", "eq_num": "(2)" } ], "section": "Underlying Model", "sec_num": "2" }, { "text": "\u2248 arg max", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Underlying Model", "sec_num": "2" }, { "text": "t 1,k k i=1 P (t i |t i\u22121 )P (w i |t i ) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Underlying Model", "sec_num": "2" }, { "text": "By applying Bayes' formula and eliminating a redundant term not affecting the argument maximization, we can obtain equation 2which is a combination of two separate models: the tag language model, P (t 1,k ) and the tag-to-word translation model, P (w 1,k |t 1,k ). Because the number of word sequences, w 1,k and tag sequences, t 1,k is infinite, the model of equation 2is not computationally tractable. Introduction of Markov assumption reduces the complexity of the tag language model and independent assumption between words makes the tag-to-word translation model simple, which result in equation 3representing the well-known Hidden Markov Model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Underlying Model", "sec_num": "2" }, { "text": "Let's focus on the Markov assumption which is made to reduce the complexity of the original tagging problem and to make the tagging problem tractable. 
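To make the factorization in equation (3) concrete, here is a minimal sketch (not the authors' implementation) of how a candidate tag sequence would be scored under a first-order HMM once the transition model P(t_i|t_{i-1}) and the tag-to-word model P(w_i|t_i) have been estimated. The table names `trans` and `emit` and the toy probabilities are purely illustrative.

```python
import math

def score_tag_sequence(words, tags, trans, emit, start="$"):
    """Log-probability of a tag sequence under equation (3):
    the product over i of P(t_i | t_{i-1}) * P(w_i | t_i),
    with '$' standing for the sentence-initial state.
    `trans[(prev_tag, tag)]` and `emit[(tag, word)]` are assumed to be
    pre-estimated probability tables (illustrative names)."""
    logp = 0.0
    prev = start
    for w, t in zip(words, tags):
        logp += math.log(trans[(prev, t)]) + math.log(emit[(t, w)])
        prev = t
    return logp

# Toy usage with made-up probabilities:
trans = {("$", "N"): 0.5, ("N", "V"): 0.4}
emit = {("N", "dogs"): 0.01, ("V", "run"): 0.02}
print(score_tag_sequence(["dogs", "run"], ["N", "V"], trans, emit))
```

Tagging then amounts to searching for the tag sequence that maximizes this score, which is what the Viterbi algorithm discussed in Section 5.1 does efficiently.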
We can imagine the following process through which the Markov assumption can be introduced in terms of context classification:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of Context Classification", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (T = t 1,k ) = k i=1 P (t i |t 1,i\u22121 ) (4) \u2248 k i=1 P (t i |\u03a6(t 1,i\u22121 )) (5) \u2248 k i=1 P (t i |t i\u22121 )", "eq_num": "(6)" } ], "section": "Effect of Context Classification", "sec_num": "3" }, { "text": "In equation 5, a classification function \u03a6(t 1,i\u22121 ) is introduced, which is a mapping of infinite contextual patterns into a set of finite equivalence classes. By defining the function as follows we can get equation (6) which represents a widely-used bi-gram model:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of Context Classification", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03a6(t 1,i\u22121 ) \u2261 t i\u22121", "eq_num": "(7)" } ], "section": "Effect of Context Classification", "sec_num": "3" }, { "text": "Equation (7) classifies all the contextual patterns ending in same tags into the same classes, and is equivalent to the Markov assumption.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of Context Classification", "sec_num": "3" }, { "text": "The assumption or the definition of the above classification function is based on human intuition. Figure 2 : Effect of context with and without lexical information Although this simple definition works well mostly, because it is not based on any intensive analysis of real data, there is room for improvement. Figure 1 and 2 illustrate the effect of context classification on the compiled distribution of syntactic classes, which we believe provides the clue to the improvement.", "cite_spans": [], "ref_spans": [ { "start": 99, "end": 107, "text": "Figure 2", "ref_id": null }, { "start": 311, "end": 319, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Effect of Context Classification", "sec_num": "3" }, { "text": "prep P | * ( ) in' ' , | prep P * ( ) with' ' , | prep P * ( ) out' ' , | prep P *", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of Context Classification", "sec_num": "3" }, { "text": "Among the four distributions showed in Figure 1 , the top one illustrates the distribution of syntactic classes in the Brown corpus that appear after all the conjunctions. In this case, we can say that we are considering the first order context (the immediately preceding words in terms of part-of-speech). The following three ones illustrates the distributions collected after taking the second order context into consideration. In these cases, we can say that we have extended the context into second order or we have classified the first order context classes again into second order context classes. It shows that distributions like P ( * |vb, conj) and P ( * |vbp, conj) are very different from the first order ones, while distributions like P ( * |f w, conj) are not. Figure 2 shows another way of context extension, so called lexicalization. Here, the initial first order context class (the top one) is classified again by referring the lexical information (the following three ones). 
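Before returning to Figures 1 and 2, it may help to see the classification function Φ of equations (5)-(7) written out explicitly. The sketch below (illustrative signatures only, not the paper's code) contrasts the bi-gram definition of equation (7) with a uniformly extended and a lexicalized definition of the kind used later in equations (10)-(13).

```python
# Each function maps an unbounded history onto a finite equivalence class.
# Histories are given as lists of (word, tag) pairs, oldest first;
# these signatures are illustrative, not the paper's implementation.

def phi_bigram(history):
    # Equation (7): keep only the last tag -> first-order Markov model.
    return (history[-1][1],)

def phi_trigram(history):
    # Uniform (homogeneous) extension: keep the last two tags.
    return tuple(t for _, t in history[-2:])

def phi_lexicalized(history, lexicalized_tags={"prep"}):
    # Heterogeneous extension: also keep the word form, but only for
    # tags where lexical identity matters (e.g. prepositions, Figure 2).
    word, tag = history[-1]
    return (tag, word) if tag in lexicalized_tags else (tag, None)

history = [("stand", "vb"), ("out", "prep")]
print(phi_bigram(history))       # ('prep',)
print(phi_trigram(history))      # ('vb', 'prep')
print(phi_lexicalized(history))  # ('prep', 'out')
```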
We see that the distribution after the preposition, out is quite different from distribution after other prepositions.", "cite_spans": [], "ref_spans": [ { "start": 39, "end": 47, "text": "Figure 1", "ref_id": null }, { "start": 774, "end": 782, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Effect of Context Classification", "sec_num": "3" }, { "text": "From the above observations, we can see that by applying Markov assumptions we may miss much useful contextual information, or by getting a better context classification we can build a better context model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of Context Classification", "sec_num": "3" }, { "text": "One of the straightforward ways of context extension is extending context uniformly. Tri-gram tagging models can be thought of as a result of the uniform extension of context from bi-gram tagging models. TnT (Brants, 2000) based on a second order HMM, is an example of this class of models and is accepted as one of the best part-of-speech taggers used around.", "cite_spans": [ { "start": 208, "end": 222, "text": "(Brants, 2000)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "4" }, { "text": "The uniform extension can be achieved (relatively) easily, but due to the exponential growth of the model size, it can only be performed in restrictive a way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "4" }, { "text": "Another way of context extension is the selective extension of context. In the case of context extension from lower context to higher like the examples in figure 1, the extension involves taking more information about the same type of contextual features. We call this kind of extension homogeneous context extension. (Brants, 1998) presents this type of context extension method through model merging and splitting, and also prediction suffix tree learning (Sch\u00fctze and Singer, 1994; D. Ron et. al, 1996) is another well-known method that can perform homogeneous context extension.", "cite_spans": [ { "start": 318, "end": 332, "text": "(Brants, 1998)", "ref_id": "BIBREF4" }, { "start": 458, "end": 484, "text": "(Sch\u00fctze and Singer, 1994;", "ref_id": "BIBREF6" }, { "start": 485, "end": 505, "text": "D. Ron et. al, 1996)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "4" }, { "text": "On the other hand, figure 2 illustrates heterogeneous context extension, in other words, this type of extension involves taking more information about other types of contextual features. (Kim et. al, 1999) and (Pla and Molina, 2001 ) present this type of context extension method, so called selective lexicalization.", "cite_spans": [ { "start": 187, "end": 205, "text": "(Kim et. al, 1999)", "ref_id": "BIBREF8" }, { "start": 210, "end": 231, "text": "(Pla and Molina, 2001", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "4" }, { "text": "The selective extension can be a good alternative to the uniform extension, because the growth rate of the model size is much smaller, and thus various contextual features can be exploited. 
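The effect shown in Figures 1 and 2 is straightforward to reproduce on any tagged corpus: compile, for each context class, the empirical distribution of the current tag and compare the classes. A rough sketch under assumed data structures (sentences as lists of (word, tag) pairs; not the paper's code):

```python
from collections import Counter, defaultdict

def context_distributions(sentences, classify):
    """Compile P(t_i | class of preceding context) from a tagged corpus.
    `sentences` is a list of [(word, tag), ...]; `classify` maps a history
    (a list of (word, tag) pairs) to a context class."""
    counts = defaultdict(Counter)
    for sent in sentences:
        for i, (_, tag) in enumerate(sent):
            if i == 0:
                continue  # skip sentence-initial positions for brevity
            counts[classify(sent[:i])][tag] += 1
    dists = {}
    for cls, counter in counts.items():
        total = sum(counter.values())
        dists[cls] = {t: c / total for t, c in counter.items()}
    return dists

# Passing first-order versus extended classifiers (like the phi_* sketches
# earlier) gives exactly the kind of comparison shown in Figures 1 and 2,
# e.g. P(*|conj) against P(*|vb, conj) or P(*|prep) against P(*|prep, 'out').
```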
In the follow-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "4" }, { "text": "V V P P N N C C $ $ $ $ C C N N P P V V P-1 P-1 $ C N P V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "4" }, { "text": "Figure 3: a Markov model and its equivalent decision tree ing sections, we describe a novel method of selective extension of context which performs both homogeneous and heterogeneous extension simultaneously.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "4" }, { "text": "Our approach to the selective context extension is making use of the statistical decision tree framework. The states of Markov models are represented in statistical decision trees, and by growing the trees the context can be extended (or the states can be split).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Self-Organizing Markov Models", "sec_num": "5" }, { "text": "We have named the resulting models Self-Organizing Markov Models to reflect their ability to automatically organize the structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Self-Organizing Markov Models", "sec_num": "5" }, { "text": "The decision tree is a well known structure that is widely used for classification tasks. When there are several contextual features relating to the classification of a target feature, a decision tree organizes the features as the internal nodes in a manner where more informative features will take higher levels, so the most informative feature will be the root node. Each path from the root node to a leaf node represents a context class and the classification information for the target feature in the context class will be contained in the leaf node 1 . In the case of part-of-speech tagging, a classification will be made at each position (or time) of a word sequence, where the target feature is the syntactic class of the word at current position (or time) and the contextual features may include the syntactic", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Decision Tree Representation of Markov Models", "sec_num": "5.1" }, { "text": "V V P,* P,* N N C C $ $ $ $ C C N N W-1 W-1 V V P-1 P-1 $ C N P V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Decision Tree Representation of Markov Models", "sec_num": "5.1" }, { "text": "P,out P,out P,* P,* P,out P,out Figure 4 : a selectively lexicalized Markov model and its equivalent decision tree", "cite_spans": [], "ref_spans": [ { "start": 32, "end": 40, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Statistical Decision Tree Representation of Markov Models", "sec_num": "5.1" }, { "text": "V V P,* P,* N N (N)C (N)C $ $ $ $ P-2 P-2 N N W-1 W-1 V V P-1 P-1 $ C N P V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Decision Tree Representation of Markov Models", "sec_num": "5.1" }, { "text": "P,out P,out P,* P,* P,out P,out -1) is placed on the root, each branch represents a result of the test (which is labeled on the arc), and the corresponding leaf node contains the probabilistic distribution of the syntactic classes for the current position 2 . The example shown in figure 4 involves a further classification of context. 
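The decision-tree representation of Figures 3-5 can be pictured with a small node type: internal nodes test one contextual feature (P-1, W-1, P-2, ...), each root-to-leaf path is a context class, and leaves hold the tag distribution for that class. The sketch below uses invented field names and toy probabilities; it only illustrates the structure, not the paper's implementation.

```python
class TreeNode:
    """One node of a statistical decision tree over contextual features.
    Internal nodes test a feature; leaves hold a distribution over the
    current tag. Field names are illustrative, not from the paper."""
    def __init__(self, feature=None, children=None, distribution=None):
        self.feature = feature            # e.g. "P-1" (previous tag)
        self.children = children or {}    # feature value -> child node
        self.distribution = distribution  # dict tag -> prob, at leaves

    def classify(self, context):
        """Follow the tests until a leaf; `context` maps feature names
        (like 'P-1', 'W-1') to their values at the current position."""
        node = self
        while node.distribution is None:
            value = context.get(node.feature)
            node = node.children.get(value, node.children.get("*"))
        return node.distribution

# A fragment of Figure 4 with toy numbers: the P-1 = P (preposition) branch
# is split again on the previous word, separating 'out' from other prepositions.
leaf_out = TreeNode(distribution={"N": 0.2, "V": 0.5, "P": 0.3})
leaf_other = TreeNode(distribution={"N": 0.7, "V": 0.1, "P": 0.2})
prep_node = TreeNode(feature="W-1", children={"out": leaf_out, "*": leaf_other})
root = TreeNode(feature="P-1", children={"P": prep_node})
print(root.classify({"P-1": "P", "W-1": "out"}))  # distribution of leaf_out
```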
On the left hand side, it is represented in terms of state splitting, while on the right hand side in terms of context extension (lexicalization), where a context class representing contextual patterns ending in P (a preposition) is extended by referring the lexical form and is classified again into the preposition, out and other prepositions. Figure 5 shows another further classification of context. It involves a homogeneous extension of context while the previous one involves a heterogeneous extension. Unlike prediction suffix trees which grow along an implicitly fixed order, decision trees don't presume any implicit order between contextual features and thus naturally can accommodate various features having no underlying order. In order for a statistical decision tree to be a Markov model, it must meet the following restrictions:", "cite_spans": [], "ref_spans": [ { "start": 32, "end": 35, "text": "-1)", "ref_id": null }, { "start": 682, "end": 690, "text": "Figure 5", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Statistical Decision Tree Representation of Markov Models", "sec_num": "5.1" }, { "text": "(V)C (V)C (*)C (*)C (*)C (*)C (N)C (N)C (V)C (V)C", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Decision Tree Representation of Markov Models", "sec_num": "5.1" }, { "text": "\u2022 There must exist at least one contextual feature that is homogeneous with the target feature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Decision Tree Representation of Markov Models", "sec_num": "5.1" }, { "text": "\u2022 When the target feature at a certain time is classified, all the requiring context features must be visible", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Decision Tree Representation of Markov Models", "sec_num": "5.1" }, { "text": "The first restriction states that in order to be a Markov model, there must be inter-relations between the target features at different time. The second restriction explicitly states that in order for the decision tree to be able to classify contextual patterns, all the context features must be visible, and implicitly states that homogeneous context features that appear later than the current target feature cannot be contextual features. Due to the second restriction, the Viterbi algorithm can be used with the self-organizing Markov models to find an optimal sequence of tags for a given word sequence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Decision Tree Representation of Markov Models", "sec_num": "5.1" }, { "text": "Self-organizing Markov models can be induced from manually annotated corpora through the SDTL algorithm (algorithm 1) we have designed. It is a variation of ID3 algorithm (Quinlan, 1986) . SDTL is a greedy algorithm where at each time of the node making phase the most informative feature is selected (line 2), and it is a recursive algorithm in the sense that the algorithm is called recursively to make child nodes (line 3), Though theoretically any statistical decision tree growing algorithms can be used to train selforganizing Markov models, there are practical problems we face when we try to apply the algorithms to language learning problems. 
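Because the second restriction guarantees that every contextual feature is already observed when position i is tagged, the usual Viterbi recursion carries over. The sketch below assumes, for simplicity, a model whose context uses only the previous tag and the previous word, so the search state is just the previous tag (a P-2 feature would require tag-pair states); `classify` is a tree-backed context model like the sketch above, `emit` an emission table, and smoothing is reduced to a small floor constant. This is an illustration of the claim, not the paper's decoder.

```python
import math

def viterbi(words, tagset, context_model, emit, start="$"):
    """Viterbi decoding for a self-organizing model whose context class
    depends only on the previous tag (P-1) and previous word (W-1)."""
    V = [{start: (0.0, [])}]          # state -> (log score, best path)
    for i, w in enumerate(words):
        prev_word = words[i - 1] if i > 0 else start
        column = {}
        for t in tagset:
            best = None
            for prev, (score, path) in V[-1].items():
                dist = context_model.classify({"P-1": prev, "W-1": prev_word})
                p = dist.get(t, 1e-12) * emit.get((t, w), 1e-12)
                cand = (score + math.log(p), path + [t])
                if best is None or cand[0] > best[0]:
                    best = cand
            column[t] = best
        V.append(column)
    return max(V[-1].values())[1]     # best-scoring tag sequence
```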
One of the main obstacles is the fact that features used for language learning often have huge sets of values, which cause intensive fragmentation of the training corpus along with the growing process and eventually raise the sparse data problem.", "cite_spans": [ { "start": 171, "end": 186, "text": "(Quinlan, 1986)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Learning Self-Organizing Markov Models", "sec_num": "5.2" }, { "text": "To deal with this problem, the algorithm incorporates a value selection mechanism (line 1) where only meaningful values are selected into a reduced value set. The meaningful values are statistically defined as follows: if the distribution of the target feature varies significantly by referring to the value v, v is accepted as a meaningful value. We adopted the \u03c7 2 -test to determine the difference between the distributions of the target feature before and after referring to the value v. The use of \u03c7 2 -test enables us to make a principled decision about the threshold based on a certain confidence level 3 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Self-Organizing Markov Models", "sec_num": "5.2" }, { "text": "To evaluate the contribution of contextual features to the target classification (line 2), we adopted Lopez distance (L\u00f3pez, 1991) . While other measures including Information Gain or Gain Ratio (Quinlan, 1986) also can be used for this purpose, the Lopez distance has been reported to yield slightly better results (L\u00f3pez, 1998) .", "cite_spans": [ { "start": 117, "end": 130, "text": "(L\u00f3pez, 1991)", "ref_id": "BIBREF11" }, { "start": 195, "end": 210, "text": "(Quinlan, 1986)", "ref_id": "BIBREF10" }, { "start": 316, "end": 329, "text": "(L\u00f3pez, 1998)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Learning Self-Organizing Markov Models", "sec_num": "5.2" }, { "text": "The probabilistic distribution of the target feature estimated on a node making phase (line 4) is smoothed by using Jelinek and Mercer's interpolation method (Jelinek and Mercer, 1980) along the ancestor nodes. The interpolation parameters are estimated by deleted interpolation algorithm introduced in (Brants, 2000) .", "cite_spans": [ { "start": 158, "end": 184, "text": "(Jelinek and Mercer, 1980)", "ref_id": "BIBREF13" }, { "start": 303, "end": 317, "text": "(Brants, 2000)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Learning Self-Organizing Markov Models", "sec_num": "5.2" }, { "text": "We performed a series of experiments to compare the performance of self-organizing Markov models with traditional Markov models. Wall Street Journal as contained in Penn Treebank II is used as the reference material. As the experimental task is partof-speech tagging, all other annotations like syntactic bracketing have been removed from the corpus. Every figure (digit) in the corpus has been changed into a special symbol.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "From the whole corpus, every 10'th sentence from the first is selected into the test corpus, and the remaining ones constitute the training corpus. Table 6 shows some basic statistics of the corpora.", "cite_spans": [], "ref_spans": [ { "start": 148, "end": 155, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "We implemented several tagging models based on equation (3). 
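The value-selection step described above (line 1 of SDTL) can be approximated with a standard chi-square goodness-of-fit test: a feature value is kept only if the tag distribution observed with that value departs from the overall tag distribution at the chosen confidence level (95% in the paper). The helper below is a simplified reading of that criterion, not the authors' code, and the toy counts are invented.

```python
from collections import Counter
from scipy.stats import chi2  # used only for the critical value

def is_meaningful(value_counts, overall_counts, confidence=0.95):
    """Keep a feature value if the tag distribution observed with that value
    deviates from the overall tag distribution more than chance allows.
    Both arguments are Counters over tags; a simplified chi-square test."""
    n = sum(value_counts.values())
    total = sum(overall_counts.values())
    stat, cells = 0.0, 0
    for tag, overall in overall_counts.items():
        expected = n * overall / total
        if expected == 0:
            continue
        observed = value_counts.get(tag, 0)
        stat += (observed - expected) ** 2 / expected
        cells += 1
    return stat > chi2.ppf(confidence, max(cells - 1, 1))

# Example: does conditioning on the previous word 'out' change the tag
# distribution enough to keep 'out' as a value? (toy counts)
overall = Counter({"N": 500, "V": 300, "P": 200})
after_out = Counter({"N": 10, "V": 5, "P": 35})
print(is_meaningful(after_out, overall))  # True with these counts
```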
For the tag language model, we used store the probability distribution of t over E ; end return current node; ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "VUG VUG V Figure 6 : Basic statistics of corpora the following 6 approximations:", "cite_spans": [], "ref_spans": [ { "start": 10, "end": 18, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "YQTFU YQTFU YQTFU YQTFU UG P E G P E G U UG P E G P E G U UG P E G P E G U UG P E G P E G U UG VUG", "sec_num": null }, { "text": "P (t 1,k ) \u2248 k i=1 P (t i |t i\u22121 ) (8) \u2248 k i=1 P (t i |t i\u22122,i\u22121 ) (9) \u2248 k i=1 P (t i |\u03a6(t i\u22122,i\u22121 )) (10) \u2248 k i=1 P (t i |\u03a6(t i\u22121 , w i\u22121 )) (11) \u2248 k i=1 P (t i |\u03a6(t i\u22122,i\u22121 , w i\u22121 )) (12) \u2248 k i=1 P (t i |\u03a6(t i\u22122,i\u22121 , w i\u22122,i\u22121 ))(13)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "YQTFU YQTFU YQTFU YQTFU UG P E G P E G U UG P E G P E G U UG P E G P E G U UG P E G P E G U UG VUG", "sec_num": null }, { "text": "Equation 8and 9represent first-and secondorder Markov models respectively. Equation 10\u223c (13) represent self-organizing Markov models at various settings where the classification functions \u03a6(\u2022) are intended to be induced from the training corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "YQTFU YQTFU YQTFU YQTFU UG P E G P E G U UG P E G P E G U UG P E G P E G U UG P E G P E G U UG VUG", "sec_num": null }, { "text": "For the estimation of the tag-to-word translation model we used the following model:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "YQTFU YQTFU YQTFU YQTFU UG P E G P E G U UG P E G P E G U UG P E G P E G U UG P E G P E G U UG VUG", "sec_num": null }, { "text": "P (w i |t i ) = k i \u00d7 P (k i |t i ) \u00d7P (w i |t i ) +(1 \u2212 k i ) \u00d7 P (\u00ack i |t i ) \u00d7P (e i |t i ) (14)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "YQTFU YQTFU YQTFU YQTFU UG P E G P E G U UG P E G P E G U UG P E G P E G U UG P E G P E G U UG VUG", "sec_num": null }, { "text": "Equation 14uses two different models to estimate the translation model. If the word, w i is a known word, k i is set to 1 so the second model is ignored.P means the maximum likelihood probability. P (k i |t i ) is the probability of knownness generated from t i and is estimated by using Good-Turing estimation (Gale and Samson, 1995) . If the word, w i is an unknown word, k i is set to 0 and the first term is ignored. e i represents suffix of w i and we used the last two letters for it.", "cite_spans": [ { "start": 311, "end": 334, "text": "(Gale and Samson, 1995)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "YQTFU YQTFU YQTFU YQTFU UG P E G P E G U UG P E G P E G U UG P E G P E G U UG P E G P E G U UG VUG", "sec_num": null }, { "text": "With the 6 tag language models and the 1 tag-toword translation model, we construct 6 HMM models, among them 2 are traditional first-and secondhidden Markov models, and 4 are self-organizing hidden Markov models. Additionally, we used T3, a tri-gram-based POS tagger in ICOPOST release 1.8.3 for comparison.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "YQTFU YQTFU YQTFU YQTFU UG P E G P E G U UG P E G P E G U UG P E G P E G U UG P E G P E G U UG VUG", "sec_num": null }, { "text": "The overall performances of the resulting models estimated from the test corpus are listed in figure 7. 
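Equation (14) combines two estimators: the maximum-likelihood word model for known words and a suffix model (the last two letters, e_i) for unknown words, each weighted by the Good-Turing estimate of the probability of knownness given the tag. A rough sketch of that combination, with the precomputed tables abstracted away under illustrative names:

```python
def emission_prob(word, tag, word_ml, suffix_ml, p_known, vocabulary):
    """Equation (14), roughly: the ML word model for known words and a
    suffix model (last two letters) for unknown words, each weighted by the
    probability of knownness given the tag (estimated with Good-Turing in
    the paper). `word_ml`, `suffix_ml`, `p_known` and `vocabulary` are
    assumed precomputed; the names are illustrative."""
    if word in vocabulary:                       # k_i = 1: known word
        return p_known.get(tag, 0.5) * word_ml.get((tag, word), 0.0)
    suffix = word[-2:]                           # e_i: last two letters
    return (1.0 - p_known.get(tag, 0.5)) * suffix_ml.get((tag, suffix), 0.0)
```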
From the leftmost column, it shows the model name, the contextual features, the target features, the performance and the model size of our 6 implementations of Markov models and additionally the performance of T3 is shown.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "YQTFU YQTFU YQTFU YQTFU UG P E G P E G U UG P E G P E G U UG P E G P E G U UG P E G P E G U UG VUG", "sec_num": null }, { "text": "Our implementation of the second-order hidden Markov model (HMM-P2) achieved a slightly worse performance than T3, which, we are interpreting, is due to the relatively simple implementation of our unknown word guessing module 4 . While HMM-P2 is a uniformly extended model from HMM-P1, SOHMM-P2 has been selectively extended using the same contextual feature. It is encouraging that the self-organizing model suppress the increase of the model size in half (2,099Kbyte vs 5,630Kbyte) without loss of performance (96.5%).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "YQTFU YQTFU YQTFU YQTFU UG P E G P E G U UG P E G P E G U UG P E G P E G U UG P E G P E G U UG VUG", "sec_num": null }, { "text": "In a sense, the results of incorporating word features (SOHMM-P1W1, SOHMM-P2W1 and SOHMM-P2W2) are disappointing. The improvements of performances are very small compared to the increase of the model size. Our interpretation for the results is that because the distribution of words is huge, no matter how many words the models incorporate into context modeling, only a few of them may actually contribute during test phase. We are planning to use more general features like word class, suffix, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "YQTFU YQTFU YQTFU YQTFU UG P E G P E G U UG P E G P E G U UG P E G P E G U UG P E G P E G U UG VUG", "sec_num": null }, { "text": "Another positive observation is that a homogeneous context extension (SOHMM-P2) and a heterogeneous context extension (SOHMM-P1W1) yielded significant improvements respectively, and the combination (SOHMM-P2W1) yielded even more improvement. This is a strong point of using decision trees rather than prediction suffix trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "YQTFU YQTFU YQTFU YQTFU UG P E G P E G U UG P E G P E G U UG P E G P E G U UG P E G P E G U UG VUG", "sec_num": null }, { "text": "Through this paper, we have presented a framework of self-organizing Markov model learning. The experimental results showed some encouraging aspects of the framework and at the same time showed the direction towards further improvements. Because all the Markov models are represented as decision trees in the framework, the models are hu- ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "W-2, P-2, W-1, P-1 W-1, P-1 P-2, P-1 P-2, P-1 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SOHMM-P2W1", "sec_num": null }, { "text": "W-2, P-2, W-1, P-1 W-1, P-1 P-2, P-1 P-2, P-1 P-1 T0 T0 T0 T0 T0 14,247K SOHMM-P1W1 35, man readable and we are planning to develop editing tools for self-organizing Markov models that help experts to put human knowledge about language into the models. 
By adopting \u03c7 2 -test as the criterion for potential improvement, we can control the degree of context extension based on the confidence level.", "cite_spans": [], "ref_spans": [ { "start": 46, "end": 103, "text": "P-1 T0 T0 T0 T0 T0 14,247K SOHMM-P1W1 35,", "ref_id": null } ], "eq_spans": [], "section": "SOHMM-P2W1", "sec_num": null }, { "text": "While ordinary decision trees store deterministic classification information in their leaves, statistical decision trees store probabilistic distribution of possible decisions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The distribution doesn't appear in the figure explicitly. Just imagine each leaf node has the distribution for the target feature in the corresponding context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used 95% of confidence level to extend context. In other words, only when there are enough evidences for improvement at 95% of confidence level, a context is extended.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "T3 uses a suffix trie for unknown word guessing, while our implementations use just last two letters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The research is partially supported by Information Mobility Project (CREST, JST, Japan) and Genome Information Science Project (MEXT, Japan).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A tutorial on Hidden Markov Models and selected applications in speech recognition", "authors": [ { "first": "L", "middle": [], "last": "Rabiner", "suffix": "" } ], "year": 1989, "venue": "Proceedings of the IEEE", "volume": "77", "issue": "", "pages": "257--285", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Rabiner. 1989. A tutorial on Hidden Markov Mod- els and selected applications in speech recognition. in Proceedings of the IEEE, 77(2):257-285", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A maximum entropy model for part-of-speech tagging", "authors": [ { "first": "A", "middle": [], "last": "Ratnaparkhi", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Proceedings of the Confer- ence on Empirical Methods in Natural Language Pro- cessing (EMNLP).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Statistical Learning Theory", "authors": [ { "first": "V", "middle": [], "last": "Vapnik", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "V. Vapnik. 1998. Statistical Learning Theory. Wiley, Chichester, UK.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "ICOPOST -Ingo's Collection Of POS Taggers", "authors": [ { "first": "I", "middle": [], "last": "Schr\u00f6der", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Schr\u00f6der. 2001. ICOPOST -Ingo's Collection Of POS Taggers. 
In http://nats-www.informatik.uni- hamburg.de/\u223cingo/icopost/.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Estimating HMM Topologies", "authors": [ { "first": "T", "middle": [], "last": "Brants", "suffix": "" } ], "year": 1998, "venue": "The Tbilisi Symposium on Logic, Language and Computation: Selected Papers", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Brants. 1998 Estimating HMM Topologies. In The Tbilisi Symposium on Logic, Language and Computa- tion: Selected Papers.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "TnT -A Statistical Part-of-Speech Tagger", "authors": [ { "first": "T", "middle": [], "last": "Brants", "suffix": "" } ], "year": 2000, "venue": "6'th Applied Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Brants. 2000 TnT -A Statistical Part-of-Speech Tag- ger. In 6'th Applied Natural Language Processing.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Part-of-speech tagging using a variable memory Markov model", "authors": [ { "first": "H", "middle": [], "last": "Sch\u00fctze", "suffix": "" }, { "first": "Y", "middle": [], "last": "Singer", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Sch\u00fctze and Y. Singer. 1994. Part-of-speech tagging using a variable memory Markov model. In Proceed- ings of the Annual Meeting of the Association for Com- putational Linguistics (ACL).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The Power of Amnesia: Learning Probabilistic Automata with Variable Memory Length", "authors": [ { "first": "D", "middle": [], "last": "Ron", "suffix": "" }, { "first": "Y", "middle": [], "last": "Singer", "suffix": "" }, { "first": "N", "middle": [], "last": "Tishby", "suffix": "" } ], "year": 1996, "venue": "Machine Learning", "volume": "25", "issue": "", "pages": "117--149", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Ron, Y. Singer and N. Tishby. 1996 The Power of Amnesia: Learning Probabilistic Automata with Vari- able Memory Length. In Machine Learning, 25(2- 3):117-149.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "HMM Specialization with Selective Lexicalization", "authors": [ { "first": "J.-D", "middle": [], "last": "Kim", "suffix": "" }, { "first": "S.-Z", "middle": [], "last": "Lee", "suffix": "" }, { "first": "H.-C", "middle": [], "last": "Rim", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the Joint SIGDAT Conference on Empirical Methods in NLP and Very Large Corpora(EMNLP/VLC99)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.-D. Kim, S.-Z. Lee and H.-C. Rim. 1999 HMM Specialization with Selective Lexicalization. In Proceedings of the Joint SIGDAT Conference on Empirical Methods in NLP and Very Large Cor- pora(EMNLP/VLC99).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Part-of-Speech Tagging with Lexicalized HMM", "authors": [ { "first": "F", "middle": [], "last": "Pla", "suffix": "" }, { "first": "A", "middle": [], "last": "Molina", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Pla and A. 
Molina. 2001 Part-of-Speech Tagging with Lexicalized HMM. In Proceedings of the Inter- national Conference on Recent Advances in Natural Language Processing(RANLP2001).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Induction of decision trees", "authors": [ { "first": "R", "middle": [], "last": "Quinlan", "suffix": "" } ], "year": 1986, "venue": "Machine Learning", "volume": "1", "issue": "", "pages": "81--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Quinlan. 1986 Induction of decision trees. In Ma- chine Learning, 1(1):81-106.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A Distance-Based Attribute Selection Measure for Decision Tree Induction", "authors": [ { "first": "R", "middle": [], "last": "L\u00f3pez De M\u00e1ntaras", "suffix": "" } ], "year": 1991, "venue": "Machine Learning", "volume": "6", "issue": "", "pages": "81--92", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. L\u00f3pez de M\u00e1ntaras. 1991. A Distance-Based At- tribute Selection Measure for Decision Tree Induction. In Machine Learning, 6(1):81-92.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Comparing Information-theoretic Attribute Selection Measures: A statistical approach", "authors": [ { "first": "R", "middle": [], "last": "L\u00f3pez De M\u00e1ntaras", "suffix": "" }, { "first": "J", "middle": [], "last": "Cerquides", "suffix": "" }, { "first": "P", "middle": [], "last": "Garcia", "suffix": "" } ], "year": 1998, "venue": "Artificial Intelligence Communications", "volume": "11", "issue": "", "pages": "91--100", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. L\u00f3pez de M\u00e1ntaras, J. Cerquides and P. Garcia. 1998. Comparing Information-theoretic Attribute Selection Measures: A statistical approach. In Artificial Intel- ligence Communications, 11(2):91-100.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Interpolated estimation of Markov source parameters from sparse data", "authors": [ { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "R", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1980, "venue": "Proceedings of the Workshop on Pattern Recognition in Practice", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Jelinek and R. Mercer. 1980. Interpolated estimation of Markov source parameters from sparse data. In Pro- ceedings of the Workshop on Pattern Recognition in Practice.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Good-Turing frequency estimatin without tears", "authors": [ { "first": "W", "middle": [], "last": "Gale", "suffix": "" }, { "first": "G", "middle": [], "last": "Sampson", "suffix": "" } ], "year": 1995, "venue": "Jounal of Quantitative Linguistics", "volume": "2", "issue": "", "pages": "217--237", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. Gale and G. Sampson. 1995. Good-Turing frequency estimatin without tears. In Jounal of Quantitative Lin- guistics, 2:217-237", "links": null } }, "ref_entries": { "FIGREF0": { "text": "a selectively extended Markov model and its equivalent decision tree classes or the lexical form of preceding words. Figure 3 shows an example of Markov model for a simple language having nouns (N), conjunctions (C), prepositions (P) and verbs (V). The dollar sign ($) represents sentence initialization. 
On the left hand side is the graph representation of the Markov model and on the right hand side is the decision tree representation, where the test for the immediately preceding syntactic class (represented by P", "uris": null, "num": null, "type_str": "figure" }, "FIGREF1": { "text": "Estimated Performance of Various Models", "uris": null, "num": null, "type_str": "figure" } } } }