{ "paper_id": "A94-1009", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:13:44.247461Z" }, "title": "Does Baum-Welch Re-estimation :Help Taggers?", "authors": [ { "first": "David", "middle": [], "last": "Elworthy", "suffix": "", "affiliation": { "laboratory": "", "institution": "Sharp Laboratories of Europe Ltd", "location": { "addrLine": "Edmund Halley Road Oxford Science Park", "postCode": "OX4 4GA", "settlement": "Oxford", "country": "United Kingdom" } }, "email": "dahe@sharp@co.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In part of speech tagging by Hidden Markov Model, a statistical model is used to assign grammatical categories to words in a text. Early work in the field relied on a corpus which had been tagged by a human annotator to train the model. More recently, Cutting et al. (1992) suggest that training can be achieved with a minimal lexicon and a limited amount of a priori information about probabilities, by using an Baum-Welch re-estimation to automatically refine the model. In this paper, I report two experiments designed to determine how much manual training information is needed. The first experiment suggests that initial biasing of either lexical or transition probabilities is essential to achieve a good accuracy. The second experiment reveals that there are three distinct patterns of Baum-Welch reestimation. In two of the patterns, the re-estimation ultimately reduces the accuracy of the tagging rather than improving it. The pattern which is applicable can be predicted from the quality of the initial model and the similarity between the tagged training corpus (if any) and the corpus to be tagged. Heuristics for deciding how to use re-estimation in an effective manner are given. The conclusions are broadly in agreement with those of Merialdo (1994), but give greater detail about the contributions of different parts of the model.", "pdf_parse": { "paper_id": "A94-1009", "_pdf_hash": "", "abstract": [ { "text": "In part of speech tagging by Hidden Markov Model, a statistical model is used to assign grammatical categories to words in a text. Early work in the field relied on a corpus which had been tagged by a human annotator to train the model. More recently, Cutting et al. (1992) suggest that training can be achieved with a minimal lexicon and a limited amount of a priori information about probabilities, by using an Baum-Welch re-estimation to automatically refine the model. In this paper, I report two experiments designed to determine how much manual training information is needed. The first experiment suggests that initial biasing of either lexical or transition probabilities is essential to achieve a good accuracy. The second experiment reveals that there are three distinct patterns of Baum-Welch reestimation. In two of the patterns, the re-estimation ultimately reduces the accuracy of the tagging rather than improving it. The pattern which is applicable can be predicted from the quality of the initial model and the similarity between the tagged training corpus (if any) and the corpus to be tagged. Heuristics for deciding how to use re-estimation in an effective manner are given. 
The conclusions are broadly in agreement with those of Merialdo (1994), but give greater detail about the contributions of different parts of the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Part-of-speech tagging is the process of assigning grammatical categories to individual words in a corpus. One widely used approach makes use of a statistical technique called a Hidden Markov Model (HMM). The model is defined by two collections of parameters: the transition probabilities, which express the probability that a tag follows the preceding one (or two, for a second-order model); and the lexical probabilities, giving the probability that a word has a given tag without regard to words on either side of it. To tag a text, the tags with non-zero probability are hypothesised for each word, and the most probable sequence of tags given the sequence of words is determined from the probabilities. Two algorithms are commonly used, known as the Forward-Backward (FB) and Viterbi algorithms. FB assigns a probability to every tag on every word, while Viterbi prunes tags which cannot be chosen because their probability is lower than that of competing hypotheses, with a corresponding gain in computational efficiency. For an introduction to the algorithms, see Cutting et al. (1992) , or the lucid description by Sharman (1990) .", "cite_spans": [ { "start": 1074, "end": 1095, "text": "Cutting et al. (1992)", "ref_id": null }, { "start": 1126, "end": 1140, "text": "Sharman (1990)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "1" }, { "text": "There are two principal sources for the parameters of the model. If a tagged corpus prepared by a human annotator is available, the transition and lexical probabilities can be estimated from the frequencies of pairs of tags and of tags associated with words. Alternatively, a procedure called Baum-Welch (BW) re-estimation may be used, in which an untagged corpus is passed through the FB algorithm with some initial model, and the resulting probabilities used to determine new values for the lexical and transition probabilities. By iterating the algorithm with the same corpus, the parameters of the model can be made to converge on values which are locally optimal for the given text. The degree of convergence can be measured using a perplexity measure, the sum of -p log2 p over the hypothesis probabilities p, which gives an estimate of the degree of disorder in the model. The algorithm is again described by Cutting et al. and by Sharman, and a mathematical justification for it can be found in Huang et al. (1990) .", "cite_spans": [ { "start": 996, "end": 1015, "text": "Huang et al. (1990)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "1" }, { "text": "The first major use of HMMs for part of speech tagging was in CLAWS (Garside et al., 1987) in the 1970s. With the availability of large corpora and fast computers, there has been a recent resurgence of interest, and a number of variations on and alternatives to the FB, Viterbi and BW algorithms have been tried; see the work of, for example, Church (Church, 1988) , Brill (Brill and Marcus, 1992; Brill, 1992) , DeRose (DeRose, 1988) and Kupiec (Kupiec, 1992) . One of the most effective taggers based on a pure HMM is that developed at Xerox (Cutting et al., 1992) . An important aspect of this tagger is that it will give good accuracy with a minimal amount of manually tagged training data: an accuracy of 96% correct assignment of tags to word tokens, measured against a human annotator, is quoted over a 500000 word corpus.", "cite_spans": [ { "start": 68, "end": 90, "text": "(Garside et al., 1987)", "ref_id": "BIBREF7" }, { "start": 351, "end": 365, "text": "(Church, 1988)", "ref_id": "BIBREF2" }, { "start": 374, "end": 398, "text": "(Brill and Marcus, 1992;", "ref_id": "BIBREF0" }, { "start": 399, "end": 411, "text": "Brill, 1992)", "ref_id": "BIBREF1" }, { "start": 421, "end": 435, "text": "(DeRose, 1988)", "ref_id": null }, { "start": 447, "end": 461, "text": "(Kupiec, 1992)", "ref_id": "BIBREF10" }, { "start": 545, "end": 567, "text": "(Cutting et al., 1992)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "1" },
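{ "text": "To make the model concrete, the following Python sketch implements first-order Viterbi decoding over such an HMM. It is an illustration only: the tags, words and probabilities are invented toy values, not taken from any of the systems cited here, and the sketch assumes every word has at least one hypothesised tag.

# A minimal sketch of Viterbi decoding for a first-order HMM tagger.
# trans[i][j] is the transition probability that tag j follows tag i;
# lex[w][i] is the lexical probability that word w has tag i.
def viterbi(words, tags, trans, lex):
    delta = {t: lex[words[0]].get(t, 0.0) for t in tags}  # best score ending in tag t
    back = []                                             # back-pointers per position
    for w in words[1:]:
        prev, delta, ptr = delta, {}, {}
        for j in tags:
            p_lex = lex[w].get(j, 0.0)
            if p_lex == 0.0:                              # tag not hypothesised for w: pruned
                continue
            best_i = max(prev, key=lambda i: prev[i] * trans[i].get(j, 0.0))
            delta[j] = prev[best_i] * trans[best_i].get(j, 0.0) * p_lex
            ptr[j] = best_i
        back.append(ptr)
    seq = [max(delta, key=delta.get)]                     # best final tag
    for ptr in reversed(back):                            # follow back-pointers
        seq.append(ptr[seq[-1]])
    return list(reversed(seq))

tags = ['DET', 'NOUN', 'VERB']
trans = {'DET': {'NOUN': 0.9, 'VERB': 0.1}, 'NOUN': {'NOUN': 0.4, 'VERB': 0.6}, 'VERB': {'DET': 1.0}}
lex = {'the': {'DET': 1.0}, 'can': {'NOUN': 0.3, 'VERB': 0.7}, 'rusts': {'NOUN': 0.2, 'VERB': 0.8}}
print(viterbi(['the', 'can', 'rusts'], tags, trans, lex))  # ['DET', 'NOUN', 'VERB']

Roughly speaking, the FB algorithm replaces the max over predecessor tags with a sum and adds a corresponding backward pass, so that every tag on every word receives a probability rather than being pruned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "1" },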
{ "text": "The Xerox tagger attempts to avoid the need for a hand-tagged training corpus as far as possible. Instead, an approximate model is constructed by hand, which is then improved by BW re-estimation on an untagged training corpus. In the above example, 8 iterations were sufficient. The initial model is set up so that some transitions and some tags in the lexicon are favoured, and hence have a higher initial probability. Convergence of the model is improved by keeping the number of parameters in the model down. To assist in this, low-frequency items in the lexicon are grouped together into equivalence classes, such that all words in a given equivalence class have the same tags and lexical probabilities, and whenever one of the words is looked up, the data common to all of them is used. Re-estimation on any of the words in a class therefore counts towards re-estimation for all of them; the technique was originally developed by Kupiec (Kupiec, 1989).", "cite_spans": [ { "start": 357, "end": 371, "text": "(Kupiec, 1989)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "1" }, { "text": "The results of the Xerox experiment appear very encouraging. Preparing tagged corpora by hand is labour-intensive and potentially error-prone, and although a semi-automatic approach can be used (Marcus et al., 1993) , it is a good thing to reduce the human involvement as much as possible. However, some careful examination of the experiment is needed. In the first place, Cutting et al. do not compare the success rate in their work with that achieved from a hand-tagged training text with no re-estimation. Secondly, it is unclear how much the initial biasing contributes to the success rate. If significant human intervention is needed to provide the biasing, then the advantages of automatic training become rather weaker, especially if such intervention is needed on each new text domain. The kind of biasing Cutting et al. describe reflects linguistic insights combined with an understanding of the predictions a tagger could reasonably be expected to make and the ones it could not.", "cite_spans": [ { "start": 201, "end": 222, "text": "(Marcus et al., 1993)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "1" }, { "text": "The aim of this paper is to examine the role that training plays in the tagging process, by an experimental evaluation of how the accuracy of the tagger varies with the initial conditions. The results suggest that a completely unconstrained initial model does not produce good quality results, and that one
accurately trained from a hand-tagged corpus will generally do better than using an approach based on re-estimation, even when the training comes from a different source. A second experiment shows that there are different patterns of re-estimation, and that these patterns vary more or less regularly with a broad characterisation of the initial conditions. The outcome of the two experiments together points to heuristics for making effective use of training and re-estimation, together with some directions for further research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "1" }, { "text": "Work similar to that described here has been carried out by Merialdo (1994) , with broadly similar conclusions. We will discuss this work below. The principal contribution of the present work is to separate the effects of the lexical and transition parameters of the model, and to show how the results vary with different degrees of similarity between the training and test data.", "cite_spans": [ { "start": 60, "end": 75, "text": "Merialdo (1994)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "1" }, { "text": "The tagger and corpora", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2", "sec_num": null }, { "text": "The experiments were conducted using two taggers, one written in C at Cambridge University Computer Laboratory, and the other in C++ at Sharp Laboratories. Both taggers implement the FB, Viterbi and BW algorithms. For training from a hand-tagged corpus, the model is estimated by counting the number of transitions from each tag i to each tag j, the total occurrence of each tag i, and the total occurrence of word w with tag i. Writing these as f(i,j), f(i) and f(i,w) respectively, the transition probability from tag i to tag j is estimated as f(i,j)/f(i) and the lexical probability as f(i,w)/f(i). Other estimation formulae have been used in the past. For example, CLAWS (Garside et al., 1987) normalises the lexical probabilities by the total frequency of the word rather than of the tag. Consulting the Baum-Welch re-estimation formulae suggests that the approach described here is more appropriate, and this is confirmed by slightly greater tagging accuracy. Any transitions not seen in the training corpus are given a small, non-zero probability. The lexicon lists, for each word, all of the tags seen in the training corpus with their probabilities. For words not found in the lexicon, all open-class tags are hypothesised, with equal probabilities. These words are added to the lexicon at the end of the first iteration when re-estimation is being used, so that the probabilities of their hypotheses subsequently diverge from being uniform.", "cite_spans": [ { "start": 679, "end": 701, "text": "(Garside et al., 1987)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "2", "sec_num": null },
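{ "text": "As an illustration of this estimation scheme, the following Python sketch computes the transition and lexical probabilities by maximum likelihood from a hand-tagged corpus. It is a sketch only: the corpus format (a list of sentences, each a list of (word, tag) pairs) is an assumption made for the example, and the smoothing of unseen transitions is omitted.

from collections import Counter

# Count f(i,j), f(i) and f(i,w) from a hand-tagged corpus, then form
# trans[(i,j)] = f(i,j)/f(i) and lex[(i,w)] = f(i,w)/f(i) as in the text.
def estimate(tagged_sentences):
    f_ij, f_i, f_iw = Counter(), Counter(), Counter()
    for sent in tagged_sentences:
        for (_, i), (_, j) in zip(sent, sent[1:]):
            f_ij[i, j] += 1
        for w, i in sent:
            f_i[i] += 1
            f_iw[i, w] += 1
    trans = {(i, j): n / f_i[i] for (i, j), n in f_ij.items()}
    lex = {(i, w): n / f_i[i] for (i, w), n in f_iw.items()}
    return trans, lex

corpus = [[('the', 'DET'), ('can', 'NOUN'), ('rusts', 'VERB')]]
trans, lex = estimate(corpus)
print(trans[('DET', 'NOUN')], lex[('NOUN', 'can')])  # 1.0 1.0

Note that the lexical probability is normalised by the tag frequency f(i), i.e. it is the probability of the word given the tag, rather than by the word frequency as in CLAWS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2", "sec_num": null },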
{ "text": "To measure the accuracy of the tagger, we compare the chosen tag with one provided by a human annotator. Various methods of quoting accuracy have been used in the literature, the most common being the proportion of words (tokens) receiving the correct tag. A better measure is the proportion of ambiguous words which are given the correct tag, where by ambiguous we mean that more than one tag was hypothesised. The former figure looks more impressive, but the latter gives a better measure of how well the tagger is doing, since it factors out the trivial assignment of tags to non-ambiguous words. For a corpus in which a fraction a of the words are ambiguous, and p is the accuracy on ambiguous words, the overall accuracy can be recovered from 1 - a + pa. All of the accuracy figures quoted below are for ambiguous words only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2", "sec_num": null }, { "text": "The training and test corpora were drawn from the LOB corpus and the Penn treebank. The hand tagging of these corpora is quite different. For example, the LOB tagset used 134 tags, while the Penn treebank tagset has 48. The general pattern of the results presented does not vary greatly with the corpus and tagset used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2", "sec_num": null }, { "text": "The first experiment concerned the effect of the initial conditions on the accuracy using Baum-Welch re-estimation. A model was trained from a hand-tagged corpus in the manner described above, and then degraded in various ways to simulate the effect of poorer training, as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The effect of the initial conditions", "sec_num": "3" }, { "text": "Lexicon: D0 Un-degraded lexical probabilities, calculated from f(i,w)/f(i). D1 Lexical probabilities are correctly ordered, so that the most frequent tag has the highest lexical probability and so on, but the absolute values are otherwise unreliable. D2 Lexical probabilities are proportional to the overall tag frequencies, and are hence independent of the actual occurrence of the word in the training corpus. D3 All lexical probabilities have the same value, so that the lexicon contains no information other than the possible tags for each word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The effect of the initial conditions", "sec_num": "3" }, { "text": "T0 Un-degraded transition probabilities, calculated from f(i,j)/f(i). T1 All transition probabilities have the same value.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transitions", "sec_num": null }, { "text": "We could expect to achieve D1 from, say, a printed dictionary listing parts of speech in order of frequency. Perfect training is represented by case D0+T0. The Xerox experiments (Cutting et al., 1992) correspond to something between D1 and D2, and between T0 and T1, in that there is some initial biasing of the probabilities. For the test, four corpora were constructed from the LOB corpus: LOB-B from part B, LOB-L from part L, LOB-B-G from parts B to G inclusive and LOB-B-J from parts B to J inclusive. Corpus LOB-B-J was used to train the model, and LOB-B, LOB-L and LOB-B-G were passed through thirty iterations of the BW algorithm as untagged data. In each case, the best accuracy (on ambiguous words, as usual) from the FB algorithm was noted. As an additional test, we tried assigning the most probable tag from the D0 lexicon, completely ignoring tag-tag transitions. The results are summarised in table 1, for various corpora, where F denotes the \"most frequent tag\" test. As an example of how these figures relate to overall accuracies, LOB-B contains 32.35% ambiguous tokens with respect to the lexicon from LOB-B-J, and the overall accuracy in the D0+T0 case is hence 98.69%.
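This arithmetic is easy to check; a two-line Python sketch, using the ambiguity rate just quoted and the corresponding entry from table 1:

a, p = 0.3235, 0.9596          # ambiguity rate and ambiguous-word accuracy
print(1 - a + p * a)           # 0.98693..., i.e. 98.69% overall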
The general pattern of the results is similar across the three test corpora, with the only difference of interest being that case D3+T0 does better for LOB-L than for the other two cases, and in particular does better than cases D0+T1 and D1+T1. A possible explanation is that in this case the test data does not overlap with the training data, and hence the good quality lexicons (D0 and D1) have less of an influence. It is also interesting that D3+T1 does better than D2+T1. The reasons for this are unclear, and the results are not always the same with other corpora, which suggests that they are not statistically significant.", "cite_spans": [ { "start": 178, "end": 200, "text": "(Cutting et al., 1992)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Transitions", "sec_num": null }, { "text": "Several follow-up experiments were used to confirm the results: using corpora from the Penn treebank, using equivalence classes to ensure that all lexical entries have a total relative frequency of at least 0.01, and using larger corpora. The specific accuracies were different in the various tests, but the overall patterns remained much the same, suggesting that they are not an artifact of the tagset or of details of the text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transitions", "sec_num": null }, { "text": "The observations we can make about these results are as follows. Firstly, two of the tests, D2+T1 and D3+T1, give very poor performance. Their accuracy is not even as good as that achieved by picking the most frequent tag (although this of course implies a lexicon of D0 or D1 quality). It follows that if Baum-Welch re-estimation is to be an effective technique, the initial data must have biasing either in the transitions (the T0 cases) or in the lexical probabilities (cases D0+T1 and D1+T1), but it is not necessary to have both (D2/D3+T0 and D0/D1+T1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transitions", "sec_num": null }, { "text": "Secondly, training from a hand-tagged corpus (case D0+T0) always does best, even when the test data is from a different source to the training data, as it is for LOB-L. So perhaps it is worth investing effort in hand-tagging training corpora after all, rather than just building a lexicon and letting re-estimation sort out the probabilities. But how can we ensure that re-estimation will produce a good quality model? We look further at this issue in the next section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transitions", "sec_num": null }, { "text": "During the first experiment, it became apparent that Baum-Welch re-estimation sometimes decreases the accuracy as the iteration progresses. A second experiment was conducted to decide when it is appropriate to use Baum-Welch re-estimation at all. There seem to be three patterns of behaviour. Classical: A general trend of rising accuracy on each iteration, with any falls in accuracy being local. It indicates that the model is converging towards an optimum which is better than its starting point. Initial maximum: Highest accuracy on the first iteration, and falling thereafter. In this case the initial model is of better quality than BW can achieve.
That is, while BW will converge on an optimum, the notion of optimality is with respect to the HMM rather than to the linguistic judgements about correct tagging. Early maximum: Rising accuracy for a small number of iterations (2-4), and then falling as in initial maximum. An example of each of the three behaviours is shown in figure 1 . The values of the accuracies and the test conditions are unimportant here; all we want to show is the general patterns. The second experiment had the aim of trying to discover which pattern applies under which circumstances, in order to help decide how to train the model. Clearly, if the expected pattern is initial maximum, we should not use BW at all; if early maximum, we should halt the process after a few iterations; and if classical, we should halt the process in a \"standard\" way, such as comparing the perplexity of successive models.", "cite_spans": [], "ref_spans": [ { "start": 981, "end": 989, "text": "figure 1", "ref_id": null } ], "eq_spans": [], "section": "Patterns of re-estimation", "sec_num": "4" }, { "text": "The tests were conducted in a similar manner to those of the first experiment, by building a lexicon and transitions from a hand-tagged training corpus, and then applying them to a test corpus with varying degrees of degradation. Firstly, four different degrees of degradation were used: no degradation at all, D2 degradation of the lexicon, T1 degradation of the transitions, and the two together. Secondly, we selected test corpora with varying degrees of similarity to the training corpus: the same text, text from a similar domain, and text which is significantly different. Two tests were conducted with each combination of the degradation and similarity, using different corpora (from the Penn treebank) ranging in size from approximately 50000 words to 500000 words. The re-estimation was allowed to run for ten iterations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Patterns of re-estimation", "sec_num": "4" },
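{ "text": "The per-iteration bookkeeping behind this classification is straightforward. The following Python sketch labels an accuracy curve with one of the three patterns; the iteration threshold is an illustrative choice for the example, not a value taken from the experiments.

# Classify a list of per-iteration accuracies (iteration 1 first) into the
# three patterns of re-estimation discussed above.
def classify(acc, early_cutoff=4):
    peak = max(range(len(acc)), key=lambda k: acc[k]) + 1  # 1-based iteration of peak
    if peak == 1:
        return 'initial maximum'
    if peak <= early_cutoff:
        return 'early maximum'
    return 'classical'

print(classify([91.7, 90.5, 89.9]))                    # initial maximum
print(classify([88.0, 91.7, 90.2, 89.1]))              # early maximum
print(classify([70.1, 74.8, 78.0, 79.5, 80.5, 80.8]))  # classical

In practice, of course, the true accuracy curve is unknown at tagging time; the point of the experiment is to predict the pattern from the initial conditions instead.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Patterns of re-estimation", "sec_num": "4" },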
{ "text": "The results appear in table 2, showing the best accuracy achieved (on ambiguous words), the iteration at which it occurred, and the pattern of re-estimation (I = initial maximum, E = early maximum, C = classical). The patterns are summarised in table 3, each entry in the table showing the patterns for the two tests under the given conditions. Although there is some variation in the readings, for example in the \"similar/D0+T0\" case, we can draw some general conclusions about the patterns obtained from different sorts of data. When the lexicon is degraded (D2), the pattern is always classical. With a good lexicon but either degraded transitions or a test corpus differing from the training corpus, the pattern tends to be early maximum. When the test corpus is very similar to the model, then the pattern is initial maximum. Furthermore, examining the accuracies in table 2, in the cases of initial maximum and early maximum, the accuracy tends to be significantly higher than with classical behaviour. It seems likely that what is going on is that the model is converging towards something of similar \"quality\" in each case, but when the pattern is classical, the convergence starts from a lower quality model and improves, and in the other cases, it starts from a higher quality one and deteriorates. In the case of early maximum, the few iterations where the accuracy is improving correspond to the creation of entries for unknown words and the fine-tuning of entries for known ones, and these changes outweigh those produced by the re-estimation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Patterns of re-estimation", "sec_num": "4" }, { "text": "From the observations in the previous section, we propose the following guidelines for how to train a tagger:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "\u2022 If a hand-tagged training corpus is available, use it. If the test and training corpora are near-identical, do not use BW re-estimation; otherwise use it for a small number of iterations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "\u2022 If no such training corpus is available, but a lexicon with at least relative frequency data is available, use BW re-estimation for a small number of iterations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "\u2022 If neither training corpus nor lexicon is available, use BW re-estimation with standard convergence tests such as perplexity. Without a lexicon, some initial biasing of the transitions is needed if good results are to be obtained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "Similar results are presented by Merialdo (1994) , who describes experiments to compare the effect of training from a hand-tagged corpus and using the Baum-Welch algorithm with various initial conditions. As in the experiments above, BW re-estimation gave a decrease in accuracy when the starting point was derived from a significant amount of hand-tagged text. In addition, although Merialdo does not highlight the point, BW re-estimation starting from less than 5000 words of hand-tagged text shows early maximum behaviour. Merialdo's conclusion is that taggers should be trained using as much hand-tagged text as possible to begin with, and only then applying BW re-estimation with untagged text. The step forward taken in the work here is to show that there are three patterns of re-estimation behaviour, with differing guidelines for how to use BW effectively, and that to obtain a good starting point when a hand-tagged corpus is not available or is too small, either the lexicon or the transitions must be biased.", "cite_spans": [ { "start": 33, "end": 48, "text": "Merialdo (1994)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "While these may be useful heuristics from a practical point of view, the next step forward is to look for an automatic way of predicting the accuracy of the tagging process given a corpus and a model. Some preliminary experiments with using measures such as perplexity and the average probability of hypotheses show that, while they do give an indication of convergence during re-estimation, neither shows a strong correlation with the accuracy. Perhaps what is needed is a \"similarity measure\" between two models M and M', such that if a corpus were tagged with model M, M' is the model obtained by training from the output corpus from the tagger as if it were a hand-tagged corpus.
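Such a measure is cheap to compute once the two models are to hand. For instance, here is a Python sketch of the Kullback-Leibler divergence between the transition distributions of two models, averaged over tags; this is an illustration of the kind of measure meant, not the exact one used in the preliminary experiments, and it assumes both models are stored as nested dictionaries over a shared tagset.

from math import log2

def kl_transitions(trans_m, trans_m2, tags, eps=1e-12):
    # Mean, over conditioning tags i, of KL(P_M(.|i) || P_M2(.|i)).
    total = 0.0
    for i in tags:
        for j in tags:
            p = trans_m[i].get(j, 0.0)
            if p > 0.0:
                total += p * log2(p / max(trans_m2[i].get(j, 0.0), eps))
    return total / len(tags)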
However, preliminary experiments using such measures as the Kullback-Leibler distance between the initial and new models have again shown that they do not give good predictions of accuracy. In the end it may turn out that there is simply no way of making the prediction without a source of information extrinsic to both model and corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" } ], "back_matter": [ { "text": "The work described here was carried out at the Cambridge University Computer Laboratory as part of Esprit BR Project 7315 \"The Acquisition of Lexical Knowledge\" (Acquilex-II). The results were confirmed and extended at Sharp Laboratories of Europe. I thank Ted Briscoe for his guidance and advice, and the ANLP referees for their comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Tagging an Unfamiliar Text With Minimal Human Supervision", "authors": [ { "first": "Eric", "middle": [], "last": "Brill", "suffix": "" }, { "first": "Mitch", "middle": [], "last": "Marcus", "suffix": "" } ], "year": 1992, "venue": "AAAI Fall Symposium on Probabilistic Approaches to Natural Language", "volume": "", "issue": "", "pages": "10--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Brill and Mitch Marcus (1992). Tagging an Unfamiliar Text With Minimal Human Supervision. In AAAI Fall Symposium on Probabilistic Approaches to Natural Language, pages 10-16.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A Simple Rule-Based Part of Speech Tagger", "authors": [ { "first": "Eric", "middle": [], "last": "Brill", "suffix": "" } ], "year": 1992, "venue": "Third Conference on Applied Natural Language Processing. Proceedings of the Conference", "volume": "", "issue": "", "pages": "152--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Brill (1992). A Simple Rule-Based Part of Speech Tagger. In Third Conference on Applied Natural Language Processing. Proceedings of the Conference. Trento, Italy, pages 152-155, Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text", "authors": [ { "first": "Kenneth", "middle": [ "Ward" ], "last": "Church", "suffix": "" } ], "year": 1988, "venue": "Second Conference on Applied Natural Language Processing. Proceedings of the Conference", "volume": "", "issue": "", "pages": "136--143", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth Ward Church (1988). A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text. In Second Conference on Applied Natural Language Processing. Proceedings of the Conference, pages 136-143, Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A Practical Part-of-Speech Tagger", "authors": [ { "first": "Doug", "middle": [], "last": "Cutting", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Kupiec", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Pedersen", "suffix": "" }, { "first": "Penelope", "middle": [], "last": "Sibun", "suffix": "" } ], "year": 1992, "venue": "Third Conference on Applied Natural Language Processing. Proceedings of the Conference", "volume": "", "issue": "", "pages": "133--140", "other_ids": {}, "num": null, "urls": [], "raw_text": "Doug Cutting, Julian Kupiec, Jan Pedersen, and Penelope Sibun (1992). A Practical Part-of-Speech Tagger.
In Third Conference on Applied Natural Language Processing. Proceedings of the Conference. Trento, Italy, pages 133-140, Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Grammatical Category Disambiguation by Statistical Optimization", "authors": [ { "first": "Steven", "middle": [ "J" ], "last": "Derose", "suffix": "" } ], "year": 1988, "venue": "Computational Linguistics", "volume": "14", "issue": "1", "pages": "31--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven J. DeRose (1988). Grammatical Category Disambiguation by Statistical Optimization. Computational Linguistics, 14(1):31-39.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The Computational Analysis of English: A Corpus-based Approach", "authors": [ { "first": "Roger", "middle": [], "last": "Garside", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Leech", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Sampson", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roger Garside, Geoffrey Leech, and Geoffrey Sampson (1987). The Computational Analysis of English: A Corpus-based Approach. Longman, London.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Hidden Markov Models for Speech Recognition", "authors": [ { "first": "X", "middle": [ "D" ], "last": "Huang", "suffix": "" }, { "first": "Y", "middle": [], "last": "Ariki", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Jack", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. D. Huang, Y. Ariki, and M. A. Jack (1990). Hidden Markov Models for Speech Recognition. Edinburgh University Press.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Probabilistic Models of Short and Long Distance Word Dependencies in Running Text", "authors": [ { "first": "M", "middle": [], "last": "Kupiec", "suffix": "" } ], "year": 1989, "venue": "Proceedings of the 1989 DARPA Speech and Natural Language Workshop", "volume": "", "issue": "", "pages": "290--295", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Kupiec (1989). Probabilistic Models of Short and Long Distance Word Dependencies in Running Text. In Proceedings of the 1989 DARPA Speech and Natural Language Workshop, pages 290-295.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Robust Part-of-speech Tagging Using a Hidden Markov Model", "authors": [ { "first": "Julian", "middle": [], "last": "Kupiec", "suffix": "" } ], "year": 1992, "venue": "Computer Speech and Language", "volume": "6", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julian Kupiec (1992). Robust Part-of-speech Tagging Using a Hidden Markov Model. Computer Speech and Language, 6.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Building a Large Annotated Corpus of English: The Penn Treebank", "authors": [ { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz (1993). Building a Large Annotated Corpus of English: The Penn Treebank.
Computational Linguistics, 19(2):313-330.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Tagging English Text with a Probabilistic Model", "authors": [ { "first": "Bernard", "middle": [], "last": "Merialdo", "suffix": "" } ], "year": 1994, "venue": "Computational Linguistics", "volume": "20", "issue": "2", "pages": "155--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bernard Merialdo (1994). Tagging English Text with a Probabilistic Model. Computational Linguistics, 20(2):155-171.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Hidden Markov Model Methods for Word Tagging", "authors": [ { "first": "R", "middle": [ "A" ], "last": "Sharman", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. A. Sharman (1990). Hidden Markov Model Methods for Word Tagging. Technical Report UKSC 214, IBM UK Scientific Centre.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Figure 1: Example Baum-Welch behaviour", "uris": null, "num": null }, "TABREF0": { "text": "Accuracy using Baum-Welch re-estimation with various initial conditions", "num": null, "html": null, "content": "
Dict  Trans  LOB-B (%)  LOB-L (%)  LOB-B-G (%)
D0    T0     95.96      94.77      96.17
D1    T0     95.40      94.44      95.40
D2    T0     90.52      91.82      92.36
D3    T0     92.96      92.80      93.48
D0    T1     94.06      92.27      94.51
D1    T1     94.06      92.27      94.51
D2    T1     66.51      72.48      55.88
D3    T1     75.49      80.87      79.12
F     -      89.22      85.32      88.71
", "type_str": "table" }, "TABREF2": { "text": "Baum-Welch patterns (data)", "num": null, "html": null, "content": "
Degradation  Corpus relation  Test 1: Best (%)  at  Pattern  Test 2: Best (%)  at  Pattern
D0+T0        Same             93.11             1   I        92.83             1   I
D0+T0        Similar          89.95             1   I        75.03             2   E
D0+T0        Different        84.59             2   E        86.00             2   E
D0+T1        Same             91.71             2   E        90.52             2   E
D0+T1        Similar          87.93             2   E        70.63             3   E
D0+T1        Different        80.87             3   E        82.68             3   E
D2+T0        Same             84.87             10  C        87.31             8   C
D2+T0        Similar          81.07             9   C        71.40             4   C*
D2+T0        Different        78.54             5   C*       80.81             9   C
D2+T1        Same             72.58             9   C        80.53             10  C
D2+T1        Similar          68.35             10  C        62.76             10  C
D2+T1        Different        65.64             10  C        68.95             10  C
* Early peak, but the graphs of accuracy against number of iterations show the pattern to be classical rather than early maximum.
", "type_str": "table" }, "TABREF3": { "text": "", "num": null, "html": null, "content": "
: Baum-Welch patterns (summary)
DegradationD0+T0 D0+T1 D2+T0 D2+T1
Corpus relation
SameI, IE, E
Similar
", "type_str": "table" } } } }