{ "paper_id": "P97-1032", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:15:57.890831Z" }, "title": "Comparing a Linguistic and a Stochastic Tagger", "authors": [ { "first": "Christer", "middle": [], "last": "Samuelsson", "suffix": "", "affiliation": { "laboratory": "Lucent Technologies Research Unit for Multilingu~l Language Technology Bell Laboratories", "institution": "University of Helsinki .Murray Hill", "location": { "addrLine": "600 Mountain Ave", "postBox": "P.O. Box 4", "postCode": "2D-339, FIN-00014, 07974", "settlement": "Room", "region": "NJ", "country": "USA Finland" } }, "email": "" }, { "first": "Atro", "middle": [], "last": "Voutilainen", "suffix": "", "affiliation": { "laboratory": "Lucent Technologies Research Unit for Multilingu~l Language Technology Bell Laboratories", "institution": "University of Helsinki .Murray Hill", "location": { "addrLine": "600 Mountain Ave", "postBox": "P.O. Box 4", "postCode": "2D-339, FIN-00014, 07974", "settlement": "Room", "region": "NJ", "country": "USA Finland" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Concerning different approaches to automatic PoS tagging: EngCG-2, a constraintbased morphological tagger, is compared in a double-blind test with a state-of-the-art statistical tagger on a common disambiguation task using a common tag set. The experiments show that for the same amount of remaining ambiguity, the error rate of the statistical tagger is one order of magnitude greater than that of the rule-based one. The two related issues of priming effects compromising the results and disagreement between human annotators are also addressed.", "pdf_parse": { "paper_id": "P97-1032", "_pdf_hash": "", "abstract": [ { "text": "Concerning different approaches to automatic PoS tagging: EngCG-2, a constraintbased morphological tagger, is compared in a double-blind test with a state-of-the-art statistical tagger on a common disambiguation task using a common tag set. The experiments show that for the same amount of remaining ambiguity, the error rate of the statistical tagger is one order of magnitude greater than that of the rule-based one. The two related issues of priming effects compromising the results and disagreement between human annotators are also addressed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "There are currently two main methods for automatic part-of-speech tagging. The prevailing one uses essentially statistical language models automatically derived from usually hand-annotated corpora. These corpus-based models can be represented e.g. as collocational matrices (Garside et al. (eds.) 1987 : Church 1988 , Hidden Markov models (cf. Cutting et al. 1992) , local rules (e.g. Hindle 1989 ) and neural networks (e.g. Schmid 1994) . Taggers using these statistical language models are generally reported to assign the correct and unique tag to 95-97% of words in running text. using tag sets ranging from some dozens to about 130 tags. The less popular approach is based on hand-coded linguistic rules. Pioneering work was done in the 1960\"s (e.g. Greene and Rubin 1971) . Recently, new interest in the linguistic approach has been shown e.g. in the work of (Karlsson 1990 : Voutilainen et al. 1992 Oflazer and Kuru6z 1994 : Chanod and Tapanainen 1995 : Karlsson et al. (eds.) 1995 Voutilainen 1995) . 
The first serious linguistic competitor to data-driven statistical taggers is the English Constraint Grammar parser, EngCG (cf. Voutilainen et al. 1992; Karlsson et al. (eds.) 1995). The tagger consists of the following sequentially applied modules:", "cite_spans": [ { "start": 274, "end": 301, "text": "(Garside et al. (eds.) 1987", "ref_id": null }, { "start": 302, "end": 315, "text": "Church 1988", "ref_id": "BIBREF1" }, { "start": 344, "end": 364, "text": "Cutting et al. 1992)", "ref_id": "BIBREF3" }, { "start": 385, "end": 396, "text": "Hindle 1989", "ref_id": "BIBREF8" }, { "start": 425, "end": 437, "text": "Schmid 1994)", "ref_id": "BIBREF20" }, { "start": 755, "end": 777, "text": "Greene and Rubin 1971)", "ref_id": "BIBREF7" }, { "start": 865, "end": 879, "text": "(Karlsson 1990", "ref_id": "BIBREF10" }, { "start": 880, "end": 905, "text": "Voutilainen et al. 1992", "ref_id": "BIBREF25" }, { "start": 906, "end": 929, "text": "Oflazer and Kuruöz 1994", "ref_id": null }, { "start": 930, "end": 958, "text": "Chanod and Tapanainen 1995", "ref_id": "BIBREF0" }, { "start": 959, "end": 988, "text": "Karlsson et al. (eds.) 1995", "ref_id": null }, { "start": 989, "end": 1006, "text": "Voutilainen 1995)", "ref_id": "BIBREF23" }, { "start": 1137, "end": 1161, "text": "Voutilainen et al. 1992;", "ref_id": "BIBREF25" }, { "start": 1162, "end": 1190, "text": "Karlsson et al. (eds.) 1995)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. Tokenisation; 2. Morphological analysis: (a) lexical component, (b) rule-based guesser for unknown words; 3. Resolution of morphological ambiguities. The tagger uses a two-level morphological analyser with a large lexicon and a morphological description that introduces about 180 different ambiguity-forming morphological analyses, as a result of which each word gets 1.7-2.2 different analyses on average. Morphological analyses are assigned to unknown words with an accurate rule-based 'guesser'. The morphological disambiguator uses constraint rules that discard illegitimate morphological analyses on the basis of local or global context conditions. The rules can be grouped as ordered subgrammars: e.g. heuristic subgrammar 2 can be applied for resolving ambiguities left pending by the more 'careful' subgrammar 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Older versions of EngCG (using about 1,150 constraints) are reported (Voutilainen et al. 1992; Voutilainen and Heikkilä 1994; Tapanainen and Voutilainen 1994; Voutilainen 1995) to assign a correct analysis to about 99.7% of all words, while each word in the output retains 1.04-1.09 alternative analyses on average, i.e. some of the ambiguities remain unresolved.", "cite_spans": [ { "start": 69, "end": 94, "text": "(Voutilainen et al. 1992;", "ref_id": null }, { "start": 95, "end": 125, "text": "Voutilainen and Heikkilä 1994;", "ref_id": null }, { "start": 126, "end": 158, "text": "Tapanainen and Voutilainen 1994;", "ref_id": "BIBREF22" }, { "start": 159, "end": 176, "text": "Voutilainen 1995)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "These results have been seriously questioned. One doubt concerns the notion 'correct analysis'. 
For example, Church (1992) argues that linguists who manually perform the tagging task using the double-blind method disagree about the correct analysis in at least 3% of all words, even after they have negotiated about the initial disagreements. If this were the case, reporting accuracies above this 97% 'upper bound' would make no sense.", "cite_spans": [ { "start": 108, "end": 121, "text": "Church (1992)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, Voutilainen and Järvinen (1995) empirically show that an interjudge agreement of virtually 100% is possible, at least with the EngCG tag set if not with the original Brown Corpus tag set. This consistent applicability of the EngCG tag set is explained by characterising it as grammatically rather than semantically motivated.", "cite_spans": [ { "start": 9, "end": 40, "text": "Voutilainen and Järvinen (1995)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Another main reservation about the EngCG figures is the suspicion that, perhaps partly due to the somewhat underspecific nature of the EngCG tag set, it must be so easy to disambiguate that a statistical tagger using the EngCG tags would reach at least as good results. This argument will be examined in this paper. It will be empirically shown (i) that the EngCG tag set is about as difficult for a probabilistic tagger as more generally used tag sets and (ii) that the EngCG disambiguator has a clearly smaller error rate than the probabilistic tagger when a similar (small) amount of ambiguity is permitted in the output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A state-of-the-art statistical tagger is trained on a corpus of over 350,000 words hand-annotated with EngCG tags. Then both taggers (a new version of the rule-based tagger, known as EngCG-2 [1], with 3,600 constraints organised as five subgrammars [2], and a statistical tagger) are applied to the same held-out benchmark corpus of 55,000 words, and their performances are compared. The results disconfirm the suspected 'easiness' of the EngCG tag set: the statistical tagger's performance figures are no better than is the case with better known tag sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Two caveats are in order. What we are not addressing in this paper is the work load required for making a rule-based or a data-driven tagger. The rules in EngCG certainly took a considerable effort to write, and though at the present state of knowledge rules could be written and tested with less effort, it may well be the case that a tagger with an accuracy of 95-97% can be produced with less effort by using data-driven techniques. [3]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Another caveat is that EngCG alone does not resolve all ambiguities, so it cannot be compared to a typical statistical tagger if full disambiguation is required. However, Voutilainen (1995) has shown that EngCG combined with a syntactic parser produces morphologically unambiguous output with an accuracy of 99.3%, a figure clearly better than that of the statistical tagger in the experiments below (however, 
the test data was not the same).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Before examining the statistical tagger, two practical points are addressed: the annotation of the corpora used, and the modification of the EngCG tag set for use in a statistical tagger.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[1] An online version of EngCG-2 can be found at http://www.ling.helsinki.fi/~avoutila/engcg-2.html.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[2] The first three subgrammars are generally highly reliable, and almost all of the total grammar development time was spent on them; the last two contain rather rough heuristic constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[3] However, for an interesting experiment suggesting otherwise, see (Chanod and Tapanainen 1995). The stochastic tagger was trained on a sample of 357,000 words from the Brown University Corpus of Present-Day English (Francis and Kučera 1982) that was annotated using the EngCG tags. The corpus was first analysed with the EngCG lexical analyser, and then it was fully disambiguated and, when necessary, corrected by a human expert. This annotation took place a few years ago. Since then, it has been used in the development of new EngCG constraints (the present version, EngCG-2, contains about 3,600 constraints): new constraints were applied to the training corpus, and whenever a reading marked as correct was discarded, either the analysis in the corpus, or the constraint itself, was corrected.", "cite_spans": [ { "start": 66, "end": 94, "text": "(Chanod and Tapanainen 1995)", "ref_id": "BIBREF0" }, { "start": 216, "end": 241, "text": "(Francis and Kučera 1982)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this way, the tagging quality of the corpus was continuously improved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our comparisons use a held-out benchmark corpus of about 55,000 words of journalistic, scientific and manual texts, i.e., no training effects are expected for either system. The benchmark corpus was annotated by first applying the preprocessor and morphological analyser, but not the morphological disambiguator, to the text. This morphologically ambiguous text was then independently and fully disambiguated by two experts whose task was also to detect any errors potentially produced by the previously applied components. They worked independently, consulting written documentation of the tag set when necessary. Then these manually disambiguated versions were automatically compared with each other. At this stage, about 99.3% of all analyses were identical. When the differences were collectively examined, virtually all were agreed to be due to clerical mistakes. Only in the analysis of 21 words did different (meaning-level) interpretations persist, and even here both judges agreed the ambiguity to be genuine. 
One of these two corpus versions was modified to represent the consensus, and this 'consensus corpus' was used as a benchmark in the evaluations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation of benchmark corpus", "sec_num": "2.2" }, { "text": "As explained in Voutilainen and Järvinen (1995), this high agreement rate is due to two main factors. Firstly, distinctions based on some kind of vague semantics are avoided, which is not always the case with better known tag sets. Secondly, the adopted analysis of most of the constructions where humans tend to be uncertain is documented as a collection of tag application principles in the form of a grammarian's manual (for further details, cf. Voutilainen and Järvinen 1995).", "cite_spans": [ { "start": 16, "end": 48, "text": "Voutilainen and Järvinen (1995)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Annotation of benchmark corpus", "sec_num": "2.2" }, { "text": "The corpus-annotation procedure allows us to perform a textbook statistical hypothesis test. Let the null hypothesis be that any two human evaluators will necessarily disagree in at least 3% of the cases. Under this assumption, the probability of an observed disagreement of less than 2.88% is less than 5%. This can be seen as follows: For the relative frequency of disagreement, $f_n$, we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation of benchmark corpus", "sec_num": "2.2" }, { "text": "that $f_n$ is approximately distributed as $N\bigl(p, \sqrt{p(1-p)/n}\bigr)$,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation of benchmark corpus", "sec_num": "2.2" }, { "text": "where $p$ is the actual disagreement probability and $n$ is the number of trials, i.e., the corpus size. This means", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation of benchmark corpus", "sec_num": "2.2" }, { "text": "that $P\left(\frac{f_n - p}{\sqrt{p(1-p)/n}} < z\right) \approx \Phi(z)$,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation of benchmark corpus", "sec_num": "2.2" }, { "text": "where $\Phi$ is the standard normal distribution function. This in turn means that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation of benchmark corpus", "sec_num": "2.2" }, { "text": "$P\left(f_n < p + z\sqrt{\frac{p(1-p)}{n}}\right) \approx \Phi(z)$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation of benchmark corpus", "sec_num": "2.2" }, { "text": "Here $n$ is 55,000 and $\Phi(-1.645) = 0.05$. Under the null hypothesis, $p$ is at least 3% and thus:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation of benchmark corpus", "sec_num": "2.2" }, { "text": "$P\left(f_n < 0.03 - 1.645\sqrt{\frac{0.03 \cdot 0.97}{55000}}\right) = P(f_n \leq 0.0288) < 0.05$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation of benchmark corpus", "sec_num": "2.2" }, { "text": "We can thus discard the null hypothesis at significance level 5% if the observed disagreement is less than 2.88%. It was in fact 0.7% before error correction, and virtually zero (the 21 genuinely ambiguous words amount to about 0.04%) after negotiation. 
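As a quick numerical sanity check (not part of the original paper; the constants 55,000, 3% and 1.645 are taken from the text, and the 0.8% and 0.1% figures anticipate the claims that follow), the normal-approximation bound can be reproduced directly:

```python
import math

# One-sided test at the 5% level: reject the null hypothesis
# 'true disagreement rate >= p0' if the observed disagreement falls
# below p0 + z * sqrt(p0 * (1 - p0) / n), with Phi(-1.645) = 0.05.
def rejection_threshold(p0: float, n: int, z: float = -1.645) -> float:
    return p0 + z * math.sqrt(p0 * (1.0 - p0) / n)

n = 55_000  # benchmark corpus size in words
print(f"{rejection_threshold(0.030, n):.4f}")  # 0.0288 -> the 2.88% bound
print(f"{rejection_threshold(0.008, n):.4f}")  # 0.0074 -> observed 0.7% is below it
print(f"{rejection_threshold(0.001, n):.4f}")  # 0.0008 -> observed ~0.04% is below it
```
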
This means that we can actually discard the hypotheses that the human evaluators on average disagree in at least 0.8% of the cases before error correction, and in at least 0.1% of the cases after negotiation, at significance level 5%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation of benchmark corpus", "sec_num": "2.2" }, { "text": "The EngCG morphological analyser's output formally differs from most tagged corpora; consider the following five-way ambiguous analysis of 'walk':", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tag set conversion", "sec_num": "2.3" }, { "text": "walk: walk V SUBJUNCTIVE VFIN; walk V IMP VFIN; walk V INF; walk V PRES -SG3 VFIN; walk N NOM SG", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tag set conversion", "sec_num": "2.3" }, { "text": "Statistical taggers usually employ single tags to indicate analyses (e.g. 'NN' for 'N NOM SG'). Therefore a simple conversion program was made for producing the following kind of output, where each reading is represented as a single tag:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tag set conversion", "sec_num": "2.3" }, { "text": "walk V-SUBJUNCTIVE V-IMP V-INF V-PRES-BASE N-NOM-SG", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tag set conversion", "sec_num": "2.3" }, { "text": "The conversion program reduces the multipart EngCG tags into a set of 80 word tags and 17 punctuation tags (see Appendix) that retain the central linguistic characteristics of the original EngCG tag set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tag set conversion", "sec_num": "2.3" }, { "text": "A reduced version of the benchmark corpus was prepared with this conversion program for the statistical tagger's use. EngCG's output was also converted into this format to enable direct comparison with the statistical tagger.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tag set conversion", "sec_num": "2.3" }, { "text": "The statistical tagger used in the experiments is a classical trigram-based HMM decoder of the kind described in e.g. (Church 1988), (DeRose 1988) and numerous other articles. Following conventional notation, e.g. (Rabiner 1989, pp. 272-274) and (Krenn and Samuelsson 1996, pp. 42-46), the tagger recursively calculates the $\alpha$, $\beta$, $\gamma$ and $\delta$ variables for each word string position $t = 1, \ldots, T$ and each possible state [4]. [4] The (N-1)th-order HMM corresponding to an N-gram tagger is encoded as a first-order HMM, where each state corresponds to a sequence of N-1 tags; i.e., for a trigram tagger, each state corresponds to a tag pair.", "cite_spans": [ { "start": 118, "end": 131, "text": "(Church 1988)", "ref_id": "BIBREF1" }, { "start": 215, "end": 242, "text": "(Rabiner 1989, pp. 272-274)", "ref_id": null }, { "start": 247, "end": 285, "text": "(Krenn and Samuelsson 1996, pp. 42-46)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Statistical Tagger", "sec_num": "3" }, { "text": "The emission probabilities $P(W_t = w_k \mid X_t = x_j)$ are the lexical probabilities. Here $X_t$ is the random variable of assigning a tag to the $t$-th word and $x_j$ is the last tag of the tag sequence encoded as state $s_j$. Note that $s_i \neq s_j$ need not imply $x_i \neq x_j$. The rationale behind this is to facilitate estimating the model parameters from sparse data (a minimal sketch of this pair-state encoding is given below). 
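To make footnote [4] concrete, the following is an illustrative sketch (not the authors' implementation; the miniature tag set and the single toy count are invented) of encoding a trigram tagger as a first-order HMM over tag-pair states:

```python
from collections import defaultdict
from itertools import product

# Hypothetical miniature tag set; a state of the first-order HMM is a
# tag pair (previous tag, current tag), so a trigram probability
# P(t3 | t1, t2) becomes an ordinary transition (t1, t2) -> (t2, t3).
TAGS = ["DET", "N-NOM-SG", "V-PRES-BASE", "V-INF"]

# trigram_counts[(t1, t2, t3)] would be collected from the training corpus.
trigram_counts = defaultdict(int)
trigram_counts[("DET", "N-NOM-SG", "V-PRES-BASE")] = 7  # toy figure

def transition_prob(state_from, state_to):
    """First-order transition probability between pair-states; nonzero
    only when the states overlap, i.e. (t1, t2) -> (t2, t3)."""
    if state_from[1] != state_to[0]:
        return 0.0
    t1, t2 = state_from
    t3 = state_to[1]
    total = sum(trigram_counts[(t1, t2, t)] for t in TAGS)
    return trigram_counts[(t1, t2, t3)] / total if total else 0.0

states = list(product(TAGS, repeat=2))  # all tag pairs
print(len(states), "pair-states")                                         # 16
print(transition_prob(("DET", "N-NOM-SG"), ("N-NOM-SG", "V-PRES-BASE")))  # 1.0
```
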
In more detail, it is easy to estimate P(tag | word) for a previously unseen word by backing off to statistics derived from words that end with the same sequence of letters (or based on other surface cues), whereas directly estimating P(word | tag) is more difficult. This is particularly useful for languages with a rich inflectional and derivational morphology, but also for English: for example, the suffix '-tion' is a strong indicator that the word in question is a noun; the suffix '-able', that it is an adjective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Statistical Tagger", "sec_num": "3" }, { "text": "More technically, the lexicon is organised as a reverse-suffix tree, and smoothing the probability estimates is accomplished by blending the distribution at the current node of the tree with that of higher-level nodes, corresponding to (shorter) suffixes of the current word (suffix). The scheme also incorporates probability distributions for the set of capitalized words, the set of all-caps words and the set of infrequent words, all of which are used to improve the estimates for unknown words. Employing a small amount of back-off smoothing also for the known words is useful to reduce lexical tag omissions. Empirically, looking two branching points up the tree for known words, and all the way up to the root for unknown words, proved optimal. The method for blending the distributions applies equally well to smoothing the transition probabilities $p_{ij}$, i.e., the tag N-gram probabilities, and both the scheme and its application to these two tasks are described in detail in (Samuelsson 1996), where it was also shown to compare favourably to (deleted) interpolation, see (Jelinek and Mercer 1980), even when the back-off weights of the latter were optimal.", "cite_spans": [ { "start": 982, "end": 999, "text": "(Samuelsson 1996)", "ref_id": "BIBREF19" }, { "start": 1080, "end": 1105, "text": "(Jelinek and Mercer 1980)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "The Statistical Tagger", "sec_num": "3" }, { "text": "The $\delta$ variables enable finding the most probable state sequence under the HMM, from which the most likely assignment of tags to words can be directly established. This is the normal modus operandi of an HMM decoder. Using the $\gamma$ variables, we can calculate the probability of being in state $s_i$ at string position $t$, and thus having emitted $w_{k_t}$ from this state, conditional on the entire word string. By summing over all states that would assign the same tag to this word, the individual probability of each tag being assigned to any particular input word, conditional on the entire word string, can be calculated:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Statistical Tagger", "sec_num": "3" }, { "text": "$P(X_t = x_i \mid W) = \sum_{s_j : x_j = x_i} P(S_t = s_j \mid W) = \sum_{s_j : x_j = x_i} \gamma_t(j)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Statistical Tagger", "sec_num": "3" }, { "text": "This allows retaining multiple tags for each word by simply discarding only low-probability tags, i.e., those whose probabilities are below some threshold value. Of course, the most probable tag is never discarded, even if its probability happens to be less than the threshold value. By varying the threshold, we can perform a recall-precision, or error-rate-ambiguity, tradeoff. 
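A minimal sketch of this thresholding step (illustrative only, not the authors' code), assuming the per-tag posteriors $P(X_t = x_i \mid W)$ have already been computed from the $\gamma$ variables; the tag names and probabilities are made up:

```python
def retain_tags(tag_posteriors, threshold=0.10):
    """Keep every tag whose posterior P(tag | word string) clears the
    threshold; the most probable tag is always kept, even below it.
    `tag_posteriors` maps tag -> probability for one word position."""
    best = max(tag_posteriors, key=tag_posteriors.get)
    return {t: p for t, p in tag_posteriors.items()
            if p >= threshold or t == best}

# Toy posteriors for one word; raising the threshold trades remaining
# ambiguity (tags per word) against error rate.
posteriors = {"N-NOM-SG": 0.62, "V-PRES-BASE": 0.30, "V-INF": 0.08}
print(retain_tags(posteriors))  # keeps N-NOM-SG and V-PRES-BASE
```
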
A similar strategy is adopted in (de Marcken 1990).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Statistical Tagger", "sec_num": "3" }, { "text": "The statistical tagger was trained on 357,000 words from the Brown corpus (Francis and Kučera 1982), reannotated using the EngCG annotation scheme (see above). In a first set of experiments, a 35,000-word subset of this corpus was set aside and used to evaluate the tagger's performance when trained on successively larger portions of the remaining 322,000 words. The learning curve, showing the error rate after full disambiguation as a function of the amount of training data used (see Figure 1), has levelled off at 322,000 words, indicating that little is to be gained from further training. We also note that the absolute value of the error rate is 3.51%, a typical state-of-the-art figure. Here, previously unseen words contribute 1.08% to the total error rate, while the contribution from lexical tag omissions is 0.08%. 95% confidence intervals for the error rates would range from ±0.30% for 30,000 words to ±0.20% at 322,000 words.", "cite_spans": [ { "start": 74, "end": 99, "text": "(Francis and Kučera 1982)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 489, "end": 497, "text": "Figure 1", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "The tagger was then trained on the entire set of 357,000 words and confronted with the separate 55,000-word benchmark corpus, and run both in full and partial disambiguation mode. (Table 1: Error-rate-ambiguity tradeoff for both taggers on the benchmark corpus. Parenthesized numbers are interpolated.) Table 1 shows the error rate as a function of remaining ambiguity (tags/word) both for the statistical tagger and for the EngCG-2 tagger. The error rate for full disambiguation using the $\delta$ variables is 4.72% and using the $\gamma$ variables is 4.68%, both ±0.18% with confidence degree 95%. Note that the optimal tag sequence obtained using the $\gamma$ variables need not equal the optimal tag sequence obtained using the $\delta$ variables. In fact, the former sequence may be assigned zero probability by the HMM, namely if one of its state transitions has zero probability.", "cite_spans": [], "ref_spans": [ { "start": 147, "end": 154, "text": "Table 1", "ref_id": null }, { "start": 300, "end": 307, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Previously unseen words account for 2.01%, and lexical tag omissions for 0.15%, of the total error rate. These two error sources are together exactly 1.00% higher on the benchmark corpus than on the Brown corpus, and account for almost the entire difference in error rate. They stem from using less complete lexical information sources, and are most likely the effect of a larger vocabulary overlap between the test and training portions of the Brown corpus than between the Brown and benchmark corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "The ratio between the error rates of the two taggers with the same amount of remaining ambiguity ranges from 8.6 at 1.026 tags/word to 28.0 at 1.070 tags/word. The error rate of the statistical tagger can be further decreased, at the price of increased remaining ambiguity (see Figure 2). In the limit of retaining all possible tags, the residual error rate is entirely due to lexical tag omissions, i.e., it is 0.15%, with on average 14.24 tags per word. 
The reason that this figure is so high is that the unknown words, which comprise 10% of the corpus, are assigned all possible tags as they are backed off all the way to the root of the reverse-suffix tree. (Figure 2: Error-rate-ambiguity tradeoff for the statistical tagger on the benchmark corpus.)", "cite_spans": [], "ref_spans": [ { "start": 278, "end": 286, "text": "Figure 2", "ref_id": null }, { "start": 662, "end": 670, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Recently voiced scepticism concerning the superior EngCG tagging results boils down to the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "\u2022 The reported results are due to the simplicity of the tag set employed by the EngCG system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "\u2022 The reported results are an effect of trading high ambiguity resolution for lower error rate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "\u2022 The results are an effect of so-called priming of the human annotators when preparing the test corpora, compromising the integrity of the experimental evaluations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "In the current article, these points of criticism were investigated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "A state-of-the-art statistical tagger, capable of performing error-rate-ambiguity tradeoff, was trained on a 357,000-word portion of the Brown corpus reannotated with the EngCG tag set, and both taggers were evaluated using a separate 55,000-word benchmark corpus new to both systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" } ], "back_matter": [ { "text": "Though Voutilainen is the main author of the EngCG-2 tagger, the development of the system has benefited from several other contributions too. Fred Karlsson proposed the Constraint Grammar framework in the late 1980s. Juha Heikkilä and Timo Järvinen contributed with their work on English morphology and lexicon. Kimmo Koskenniemi wrote the software for morphological analysis. Pasi Tapanainen has written various implementations of the CG parser, including the recent CG-2 parser (Tapanainen 1996). The quality of the investigation and presentation was boosted by a number of suggestions for improvement and (often sceptical) comments from numerous ACL reviewers and UPenn associates, in particular from Mark Liberman.", "cite_spans": [ { "start": 482, "end": 499, "text": "(Tapanainen 1996)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "This benchmark corpus was independently disambiguated by two linguists, without access to the results of the automatic taggers. 
The initial differences between the linguists' outputs (0.7% of all words) were jointly examined; practically all of them turned out to be clerical errors (rather than the product of genuine differences of opinion). In the experiments, the performance of the EngCG-2 tagger was radically better than that of the statistical tagger: at ambiguity levels common to both systems, the error rate of the statistical tagger was 8.6 to 28 times higher than that of EngCG-2. We conclude that neither the tag set used by EngCG-2, nor the error-rate-ambiguity tradeoff, nor any priming effects can possibly explain the observed difference in performance. Instead we must conclude that the lexical and contextual information sources at the disposal of the EngCG system are superior. Investigating this empirically, by granting the statistical tagger access to the same information sources as those available in the Constraint Grammar framework, constitutes future work. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Tagging French: comparing a statistical and a constraint-based method", "authors": [ { "first": "J-P", "middle": [], "last": "Chanod", "suffix": "" }, { "first": "P", "middle": [], "last": "Tapanainen", "suffix": "" } ], "year": 1995, "venue": "Procs. 7th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "149--157", "other_ids": {}, "num": null, "urls": [], "raw_text": "J-P. Chanod and P. Tapanainen. 1995. Tagging French: comparing a statistical and a constraint-based method. In Procs. 7th Conference of the European Chapter of the Association for Computational Linguistics, pp. 149-157, ACL, 1995.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text", "authors": [ { "first": "K", "middle": [ "W" ], "last": "Church", "suffix": "" } ], "year": 1988, "venue": "Procs. 2nd Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "136--143", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. W. Church. 1988. 'A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text'. In Procs. 2nd Conference on Applied Natural Language Processing, pp. 136-143, ACL, 1988.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Current Practice in Part of Speech Tagging and Suggestions for the Future", "authors": [ { "first": "K", "middle": [], "last": "Church", "suffix": "" } ], "year": 1992, "venue": "Sbornik praci: In Honor of Henry Kučera", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Church. 1992. Current Practice in Part of Speech Tagging and Suggestions for the Future. In Simmons (ed.), Sbornik praci: In Honor of Henry Kučera. Michigan Slavic Studies, 1992.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A Practical Part-of-Speech Tagger", "authors": [ { "first": "D", "middle": [], "last": "Cutting", "suffix": "" }, { "first": "J", "middle": [], "last": "Kupiec", "suffix": "" }, { "first": "J", "middle": [], "last": "Pedersen", "suffix": "" }, { "first": "P", "middle": [], "last": "Sibun", "suffix": "" } ], "year": 1992, "venue": "Procs. 3rd Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "133--140", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Cutting, J. Kupiec, J. Pedersen and P. 
Sibun. 1992. A Practical Part-of-Speech Tagger. In Procs. 3rd Conference on Applied Natural Language Processing, pp. 133-140, ACL, 1992.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Grammatical Category Disambiguation by Statistical Optimization", "authors": [ { "first": "S", "middle": [ "J" ], "last": "DeRose", "suffix": "" } ], "year": 1988, "venue": "Computational Linguistics", "volume": "14", "issue": "", "pages": "31--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. J. DeRose. 1988. 'Grammatical Category Disambiguation by Statistical Optimization'. In Computational Linguistics 14(1), pp. 31-39, ACL, 1988.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Frequency Analysis of English Usage", "authors": [ { "first": "N", "middle": [ "W" ], "last": "Francis", "suffix": "" }, { "first": "H", "middle": [], "last": "Kučera", "suffix": "" } ], "year": 1982, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. W. Francis and H. Kučera. 1982. Frequency Analysis of English Usage, Houghton Mifflin, Boston, 1982.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The Computational Analysis of English", "authors": [], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Garside, G. Leech and G. Sampson (eds.). 1987. The Computational Analysis of English. London and New York: Longman, 1987.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Automatic grammatical tagging of English", "authors": [ { "first": "B", "middle": [], "last": "Greene", "suffix": "" }, { "first": "G", "middle": [], "last": "Rubin", "suffix": "" } ], "year": 1971, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Greene and G. Rubin. 1971. Automatic grammatical tagging of English. Brown University, Providence, 1971.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Acquiring disambiguation rules from text", "authors": [ { "first": "D", "middle": [], "last": "Hindle", "suffix": "" } ], "year": 1989, "venue": "Procs. 27th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "118--125", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Hindle. 1989. Acquiring disambiguation rules from text. In Procs. 27th Annual Meeting of the Association for Computational Linguistics, pp. 118-125, ACL, 1989.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Interpolated Estimation of Markov Source Parameters from Sparse Data", "authors": [ { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1980, "venue": "Pattern Recognition in Practice", "volume": "", "issue": "", "pages": "381--397", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Jelinek and R. L. Mercer. 1980. 'Interpolated Estimation of Markov Source Parameters from Sparse Data'. Pattern Recognition in Practice: 381-397. North Holland, 1980.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Constraint Grammar as a Framework for Parsing Running Text", "authors": [ { "first": "F", "middle": [], "last": "Karlsson", "suffix": "" } ], "year": 1990, "venue": "Procs", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Karlsson. 1990. Constraint Grammar as a Framework for Parsing Running Text. 
In Procs.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "CoLing'90", "authors": [], "year": 1990, "venue": "Procs. 14th International Conference on Computational Linguistics, ICCL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "CoLing'90. In Procs. 14th International Conference on Computational Linguistics, ICCL, 1990.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Constraint Grammar. A Language-Independent System for Parsing Unrestricted Text", "authors": [], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Karlsson, A. Voutilainen, J. Heikkilä and A. Anttila (eds.). 1995. Constraint Grammar. A Language-Independent System for Parsing Unrestricted Text. Berlin and New York: Mouton de Gruyter, 1995.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The Linguist's", "authors": [ { "first": "B", "middle": [], "last": "Krenn", "suffix": "" }, { "first": "C", "middle": [], "last": "Samuelsson", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Krenn and C. Samuelsson. The Linguist's", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Guide to Statistics. Version of", "authors": [], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guide to Statistics. Version of April 23, 1996. http://coli.uni-sb.de/~christer.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Parsing the LOB Corpus", "authors": [ { "first": "C", "middle": [ "G" ], "last": "De Marcken", "suffix": "" } ], "year": 1990, "venue": "Procs. 28th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "243--251", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. G. de Marcken. 1990. 'Parsing the LOB Corpus'. In Procs. 28th Annual Meeting of the Association for Computational Linguistics, pp. 243-251, ACL, 1990.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Tagging and morphological disambiguation of Turkish text", "authors": [ { "first": "K", "middle": [], "last": "Oflazer", "suffix": "" }, { "first": "I", "middle": [], "last": "Kuruöz", "suffix": "" } ], "year": 1994, "venue": "Procs. 4th Conference on Applied Natural Language Processing, ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Oflazer and I. Kuruöz. 1994. Tagging and morphological disambiguation of Turkish text. In Procs. 4th Conference on Applied Natural Language Processing, ACL, 1994.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition", "authors": [ { "first": "L", "middle": [ "R" ], "last": "Rabiner", "suffix": "" } ], "year": 1989, "venue": "Readings in Speech Recognition", "volume": "", "issue": "", "pages": "267--296", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. R. Rabiner. 1989. 'A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition'. In Readings in Speech Recognition, pp. 267-296. Alex Waibel and Kai-Fu Lee (eds), Morgan Kaufmann