{ "paper_id": "J95-2001", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:46:18.738298Z" }, "title": "Automatic Stochastic Tagging of Natural Language Texts", "authors": [ { "first": "Evangelos", "middle": [], "last": "Dermatas", "suffix": "", "affiliation": {}, "email": "dermatas@wcl.ee.upatras.gr." }, { "first": "George", "middle": [], "last": "Kokkinakis", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Patras", "location": { "postCode": "265 00", "settlement": "Patras", "country": "Greece" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Five language and tagset independent stochastic taggers, handling morphological and contextual information, are presented and tested in corpora of seven European languages (Dutch, English, French, German, Greek, Italian and Spanish), using two sets of grammatical tags; a small set containing the eleven main grammatical classes and a large set of grammatical categories common to all languages. The unknown words are tagged using an experimentally proven stochastic hypothesis that links the stochastic behavior of the unknown words with that of the less probable known words. A fully automatic training and tagging program has been implemented on an IBM PC-compatible 80386-based computer. Measurements of error rate, time response, and memory requirements have shown that the taggers\" performance is satisfactory, even though a small training text is available. The error rate is improved when new texts are used to update the stochastic model parameters.", "pdf_parse": { "paper_id": "J95-2001", "_pdf_hash": "", "abstract": [ { "text": "Five language and tagset independent stochastic taggers, handling morphological and contextual information, are presented and tested in corpora of seven European languages (Dutch, English, French, German, Greek, Italian and Spanish), using two sets of grammatical tags; a small set containing the eleven main grammatical classes and a large set of grammatical categories common to all languages. The unknown words are tagged using an experimentally proven stochastic hypothesis that links the stochastic behavior of the unknown words with that of the less probable known words. A fully automatic training and tagging program has been implemented on an IBM PC-compatible 80386-based computer. Measurements of error rate, time response, and memory requirements have shown that the taggers\" performance is satisfactory, even though a small training text is available. The error rate is improved when new texts are used to update the stochastic model parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In the natural language processing community, there has been a growing awareness of the key importance that lexical and corpora resources, especially annotated corpora, have to play, both in the advancement of research in this area and in the development of relevant products. In order to reduce the huge cost of manually creating such corpora, the development of automatic taggers is of paramount importance. In this respect, the ability of a tagger to handle both known and unknown words, to improve its performance by training, and to achieve a high rate of correctly tagged words, is the criterion for assessing its usability in practical cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "Several taggers based on rules, stochastic models, neural networks, and hybrid systems have already been presented for Part-of-speech (POS) tagging. Rule-based taggers (Brill 1992; Elenius 1990; Jacobs and Zernik 1988; Karlsson 1990; Karlsson et al. 1991; Voutilainen, Heikkila, and Antitila 1992; Voutilainen and Tapanainen 1993) use POS-dependent constraints defined by experienced linguists. A small error rate has been achieved by such systems when a restricted, application-dependent POS set is used; e.g., an error rate of 2-6 percent has been reported by Marcus, Santorini, and Marcinkiewicz (1993) using the Penn Treebank corpus. Nevertheless, if a large POS set is specified, the number of rules increases significantly and rule definition becomes highly costly and cumbersome.", "cite_spans": [ { "start": 168, "end": 180, "text": "(Brill 1992;", "ref_id": "BIBREF0" }, { "start": 181, "end": 194, "text": "Elenius 1990;", "ref_id": "BIBREF11" }, { "start": 195, "end": 218, "text": "Jacobs and Zernik 1988;", "ref_id": "BIBREF16" }, { "start": 219, "end": 233, "text": "Karlsson 1990;", "ref_id": "BIBREF18" }, { "start": 234, "end": 255, "text": "Karlsson et al. 1991;", "ref_id": "BIBREF20" }, { "start": 256, "end": 297, "text": "Voutilainen, Heikkila, and Antitila 1992;", "ref_id": "BIBREF35" }, { "start": 298, "end": 330, "text": "Voutilainen and Tapanainen 1993)", "ref_id": "BIBREF34" }, { "start": 562, "end": 605, "text": "Marcus, Santorini, and Marcinkiewicz (1993)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Stochastic taggers use both contextual and morphological information, and the model parameters are usually defined or updated automatically from tagged texts (Cerf-Danon and E1-Beze 1991; Church 1988; Cutting et al. 1992; Dermatas and Kokkinakis 1988 , 1994 Garside, Leech, and Sampson 1987; Kupiec 1992; Maltese and Mancini 1991; Meteer, Schwartz, and Weischedel 1991; Merialdo 1991; Pelillo, Moro, and Refice 1992; Weischedel et al. 1993; Wothke et al. 1993 ). These taggers are preferred when tagged texts are available for training, and large tagsets and multilingual applications are involved. In the case where additionally raw untagged text is available, the Maximum Likelihood training can be used to reestimate the parameters of HMM taggers (Merialdo 1994) .", "cite_spans": [ { "start": 158, "end": 187, "text": "(Cerf-Danon and E1-Beze 1991;", "ref_id": null }, { "start": 188, "end": 200, "text": "Church 1988;", "ref_id": "BIBREF3" }, { "start": 201, "end": 221, "text": "Cutting et al. 1992;", "ref_id": "BIBREF5" }, { "start": 222, "end": 250, "text": "Dermatas and Kokkinakis 1988", "ref_id": "BIBREF6" }, { "start": 251, "end": 257, "text": ", 1994", "ref_id": "BIBREF27" }, { "start": 258, "end": 291, "text": "Garside, Leech, and Sampson 1987;", "ref_id": "BIBREF14" }, { "start": 292, "end": 304, "text": "Kupiec 1992;", "ref_id": "BIBREF22" }, { "start": 305, "end": 330, "text": "Maltese and Mancini 1991;", "ref_id": "BIBREF23" }, { "start": 331, "end": 369, "text": "Meteer, Schwartz, and Weischedel 1991;", "ref_id": "BIBREF28" }, { "start": 370, "end": 384, "text": "Merialdo 1991;", "ref_id": "BIBREF26" }, { "start": 385, "end": 416, "text": "Pelillo, Moro, and Refice 1992;", "ref_id": "BIBREF31" }, { "start": 417, "end": 440, "text": "Weischedel et al. 1993;", "ref_id": "BIBREF37" }, { "start": 441, "end": 459, "text": "Wothke et al. 
1993", "ref_id": "BIBREF38" }, { "start": 750, "end": 765, "text": "(Merialdo 1994)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Connectionist models have been used successfully for lexical acquisition (Eineborg and Gamback 1993; Elenius 1990; Elenius and Carlson 1989; Nakamura et al. 1990 ). Correct classification rates up to 96.4 percent have been achieved in the latter case by testing on the Teleman Swedish corpus. On the other hand, a time-consuming training process has been reported.", "cite_spans": [ { "start": 73, "end": 100, "text": "(Eineborg and Gamback 1993;", "ref_id": "BIBREF10" }, { "start": 101, "end": 114, "text": "Elenius 1990;", "ref_id": "BIBREF11" }, { "start": 115, "end": 140, "text": "Elenius and Carlson 1989;", "ref_id": "BIBREF12" }, { "start": 141, "end": 161, "text": "Nakamura et al. 1990", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Recently, several solutions to the problem of tagging unknown words have been presented (Charniak et al. 1993; Meteer, Schwartz, and Weischedel 1991) . Hypotheses for unknown words, both stochastic Kokkinakis 1993, 1994; Maltese and Mancini 1991; Weischedel et al. 1993) , and connectionist (Eineborg and Gamback 1993; Elenius 1990 ) have been applied to unlimited vocabulary taggers. In taggers that are based on hidden Markov models (HMM), parameters of the unknown words are estimated by taking into account morphological information from the last part of the word (Dermatas and Kokkinakis 1994; Maltese and Mancini 1991) . Accurate tagging of seven European languages has been achieved in the first case (error rates of 3-13 percent for a detailed POS set), but an enormous amount of training text is required for the estimation of the parameters for unknown words. Similar results have been reported by Maltese and Mancini (1991) for the Italian language. Weischedel et al. (1993) have used four categories of word morphology, such as inflectional endings, derivational endings, hyphenation, and capitalization. For the case in which only a restricted training text is available, a simple, language-and tagset-independent HMM tagger has been presented by , where the HMM parameters for the unknown words are estimated by assuming that the POS probability distribution of the unknown words and the POS probability distribution of the less probable words in the small training text are identical.", "cite_spans": [ { "start": 88, "end": 110, "text": "(Charniak et al. 1993;", "ref_id": "BIBREF2" }, { "start": 111, "end": 149, "text": "Meteer, Schwartz, and Weischedel 1991)", "ref_id": "BIBREF28" }, { "start": 198, "end": 220, "text": "Kokkinakis 1993, 1994;", "ref_id": null }, { "start": 221, "end": 246, "text": "Maltese and Mancini 1991;", "ref_id": "BIBREF23" }, { "start": 247, "end": 270, "text": "Weischedel et al. 1993)", "ref_id": "BIBREF37" }, { "start": 291, "end": 318, "text": "(Eineborg and Gamback 1993;", "ref_id": "BIBREF10" }, { "start": 319, "end": 331, "text": "Elenius 1990", "ref_id": "BIBREF11" }, { "start": 568, "end": 598, "text": "(Dermatas and Kokkinakis 1994;", "ref_id": "BIBREF9" }, { "start": 599, "end": 624, "text": "Maltese and Mancini 1991)", "ref_id": "BIBREF23" }, { "start": 908, "end": 934, "text": "Maltese and Mancini (1991)", "ref_id": "BIBREF23" }, { "start": 961, "end": 985, "text": "Weischedel et al. 
(1993)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this paper, five natural language stochastic taggers that are able to predict POS of unknown words are presented and tested following the process of developing annotated corpora (the most recently fully tagged and corrected text is used to update the model parameters). Three stochastic optimization criteria and seven European languages (Dutch, English, French, German, Greek, Italian and Spanish) and two POS sets are used in the tests. The set of main grammatical classes and an extended set of detailed grammatical categories is the same in all languages. The testing material consists of newspaper texts with 60,000-180,000 words for each language and an English EEC-law text with 110,000 words. This material was assembled and annotated in the framework of the ESPRIT-291/860 project \"Linguistic Analysis of the European Languages.\" In addition, we present transformations of the taggers' calculations to a fixed-point arithmetic system, which are useful for machines without floating-point hardware.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The taggers handle both lexical and tag transition information, and without performing morphological analysis can be used to annotate corpora when small training texts are available. Thus, they are preferred when a new language or a new tagset is used. When the training text is adequate to estimate the tagger parameters, more efficient stochastic taggers (Dermatas and Kokkinakis 1994; Maltese and Mancini 1991; Weischedel et al. 1993 ) and training methods can be implemented (Merialdo 1994) .", "cite_spans": [ { "start": 357, "end": 387, "text": "(Dermatas and Kokkinakis 1994;", "ref_id": "BIBREF9" }, { "start": 388, "end": 413, "text": "Maltese and Mancini 1991;", "ref_id": "BIBREF23" }, { "start": 414, "end": 436, "text": "Weischedel et al. 1993", "ref_id": "BIBREF37" }, { "start": 479, "end": 494, "text": "(Merialdo 1994)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The structure of this paper is as follows: in Section 2 the stochastic tagging models are presented in detail. In Section 3 the influence of the training text errors and the sources of stochastic tagger errors are discussed, followed, in Section 4, by a short presentation of the implementation. In Section 5, statistical measurements on the corpora and a short description of the taggers' performance is given. Detailed experimental results are included in Appendices A and B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "A stochastic optimal sequence of tags T, to be assigned to the words of a sentence W, can be expressed as a function of both lexical P(W [ T) and language model P(T) probabilities using Bayes' rule:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stochastic Tagging Models", "sec_num": "2." }, { "text": "To --argmaxP(T [ W) = argmax P(W [ T) \u2022 P(T) = argmaxP(W [ T) \u2022 P(T) (1) P(W) T T T", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stochastic Tagging Models", "sec_num": "2." }, { "text": "Several assumptions and approximations on the probabilities P(W [ T) and P(T)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stochastic Tagging Models", "sec_num": "2." 
}, { "text": "lead to good comprises concerning memory and computational complexity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stochastic Tagging Models", "sec_num": "2." }, { "text": "The tagging process can be modeled by an HMM by assuming that each hidden tag state produces a word in the sentence, each word wi is uncorrelated with neighboring words and their tags, and each tag is probabilistic dependent on the N previous tags only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hidden Markov Model (HMM) Approach", "sec_num": "2.1" }, { "text": "2.1.1 Most probable tag sequence (HMM-TS). The optimal tag sequence for a given observation sequence of words is given by the following equation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hidden Markov Model (HMM) Approach", "sec_num": "2.1" }, { "text": "N M M Z~ HMM-TS) --argmaxP(h)H P(ti [ ti-1 ..... h) H P(ti ] ti-1,..., ti-N) H P(wi [ ti) tl,...,tM i=2 i=N+I i=1 (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hidden Markov Model (HMM) Approach", "sec_num": "2.1" }, { "text": "where M is the number of words in the sentence W. The optimal solution is estimated by the well-known Viterbi algorithm. The first- (Rabiner 1989 ) and second- (He 1988 ) order Viterbi algorithms have been presented elsewhere. Recently, Tao (1992) described the Viterbi algorithm for generalized HMMs.", "cite_spans": [ { "start": 132, "end": 145, "text": "(Rabiner 1989", "ref_id": "BIBREF32" }, { "start": 160, "end": 168, "text": "(He 1988", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Hidden Markov Model (HMM) Approach", "sec_num": "2.1" }, { "text": "The optimal criterion is to choose the tags that are most likely to be computed independently at each word event:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Most probable tags (HMM-T).", "sec_num": "2.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "To HMM-T) = {tio, tio -----argmaxP(ti[W)}, ti i = 1,M", "eq_num": "(3)" } ], "section": "Most probable tags (HMM-T).", "sec_num": "2.1.2" }, { "text": "The optimum tag tio is estimated using the probabilities of the forward-backward algorithm (Rabiner 1989) :", "cite_spans": [ { "start": 91, "end": 105, "text": "(Rabiner 1989)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Most probable tags (HMM-T).", "sec_num": "2.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "rio --argmax P(ti, W) = argmax P(ti, wl,. 
\u2022., wi)P(wi+l,..., WM [ ti)", "eq_num": "(4)" } ], "section": "Most probable tags (HMM-T).", "sec_num": "2.1.2" }, { "text": "ti ti", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Most probable tags (HMM-T).", "sec_num": "2.1.2" }, { "text": "The probabilities in equation 4 are estimated recursively for the first- (Rabiner 1989 ) and second-order HMM (Watson and Chung 1992) .", "cite_spans": [ { "start": 73, "end": 86, "text": "(Rabiner 1989", "ref_id": "BIBREF32" }, { "start": 110, "end": 133, "text": "(Watson and Chung 1992)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Most probable tags (HMM-T).", "sec_num": "2.1.2" }, { "text": "The main difference between the optimization criteria in 2.1.1 and that in 2.1.2 results from the definition of the expected correct tagging rate; the HMM-TS model maximizes the correctly tagged sentences, while the HMM-T model maximizes the correctly tagged words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Most probable tags (HMM-T).", "sec_num": "2.1.2" }, { "text": "2.1.3 Stochastic hypothesis for the unknown words. When a new text is processed, some words are unknown to the tagger lexicon (i.e. they are not included in the training text). In this case, in order to use the forward-backward and the Viterbi algorithm we must estimate the unknown word's conditional probabilities P(w I t). Methods for the estimation of these probabilities have already been proposed (e.g. the use of word endings morphology). Nevertheless, these methods fail if only a small training text is available because of the huge number of events not occurring in this text, such as pairs of tags and word endings. To address the above problem we have approximated the conditional probabilities of the unknown word tags by the conditional probabilities of the less probable word tags, i.e. tags of the words occurring only once. In the following we demonstrate experimentally that this approximation is valid and independent of the training text size. Figures 1 and 2 show the probability distributions of the tags in the training text (known words) and that of the words occurring only once in this text for the English and French language, respectively. Furthermore, the tags' probability distribution of the words that are not included in the training text and are characterized as unknown words is shown. This distribution is measured in a different open testing text, i.e. a text that may include both known and unknown words. The measurements were carried out on newspaper text and split into two parts of the same size--the training and the open testing text. Each part contained 90,000 words for the English text and 50,000 words for the French text. 
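To make the search in equation 2 concrete, the sketch below (Python, a minimal illustration rather than the authors' implementation) decodes one sentence with a first-order Viterbi search, i.e. the HMM-TS1 variant. The parameter containers (`init`, `trans`, `lex`) and the shared `"<UNK>"` lexical entry are assumed structures holding relative frequencies; probabilities are combined in log space, anticipating the transformation of Section 4.1, and the unknown-word entry is where the hypothesis discussed in this subsection enters.

```python
import math

# Minimal sketch of the first-order HMM-TS search (equation 2), not the
# authors' implementation.  The model parameters are assumed to be plain
# dictionaries of relative frequencies estimated from a tagged corpus:
#   init[t]        ~ P(t) for the first tag of a sentence
#   trans[(s, t)]  ~ P(t | s)
#   lex[(w, t)]    ~ P(w | t), with a shared "<UNK>" entry for unknown words
FLOOR = 1e-12  # back-off value for unseen events (smoothing is discussed in Section 3.2)

def viterbi_ts1(words, tags, init, trans, lex, unk="<UNK>"):
    """Return the most probable tag sequence of one sentence (HMM-TS1)."""
    def emit(w, t):
        return lex.get((w, t), lex.get((unk, t), FLOOR))

    # delta[t]: best log-probability of a tag path ending in t after the current word
    delta = {t: math.log(init.get(t, FLOOR)) + math.log(emit(words[0], t)) for t in tags}
    backpointers = []
    for w in words[1:]:
        prev, delta, ptr = delta, {}, {}
        for t in tags:
            best = max(prev, key=lambda s: prev[s] + math.log(trans.get((s, t), FLOOR)))
            delta[t] = prev[best] + math.log(trans.get((best, t), FLOOR)) + math.log(emit(w, t))
            ptr[t] = best
        backpointers.append(ptr)
    # Trace the optimal path backwards from the best final tag.
    path = [max(delta, key=delta.get)]
    for ptr in reversed(backpointers):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

The HMM-T criterion of Section 2.1.2 would instead accumulate the forward and backward probabilities of equation 4 and pick the best tag independently at each word position.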
In this experiment, a tagset comprising the main grammatical categories was used:", "cite_spans": [], "ref_spans": [ { "start": 964, "end": 979, "text": "Figures 1 and 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Most probable tags (HMM-T).", "sec_num": "2.1.2" }, { "text": "Verb (Vet), Noun (Nou), Adjective (Adj), Adverb (Adv), Pronoun (Pro), Preposition (Pre), Article/Determiner (A-D), Conjunction (Con), Particle (Par), Interjection (Int), Miscellaneous (Mis; i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Most probable tags (HMM-T).", "sec_num": "2.1.2" }, { "text": "e., tags that cannot be classified in the previous categories).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Most probable tags (HMM-T).", "sec_num": "2.1.2" }, { "text": "This experiment has two significant results:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Most probable tags (HMM-T).", "sec_num": "2.1.2" }, { "text": "a.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Most probable tags (HMM-T).", "sec_num": "2.1.2" }, { "text": "The probability distribution of the tags of unknown words is significantly different from the distribution for known words, while it is very close to the probability distribution of the tags of the less probable known words both in the English and French text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Most probable tags (HMM-T).", "sec_num": "2.1.2" }, { "text": "A number of closed and functional grammatical classes has very low probability for both unknown and words occurring only once, e.g., the tags article, determiner, conjunction, pronoun, miscellaneous in English text, and article, determiner, conjunction, pronoun, interjection and miscellaneous in French text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b.", "sec_num": null }, { "text": "In the English text, verbs, adjectives and conjunctions are more frequent than in the French text. On the other hand, prepositions in the French text have a 0.05 greater probability, which is also the most significant difference between the distributions of the two languages. Prepositions in the words occurring only once and in unknown words are minimal in the English text, while in the French text one out of ten unknown words is a preposition. The text coverage by prepositions is 11.2 percent for the English and 16.2 percent for the French corpus. This difference increases significantly in the lexicon coverage: 0.47 percent for the English and 1.54 percent for the French lexicon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b.", "sec_num": null }, { "text": "In Figures 3 and 4 , the results of chi-square tests that measure the difference between the probability distribution of the tags of the less probable words and that of the unknown words are shown. Various sizes of training text and two sets of grammatical categories, the main set (11 classes) and an extended set (described in detail in Section 5) were used. Specifically, the grammatically labeled text of 180,000 word entries of the English language was separated into two parts: the training text, where the tag probabilities distribution of the less probable words was estimated, and the open testing text, where the tag probabilities distribution of the unknown words was measured. 
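The comparison reported in Figures 1-4 can be reproduced with simple counting. The following sketch (Python; hypothetical helper names, not the original measurement code) estimates the tag distribution of the less probable words in the training part, the tag distribution of the unknown words in the open testing part, and a chi-square distance between the two; the `threshold` argument corresponds to the word occurrence threshold varied in the experiments described next.

```python
from collections import Counter

# Sketch of the counting behind Figures 1-4 (hypothetical helper names; the
# paper's exact measurement procedure may differ in details).
def rare_word_tag_distribution(tagged_train, threshold=1):
    """P(tag | word occurs at most `threshold` times in the training text)."""
    word_freq = Counter(w for w, _ in tagged_train)
    counts = Counter(t for w, t in tagged_train if word_freq[w] <= threshold)
    total = sum(counts.values()) or 1
    return {t: c / total for t, c in counts.items()}

def unknown_word_tag_distribution(tagged_test, train_vocab):
    """P(tag | word not present in the training lexicon), on open testing text."""
    counts = Counter(t for w, t in tagged_test if w not in train_vocab)
    total = sum(counts.values()) or 1
    return {t: c / total for t, c in counts.items()}

def chi_square_distance(observed, expected, n):
    """Chi-square statistic between two tag distributions; n = number of unknown tokens."""
    tags = set(observed) | set(expected)
    return sum(n * (observed.get(t, 0.0) - expected.get(t, 0.0)) ** 2
               / max(expected.get(t, 1e-9), 1e-9)
               for t in tags)
```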
Multiple chisquare experiments were carried out by transferring successively a portion of 30,000 words from the open testing text to the training text and by modifying the word occurrence threshold from 1 to 15 in order to determine the experimentally optimal threshold. Words having an occurrence below or equal to this threshold in the training text are counted as less probable words. The results of the tests shown in Figures 3 and 4 include threshold values up to 15 because the difference between the distributions for values greater than 15 increases significantly.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 18, "text": "Figures 3 and 4", "ref_id": null }, { "start": 1111, "end": 1127, "text": "Figures 3 and 4", "ref_id": null } ], "eq_spans": [], "section": "b.", "sec_num": null }, { "text": "As shown in the above figures, the close relation between the tested probability distributions is evident for all sizes of training and testing text. Furthermore, we observe that:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b.", "sec_num": null }, { "text": "a. b. C. d. e.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b.", "sec_num": null }, { "text": "The chi-square distance between the tag probability distributions is minimized for low values of the word occurrence threshold. In the tagset of main grammatical classes, this distance is minimized for threshold values less than three, four, or five, depending on the training text size. In the extended set of grammatical classes the distance is minimized in all cases for the threshold value one; i.e., when only the words occurring once in the training text are regarded as less probable words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b.", "sec_num": null }, { "text": "In the English text the chi-square distance between the tag. probability distributions is minimized for 120,000 words training text for the set of main grammatical classes and for 60,000 words for the extended set. The same results are measured in the French text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b.", "sec_num": null }, { "text": "There is no significant variation in the chi-square test results for additional training text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b.", "sec_num": null }, { "text": "The closed and functional grammatical classes can be estimated automatically as the less probable grammatical classes of the less probable words in the tagged text. (The manual definition process is time-consuming when a set of detailed grammatical classes is used).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b.", "sec_num": null }, { "text": "The probability distribution of some grammatical classes of the unknown words changes significantly when the size of the training text is increased. 
These changes can be measured in the training text from the tags' distribution of the less probable words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b.", "sec_num": null }, { "text": "Similar results have been achieved by testing the Dutch, German, Greek, Italian, and Spanish texts, both with the tagset of the main grammatical categories and with the common extended set of grammatical categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b.", "sec_num": null }, { "text": "Based on the above we can complete both optimization criteria of the HMM formulation, given in 2.1.1 and 2.1.2, by calculating the conditional probability of the unknown word tags using Bayes' rule:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b.", "sec_num": null }, { "text": "P(Unknown word | ti) = P(ti | Unknown word) P(Unknown word) / P(ti) ~ P(ti | Less probable word) P(Unknown word) / P(ti) (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b.", "sec_num": null }, { "text": "Chi-square test for the main grammatical classes' distribution of the unknown and the less probable words in the English text for various training text sizes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 3" }, { "text": "Chi-square test for the distribution of the grammatical tags of the unknown words and the less probable words in the English text, for the extended tagset of grammatical classes and various training text sizes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4" }, { "text": "The probability P(Unknown word) is approximated in open testing texts by measuring the unknown word frequency. Therefore the model parameters are adapted each time an open testing text is being tagged. The probability P(t | Less probable word) and the tag probability P(t) are measured in the training text. Finally, each tag-conditional probability of the unknown word tags is normalized:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b.", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "sum_{j=1}^{L} P(wj | ti) + P(Unknown word | ti) = 1, for all i = 1, ..., T", "eq_num": "(6)" } ], "section": "b.", "sec_num": null }, { "text": "where L is the number of the known words and T is the number of tags.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b.", "sec_num": null }, { "text": "When the corresponding lexical probabilities p(w | t) are not available in the dictionary that specifies the possible tags for each word, a simple tagger can be implemented by assuming that each word wi in a sentence is uncorrelated with the assigned tag ti; i.e., p(wi | ti) = p(wi).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging without Lexical Probabilities", "sec_num": "2.2" }, { "text": "In this case the most probable tag sequence, according to equation 2, is given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging without Lexical Probabilities", "sec_num": "2.2" }, { "text": "To(MLM) = argmax_{t1, ..., tM} P(t1) prod_{i=2}^{N} P(ti | ti-1, ..., t1) prod_{i=N+1}^{M} P(ti | ti-1, ..., ti-N) (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging without Lexical Probabilities", "sec_num": "2.2" }, { "text": "which is an Nth-order Markov chain for the language model (MLM). Taggers based on MLM require the training process to store each tag assigned to every lexicon entry and to define the unknown word tagset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagging without Lexical Probabilities", "sec_num": "2.2" }, { "text": "The unknown word tagset is defined by the selection of the most probable tags that have been assigned to the less probable words of the training text. In this way the unknown words' ambiguity is decreased significantly. 
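A compact sketch of how this unknown-word hypothesis can be turned into model parameters is given below (Python; names and default thresholds are illustrative, and the per-tag renormalization of equation 6 is omitted for brevity). It covers both the HMM case, where the Bayes' rule estimate of equation 5 supplies P(Unknown word | t) from the rare-word tag distribution, and the MLM case just described, where only the more probable tags of the rare words are kept as the unknown-word tagset.

```python
from collections import Counter

# Sketch of the unknown-word hypothesis of Sections 2.1.3 and 2.2.1 (illustrative
# names; the thresholds are the experimentally tuned parameters discussed below).
def unknown_word_parameters(tagged_train, occ_threshold=1, tag_prob_threshold=0.01):
    word_freq = Counter(w for w, _ in tagged_train)
    tag_freq = Counter(t for _, t in tagged_train)
    rare_tags = Counter(t for w, t in tagged_train if word_freq[w] <= occ_threshold)
    n_tokens = len(tagged_train)
    n_rare = sum(rare_tags.values()) or 1

    # P(t | less probable word): the stand-in for P(t | unknown word).
    p_tag_given_rare = {t: c / n_rare for t, c in rare_tags.items()}

    # HMM case (equation 5): P(Unknown word | t) ~ P(t | rare) * P(Unknown word) / P(t);
    # P(Unknown word) is re-measured on each open testing text before tagging,
    # and the lexicon is then renormalized per tag as in equation 6.
    def p_unknown_given_tag(tag, p_unknown):
        p_tag = tag_freq[tag] / n_tokens
        return p_tag_given_rare.get(tag, 0.0) * p_unknown / p_tag if p_tag else 0.0

    # MLM case (2.2.1): keep only the more probable tags of the rare words as the
    # unknown-word tagset, which sharply reduces the ambiguity of unknown words.
    unknown_tagset = {t for t, p in p_tag_given_rare.items() if p >= tag_prob_threshold}
    return p_unknown_given_tag, unknown_tagset
```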
The word occurrence threshold used to define the less probable words and a tag probability threshold used to isolate the less probable tags are estimated experimentally.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stochastic hypothesis for the unknown words.", "sec_num": "2.2.1" }, { "text": "Extensive experiments have shown insignificant differences in the tagging error rate when alternative word occurrence thresholds have been tested. The best results are obtained when values less than 10 are used. In this paper the word occurrence threshold has been set to one in all experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stochastic hypothesis for the unknown words.", "sec_num": "2.2.1" }, { "text": "Taggers based on the HMM technique compensate for some serious training problems inherent in the MLM approach. The most important one is the presence of errors in the training text. This situation appears when uncorrected tags or analysts' mistakes remain in the text used to estimate the stochastic model parameters. These errors generate tag assignments that are not valid. In MLM taggers these tags are equally weighted to the correct ones. In contrast, in HMM taggers invalid assignments are biased by the very low value of the corresponding conditional probability of the tags (the wrong tag rarely appears in the specific word environment), which decreases the overall probability for incorrect tag assignments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Errors in the Training Text", "sec_num": "3.1" }, { "text": "Another important issue concerns the HMM ability to handle lexicon information, e.g., to find how frequently the tags have been assigned to each lexicon entry. In some languages, taggers based on HMMs almost reduce the prediction error to the half compared to the MLM approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Errors in the Training Text", "sec_num": "3.1" }, { "text": "Generally, tagger errors can be classified into three categories: a. b.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagger prediction errors", "sec_num": "3.2" }, { "text": "Errors due to inadequate training data. When the model parameters are estimated from a limited amount of training data, tagging errors appear because of unknown or inaccurately estimated conditional probabilities. Various interpolation techniques have been proposed for the estimation of the model parameters for unseen events or to smooth the model parameters (Church and Gale 1991; Essen and Steinbiss 1992; Jardino and Adda 1993; Katz 1987; McInnes 1992) .", "cite_spans": [ { "start": 361, "end": 383, "text": "(Church and Gale 1991;", "ref_id": "BIBREF4" }, { "start": 384, "end": 409, "text": "Essen and Steinbiss 1992;", "ref_id": "BIBREF13" }, { "start": 410, "end": 432, "text": "Jardino and Adda 1993;", "ref_id": "BIBREF17" }, { "start": 433, "end": 443, "text": "Katz 1987;", "ref_id": "BIBREF21" }, { "start": 444, "end": 457, "text": "McInnes 1992)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "C.", "sec_num": null }, { "text": "Errors due to the syntactical or grammatical style of the testing text. This type of error appears when the testing text has a style unknown to the model (i.e., a style used in the open testing text, not included in the training text). 
It can be reduced by using multiple models that have been previously trained in different text styles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C.", "sec_num": null }, { "text": "Errors due to insufficient model hypotheses. In this case the model hypotheses are not satisfied; e.g., there are strong intra-tag relations in distances greater than the model order, idiomatic expressions, language dependent exceptions, etc. A general solution to the variable length and depth of dependency for HMM has been already proposed (Tao 1992 ), but has not been implemented in taggers.", "cite_spans": [ { "start": 343, "end": 352, "text": "(Tao 1992", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "C.", "sec_num": null }, { "text": "In this section we present techniques to speed up the tagging process and avoid underflow or overflow phenomena during the estimation of the optimum solution. These techniques do not increase the prediction error rate or have only minimal influence on it, as proven in the experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation", "sec_num": "4." }, { "text": "Two modules consume the majority of the tagger computational time. The first module extracts from the model parameters the intra-tag and the word-tag conditional probabilities requested by the second module, which computes the optimum solution by multiplying the corresponding conditional probabilities. Binary search maximizes the searching speed of the first module, while the following three transformation techniques decrease the computing time of the second module, avoid underflow or overflow phenomena, and use the faster and low-cost fixed-point arithmetic system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation", "sec_num": "4." }, { "text": "The stochastic solutions described by equations 2 and 7 are computed by multiplying several conditional probabilities. The floating-point multiplications of these probabilities are transformed into an equal number of floating-point additions, by computing the logarithm of the optimum criterion probability. This technique solves the underflow problem which arises when many small probabilities are multiplied, and accelerates the tagger response time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logarithmic Transformation", "sec_num": "4.1" }, { "text": "The fixed-point transformation converts the floating-point logarithmic additions into an equal number of fixed-point additions. It is realized by the following quantization process:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fixed-Point Transformation", "sec_num": "4.2" }, { "text": "[/max (ln(Pmin)-ln(Px))] (8) Ix ----Round Mw ln(Pmin)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fixed-Point Transformation", "sec_num": "4.2" }, { "text": "where: Px is a conditional probability, Pmin is the minimum conditional probability in the model parameter set,/max is the maximum integer of the fixed-point arithmetic system, Mw is the maximum number of words in a sentence and Round[.] 
is a quantization function mapping real numbers into the nearest integer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fixed-Point Transformation", "sec_num": "4.2" }, { "text": "After the logarithmic and the fixed-point transformation, equations 2 and 7 become:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fixed-Point Transformation", "sec_num": "4.2" }, { "text": "To(HMM-TS) = argmax_{t1, ..., tM} I(t1) + sum_{i=2}^{N} I(ti | ti-1, ..., t1) + sum_{i=N+1}^{M} I(ti | ti-1, ..., ti-N) + sum_{i=1}^{M} I(wi | ti) (9); To(MLM) = argmax_{t1, ..., tM} I(t1) + sum_{i=2}^{N} I(ti | ti-1, ..., t1) + sum_{i=N+1}^{M} I(ti | ti-1, ..., ti-N) (10)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fixed-Point Transformation", "sec_num": "4.2" }, { "text": "The quantization function approximates the computations, producing theoretically differing solutions. In practice the prediction error differences measured for all languages, taggers, and tagsets were less than 0.02 percent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fixed-Point Transformation", "sec_num": "4.2" }, { "text": "The solution obtained by the forward-backward algorithm cannot be logarithmically transformed because of the presence of summations. It is well known that for HMMs the forward and backward probabilities tend exponentially to zero. The scaling process introduced in this case multiplies the forward and backward probabilities by a scaling factor at selective word events in order to keep the computations within the floating-point dynamic range of the computer (Rabiner 1989) .", "cite_spans": [ { "start": 459, "end": 473, "text": "(Rabiner 1989)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Scaling", "sec_num": "4.3" }, { "text": "The taggers have been realized under MS-DOS using a 32-bit C compiler. The lexicon size is limited by the available RAM. A mean value of 35 bytes per word is allocated. The tagger speed exceeds the rate of 500 words/sec on an 80386 (33MHz) for all languages and tagsets in text with known words. A maximum memory requirement of 930Kb has been measured in the experiments described in this paper. A set of symbols and keywords (a sentence separators set) and the maximum length of a sentence are the only manually defined parameters when the HMM taggers are applied.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hardware--Software", "sec_num": "4.4" }, { "text": "In the MLM taggers, the word occurrence threshold that isolates the less probable words and the tag probability threshold used to reject the less probable tags from the unknown words tagset are the manually defined parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hardware--Software", "sec_num": "4.4" }, { "text": "The training process has been designed to estimate or update the model parameters from fully tagged text without any manual intervention. Therefore, frequency measurements are defined or updated as model parameters instead of conditional probabilities. 
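As an illustration of Section 4.2, the sketch below maps every conditional probability of the model to an integer score under one plausible reading of equation 8 (whose typeset form above is partly garbled); Imax and Mw are the quantities defined in the text, and the default values used here are only placeholders. Summing these scores reproduces the searches of equations 9 and 10 in fixed-point arithmetic.

```python
import math

# Illustration of Section 4.2 under one plausible reading of equation 8:
#   Ix = Round[ Imax * (ln(Pmin) - ln(Px)) / (Mw * ln(Pmin)) ]
# so Px = Pmin maps to 0, Px = 1 maps to Imax / Mw, and a sum over at most Mw
# word positions never exceeds Imax; the argmax is preserved up to quantization error.
def quantize(p_x, p_min, i_max=2**31 - 1, m_w=200):
    return round(i_max * (math.log(p_min) - math.log(p_x)) / (m_w * math.log(p_min)))

def quantize_model(probabilities, i_max=2**31 - 1, m_w=200):
    """Map every conditional probability of the model to an integer score."""
    p_min = min(probabilities.values())
    return {event: quantize(p, p_min, i_max, m_w) for event, p in probabilities.items()}

# With these integer scores the Viterbi search of equation 2 becomes a sum of
# fixed-point additions (equations 9 and 10).  The forward-backward tagger
# (HMM-T) cannot use the log transform because of its summations, so the
# scaling of Section 4.3 is applied instead.
```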
Specifically, the first-and the second-order MLM (MLM1 and MLM2, respectively), the first-and the second-order HMM of the most probable tag sequence criterion (HMM-TS1 and HMM-TS2, respectively), and the first-order HMM of the most probable tag criterion (HMM-T1) have been realized.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Taggers", "sec_num": "5.1" }, { "text": "The tagger performance has been measured in extensive experiments carried out on corpora of seven languages, English, Dutch, German, French, Greek, Italian and Spanish, annotated according to detailed grammatical categories. In Table 1 , the type and the size of these corpora is shown. They are part of corpora selected in the framework of the ESPRIT-I project 291/860: \"Linguistic Analysis of the European Languages\" (1985) (1986) (1987) (1988) (1989) by the project partners (Table 2) and annotated by using semi-automatic taggers. Manual correction was performed by experienced, native analysts for each language separately. In all languages the entries were tagged as they appeared in the text. In the German corpus, for example, where multiple words are concatenated, the words were not separated.", "cite_spans": [ { "start": 419, "end": 425, "text": "(1985)", "ref_id": null }, { "start": 426, "end": 432, "text": "(1986)", "ref_id": null }, { "start": 433, "end": 439, "text": "(1987)", "ref_id": null }, { "start": 440, "end": 446, "text": "(1988)", "ref_id": null }, { "start": 447, "end": 453, "text": "(1989)", "ref_id": null } ], "ref_spans": [ { "start": 228, "end": 235, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 478, "end": 487, "text": "(Table 2)", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Corpora", "sec_num": "5.2" }, { "text": "Two sets of grammatical tags were isolated from a unified set of grammatical categories defined in the ESPRIT I project 291/860 (ESPRIT-860, Internal report, 1986): ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagsets", "sec_num": "5.3" }, { "text": "A common tagset of 11 main grammatical categories for each language, as described in 2.1.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b.", "sec_num": null }, { "text": "An extended set including common categorization of the grammatical information for all languages, as shown in Table 3 . In some languages a number of grammatical categories is not applicable. The depth of grammatical analysis and the grammatical structure of each language produce a different number of POS tags. In Table 4 the number of POS tags used for each language and each set of grammatical categories is shown.", "cite_spans": [], "ref_spans": [ { "start": 110, "end": 117, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 316, "end": 323, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "b.", "sec_num": null }, { "text": "The corpus ambiguity was measured by the mean number of possible tags for each word of the corpus for both sets of grammatical tags ( Table 5 ). The most ambiguous texts are the French, Italian, and English in the tagset of main grammatical classes and the German, Greek, Italian, and French in the extended set of grammatical categories. 
In Figure 5 the percent occurrence of unknown words in an open testing text of 10,000 words is shown versus the size of the training text.", "cite_spans": [], "ref_spans": [ { "start": 134, "end": 141, "text": "Table 5", "ref_id": "TABREF4" }, { "start": 342, "end": 350, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Corpus Ambiguity", "sec_num": "5.4" }, { "text": "The Italian and Greek corpora have the greatest number of unknown words followed by the Spanish corpus (for the available results with restricted training text).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Ambiguity", "sec_num": "5.4" }, { "text": "Taking into account the word ambiguity in the training text (Table 5) , the occurrence of unknown words in the open testing text ( Figure 5) , and the hypothesis that the unknown word tagset and the application tagset are the same, the ambiguity of the open testing corpus for both sets of grammatical categories was computed for a 50,000-word training corpus (Table 6) . Size of training text ('1 OK words)", "cite_spans": [], "ref_spans": [ { "start": 60, "end": 69, "text": "(Table 5)", "ref_id": "TABREF4" }, { "start": 131, "end": 140, "text": "Figure 5)", "ref_id": null }, { "start": 360, "end": 369, "text": "(Table 6)", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Corpus Ambiguity", "sec_num": "5.4" }, { "text": "Percentage of unknown words in open testing text of 10,000 words for various sizes of the training text. For the set of main grammatical classes the ambiguity of the open testing corpus is more or less the same for all languages, varying from a minimum of 7.83 tags per word in the Dutch text to a maximum of 9.32 in the Greek corpus. For the extended set of grammatical categories three types of corpora can be distinguished:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "a.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "The most ambiguous is the corpus of the Greek language, because of the great number of grammatical tags (443) and the strong presence of unknown words in the open testing text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "b.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "In the German, Spanish, and Italian texts the same ambiguity is measured.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 5", "sec_num": null }, { "text": "The least ambiguous are the Dutch and French texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "c.", "sec_num": null }, { "text": "Taking into account the previous results, it is important to note that the great differences between languages in text ambiguity, in the presence of unknown words and in the statistics of the grammatical categories, e.g. the different occurrence of prepositions in English and French corpora, prevent a direct comparison of languages from the taggers' error rate. Apart from a few obvious observations given in Section 5.7, such a comparison would require a detailed examination of the corpora and the taggers' errors by experienced linguists. Therefore, the prediction error rates presented in Tagger memory requirements for the extended set of grammatical categories. 
Size of training text ('1 OK words)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "c.", "sec_num": null }, { "text": "Unknown word error rate for the HMM-TS2 tagger and the set of main grammatical categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "in contrast to the extended tagset experiments, where a greater-size training text for the German, Greek, and Spanish languages is required. This phenomenon becomes stronger in taggers based on the HMM where the accuracy of the P(w J t) estimation is proportional to the word and the tag frequency of occurrence in the training text. Thus, for all tagsets and languages a larger training text is required in order to minimize the error rate. The taggers based on the HMM reduce the prediction error almost to half in comparison to the same order taggers based on MLM. Strong dependencies on the language and the estimation accuracy of the model parameters influence this reduction. The alternative HMM solutions give trivial performance differences, confirming recent results obtained in the Treebank corpus by using an HMM tagger (Merialdo 1991) .", "cite_spans": [ { "start": 831, "end": 846, "text": "(Merialdo 1991)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "Concerning the performance of the taggers in unknown words, we present in Figure 8 as an example the HMM-TS2 error rate for the tagset of the main grammatical categories, which is also the worst case for this set of grammatical categories. Generally the error rate decreases when the training text is increased. The stochastic model is successful for only half of the unknown words for the Italian text and for approximately two out of three unknown words for the English text. In all other languages the HMM-TS2 tagger gives the correct solution for three out of four unknown words.", "cite_spans": [], "ref_spans": [ { "start": 74, "end": 82, "text": "Figure 8", "ref_id": null } ], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "Similar results are achieved when the extended set of grammatical categories is tested. In this case the unknown word error rate increases about 10-20 percent for all the languages except the Greek language. In the Greek text the error rate reaches approximately 65 percent when 100,000-word text is used to define the parameters of the HMM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "The unknown words, which initially cover about 25-35 percent of the text, are reduced to 8-15 percent when all the available text is used as training data. In the majority of the experiments, the tagger error rate decreases when new text updates the model parameters. Trivial differences of the tagger learning rates between languages and tagsets show the efficiency of the training method in estimating the model transition probabilities for the tested languages and the validity of the stochastic hypothesis for the unknown words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "In this paper five automatic, stochastic taggers that are able to tag unknown words have been presented. The taggers have been tested in newspaper corpora of seven European languages and an EEC-law text of the English language using two sets of grammatical categories. 
When new training text updates the model parameters, the tagging error rate changes as expected: in text with unknown words a lower error rate is measured, proving the efficiency of the relative frequencies learning method and the validity of the hypothesis for the unknown words' stochastic behavior. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." } ], "back_matter": [ { "text": " Table 7 Lexicon size for 100,000-word training text. English French German Greek Italian Lexicon size 13,700 12,200 13,500 8,900 17,400 15,300 this paper should be regarded only as indication of the probabilistic taggers' efficiency in each separate language when small training texts are available.", "cite_spans": [], "ref_spans": [ { "start": 1, "end": 8, "text": "Table 7", "ref_id": null }, { "start": 54, "end": 154, "text": "English French German Greek Italian Lexicon size 13,700 12,200 13,500 8,900 17,400 15,300", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "The corpora were divided into 10,000-word entries. All parts except the last one were used to create (initially) and update the model parameters successively. The last part was tagged each time after the model parameters were updated, giving results of the tagger performance on open testing text. The influence of the application tagset on the tagger performance was measured by testing the two totally different tagsets described in Section 5.3. The experimental process was repeated for each language, tagset and tagger. Thus a total number of 2 (tagsets) \u2022 5 (taggers) ~ [7 (languages) + 1 (Test on English EEC-law text)] = 80 experiments was carried out.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5.5" }, { "text": "In Figures 6 and 7 the tagger speed and the memory requirements after the last memory adaptation process are presented for all taggers and languages, and for the extended tagset.The Greek and Italian corpora have a great number of lexical entries (different word forms) for the same amount of 100,000-word training text, as shown in Table 7 . As a result these taggers require more memory (Figure 7) . In contrast, the small size of the German lexicon decreases the required memory.Tagger speed is closely related to the corpus ambiguity (Table 6 ). The ambiguity of the Greek corpus is more than three times greater than the next one, the German corpus.The significant influence of the training text size on tagger speed is proven by comparing the experimental results in the English corpus (newspaper and EEC-Law). When the taggers are trained using the 170,000 words of the English newspaper corpus, a greater number of lexicon entries and a greater number of transition probabilities (Figure 7) is measured than in the case of the EEC-law corpus (100K words training text). The model becomes more complex, but tagger speed is slightly higher because of the greater size of the training text, which reduces the presence of unknown words in the testing text. 
Generally, tagger speed increases when the training text is increased.", "cite_spans": [], "ref_spans": [ { "start": 333, "end": 340, "text": "Table 7", "ref_id": null }, { "start": 389, "end": 399, "text": "(Figure 7)", "ref_id": null }, { "start": 538, "end": 546, "text": "(Table 6", "ref_id": null }, { "start": 988, "end": 998, "text": "(Figure 7)", "ref_id": null } ], "eq_spans": [], "section": "Tagger Speed and Memory Requirements", "sec_num": "5.6" }, { "text": "The actual tagger error rates for all experiments are given in Appendices A and B. In this section we present a discussion of these error rates.The error rate depends strongly on the test text and language, and the type and size of the tagset. The worst results have been obtained for the Greek language because of its significantly greater ambiguity, the number of tags (requiring significantly greater training text), and its freer syntax.In the main category of tagset experiments, the model parameters for the MLM systems are estimated accurately when the training text exceeds 50,000-90,000 words, ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tagger Error Rate", "sec_num": "5.7" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A simple rule-based part of speech tagger", "authors": [ { "first": "E", "middle": [], "last": "Brill", "suffix": "" } ], "year": 1992, "venue": "Proceedings, Third Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "152--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brill, E. (1992). \"A simple rule-based part of speech tagger.\" In Proceedings, Third Conference on Applied Natural Language Processing. Trento, Italy, 152-155.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Three different probabilistic language models: Comparison and combination", "authors": [ { "first": "H", "middle": [], "last": "Cerf-Danon", "suffix": "" }, { "first": "M", "middle": [], "last": "Ei-Beze", "suffix": "" } ], "year": 1991, "venue": "Proceedings, International Conference on Acoustics Speech and Signal Processing", "volume": "", "issue": "", "pages": "297--300", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cerf-Danon, H., and EI-Beze, M. (1991). \"Three different probabilistic language models: Comparison and combination.\" In Proceedings, International Conference on Acoustics Speech and Signal Processing, 297-300.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Equations for part-of-speech tagging", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "C", "middle": [], "last": "Hendrickson", "suffix": "" }, { "first": "N", "middle": [], "last": "Jacobson", "suffix": "" }, { "first": "M", "middle": [], "last": "Perkowitz", "suffix": "" } ], "year": 1993, "venue": "Proceedings, National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charniak, E.; Hendrickson, C.; Jacobson, N.; and Perkowitz, M. (1993). 
\"Equations for part-of-speech tagging.\" In Proceedings, National Conference on Artificial Intelligence.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A stochastic parts program and noun phrase parser for unrestricted text", "authors": [ { "first": "K", "middle": [], "last": "Church", "suffix": "" } ], "year": 1988, "venue": "Proceedings, Second Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "136--143", "other_ids": {}, "num": null, "urls": [], "raw_text": "Church, K. (1988). \"A stochastic parts program and noun phrase parser for unrestricted text.\" In Proceedings, Second Conference on Applied Natural Language Processing. Austin, Texas, 136-143.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A comparison of the enhanced Good-Turing and deleted estimation methods for estimating probabilities of English bigrams", "authors": [ { "first": "K", "middle": [], "last": "Church", "suffix": "" }, { "first": "W", "middle": [], "last": "Gale", "suffix": "" } ], "year": 1991, "venue": "Computer Speech and Language", "volume": "5", "issue": "", "pages": "19--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Church, K., and Gale, W. (1991). \"A comparison of the enhanced Good-Turing and deleted estimation methods for estimating probabilities of English bigrams.\" Computer Speech and Language 5, 19-24.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A practical part-of-speech tagger", "authors": [ { "first": "D", "middle": [], "last": "Cutting", "suffix": "" }, { "first": "J", "middle": [], "last": "Kupiec", "suffix": "" }, { "first": "J", "middle": [], "last": "Pederson", "suffix": "" }, { "first": "P", "middle": [], "last": "Sibun", "suffix": "" } ], "year": 1992, "venue": "Proceedings, Third Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "133--140", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cutting, D.; Kupiec, J.; Pederson, J.; and Sibun, P. (1992). \"A practical part-of-speech tagger.\" In Proceedings, Third Conference on Applied Natural Language Processing. Trento, Italy, 133-140.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Semi automatic labelling of Greek texts", "authors": [ { "first": "E", "middle": [], "last": "Dermatas", "suffix": "" }, { "first": "G", "middle": [], "last": "Kokkinakis", "suffix": "" } ], "year": 1988, "venue": "Proceedings, Seventh FASE Symposium SPEECH \"88", "volume": "", "issue": "", "pages": "239--245", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dermatas, E., and Kokkinakis, G. (1988). \"Semi automatic labelling of Greek texts.\" In Proceedings, Seventh FASE Symposium SPEECH \"88. Edinburgh, 239-245.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A system for automatic text labelling", "authors": [ { "first": "E", "middle": [], "last": "Dermatas", "suffix": "" }, { "first": "G", "middle": [], "last": "Kokkinakis", "suffix": "" } ], "year": 1993, "venue": "Proceedings, Eurospeech-90", "volume": "", "issue": "", "pages": "382--385", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dermatas, E., and Kokkinakis, G. (1993). \"A system for automatic text labelling.\" In Proceedings, Eurospeech-90. 
Paris, 382-385.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A fast multilingual probabilistic tagger", "authors": [ { "first": "E", "middle": [], "last": "Dermatas", "suffix": "" }, { "first": "G", "middle": [], "last": "Kokkinakis", "suffix": "" } ], "year": 1993, "venue": "Proceedings, Eurospeech-93. Berlin", "volume": "", "issue": "", "pages": "1323--1326", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dermatas, E., and Kokkinakis, G. (1993). \"A fast multilingual probabilistic tagger.\" In Proceedings, Eurospeech-93. Berlin, 1323-1326 (presented also in the Eurospeech-93 exhibition).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A multilingual unlimited vocabulary stochastic tagger", "authors": [ { "first": "E", "middle": [], "last": "Dermatas", "suffix": "" }, { "first": "G", "middle": [], "last": "Kokkinakis", "suffix": "" } ], "year": 1994, "venue": "In Advanced Speech Applications--European Commission ESPRIT", "volume": "", "issue": "1", "pages": "98--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dermatas, E., and Kokkinakis, G. (1994). \"A multilingual unlimited vocabulary stochastic tagger.\" In Advanced Speech Applications--European Commission ESPRIT (1), edited by K. Varghese, S. Pfleger, and J. Lefevre, 98-106. Springer-Verlag.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Back-propagation based lexical acquisition experiments", "authors": [ { "first": "M", "middle": [], "last": "Eineborg", "suffix": "" }, { "first": "B", "middle": [], "last": "Gamback", "suffix": "" } ], "year": 1993, "venue": "Proceedings, NeuroNimes: Neural Networks and their Industrial & Cognitive Applications. Nimes", "volume": "", "issue": "", "pages": "169--178", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eineborg, M., and Gamback, B. (1993). \"Back-propagation based lexical acquisition experiments.\" In Proceedings, NeuroNimes: Neural Networks and their Industrial & Cognitive Applications. Nimes, 169-178.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Comparing a connectionist and rule based model for assignment parts-of-speech", "authors": [ { "first": "K", "middle": [], "last": "Elenius", "suffix": "" } ], "year": 1990, "venue": "Proceedings, International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "597--600", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elenius, K. (1990).\"Comparing a connectionist and rule based model for assignment parts-of-speech.\" In Proceedings, International Conference on Acoustics, Speech and Signal Processing, 597-600.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Assigning parts-of-speech of words from their orthography using a connectionist model", "authors": [ { "first": "K", "middle": [], "last": "Elenius", "suffix": "" }, { "first": "R", "middle": [], "last": "Carlson", "suffix": "" } ], "year": 1986, "venue": "Proceedings, European Conference on Speech Communication and Technology", "volume": "", "issue": "", "pages": "534--537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elenius, K., and Carlson, R. (1989). \"Assigning parts-of-speech of words from their orthography using a connectionist model.\" In Proceedings, European Conference on Speech Communication and Technology. Paris, 534-537. Partners of ESPRIT-291/860 (1986). \"Unification of the word classes of the ESPRIT Project 860.\" BU-WKL-0376. 
Internal Report.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Cooccurrence smoothing for statistical language modelling", "authors": [ { "first": "U", "middle": [], "last": "Essen", "suffix": "" }, { "first": "V", "middle": [], "last": "Steinbiss", "suffix": "" } ], "year": 1992, "venue": "Proceedings, International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "161--164", "other_ids": {}, "num": null, "urls": [], "raw_text": "Essen, U., and Steinbiss, V. (1992). \"Cooccurrence smoothing for statistical language modelling.\" In Proceedings, International Conference on Acoustics, Speech and Signal Processing, 161-164.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The Computational Analysis of English: A Corpus-Based Approach", "authors": [ { "first": "R", "middle": [], "last": "Garside", "suffix": "" }, { "first": "G", "middle": [], "last": "Leech", "suffix": "" }, { "first": "G", "middle": [], "last": "Sampson", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Garside, R.; Leech, G.; and Sampson, G. (1987). The Computational Analysis of English: A Corpus-Based Approach. Longman.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Extended Viterbi algorithm for second order hidden Markov process", "authors": [ { "first": "Y", "middle": [], "last": "He", "suffix": "" } ], "year": 1988, "venue": "Proceedings, International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "718--720", "other_ids": {}, "num": null, "urls": [], "raw_text": "He, Y. (1988). \"Extended Viterbi algorithm for second order hidden Markov process.\" In Proceedings, International Conference on Acoustics, Speech and Signal Processing, 718-720.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Acquiring lexical knowledge from text: A case study", "authors": [ { "first": "P", "middle": [], "last": "Jacobs", "suffix": "" }, { "first": "U", "middle": [], "last": "Zernik", "suffix": "" } ], "year": 1988, "venue": "Proceedings, Seventh National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "739--744", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacobs, P., and Zernik, U. (1988). \"Acquiring lexical knowledge from text: A case study.\" In Proceedings, Seventh National Conference on Artificial Intelligence. Saint Paul, Minnesota, 739-744.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Automatic word classification using simulated annealing", "authors": [ { "first": "M", "middle": [], "last": "Jardino", "suffix": "" }, { "first": "G", "middle": [], "last": "Adda", "suffix": "" } ], "year": 1993, "venue": "Proceedings, International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "41--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jardino, M., and Adda, G. (1993). 
\"Automatic word classification using simulated annealing.\" In Proceedings, International Conference on Acoustics, Speech and Signal Processing, 41-44.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Constraint grammar as a framework for parsing running text", "authors": [ { "first": "E", "middle": [], "last": "Karlsson", "suffix": "" } ], "year": 1990, "venue": "Proceedings, Thirteenth International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karlsson, E (1990). \"Constraint grammar as a framework for parsing running text.\" In Proceedings, Thirteenth International Conference on Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Constraint grammar: A language-independent system for parsing unrestricted text, with an application to English", "authors": [ { "first": "F", "middle": [], "last": "Karlsson", "suffix": "" }, { "first": "A", "middle": [], "last": "Voutilainen", "suffix": "" }, { "first": "A", "middle": [], "last": "Anttila", "suffix": "" }, { "first": "J", "middle": [], "last": "Heikkila", "suffix": "" } ], "year": 1991, "venue": "Workshop Notes from the Ninth National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karlsson, F.; Voutilainen, A.; Anttila, A.; and Heikkila, J. (1991). \"Constraint grammar: A language-independent system for parsing unrestricted text, with an application to English.\" In Workshop Notes from the Ninth National Conference on Artificial Intelligence. Anaheim, California.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Estimation of probabilities from sparse data for the language model component of a speech recognizer", "authors": [ { "first": "S", "middle": [], "last": "Katz", "suffix": "" } ], "year": 1987, "venue": "IEEE Trans. on Acoustics, Speech, and Language Processing", "volume": "35", "issue": "3", "pages": "400--401", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katz, S. (1987). \"Estimation of probabilities from sparse data for the language model component of a speech recognizer.\" IEEE Trans. on Acoustics, Speech, and Language Processing, 35(3), 400-401.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Robust part-of-speech tagging using a Hidden Markov Model", "authors": [ { "first": "J", "middle": [], "last": "Kupiec", "suffix": "" } ], "year": 1992, "venue": "Computer Speech & Language", "volume": "6", "issue": "3", "pages": "225--242", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kupiec, J. (1992). \"Robust part-of-speech tagging using a Hidden Markov Model.\" Computer Speech & Language, 6(3), 225-242.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A technique to automatically assign parts-of-speech to words taking into account word-ending information through a probabilistic model", "authors": [ { "first": "G", "middle": [], "last": "Maltese", "suffix": "" }, { "first": "F", "middle": [], "last": "Mancini", "suffix": "" } ], "year": 1991, "venue": "Proceedings, Eurospeech-91", "volume": "", "issue": "", "pages": "753--756", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maltese, G., and Mancini, F. (1991). 
\"A technique to automatically assign parts-of-speech to words taking into account word-ending information through a probabilistic model.\" In Proceedings, Eurospeech-91, 753-756.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Building a large annotated corpus of English: The Penn Treebank", "authors": [ { "first": "M", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "B", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "M", "middle": [], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "315--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcus, M.; Santorini, B.; and Marcinkiewicz, M. (1993). \"Building a large annotated corpus of English: The Penn Treebank.\" Computational Linguistics, 19(2), 315-330.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "An enhanced interpolation technique for context-specific probability estimation in speech and language modelling", "authors": [ { "first": "E", "middle": [], "last": "Mcinnes", "suffix": "" } ], "year": 1992, "venue": "Proceedings, International Conference on Spoken Language Processing", "volume": "", "issue": "", "pages": "1491--1494", "other_ids": {}, "num": null, "urls": [], "raw_text": "McInnes, E (1992). \"An enhanced interpolation technique for context-specific probability estimation in speech and language modelling.\" In Proceedings, International Conference on Spoken Language Processing, 1491-1494.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Tagging text with a probabilistic model", "authors": [ { "first": "B", "middle": [], "last": "Merialdo", "suffix": "" } ], "year": 1991, "venue": "International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "809--812", "other_ids": {}, "num": null, "urls": [], "raw_text": "Merialdo, B. (1991). \"Tagging text with a probabilistic model.\" In International Conference on Acoustics, Speech and Signal Processing, 809-812.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Tagging English text with a probabilistic model", "authors": [ { "first": "B", "middle": [], "last": "Merialdo", "suffix": "" } ], "year": 1994, "venue": "Computational Linguistics", "volume": "20", "issue": "2", "pages": "155--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Merialdo, B. (1994). \"Tagging English text with a probabilistic model.\" Computational Linguistics, 20(2), 155-171.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Empirical studies in part of speech labelling", "authors": [ { "first": "M", "middle": [], "last": "Meteer", "suffix": "" }, { "first": "R", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "R", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 1991, "venue": "Proceedings, Fourth DARPA Workshop on Speech and Natural Language", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meteer, M.; Schwartz, R.; and Weischedel, R. (1991). \"Empirical studies in part of speech labelling.\" In Proceedings, Fourth DARPA Workshop on Speech and Natural Language. 
Morgan Kaufman.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Neural network approach to word category prediction for English texts", "authors": [ { "first": "M", "middle": [], "last": "Nakamura", "suffix": "" }, { "first": "K", "middle": [], "last": "Maruyama", "suffix": "" }, { "first": "T", "middle": [], "last": "Kawabata", "suffix": "" }, { "first": "K", "middle": [], "last": "Shikano", "suffix": "" } ], "year": 1990, "venue": "Proceedings, Thirteenth International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nakamura, M.; Maruyama, K.; Kawabata, T.; and Shikano, K. (1990). \"Neural network approach to word category prediction for English texts.\" In Proceedings, Thirteenth International Conference on Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Probabilistic prediction of parts-of-speech from spelling using decision trees", "authors": [ { "first": "W", "middle": [], "last": "Pelillo", "suffix": "" }, { "first": "E", "middle": [], "last": "Moro", "suffix": "" }, { "first": "M", "middle": [], "last": "Refice", "suffix": "" } ], "year": 1992, "venue": "Proceedings, International Conference on Spoken Language Processing", "volume": "", "issue": "", "pages": "1343--1346", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pelillo, W.; Moro, E; and Refice, M. (1992). \"Probabilistic prediction of parts-of-speech from spelling using decision trees.\" In Proceedings, International Conference on Spoken Language Processing, 1343-1346.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "A tutorial on Hidden Markov Models and selected applications in speech recognition", "authors": [ { "first": "L", "middle": [], "last": "Rabiner", "suffix": "" } ], "year": 1989, "venue": "Proceedings", "volume": "77", "issue": "", "pages": "257--285", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rabiner, L. (1989). \"A tutorial on Hidden Markov Models and selected applications in speech recognition.\" In Proceedings, IEEE 77(2), 257-285.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "A generalisation of discrete Hidden Markov Model and of Viterbi algorithm", "authors": [ { "first": "C", "middle": [], "last": "Tao", "suffix": "" } ], "year": 1992, "venue": "Pattern Recognition", "volume": "25", "issue": "11", "pages": "1381--1397", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao, C. (1992). \"A generalisation of discrete Hidden Markov Model and of Viterbi algorithm.\" Pattern Recognition, 25(11), 1381-1397.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Ambiguity resolution in a reductionistic parser", "authors": [ { "first": "A", "middle": [], "last": "Voutilainen", "suffix": "" }, { "first": "P", "middle": [], "last": "Tapanainen", "suffix": "" } ], "year": 1993, "venue": "Proceedings, Sixth Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "394--403", "other_ids": {}, "num": null, "urls": [], "raw_text": "Voutilainen, A., and Tapanainen, P. (1993). \"Ambiguity resolution in a reductionistic parser.\" In Proceedings, Sixth Conference of the European Chapter of the Association for Computational Linguistics. 
Utrecht, Netherlands, 394-403.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Constraint grammar of English", "authors": [ { "first": "A", "middle": [], "last": "Voutilainen", "suffix": "" }, { "first": "J", "middle": [], "last": "Heikkila", "suffix": "" }, { "first": "A", "middle": [], "last": "Antitila", "suffix": "" } ], "year": 1992, "venue": "Publication", "volume": "21", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Voutilainen, A.; Heikkila, J.; and Antitila, A. (1992). \"Constraint grammar of English.\" Publication 21, Department of General Linguistics, University of Helinski, Helinski, Finland.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Second order Hidden Markov Models for speech recognition", "authors": [ { "first": "B", "middle": [], "last": "Watson", "suffix": "" }, { "first": "Chung", "middle": [], "last": "Tsoi", "suffix": "" }, { "first": "A", "middle": [], "last": "", "suffix": "" } ], "year": 1992, "venue": "Proceedings, Fourth Australian International Conference on Speech Science and Technology", "volume": "", "issue": "", "pages": "146--151", "other_ids": {}, "num": null, "urls": [], "raw_text": "Watson, B., and Chung Tsoi, A. (1992). \"Second order Hidden Markov Models for speech recognition.\" In Proceedings, Fourth Australian International Conference on Speech Science and Technology, 146-151.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Coping with ambiguity and unknown words through probabilistic models", "authors": [ { "first": "R", "middle": [], "last": "Weischedel", "suffix": "" }, { "first": "M", "middle": [], "last": "Meteer", "suffix": "" }, { "first": "R", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "L", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "J", "middle": [], "last": "Palmucci", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "359--382", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weischedel, R.; Meteer, M.; Schwartz, R.; Ramshaw, L.; and Palmucci, J. (1993). \"Coping with ambiguity and unknown words through probabilistic models.\" Computational Linguistics, 19(2), 359-382.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Statistically based automatic tagging of German text corpora with parts-of-speech--some experiments", "authors": [ { "first": "K", "middle": [], "last": "Wothke", "suffix": "" }, { "first": "I", "middle": [], "last": "Weck-Ulm", "suffix": "" }, { "first": "J", "middle": [], "last": "Heinecke", "suffix": "" }, { "first": "O", "middle": [], "last": "Mertineit", "suffix": "" }, { "first": "T", "middle": [], "last": "Pachunke", "suffix": "" } ], "year": 1993, "venue": "TR75.93.02-IBM. IBM Germany", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wothke, K.; Weck-Ulm, I.; Heinecke, J.; Mertineit, O.; and Pachunke, T. (1993). \"Statistically based automatic tagging of German text corpora with parts-of-speech--some experiments.\" TR75.93.02-IBM. 
IBM Germany, Heidelberg Scientific Center.", "links": null } }, "ref_entries": { "FIGREF1": { "text": "Distribution of the main grammatical classes of the known and unknown words and the words occurring only once in English text.", "uris": null, "type_str": "figure", "num": null }, "FIGREF2": { "text": "Distribution of the main grammatical classes of the known and unknown words and the words occurring only once in French text.", "uris": null, "type_str": "figure", "num": null }, "FIGREF3": { "text": "Bayes' rule: P(Unknown word | ti) = P(ti | Unknown word) P(Unknown word) / P(ti) ≈ P(ti | Less probable word) P(Unknown word) / P(ti)", "uris": null, "type_str": "figure", "num": null }, "FIGREF4": { "text": "Figure 4 Chi-square test for the distribution of the grammatical tags of the unknown words and the less probable words in the English text, for the extended tagset of grammatical classes and various training text sizes.", "uris": null, "type_str": "figure", "num": null }, "FIGREF6": { "text": "Tagger speed after the last adaptation process for the extended set of grammatical categories.", "uris": null, "type_str": "figure", "num": null }, "FIGREF7": { "text": "Figure 7 Tagger memory requirements for the extended set of grammatical categories.", "uris": null, "type_str": "figure", "num": null }, "FIGREF8": { "text": "Size of the training text (*10 Kwords).", "uris": null, "type_str": "figure", "num": null }, "TABREF1": { "text": "Size of the corpora.", "num": null, "html": null, "content": "
Text        Dutch     English   French    German    Greek     Italian   Spanish
Newspaper 110,000 180,000 100,000 100,000 120,000 160,000 60,000
EEC-Law     --        110,000   --        --        --        --        --
", "type_str": "table" }, "TABREF2": { "text": "", "num": null, "html": null, "content": "
ESPRIT 291/860: Project partners.
Country        Partner
England        Acorn Computers Limited
France         Centre National de la Recherche Scientifique (CNRS), LIMSI Division
Germany        Ruhr-Universitaet Bochum, Lehrstuhl fur Allgemeine Elektrotechnik und Akustik
Greece         University of Patras, Wire Communications Laboratory (WCL), Speech and Language Group
Italy          Ing. C. Olivetti & C., S.p.A.
Italy          Centro Studi Applicazioni in Tecnologie Avanzate - CSATA
Netherlands    Katholieke Universiteit Nijmegen, Dienst A-Faculteiten
Spain          Universidad Nacional de Educacion a Distancia (UNED), Madrid
probabilities that are computed afterwards by using the corresponding relative frequencies.
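A minimal sketch of such relative-frequency estimation, assuming a tagged training corpus given as lists of (word, tag) pairs; the function and variable names below are illustrative and not the authors' implementation:

from collections import Counter, defaultdict

def estimate_parameters(tagged_sentences):
    """Relative-frequency estimates of the lexical probabilities P(word | tag)
    and the bigram transition probabilities P(tag_i | tag_i-1).

    tagged_sentences: a list of sentences, each a list of (word, tag) pairs.
    Smoothing of unseen events is deliberately omitted from this sketch.
    """
    emission_counts = defaultdict(Counter)    # tag -> counts of words emitted with that tag
    transition_counts = defaultdict(Counter)  # previous tag -> counts of following tags
    tag_counts = Counter()

    for sentence in tagged_sentences:
        previous = "<s>"  # sentence-boundary pseudo-tag
        for word, tag in sentence:
            emission_counts[tag][word] += 1
            transition_counts[previous][tag] += 1
            tag_counts[tag] += 1
            previous = tag

    emission = {tag: {w: c / tag_counts[tag] for w, c in words.items()}
                for tag, words in emission_counts.items()}
    transition = {prev: {t: c / sum(nxt.values()) for t, c in nxt.items()}
                  for prev, nxt in transition_counts.items()}
    return emission, transition

In practice these raw relative frequencies would be smoothed (for example with Good-Turing or interpolation methods such as those cited in the bibliography) before being used by the tagger.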
", "type_str": "table" }, "TABREF3": { "text": "Extended set of grammatical categories.", "num": null, "html": null, "content": "
Main grammatical categories         Detailed grammatical information
Adjective, Noun, Pronoun            Regular base comparative superlative interrogative person number case
Adverb                              Regular base comparative superlative interrogative
Article, Determiner, Preposition    Person number case
Verb                                Tense voice mood person number case
Table 4
Number of grammatical tags.
Text            Dutch    English              French    German    Greek    Italian    Spanish
Main set        9        News: 10, Law: 10    10        11        11       10         10
Extended set    50       News: 43, Law: 36    14        116       443      121        121
", "type_str": "table" }, "TABREF4": { "text": "Word ambiguity in the newspaper corpus.", "num": null, "html": null, "content": "
Tagset          English    Dutch    German    French    Greek    Italian    Spanish
Main set        1.336      1.111    1.3       1.69      1.209    1.62       1.197
Extended set    1.417      1.291    1.878     1.705     1.855    1.729      1.25
", "type_str": "table" }, "TABREF5": { "text": "Corpus ambiguity in newspaper open testing text.", "num": null, "html": null, "content": "
Tagset          English    Dutch    German    French    Greek    Italian    Spanish
Main set        8.75       7.83     9.9       9.19      9.32     8.5        8.5
Extended set    37.03      42.78    103.07    12.83     67.25    99.86      100.69
", "type_str": "table" } } } }