|
{ |
|
"paper_id": "E95-1021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:31:37.626134Z" |
|
}, |
|
"title": "Tagging Frenchcomparing a statistical and a constraint-based method", |
|
"authors": [ |
|
{ |
|
"first": "Jean-Pierre", |
|
"middle": [ |
|
"Pierre" |
|
], |
|
"last": "Chanod", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Grenoble Laboratory 6", |
|
"institution": "", |
|
"location": { |
|
"postCode": "38240", |
|
"settlement": "Meylan", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Pasi", |
|
"middle": [], |
|
"last": "Tapanainen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Grenoble Laboratory 6", |
|
"institution": "", |
|
"location": { |
|
"postCode": "38240", |
|
"settlement": "Meylan", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "" |
|
}
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper we compare two competing approaches to part-of-speech tagging, statistical and constraint-based disambiguation, using French as our test language. We imposed a time limit on our experiment: the amount of time spent on the design of our constraint system was about the same as the time we used to train and test the easy-to-implement statistical model. We describe the two systems and compare the results. The accuracy of the statistical method is reasonably good, comparable to taggers for English. But the constraint-based tagger seems to be superior even with the limited time we allowed ourselves for rule development.", |
|
"pdf_parse": { |
|
"paper_id": "E95-1021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper we compare two competing approaches to part-of-speech tagging, statistical and constraint-based disambiguation, using French as our test language. We imposed a time limit on our experiment: the amount of time spent on the design of our constraint system was about the same as the time we used to train and test the easy-to-implement statistical model. We describe the two systems and compare the results. The accuracy of the statistical method is reasonably good, comparable to taggers for English. But the constraint-based tagger seems to be superior even with the limited time we allowed ourselves for rule development.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In this paper 1 we compare two competing approaches to part-of-speech tagging, statistical and constraint-based disambiguation, using French as our test language. The process of tagging consists of three stages: tokenisation, morphological analysis and disambiguation. The two taggers include the same tokeniser and morphological analyser. The tokeniser uses a finite-state transducer that reads the input and outputs a token whenever it has read far enough to be sure that a token is detected. The morphological analYser contains a transducer lexicon. It produces all the legitimate tags for words that appear in the lexicon. If a word is not in the lexicon, a guesser is consulted. The guesser employs another finite-state transducer. It reads a token and prints out a set of tags depending on prefixes, inflectional information and productive endings that it finds. We make even more use of transducers in the constraint-based tagger. The tagger reads one sentence at a time, a string of words and alternative tags, feeds them to the grammatical transduc-1There is a ]onger version (17 pages) of this paper in (Chanod and Tapanainen, 1994) ers that remove all but one alternative tag from all the words on the basis of contextual information. If all the transducers described above (tokeniser, morphological analyser and disambiguatot) could be composed together, we would get one single transducer that transforms a raw input text to a fully disambiguated output.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1113, |
|
"end": 1142, |
|
"text": "(Chanod and Tapanainen, 1994)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "1" |
|
}, |
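
{

"text": "A minimal Python sketch of the three-stage pipeline described above (not part of the original paper; plain functions stand in for the finite-state transducers, and the names tokenise, analyse, tag, lexicon, guesser and rules are all assumptions):\ndef tokenise(text):\n    # stand-in for the finite-state tokeniser: emit a token as soon as\n    # enough input has been read to delimit it\n    return text.split()\n\ndef analyse(token, lexicon, guesser):\n    # stand-in for the transducer lexicon; unknown words go to the guesser\n    return lexicon.get(token.lower()) or guesser(token)\n\ndef tag(text, lexicon, guesser, rules):\n    # one sentence at a time: words with their alternative tags are fed to\n    # the rules, which remove all but one alternative per word\n    sentence = [(tok, set(analyse(tok, lexicon, guesser))) for tok in tokenise(text)]\n    for rule in rules:\n        sentence = rule(sentence)\n    return sentence",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Overview",

"sec_num": null

},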
|
{ |
|
"text": "The statistical method contains the same tokeniser and morphological analyser. The disambiguation method is a conventional one: a hidden Markov model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The morphological analyser is based on a lexical transducer (Karttunen et al., 1992) . The transducer maps each inflected surface form of a word to its canonical lexical form followed by the appropriate morphological tags. Words not found in the lexicon are analysed by a separate finite-state transducer, the guesser. We developed a simple, extremely compact and efficient guesser for French. It is based on the general assumption that neologisms and uncommon words tend to follow regular inflectional patterns. The guesser is thus based on productive endings (like merit for adverbs, ible for adjectives, er for verbs). A given ending may of course point to various categories, e.g. er identifies nouns as well as verbs due to possible borrowings from English.", |
|
"cite_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 84, |
|
"text": "(Karttunen et al., 1992)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morphological analysis and guessing", |
|
"sec_num": "2" |
|
}, |
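
{

"text": "A table-driven approximation of the ending-based guesser described above (not part of the original paper; the real guesser is a finite-state transducer and also uses prefixes and inflectional information; tag names follow Appendix A, and the ending table and default fallback are illustrative assumptions):\nENDINGS = [\n    ('ment', {'ADV'}),\n    ('ible', {'ADJ-SG'}),\n    ('er', {'VERB-INF', 'NOUN-SG'}),  # er marks verbs, but also nouns borrowed from English\n]\n\ndef guess(token):\n    # return the tags of the first productive ending that matches\n    for ending, tags in ENDINGS:\n        if token.lower().endswith(ending):\n            return set(tags)\n    return {'NOUN-SG'}  # assumed default reading for unknown words",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Morphological analysis and guessing",

"sec_num": null

},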
|
{ |
|
"text": "The statistical model", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "3", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use the Xerox part-of-speech tagger (Cutting et al., 1992) , a statistical tagger made at the Xerox Palo Alto Research Center.", |
|
"cite_spans": [ |
|
{ |
|
"start": 39, |
|
"end": 61, |
|
"text": "(Cutting et al., 1992)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "3", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The Xerox tagger is claimed (Cutting el al., 1992) to be adaptable and easily trained; only a lexicon and suitable amount of untagged text is required.", |
|
"cite_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 50, |
|
"text": "(Cutting el al., 1992)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "A new language-specific tagger can therefore be built with a minimal amount of work. We started our project by doing so. We took our lexicon with the new tagset, a corpus of French text, and trained the tagger. We ran the tagger on another text and counted the errors. The result was not good; 13 % of the words were tagged incorrectly. The tagger does not require a tagged corpus for training, but two types of biases can be set to tell the tagger what is correct and what is not: symbol biases and transition biases. The symbol biases describe what is likely in a given ambiguity class. They represent kinds of lexical probabilities. The transition biases describe the likelihood of various tag pairs occurring in succession. The biases serve as initial values before training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "3.1" |
|
}, |
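
{

"text": "One way to read the symbol and transition biases is as initial HMM parameters that training then re-estimates from untagged text. A sketch (not part of the original paper; the tag list and numbers are invented for illustration and do not reproduce the Xerox tagger's actual bias format):\nimport numpy as np\n\ntags = ['DET-SG', 'NOUN-SG', 'VERB-P3SG', 'PREP']\n\n# transition biases: initial counts for tag pairs, normalised into probabilities\nA = np.ones((len(tags), len(tags)))\nA[tags.index('DET-SG'), tags.index('NOUN-SG')] = 5.0  # DET-SG followed by NOUN-SG is likely\nA /= A.sum(axis=1, keepdims=True)\n\n# symbol biases: initial lexical probabilities within one ambiguity class\nB = {('NOUN-SG', 'VERB-P3SG'): {'NOUN-SG': 0.7, 'VERB-P3SG': 0.3}}\n# Baum-Welch training would start from A and B and re-estimate both",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training",

"sec_num": null

},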
|
{ |
|
"text": "We spent approximately one man-month writing biases and tuning the tagger. Our training corpus was rather small, because the training had to be repeated frequently. When it seemed that the results could not be further improved, we tested the tagger on a new corpus. The eventual result was that 96.8 % of the words in the corpus were tagged correctly. This result is about the same as for statistical tuggers of English.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "A 4 % error rate is not generally considered a negative result for a statistical tagger, but some of the errors are serious. For example, a sequence of determiner.., noun.., noun/verb...preposition is frequently disambiguated in the wrong way, e.g. Le ~rain part ~t cinq heures (The ~rain leaves a~ 5 o'clock). The word part is ambiguous between a noun and a verb (singular, third person), and it is disambiguated incorrectly. The tagger seems to prefer the noun reading between a singular noun and a preposition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modifying the biases", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "One way to resolve this is to write new biases. We added two new ones. The first one says that a singular noun is not likely to be followed by a noun (this is not always true but we could call this a tendency). The second states that a singular noun is likely to be followed by a singular, third-person verb. The result was that the problematic sentence was disambiguated correctly, but the changes had a bad side effect. The overall error rate of the tagger increased by over 50 %. This illustrates how difficult it is to write good biases. Getting a correct result for a particular sentence does not necessarily increase the overall success rate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modifying the biases", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "4 The constraint-based model 4.1 A two-level model for tagging", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modifying the biases", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In the constraint-based tagger, the rules are represented as finite-state transducers. The transducers are composed with the sentence in a sequence. Each transducer may remove, or in principle it may also change, one or more readings of the words. After all the transducers have been applied, each word in the sentence has only one analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modifying the biases", |
|
"sec_num": "3.2" |
|
}, |
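
{

"text": "A plain-Python sketch of this reading-removal discipline (not part of the original paper; a rule is modelled as a predicate over (sentence, position, tag), which is an assumption, and keeping at least one reading per word mirrors the fact that a word is never left without an analysis):\ndef apply_rule(sentence, rule):\n    # sentence is a list of (word, set-of-tags) pairs\n    out = []\n    for i, (word, tags) in enumerate(sentence):\n        kept = {t for t in tags if rule(sentence, i, t)}\n        out.append((word, kept or tags))  # never discard the last reading\n    return out",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Modifying the biases",

"sec_num": null

},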
|
{ |
|
"text": "Our constraint-based tagger is based on techniques that were originally developed for morphological analysis. The disambiguation rules are similar to phonological rewrite rules (Kaplan and Kay, 1994) , and the parsing algorithm is similar to the algorithm for combining the morphological rules with the lexicon (Karttunen, 1994) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 177, |
|
"end": 199, |
|
"text": "(Kaplan and Kay, 1994)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 311, |
|
"end": 328, |
|
"text": "(Karttunen, 1994)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modifying the biases", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The tagger has a close relative in (Koskenniemi, 1990; Koskenniemi et al., 1992; Voutilalnen and Tapanainen, 1993) where the rules are represented as finite-state machines that are conceptually intersected with each other. In this tagger the disambiguation rules are applied in the same manner as the morphological rules in (Koskenniemi, 1983) . Another relative is represented in (Roche and Schabes, 1994) which uses a single finitestate transducer to transform one tag into another. A constraint-based system is also presented in (Karlsson, 1990; Karlsson et al., 1995) . Related work using finite-state machines has been done using local grammars (Roche, 1992; Silberztein, 1993; Laporte, 1994) '.", |
|
"cite_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 54, |
|
"text": "(Koskenniemi, 1990;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 55, |
|
"end": 80, |
|
"text": "Koskenniemi et al., 1992;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 81, |
|
"end": 114, |
|
"text": "Voutilalnen and Tapanainen, 1993)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 324, |
|
"end": 343, |
|
"text": "(Koskenniemi, 1983)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 532, |
|
"end": 548, |
|
"text": "(Karlsson, 1990;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 549, |
|
"end": 571, |
|
"text": "Karlsson et al., 1995)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 650, |
|
"end": 663, |
|
"text": "(Roche, 1992;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 664, |
|
"end": 682, |
|
"text": "Silberztein, 1993;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 683, |
|
"end": 697, |
|
"text": "Laporte, 1994)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modifying the biases", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "One quick experiment that motivated the building of the constraint-based model was the following: we took a million words of newspaper text and ranked ambiguous words by frequency. We found that a very limited set of word forms covers a large part of the total ambiguity. The 16 most frequent ambiguous word forms 2 account for 50 % of all ambiguity. Two thirds of the ambiguity are due to the 97 most frequent ambiguous words 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Studying ambiguities", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "Another interesting observation is that the most frequent ambiguous words are usually words which are in general corpus-independent, i.e. words that belong to closed classes (determiners, prepositions, pronouns, conjunctions), auxiliaries, common adverbials or common verbs, like faire (to do, to make). The first corpus-specific word is in the 41st position.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Studying ambiguities", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "For the most frequent ambiguous word forms, one may safely define principled contextual restrictions to resolve ambiguities. This is in particular the case for clitic/determiner ambiguities attached to words like le or la. Our rule says that clitic pronouns are attached to a verb and determiners to a noun with possibly an unrestricted number of premodifiers. This is a good starting point although some ambiguity remains as in la 2Namely de, la, le, les, des, en, du, un, a, duns, une, pus, est, plus, Le, son 3 A similar experiment shows that in the Brown corpus 63 word forms cover 50 % of all the ambiguity, and two thirds of the ambiguity is covered by 220 word forms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 440, |
|
"end": 511, |
|
"text": "de, la, le, les, des, en, du, un, a, duns, une, pus, est, plus, Le, son", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Principled rules", |
|
"sec_num": "4.2.2" |
|
}, |
|
|
{ |
|
"text": "Some of the very frequent words have categories that are rare, for instance the auxiliary forms a and est can also be nouns and the pronoun cela is also a very rare verb form. In such a case, we restrict the use of the rarest categories to contexts where the most frequent reading is not at all possible, otherwise the most frequent reading is preferred. For instance, the word avions may be a noun or an auxiliary verb. We prefer the noun reading and accept the verb reading only when the first-person pronoun nous appears in the left cor/text, e.g. as in nous ne les avions pas (we did not have them).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Principled rules", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "This means that the tagger errs only when a rare reading should be chosen in a context where the most common reading is still acceptable. This may never actually occur, depending on how accurate the contextual restrictions are. It can even be the case that discarding the rare readings would not induce a detectable loss in accuracy, e.g. in the conflict between cela as a pronoun and as a verb. The latter is a rarely used tense of a rather literary verb.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Principled rules", |
|
"sec_num": "4.2.2" |
|
}, |
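
{

"text": "The avions restriction above, written as a predicate for the hypothetical apply_rule helper from the earlier sketch (not part of the original paper; tag names follow Appendix A, the real rule is a transducer, and the check shown here is deliberately simplified):\ndef avions_rule(sentence, i, tag):\n    # keep the auxiliary reading of avions only if nous occurs in the left\n    # context; otherwise the noun reading is preferred\n    word, tags = sentence[i]\n    if word.lower() != 'avions' or tag != 'VAUX-P1P2':\n        return True\n    return any(w.lower() == 'nous' for w, _ in sentence[:i])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Principled rules",

"sec_num": null

},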
|
{ |
|
"text": "The principled rules do not require any tagged corpus, and should be thus corpus-independent. The rules are based on a short list of extremely common words (fewer than 100 words).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Principled rules", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "Heuristics The rules described above are certainly not sufficient to provide full disambiguation, even if one considers only the most ambiguous word forms. We need more rules for cases that the principled rules do not disambiguate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.2.3", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Some ambiguity is extremely difficult to resolve using the information available. A very problematic case is the word des, which can either be a determiner, Jean mange des pommes (Jean eats apples) or an amalgamated preposition-determiner, as in Jean aime le bruit des vagues (Jean likes the sound of waves).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.2.3", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Proper treatment of such an ambiguity would require verb subcategorisation and a description of complex coordinations of noun and prepositional phrases. This goes beyond the scope of both the statistical and the constraint-based taggers. For such cases we introduce ad-hoc heuristics. Some are quite reasonable, e.g. the determiner reading of des is preferred at the begining of a sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.2.3", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Some are more or less arguable, e.g. the prepositional reading is preferred after a noun.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.2.3", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "One may identify various contexts in which either the noun or the adjective can be preferred. Such contextual restrictions (Chanod, 1993) are not always true, but may be considered reasonable for resolving the ambiguity. For instance, in the case of two successive noun/adjective ambiguities like le franc fort (the strong franc or the frank fort), we favour the noun-adjective sequence ex-cept when the first word is a common prenominal adjective such as bon, petit, grand, premier, ... as in le petit fort (the small fort) or even le bon petit (the good little one).", |
|
"cite_spans": [ |
|
{ |
|
"start": 123, |
|
"end": 137, |
|
"text": "(Chanod, 1993)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.2.3", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Non-contextual rules Our heuristics do not resolve all the ambiguity. To obtain the fully unambiguous result we make use of non-contextual heuristics. The noncontextual rules may be thought of as lexical probabilities. We guess what the most probable tag is in the remaining ambiguities. For instance, preposition is preferred to adjective, pronoun is preferred to past participle, etc. The rules are obviously not very reliable, but they are needed only when the previous rules fail to fully disambiguate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.2.4", |
|
"sec_num": null |
|
}, |
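
{

"text": "The final non-contextual rules can be read as a global preference order over the readings that survive the contextual rules. A sketch (not part of the original paper; only the two preferences stated above, preposition over adjective and pronoun over past participle, are grounded in the text, and the rest of the order is an illustrative assumption):\nPREFERENCE = ['PREP', 'PRON', 'DET-SG', 'NOUN-SG', 'ADJ-SG', 'PAP-SG']\n\ndef final_choice(tags):\n    # pick the most preferred surviving reading\n    for t in PREFERENCE:\n        if t in tags:\n            return t\n    return sorted(tags)[0]  # deterministic fallback for unlisted tags",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "4.2.4",

"sec_num": null

},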
|
{ |
|
"text": "Current rules The current system contains 75 rules, consisting of:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.2.5", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 39 reliable contextual rules dealing mostly with frequent ambiguous words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.2.5", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 25 rules describing heuristics with various degrees of linguistic generality.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.2.5", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 11 non-contextual rules for the remaining ambiguities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.2.5", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The rules were constructed in less than one month, on the basis of 50 newspaper sentences. All the rules are currently represented by 11 transducers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.2.5", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The results", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "5", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For evaluation, we used a corpus totally unrelated to the development corpus. It contains 255 sentences (5752 words) randomly selected from a corpus of economic reports. About 54 % of the words are ambiguous. The text is first tagged manually without using the disambiguators, and the output of the tagger is then compared to the hand-tagged result.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Test A", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "If we apply all the rules, we get a fully disambiguated result with an error rate of only 1.3 %. This error rate is much lower than the one we get using the hidden Markov model (3.2 %). See Figure 1. We can also restrict the tagger to using only the most reliable rules. Only 10 words lose the correct tag when almost 2000 out of 3085 ambiguous words are disambiguated. Among the remaining 1136 ambiguous words about 25 % of the ambiguity is due to determiner/preposition ambiguities (words like dn and des), 30 % are adjective/noun ambiguities and 18 % are noun/verb ambiguities.", |
|
"cite_spans": [ |
|
{ |
|
"start": 190, |
|
"end": 199, |
|
"text": "Figure 1.", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Test A", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "If we use both the principled and heuristic rules, the error rate is 0.52 % while 423 words remain ambiguous. The non-contextual rules that eliminate the remaining 423 ambiguities produce an error rate (correctness) Lexicon + Guesser 0.03 % (99.97 %) 54 % Hidden Markov model 3.2 % (96.8 %) 0 % Principled rules 0.17 % (99.83 %) 20 % Principled and heuristic rules 0.52 % (99.48 %) I 7 % All the rules I 1.3% (98.7 %) I 0% remaining ambiguity tag / word 1.64 1.00 1.24 1.09 1.00 Figure 1 : The result in the test sample additional 43 errors. Overall, 98.7 % of the words receive the correct tag.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 479, |
|
"end": 487, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Test A", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We also tested the tuggers with more difficult text. The 12 000 word sample of newspaper text has typos and proper names 4 that match an existing word in the lexicon. Problems of the latter type are relatively rare but this sample was exceptional.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Test B", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Altogether the lexicon mismatches produced 0.5 % errors to the input of the tuggers. The results are shown in Figure 2 . This text also seems to be generally more difficult to parse than the first one.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 118, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Test B", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We also tried combining the tuggers, using first the rules and then the statistics (a similar approach was also used in (Tapanainen and Voutilainen, 1994) ). We evaluated the results obtained by the following sequence of operations:", |
|
"cite_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 154, |
|
"text": "(Tapanainen and Voutilainen, 1994)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combination of the tuggers", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "1) Running the constraint-based tagger without the final, non-contextual rules.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combination of the tuggers", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "2) Using the statistical disambiguator independently. We select the tag proposed by the statistical disambiguator if it is not removed during step 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combination of the tuggers", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "3) Solving the remaining ambiguities by running the final non-contextual rules of the constraint-based tagger. This last step ensures that one gets a fully disambiguated text. Actually only about 0.5 % of words were not fully disambiguated after step 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combination of the tuggers", |
|
"sec_num": "5.3" |
|
}, |
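
{

"text": "The three-step combination described above, as a sketch (not part of the original paper; it reuses the hypothetical rule representation from the earlier sketches, and hmm_choice[i] stands for the tag the statistical disambiguator proposes for word i):\ndef combine(sentence, constraint_rules, final_rules, hmm_choice):\n    # 1) contextual and heuristic rules, without the final non-contextual ones\n    for rule in constraint_rules:\n        sentence = rule(sentence)\n    # 2) keep the statistical tag if step 1 did not remove it\n    sentence = [(w, {hmm_choice[i]} if hmm_choice[i] in tags else tags)\n                for i, (w, tags) in enumerate(sentence)]\n    # 3) final non-contextual rules fully disambiguate what remains\n    for rule in final_rules:\n        sentence = rule(sentence)\n    return sentence",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Combination of the taggers",

"sec_num": null

},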
|
{ |
|
"text": "We used the test sample B. After the first step, 1400 words out of 12 000 remain ambiguous. The process of combining the three steps described above eventually leads to more errors than running the constraint-based tagger alone. The statistical tagger introduces 220 errors on the 1400 words that remain ambiguous after step 1. In comparison, the final set of non-contextual rules introduces around 150 errors on the same set of 1400 words. We did not expect this result. One possible explanation for the superior performance of the final non-contextual rules is that they are meant to apply after the previous rules failed to disambiguate the word. This is in itself useful 4like Bats, Botta, Ddrnis, Ferrasse, Hersant, ... information. The final heuristics favour tags that have survived all conditions that restrict their use. For instance, the contextual rules define various contexts where the preposition tag for des is preferred. Therefore, the final heuristics favours the determiner reading for des.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combination of the tuggers", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "6 Analysis of errors", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combination of the tuggers", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Let us now consider what kind of errors the constraint-based tagger produced. We do not deal with errors produced by the last set of rules, the non-contextual rules, because it is already known that they are not very accurate. To make the tagger better, they should be replaced by writing more accurate heuristic rules. We divide the errors into three categories: (1) errors due to multi-word expressions, (2) errors that should/could be resolved and (3) errors that are hard to resolve by using the information that is available. Thefirst group (15 errors), the multi-word expressions, are difficult for the syntax-based rules because in many cases the expression does not follow any conventional syntactic structure, or the structure may be very rare. In multi-word expressions some words also have categories that may not appear anywhere else. The best way to handle them is to lexicalise these expressions. When a possible expression is recognised we can either collapse it into one unit or leave it otherwise intact except that the most \"likely\" interpretation is marked.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Errors of principled and heuristic rules", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "The biggest group (41 errors) contains errors that could have been resolved correctly but were not. The reason for this is obvious: only a relatively small amount of time was allowed for writing the rules. In addition, the rules were constructed on the basis of a rather small set of example sentences. Therefore, it would be very surprising if such errors did not appear in the test sample taken from a different source. The errors are the following:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Errors of principled and heuristic rules", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "\u2022 The biggest subgroup has 19 errors that require modifications to existing rules. Our rules were meant to handle such cases but fail error rate remaining tag / word (correctness) ambiguity Lexicon + Guesser 0.5 % (99.5 %) 48 % 1.59 Hidden Markov model 5.0 % (95.0 %) 0 % 1.00 Principled rules I 0.8 % (99.2 %) 23 % 1.29 Principled and heuristic rules ] 1.3 % (98.7 %) 12 % 1.14 All the rules [ 2.5 % (97.5 %) 0 % 1.00 Figure 2 : The result in a difficult test sample with many lexicon mismatches to do so correctly in some sentences. Often only a minor correction is needed.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 419, |
|
"end": 427, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Errors of principled and heuristic rules", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "\u2022 Some syntactic constructions, or word sequences, were omitted. This caused 7 errors which could easily be avoided by writing more rules. For instance, a construction like \"preposition \u00f7 clitic + finite verb\" was not forbidden. The phrase h l'est was analysed in this way while the correct analysis is \"preposition \u00f7 determiner + noun\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Errors of principled and heuristic rules", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "\u2022 Sometimes a little bit of extra lexical information is required. Six errors would require more information or the kind of refinement in the tag inventory that would not have been appropriate for the statistical tagger.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Errors of principled and heuristic rules", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "\u2022 Nine errors could be avoided by refining existing heuristics, especially by taking into account exceptions for specific words like point, pendant and devant.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Errors of principled and heuristic rules", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "The remaining errors (28 errors) constitute the price we pay for using the heuristics. Removing the rules which fail would cause a lot of ambiguity to remain. The errors are the following:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Errors of principled and heuristic rules", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "\u2022 Fifteen errors are due to the heuristics for de and des. There is little room for improvement at this level of description (see Chapter 4.2.3). However, the current, simple heuristics fully disambiguate 850 instances of de and des out of 914 i.e. 92 % of all the occurrences were parsed with less than a 2 % error rate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Errors of principled and heuristic rules", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "\u2022 Six errors involve noun-adjective ambiguities that are difficult to solve, for instance, in a subject or object predicate position.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Errors of principled and heuristic rules", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "\u2022 Seven errors seem to be beyond reach for various reasons: long coordination , rare constructions, etc. An example is les boltes (the boxes) where les is wrongly tagged in the test sample because the noun form is misspelled as boites, which is identified only as a verb by the lexicon.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Errors of principled and heuristic rules", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "We also investigated how the errors compare between the two taggers. Here we used the fully disambiguated outputs of the taggers. The errors belong mainly to three classes: * Some errors appear predominantly with the statistical tagger and almost never with the constraint-based tagger. This is particularly the case with the ambiguity between past participles and adjectives.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Difference between the taggers", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 Some errors are common to both taggers, the constraint-based tagger generally being more accurate (often with a ratio of I to 2). These errors cover ambiguities that are known to be difficult to handle in general, such as the already mentioned determiner/preposition ambiguity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Difference between the taggers", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 Finally, there are errors that are specific to the constraint-based tagger. They are often related to errors that could be corrected with some extra work. They are relatively infrequent, thus the global accuracy of the constraint-based tagger remains higher.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Difference between the taggers", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "The first two classes of errors are generally difficult to correct. The easiest way to improve the constraint-based tagger is to concentrate on the final class. As we mentioned earlier, it is not very easy to change the behaviour of the statistical tagger in one place without some side-effects elsewhere. This means that the errors of the first class are probably easiest to resolve by means other than statistics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Difference between the taggers", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "The first class is quite annoying for the statistical parser because it contains errors that are intuitively very clear and resolvable, but which are far beyond the limits of the current statistical tagger. We can take an easy sentence to demonstrate this:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Difference between the taggers", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Je ne le pense pas. I do not think so. Tune le penses pas. You do not think so.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Difference between the taggers", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "He does not think so. The verb pense is ambiguous 5 in the first person or in the third person. It is usually easy to determine the person just by checking the personal pronoun nearby. For a human or a constraint-based tagger this is an easy task, for a statistical tagger it is not. There are two words between the pronoun and the verb that do not carry any information about the person. The personal pronoun may thus be too far from the verb because bi-gram models can see backward no farther than le, and tri-gram models SThat is not case with all the French verbs, e.g. Je crois and //croit. no farther than ne le.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Il ne le pense pas.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Also, as mentioned earlier, resolving the adjective vs. past participle ambiguity is much harder, if the tagger does not know whether there is an auxiliary verb in the sentence or not.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Il ne le pense pas.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We have presented two taggers for french: a statistical one and a constraint-based one.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "There are two ways to train the statistical tagger: from a tagged corpus or using a selforganising method that does not need a tagged corpus. We had a strict time limit of one month for doing the tagger and no tagged corpus was available. This is a short time for the manual tagging of a corpus and for the training of the tagger. It would be risky to spend, say, three weeks for writing a corpus, and only one week for training. The size of corpus would have to be limited, because it should be also checked.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We selected the Xerox tagger that learns from an untagged corpuS. The task was not as straigthforward as we thought. Without human assistance in the training the result was not impressive, and we had to spend much time tuning the tagger and guiding the learning process. In a month we achieved 95-97 % accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The training process of a statistical tagger requires some time because the linguistic information has to be incorporated into the tagger one way or another, it cannot be obtained for free starting from null. Because the linguistic information is needed, we decided to encode the information in a more straightforward way, as explicit linguistic disambiguation rules. It has been argued that statistical taggers are superior to rulebased/hand-coded ones because of better accuracy and better adaptability (easy to train). In our experiment, both claims turned out to be wrong.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "For the constraint-based tagger we set one month time limit for writing the constraints by hand. We used only linguistic intuition and a very limited set of sentences to write the 75 constraints. We formulated constraints of different accuracy. Some of the constraints are almost 100 % accurate, some of them just describe tendencies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Finally, when we thought that the rules were good enough, we took two text samples from different sources and tested both the taggers. The constraint-based tagger made several naive errors because we had forgotten, miscoded or ignored some linguistic phenomena, but still, it made only half of the errors that the statistical one made.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "A big difference between the taggers is that the tuning of the statistical tagger is very subtle i.e. it is hard to predict the effect of tuning the parameters of the system, whereas the constraint-based tagger is very straightforward to correct.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Our general conclusion is that the hand-coded constraints perform better than the statistical tagger and that we can still refine them. The most important of our findings is that writing constraints that contain more linguistic information than the current statisticM model does not take much time. syntactic rules. In proceedings of ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In this appendix the tag set is represented. Besides the following tags, there may also be some word-specific tags like PREP-DE, which is the preposition reading for words de, des and du, i.e. word de is initially ambiguous between PREP-DE and PC. This information is mainly for the statistical tagger to deal with, for instance, different prepositions in a different way. The constraint-based tagger does not need this because it has direct access to word forms anyway. After disambiguation, the word-specific tags may be cleaned. The tag PREP-DE is changed back into PREP, to reduce the redundant information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A The restricted tag set", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 DET-SG: Singular determiner e.g. le, la, mon, ma. This covers masculine as well as feminine forms. Sample sentence: L_ee chien dort dans l__a cuisine. (The dog is sleeping in the kitchen).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A The restricted tag set", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 DET-PL Plural determiner e.g. les, mes. This covers masculine as well as feminine forms. \u2022 ADV Adverbs e.g. finalement. Sample sentence: Le jour es\u00a2 finalement arrivd. (The day has finally come.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A The restricted tag set", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 NEG Negation particle. Reserved for the word ne. Sample sentence: Le chien n_~e dor\u00a2 pas. (The dog is not sleeping.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A The restricted tag set", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 PREP Preposition e.g. dans. Sample sentence: Le chien dor\u00a2 dans la cuisine. (The dog sleeps in the kitchen.) For statistical taggers this group may be divided into subgroups for different preposition groups, like PREP-DE, PREP-A, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A The restricted tag set", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 CONN Connector. This class includes coordinating conjuctions such as el, subordinate conjunctions such as lorsque, relative or interrogative pronouns such as lequel. Words like comme or que which have very special behaviour are not coded as CONN. Sample sentence: Le chien e___t le chat dorment quand il pleut. (The dog and the cat sleep when it rains.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A The restricted tag set", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For statistical taggers this group may be divided into subgroups for different connectors, like CONN-ET, CONN-Q, etc. \u2022 COMME Reserved for M1 instances of the word comme. Sample sentence: Il joue comme un enfant. (He plays like a child.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A The restricted tag set", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 CONJQUE Reserved for all instances of the word que. \u2022 NUM Numeral e.g. 12,7, 120/98, 34+0.7.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A The restricted tag set", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "String representing time e.g. 12h24, 12:45:00.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u2022 HEURE", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "MISC Miscellaneous words, such as: interjectiorr oh, salutation bonjour, onomatopoeia miaou, wordparts i.e. words that only exist as part of a multi-word expression, such as priori, as part of a priori.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u2022 HEURE", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 CM Comma.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u2022 HEURE", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 PUNCT Punctuation other than comma.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u2022 HEURE", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Probl~mes de robustesse en analyse syntaxique", |
|
"authors": [ |
|
{ |
|
"first": "Jean-Pierre", |
|
"middle": [], |
|
"last": "Chanod", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Acres de la conf6rence Informatique et langue n alurelle. IRIN, Universit@ de Nantes", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jean-Pierre Chanod. Probl~mes de robustesse en analyse syntaxique. In Acres de la conf6rence Informatique et langue n alurelle. IRIN, Univer- sit@ de Nantes, 1993.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Statistical and Constraint-based Taggers for French", |
|
"authors": [ |
|
{

"first": "Jean-Pierre",

"middle": [],

"last": "Chanod",

"suffix": ""

},

{

"first": "Pasi",

"middle": [],

"last": "Tapanainen",

"suffix": ""

}
|
], |
|
"year": 1994, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jean-Pierre Chanod and Past Tapanainen. Statis- tical and Constraint-based Taggers for French. Technical report MLTT-016, Rank Xerox Re- search Centre, Grenoble, 1994.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A Practical Part-of-Speech Tagger", |
|
"authors": [ |
|
{ |
|
"first": "Doug", |
|
"middle": [], |
|
"last": "Cutting", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Kupiec", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Pedersen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Penelope", |
|
"middle": [], |
|
"last": "Sibun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Third Conference on Applied Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "133--140", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Doug Cutting, Julian Kupiec, Jan Pedersen and Penelope Sibun. A Practical Part-of-Speech Tagger. In Third Conference on Applied Natu- ral Language Processing. pages 133-140. Trento, 1992.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Regular Models of Phonological Rule Systems", |
|
"authors": [ |
|
{ |
|
"first": "Ron", |
|
"middle": [], |
|
"last": "Kaplan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Kay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Computational Linguistics", |
|
"volume": "20", |
|
"issue": "", |
|
"pages": "331--378", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ron Kaplan and Martin Kay. Regular Models of Phonological Rule Systems. Computational Linguistics Vol. 20, Number 3, pages 331-378.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Constraint Grammar as a Framework for Parsing Running Text", |
|
"authors": [ |
|
{ |
|
"first": "Fred", |
|
"middle": [], |
|
"last": "Karlsson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "proceedings of Coling-90. Papers presented to the 13th International Conference on Computational Linguistics", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "168--173", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fred Karlsson. Constraint Grammar as a Frame- work for Parsing Running Text. In proceedings of Coling-90. Papers presented to the 13th In- ternational Conference on Computational Lin- guistics. Vol. 3, pages 168-173. Helsinki, 1990.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Constraint Grammar: a Language-Independent System for Parsing Unrestricted Text", |
|
"authors": [], |
|
"year": 1995, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fred Karlsson, Atro Voutilainen, Juha Reikkil~ and Arto Anttila (eds.). Constraint Grammar: a Language-Independent System for Parsing Unrestricted Text. Mouton de Gruyter, Berlin, 1995.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Constructing LexicM Trans", |
|
"authors": [ |
|
{ |
|
"first": "Lauri", |
|
"middle": [], |
|
"last": "Karttunen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lauri Karttunen. Constructing LexicM Trans-", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "proceedings of Coling-94. The fifleenth International Conference on Computational Linguistics", |
|
"authors": [], |
|
"year": 1994, |
|
"venue": "", |
|
"volume": "I", |
|
"issue": "", |
|
"pages": "406--411", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "ducers. In proceedings of Coling-94. The fif- leenth International Conference on Computa- tional Linguistics. Vol I, pages 406-411. Kyoto, 1994.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Two-level morphology with composition", |
|
"authors": [ |
|
{ |
|
"first": "Lauri", |
|
"middle": [], |
|
"last": "Karttunen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ron", |
|
"middle": [], |
|
"last": "Kaplan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Annie", |
|
"middle": [], |
|
"last": "Zaenen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "proceedings of Colin9-92. The fourteenth International Conference on Computational Linguistics. Vol I", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "141--148", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lauri Karttunen, Ron Kaplan and Annie Zae- nen. Two-level morphology with composition. In proceedings of Colin9-92. The fourteenth In- ternational Conference on Computational Lin- guistics. Vol I, pages 141-148. Nantes, 1992.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Two-level morphology", |
|
"authors": [ |
|
{ |
|
"first": "Kimmo", |
|
"middle": [], |
|
"last": "Koskenniemi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kimmo Koskenniemi. Two-level morphology.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A general computational model for word-form recognition and production", |
|
"authors": [], |
|
"year": 1983, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A general computational model for word-form recognition and production. University of Helsinki, 1983.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Finite-state parsing and disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Kimmo", |
|
"middle": [], |
|
"last": "Koskenniemi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "proceedings of Coling-90. Papers presented to the 131h International Conference on Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "229--232", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kimmo Koskenniemi. Finite-state parsing and disambiguation. In proceedings of Coling-90. Papers presented to the 131h International Con- ference on Computational Linguistics. Vol. 2, pages 229-232. Helsinki, 1990.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Compiling and using finite-state", |
|
"authors": [ |
|
{ |
|
"first": "Kimmo", |
|
"middle": [], |
|
"last": "Koskenniemi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Past", |
|
"middle": [], |
|
"last": "Tapanainen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Atro", |
|
"middle": [], |
|
"last": "Voutilainen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kimmo Koskenniemi, Past Tapanainen and Atro Voutilainen. Compiling and using finite-state", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Le chien est heureux quand les enfan'ts sont heureux. (The dog is happy when the children are happy.) ADJ-SG Singular adjective e.g. gentil, gentille. This covers masculine as well as feminine forms. Sample sentence: Le chien est gentil. (The dog is nice.) Sample sentence: Le chien aime dormir. (The dog enjoys sleeping.) Je chante. (I sing.) \u2022 VERB-P3SG Any 3rd person singular verb form e.g. chanlera, finil, aboie. Sample sentence: ge chien aboie. (The dog is barking.) \u2022 VERB-P3PL Any 3rd person plural verb form e.g. chanleront, finissen$, aboient. Sample sentence: Les chiens aboient. (The dogs are barking.) PAP-INV Past participle invariant in number e.g. surpris. Sample sentence: Le chien m'a surpris. (The dog surprised me.) \u2022 PAP-SG Singular past participle e.g. fini, finie. This covers masculine as well as feminine forms. Sample sentence: La journge est finie. (The day is over.) PAP-PL Plural past participle e.g. finis, finies. This covers masculine as well as feminine forms. Sample sentence: Les travaux sont finis. (The work is finished.) \u2022 PC Non-nominative clitic pronoun such as me, le. Sample sentence: It me l'a donnL (He gave it to me.) \u2022 PRON 3rd person pronoun, relative pronouns excluded, e.g. il, elles, chacun. Sample sentence: I__l a parle ~ chacun. (He spoke to every person.) . je, \u00a2u, nous. Sample sentence: Est-ce que t_uu viendras avec moi? (Will you come with me?) \u2022 VOICILA Reserved for words voici and voile. Sample sentence: Voici mon chien.", |
|
"content": "<table><tr><td colspan=\"2\">2nd pers plural imperative: chante, finissons.</td><td/></tr><tr><td colspan=\"4\">\u2022 \u2022 \u2022 NOUN-SG Singular noun e.g. chien, fleur. ADJ-PL Plural adjective e.g. gentils, gen-tilles. This covers masculine as well as femi-nine forms. Sample sentence: Ces chiens sont gentils. (These dogs are nice.) NOUN-INV Noun invariant in number e.g. souris, Frangais. This covers masculine as well as feminine forms. Sample sentence: Les souris dansent. (The mice are dancing.) This covers masculine as well as feminine forms. Sample sentence: C'est une jolie fleur. (It is a nice flower.) This covers masculine as well as feminine forms. Sample sentence: Nous aimons les fleurs. (We like flowers.) \u2022 VAUX-INF Auxiliary verb, infinitive ~tre, avoir. Sample sentence: Le chien vient d'Etre puni. (The dog has just been punished.) \u2022 VAUX-PRP Auxiliary verb, present par-ticiple grant, ayant. \u2022 VAUX-PAP Auxiliary verb, past participle 1st or 2nd person pronoun Sample sentence: \u2022 PRON-P1P2</td></tr><tr><td>e.g</td><td colspan=\"3\">e.g. dtd, eu. Sample sentence: Le thdor~me</td></tr><tr><td/><td colspan=\"3\">a ~t__d ddmontrd. (The theorem has been</td></tr><tr><td/><td>proved.)</td><td/></tr><tr><td/><td colspan=\"3\">\u2022 VAUX-P1P2 Auxiliary verb, covers any 1st</td></tr><tr><td/><td colspan=\"3\">or 2nd person form, regardless of number,</td></tr><tr><td/><td colspan=\"3\">tense or mood, e.g. 1st person singular</td></tr><tr><td/><td colspan=\"3\">present indicative, 2nd person plural impera-</td></tr><tr><td/><td colspan=\"3\">tive: ai, soyons, es. Sample sentence: Tu e_ss</td></tr><tr><td/><td colspan=\"3\">fort. (You are strong.)</td></tr><tr><td/><td colspan=\"3\">\u2022 VAUX-P3SG Auxiliary verb, covers any</td></tr><tr><td/><td colspan=\"3\">3rd person singular form e.g. avait, sera,</td></tr><tr><td/><td colspan=\"3\">es. Sample sentence: Elle es._tt forte. (She is</td></tr><tr><td/><td>strong.)</td><td/></tr><tr><td/><td colspan=\"3\">\u2022 VAUX-P3PL Auxiliary verb, covers any</td></tr><tr><td/><td colspan=\"3\">3rd person plural form e.g.</td><td>ont, seront,</td></tr><tr><td/><td>avaient.</td><td colspan=\"2\">Sample sentence: Elles avaient</td></tr><tr><td/><td colspan=\"3\">dormi. (They had slept.)</td></tr><tr><td/><td colspan=\"3\">\u2022 VERB-INF Infinitive verb e.g.</td><td>danser,</td></tr><tr><td/><td colspan=\"2\">finir, dormir. \u2022 VERB-PRP</td><td>Present</td><td>participle</td></tr><tr><td/><td colspan=\"3\">e.g. dansant, finissant, aboyant. Sample sen-</td></tr><tr><td>Sample sentence: Les enfants jouent</td><td colspan=\"3\">tence: Le chien arrive en aboyant. (The dog</td></tr><tr><td>avec mes livres. (The children are playing</td><td colspan=\"3\">is coming and it is barking.)</td></tr><tr><td>with my books.)</td><td colspan=\"3\">\u2022 VERB-PIP2 Any 1st or 2nd person verb</td></tr><tr><td/><td colspan=\"3\">form, regardless of number, tense or mood</td></tr><tr><td/><td colspan=\"3\">e.g. 1st person singular present indicative,</td></tr></table>", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |